halid: 00410054 | lang: en | domain: [shs.psy, shs.info] | timestamp: 2024/03/04 16:41:20 | year: 2009 | url: https://shs.hal.science/halshs-00410054/file/Communication_Research.pdf

Vincent Coppola
Odile Camus

Keywords: Persuasive communication, Pragmatics, Social advertising, Health communication
Persuasive communication and advertising efficacy for public health policy: A critical approach
Introduction
Generally speaking, every communication project in an advertising context aims at catching the recipient's attention and influencing his attitude and, above all, his behaviour. This process of influence has been carefully studied in the field of consumer psychology, where the construct of attitude is now a central object of study (Kassarjian, 1982;[START_REF] Mcguire | Some internal psychological factors influencing consumer choice[END_REF]. On this particular subject, dual-process models of persuasion have progressively become the models of reference for attitude change, with ready applications in the advertising communication domain [START_REF] Cacioppo | Central and peripheral routes to persuasion: The role of message repetition[END_REF][START_REF] Petty | Effects of issue involvement on attitudes in an advertising context[END_REF], 1983, 1984b;[START_REF] Petty | Central and peripheral routes to advertising effectiveness: The moderating role of involvement[END_REF][START_REF] Rucker | Understanding advertising effectiveness from a psychological perspective: The importance of attitudes and attitude strength[END_REF]. These models are the "Elaboration Likelihood Model" (Petty & Cacioppo, 1986) and the "Heuristic-Systematic Model" (Chaiken, 1980;[START_REF] Chaiken | The heuristic model of persuasion[END_REF][START_REF] Chaiken | Heuristic and systematic information processing within and beyond the persuasion context[END_REF]. Let us present their main assumptions.
Two routes to persuasion and attitude change
Two modes of processing that the recipient employs when presented with a persuasive message are distinguished in these models. These two modes of thinking bound a cognitive elaboration continuum which ranges from low elaboration (i.e. low thought) to high elaboration (i.e. high thought), elaboration being defined as "the extent to which a person thinks about the issue-relevant arguments contained in the message" (Petty & Cacioppo, 1986, p. 128). The first one, called the "central route" (Petty & Cacioppo, 1986) or the "systematic processing" [START_REF] Chaiken | The heuristic model of persuasion[END_REF] can be defined as a thoughtful process, involving a sustained and careful scrutiny of the arguments presented in favour of the advocated position (often indicated in the conclusion of the message). In this case, the quality of the arguments is the main determinant of the attitude change, a message with "strong" arguments producing a higher attitude change than a message with "weak" arguments, via the cognitive responses that these arguments have elicited during the processing.
The second mode of processing, namely the "peripheral route" (Petty & Cacioppo, 1986) or "heuristic processing" [START_REF] Chaiken | The heuristic model of persuasion[END_REF], is essentially characterized by the possibility for the recipient to judge the validity of the position advocated in the message, and thus to hold an attitude toward a particular object, without engaging in extensive cognitive processing of the actual arguments presented. In that case, the value assigned by the recipient to particular cues, which are typically dissociated from the informative contents of the message, is the determinant of attitude change. For example, the recipient may focus his attention primarily on the source of the message, in particular when the source is salient in the context of the communication, and react as a function of its credibility or attractiveness, which will then be the important factors of attitude change.
The factors which influence these two routes to persuasion
These models are also original in specifying the main variables involved in the activation of these two modes of processing of the persuasive message (i.e. central/systematic vs. peripheral/heuristic). First, the motivation to process: experimental studies have shown that when the recipient is in a condition of "low motivation", he tends to divert his attention from the argumentative contents and form his judgment about the advocated position much more by reference to the peripheral cues available in the context, whereas when he is in a condition of "high motivation", he is more likely to invest cognitive effort in the evaluation of the argumentative contents and thus to form his judgment as a function of their quality (Chaiken, 1980;[START_REF] Petty | Issue involvement can increase or decrease persuasion by enhancing message-relevant cognitive responses[END_REF], 1984c;[START_REF] Petty | Personal involvement as a determinant of argument-based persuasion[END_REF]. Some motivational factors can be viewed as transitory conditions insofar as the recipient is experimentally and temporarily put in a state of high or low motivation. Among these, the personal relevance of the message topic, which makes the recipient more or less "involved", and the consequences and repercussions of the decision and judgment, which make the recipient more or less "accountable", have so far been the most frequently manipulated factors, in particular when these models were applied to an advertising context [START_REF] Burnkrant | Effects of involvement and message content on information processing intensity[END_REF]Kokkinaki & Lunt, 1999;[START_REF] Laczniak | Toward a better understanding of the role of advertising message involvement in ad processing[END_REF][START_REF] Park | Types and levels of involvement and brand attitude formation[END_REF][START_REF] Petty | Effects of issue involvement on attitudes in an advertising context[END_REF], 1983, 1984a). Other motivational factors correspond to personality variables, psychological characteristics, and cognitive styles which are more stable, whatever the moment and the situation. Among the most studied of these have been the "need for cognition", reflecting the extent to which people engage in and enjoy effortful cognitive activities [START_REF] Cacioppo | The need for cognition[END_REF][START_REF] Cacioppo | Effects of need for cognition on message evaluation, argument recall, and persuasion[END_REF][START_REF] Cacioppo | Central and peripheral routes to persuasion: An individual difference perspective[END_REF], and more recently, the "need for closure", reflecting an individual's desire for clear, definite, or unambiguous knowledge and thus a motivation to draw a conclusion quickly and terminate cognitive information processing (Klein & Webster, 2000;Kruglanski, Webster & Klem, 1993;[START_REF] Webster | Individual differences in Need for Cognitive Closure[END_REF].
The ability to process is the second activating factor. A single or too rapid presentation of the message, overly complicated contents or the unavailability of the relevant knowledge needed to scrutinize the arguments, the necessity to react and express an attitude immediately, a state of stress, distraction or tiredness, are all conditions which can prevent the recipient from elaborating the informative contents (i.e. arguments) extensively and orient him towards peripheral cues, in particular when these are made salient in the context. Motivational factors and ability factors are not equivalent and refer to different variables, so that when the recipient is, for instance, motivated to process but does not have the capacity or the possibility to do so, he will tend to take the peripheral route.
The consequences of these two routes on attitude and attitude change
Studies have shown that the mode of processing is not without consequence for attitude strength; in other words, the strength of any attitude change depends on where it occurred along the cognitive elaboration continuum. An attitude which results from the "central route" or "systematic processing" (i.e. a high amount of thinking) is stronger than an attitude which results from the "peripheral route" or "heuristic processing" (i.e. a low amount of thinking), and the strength of the attitude affects its other properties such as persistence, resistance, and predictive value. Once formed or newly changed, an attitude tends to persist longer over time, to resist counter-persuasion better and to predict behaviour better when changed under high rather than low thinking conditions [START_REF] Chaiken | Structural consistency and attitude strenght[END_REF]Haugtvedt & Petty, 1992;Petty, Haugtvedt & Smith, 1995).
Given this last assumption, it is important for advertisers (and marketers) to know how extensive the target's processing was during exposure to the message and its evaluation, in other words to know whether the recipient took the "central route" or the "peripheral route". Indeed, whether the advertising is strictly commercial, with the objective of changing the recipient's attitude toward consumer goods, or strictly preventive, with the objective of making people sensitive to diverse social problems, the advertiser aims at producing strong attitudes which will be followed by effective behaviours and practices. Let us end this presentation of the dual models of persuasion by recalling that these models have been empirically validated and applied in commercial advertising contexts [START_REF] Areni | The role of argument quality in the Elaboration Likelihood Model[END_REF]Areni & Sparks, 2005;Brock & Shavitt, 1983;[START_REF] Chaiken | Heuristic processing can bias systematic processing: Effects of source credibility, argument ambiguity, and task importance on attitude judgment[END_REF]Haugtvedt, Petty & Cacioppo, 1992;Haugtvedt & Priester, 1997;Haugtvedt & Strathman, 1990;Wu & Shaffer, 1987) just as well as in preventive advertising contexts (Baker, Petty & Gleicher, 1991;Briñol & Petty, 2006;[START_REF] Petty | The Elaboration Likelihood Model of Persuasion[END_REF][START_REF] Petty | Persuasion theory and AIDS prevention[END_REF][START_REF] Rucker | Increasing the effectiveness of communications to consumers: Recommendations based on the Elaboration Likelihood and attitude certainty perspectives[END_REF].
A pragmatic approach to communication
At this point of our presentation, if we consider that persuasive communication, whatever its topic and the social problem for which it is designed, is above all a process of communication between a locutor (i.e. source of the persuasive message) and an interlocutor (i.e. recipient of the persuasive message), then we can envisage revisiting these models of persuasion making reference to some tenets of pragmatics. As we have pointed out in the introduction, studies of communication and language undertaken during the last three decades, have emphasized the necessity of putting away the "code model" of the communication in favour of the pragmatic approach [START_REF] Ghiglione | Où va la pragmatique ? De la pragmatique à la psychologie sociale. Grenoble: Presses Universitaires de Grenoble. to persuasion in the presence or absence of prior information[END_REF]Meunier & Perraya, 2004;Reboul & Moeschler, 1998a[START_REF] Reboul | Pragmatique du discours. De l'interprétation des énoncés à l'interprétation du discours[END_REF].
Code and inference
A rather large consensus exists today in the field of communication and language sciences, according to which inferential processes lie at the heart of the pragmatic conception of communication [START_REF] Bracops | Introduction à la pragmatique. Les théories fondatrices : actes de langage, pragmatique cognitive, pragmatique intégrée[END_REF][START_REF] Sperber | Relevance: Communication and cognition[END_REF][START_REF] Wilson | Relevance theory[END_REF]. These processes, which are superimposed on the code to make a complete interpretation of sentences possible, are precisely the main subject matter of pragmatics: "The study of the interpretation of sentences falls within what is today called pragmatics" (Sperber & Wilson, 1986, p.22). The pragmatic conception of communication is thus characterized by the notion that linguistic communication has to be viewed as semantically underdetermined, and must be completed by fundamentally cognitive deductive mechanisms in order to be fully interpreted, that is, with some degree of exactitude.
Generally speaking, we will point out that the pragmatic conception of communication emphasizes the following notions which are a part of Grice's central claims (1975,1989): the intentionality present in the produced sentences, that is the distinction between "sentence meaning" and "speaker's meaning", and the presence of an implicit dimension, that is the distinction between the "message's literal meaning" and the "message's non literal meaning".
Let us point out that these distinctions are central in what Krauss and Chiu (1997) have called the "Intentionalist Paradigm". An essential feature of most human communication is the expression and recognition of intentions. Indeed, according to Grice, the sentence, characterized by its syntactic structure and semantic value and, as such, the subject matter of linguistics, has to be considered as a cue to the speaker's intended meaning; and the determination of this meaning depends on the recipient's capacity to make a deduction (i.e. an inference) or, in Grice's terminology, to find some "implicatures" (Grice, 1975).
In developing this claim, Grice laid the foundations for an "inferential model" of communication, a model which constitutes an alternative to the classical "code model". The partisans of this alternative model, which has been developed most extensively in Sperber and Wilson's work (1986), claim that the words in a sentence and the meanings those words are understood to convey do not bear a fixed relationship - that the communicative use of language requires participants to go beyond the words in extracting (i.e. deducing) what the speaker intends to mean. In other words, the recipient of the message attends to the message's literal meanings in order to deduce the speaker's intended meaning, and this "speaker's meaning" cannot be mapped onto word strings in one-to-one fashion. An utterance is, of course, a linguistically coded piece of evidence, so that verbal comprehension involves an element of decoding. However, the linguistic meaning recovered by decoding is just one of the inputs to a non-demonstrative inference process which yields an interpretation of the "speaker's meaning". From then on, successful communication entails the exchange of "communicative intentions", and messages are simply the vehicles of these exchanges.
Situation and communication contract
In this alternative model, if the function of the language is to represent some thoughts originally located in the speaker's brain, it is assumed that this representation is only in relative adequacy with these original thoughts, generally more complex. The recipient will have to do reasoning and attempt some rational bets on the speaker's meaning. This reasoning can be viewed in a way as a calculation of the implied meaning, in which some extra linguistic information will be considered. These extra linguistic elements constitute what [START_REF] Sperber | Relevance: Communication and cognition[END_REF] have called the "cognitive environment" and what [START_REF] Clark | Context for comprehension[END_REF] have called the "common ground" and constitute a part of the context of the sentence.
Some implicatures are sufficiently generalised to be automatically generated or activated by the recipient in reaction to a particular sentence. The inferential interpretation of such a sentence thus consists in drawing a plausible "communicative intention" from linguistic forms and/or linguistic fragments perceived by the recipient as cues. But why and how do some implicatures have such a strong degree of determination? Why are some pragmatic interpretations realized with very high regularity in response to certain sentences? It seems reasonable to argue that it is because the interlocutors have a clear perception and a manifest representation of the situation in which the communication process takes place, and/or are sufficiently informed about some parameters of this situation, that they succeed in finding what the speaker intends to mean.
The situation in which the communication and thus the language takes place is defined by some particulars parameters, among which the identity and the status of the interlocutors ("who are the interlocutors?"), the central theme of the message ("what is the topic?"), the main objective of the communication ("what is the aim of the exchange?"). These particular pieces of information about the identity, the topic and the aim are useful to the recipient for deducing with some degree of confidence the underlying meaning and to reconstruct the communicative intention. They constitute a structured set of information which is likely to make some deductions more or less accessible and available. These parameters constitute what some authors call the "communication contract" [START_REF] Heath | Contrat de communication et co-construction du sens[END_REF]Charaudeau, 2004;[START_REF] Ghiglione | La psychologie sociale cognitive de la communication[END_REF]. For instance, Charaudeau (2004) argued that the "contract" is "that which speaks before one has spoken, that which is understood before one has read it, that which gives the text meaning through its conditions of communication …One part of the meaning is constructed before one enters the particularity of a text, and it is the communication contract that puts it in place, conditioning in part the actors of the exchange" (p. 112).
Toward a critique of the dual models of persuasion
If the notions developed by Grice (1975) and by [START_REF] Sperber | Relevance: Communication and cognition[END_REF] reflect correctly what is fundamentally a communication process between a source (speaker or writer) and a recipient (hearer or reader), then we may wonder whether the dual models of persuasion described above have sufficiently considered them and thus portrayed realistically the recipient's processing of the message. Let us consider now the following remarks.
Transparency and immanence of the language
The first remark regards the theoretical frames in which these studies on persuasion have been undertaken, and especially the conception of the language and the meaning which underlie them. As productive as they were, it seems to us that social psychological studies on persuasive communication have until now largely neglected the pragmatics views of language and meaning. Broadly speaking, we could say that language is endowed with no pragmatic properties, which means that it is viewed only from the point of view of its descriptive function. In other words, language is a simple inert vehicle of pieces of information (i.e. arguments) about the object of attitude; and its unique function is to convey semantic content (i.e. the message) with the greatest transparency, such content being sufficiently convincing and sound to produce an attitude change (i.e. persuasive effect). As a consequence, the meaning of the (persuasive) message is fully specified by its elements, the words which compose the sentences of the message being necessary and sufficient for the recipient to construct the meaning. So we argue here that these models of persuasion have been conceptualized and empirically validated in a narrow Encoding/Decoding paradigm.
In these studies on persuasive communication, no consideration is given to the idea that the message contains linguistic fragments which constitute information about the speaker's intended meaning, that is, cues on which the recipient can focus and from which he can activate, more or less spontaneously and with sufficient confidence, an inference about the "speaker's meaning", and, once this "communicative intention" is constructed, divert his attention and reflection from the strict informative content of the message. We can reasonably argue that, even in a persuasive situation as defined in Petty et al.'s paradigm, the recipient has some expectations, which means that he not only elaborates the issue-relevant information in the message but also attempts to foresee and anticipate what the author of the message intends to say, and to what conclusion he wants to lead him. And this particular deduction will be all the easier to make when certain linguistic fragments are present in the text.

The "strength" of the argument

As we have seen, the quality of the argumentation is a determining factor of attitude change in the models of persuasion (Petty & Cacioppo, 1984c;[START_REF] Petty | Personal involvement as a determinant of argument-based persuasion[END_REF]. Indeed, provided that the recipient takes the "central route", arguments of good quality will increase persuasion whereas poor arguments produce the opposite effect. But it appears clearly that the "argument" here is mainly defined as a linguistic and semantic item, carrying a literal meaning and telling (in the sense of describing and portraying) something about the "real", and it is to this particular dimension of language that the recipient is supposed to react when he processes (i.e. elaborates) the message. Some lexical components exist in language which, by virtue of their literal meaning when considered purely linguistically, give the statement a superiority as an argument. In other words, added to a particular statement, they make it a better candidate for argumentation, so that they somehow contribute to increasing the "force" of the argument (as intended in Petty et al.'s work). To clarify our assumption, let us consider the following statements extracted from French dailies. They will also serve as a preamble to the empirical study announced in our introduction, which concerns preventive advertising communication.
These particular statements are: "In New York, three hundred persons are already dead, victims of AIDS (…) AIDS have already killed three hundred patients"; "In December only, ninety one deaths have been registered"; "According to the last World Health Organisation's data, seven million people have already been infected by HIV"; "Since it emerged, the virus has infected more than sixty million people worldwide, killing more than a third"; "Already twenty million deaths (…) The infection is spreading at the rate of more than fourteen thousand new cases per day". Such statements, precisely because of the particular adverbs they incorporate, quantify the number of people deceased, ill and infected in such a way that this number is seen as having exceeded an acceptable and tolerable threshold. By virtue of their literal meaning, they put the indicated quantity into relief and make them more salient and thus are likely to orientate the recipient's judgment as regards the epidemiological situation portrayed in the message. For instance, the linguist [START_REF] Charaudeau | Grammaire du sens et de l'expression[END_REF] argued that the adverb "already" indicates that "the moment the event occurs is deemed premature compared to its expected occurrence" and signals that a "certain reference point, considered as a maximum not to be exceeded, has been overshot" (p. 265). So considering the notion of "argument strength" as defined in Petty et al.'s studies, we can consider here that these statements are "stronger" arguments than are the same statements devoid of these particular adverbs.
However, if we refer to the "theory of argumentation within language" [START_REF] Anscombre | L'argumentation dans la langue[END_REF], 1983) and to psycholinguistic studies on the argumentative function of language markers [START_REF] Bassano | Opérateurs et connecteurs argumentatifs: Une approche psycholinguistique[END_REF]Bassano & Champaud, 1987;Champaud & Bassano, 1987), we have to consider that the informative and descriptive properties are not the main properties of these morphemes, or at least do not exhaust their role in the sentence. These linguistic fragments also have an argumentative function, which means that they endow the statement which incorporates them with an "argumentative force" or "argumentative orientation", defined by [START_REF] Anscombre | L'argumentation dans la langue[END_REF] as "the type of conclusions suggested to the recipient, the conclusions that the statement offers as one of the discursive aims" (p. 149). Viewing these particular linguistic items as more "argumentative" than "informative" implies that, in addition to its informative content, the statement comprises several lexical items that endow it with an argumentative orientation, leading the recipient in this or that direction and making any other conclusion discursively impossible. Thus, the statements discussed above are descriptive, which means that they portray the epidemiological situation, but they also indicate something about the principal aim of the message and the speaker's "communicative intention", precisely via the adverbs located within the message. This last proposal amounts to saying that what was considered a stronger argument in the encoding/decoding paradigm of persuasive communication is likely to be viewed primarily as a cue to the speaker's intended meaning once the central claims of pragmatics are considered. How will these particular statements be processed and elaborated in reception? What are their effects in reception on attitude change? The following empirical study will attempt to answer these questions.
Method
Objective and hypothesis
The experimental study we propose now is at the juncture of the two themes presented above: persuasive communication and pragmatics. Its objective is to give some support to the critique we have proposed regarding the lack of a pragmatic approach of the language and communication in the studies on persuasion. Its general hypothesis is that the presence of linguistic markers referring to the argumentative orientation in a persuasive message will generate inferences centred on the communicative intention and that these will be matched with a lesser elaboration of its informative contents. In its empirical aspects strictly speaking, this study also examines an advertising context since the subjects are asked to express their attitude toward a new feminine condom called "Preservatex", presented in the conclusion of the message as a newly commercialised product, and their intention to purchase and use it.
Before presenting the methodology, let us present its specific hypothesis.
Hypothesis 1. The intention to purchase the condom referred to in the message will be greater when the argumentative orientation of the message is marked.
Hypothesis 2. The intention to use the condom referred to in the message will be greater when the argumentative orientation of the message is marked.
Hypothesis 3. The general attitude towards the marketing of the condom will be more favourable when the argumentative orientation of the message is marked.
Hypothesis 4. The intentional aspects of the message (i.e. the "communicative intention") will be considered more by the recipients when the argumentative orientation of the message is marked.
Hypothesis 5. The recipients' cognitive elaboration of the message's informative contents will be lower when the argumentative orientation of the message is marked.
Independent variables and experimental plan
In a text presented as an epidemiological information message, we vary 1) the marking of the argumentative orientation ("high marking" vs. "low marking"), by endowing some statements of the message with argumentative markers, and 2) the known character of the disease ("known disease" vs. "unknown disease"), by presenting a message which informs participants about a disease whose existence either is or is not known to them (see Note 1). More precisely, this text consists of the following three parts. First, an introduction which is more like a title: "Sexually Transmitted Infections: Let us take stock of the situation".
Second, the main contents of the message inform about the current epidemiological situation of a particular sexually transmitted disease. This part consists of ten statements and within it are introduced the two independent variables indicated above (high vs. low marking and known vs. unknown disease). Third, there is a conclusion which invites the participants to consider a new feminine condom, the marketing of which is imminent "Preservatex, to consume without moderation". Some of the experimental manipulations introduced in the second part are presented below.
(The example statements for the "low marking" and "high marking" versions are not reproduced here.)
Dependent variables
The intention to purchase the condom: participants are asked to indicate this intention on a scale from 1 "not at all" to 7 "absolutely".
The intention to use the condom during the next sexual intercourse: participants are asked to indicate this intention on a scale from 1 "not at all" to 7 "absolutely".
The attitude towards the marketing of the condom: participants are asked to express judgements regarding this commercialization on three seven-point semantic differential scales anchored at -3 and +3 ("unnecessary - necessary"; "inessential - essential"; "irrelevant - relevant").
The perceived communicative intention: participants are asked to indicate on a scale from 1 "not at all" to 7 "absolutely" to what extent they consider that by elaborating this message, the communicator intended to "alert the women about the risks they incur";
"emphasize the seriousness of the situation for the women"; "to give the women a sense of responsibility" 2 .
The cognitive elaboration of the message contents: participants are given six multiple-choice items and asked to indicate, for each of them, the option which tallies with the information presented in the text. Each participant receives a global score from 0 to 6.
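To make the scoring of these dependent variables concrete, here is a minimal illustrative sketch; the variable names and response values are hypothetical and are not taken from the study materials.

```python
# Illustrative scoring of the dependent variables described above.
# All variable names and response values are hypothetical.

purchase_intention = 5          # 1 ("not at all") to 7 ("absolutely")
use_intention = 4               # 1 to 7

# Three semantic differential scales anchored at -3 and +3
# (unnecessary-necessary, inessential-essential, irrelevant-relevant).
marketing_judgements = [2, 1, 2]
general_attitude = sum(marketing_judgements) / len(marketing_judgements)  # composite score

# Perceived communicative intention: three agreement items rated 1 to 7.
intention_items = [6, 5, 6]

# Cognitive elaboration: number of correct answers to six multiple-choice items.
chosen_options = ["a", "c", "b", "d", "a", "b"]   # participant's choices
keyed_options  = ["a", "c", "d", "d", "a", "c"]   # correct options
elaboration_score = sum(c == k for c, k in zip(chosen_options, keyed_options))  # 0 to 6

print(general_attitude, elaboration_score)
```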
Population
Participants were 87 female students aged 20 to 22, randomly divided among the cells of the 2 (argumentative orientation: low marking vs. high marking) x 2 (disease: known vs. unknown) experimental design defined above. They declared having had three or more sexual intercourses during the last six months; 43.5% of them indicated that they "always" used a condom during these encounters, 34.8% that they used one "occasionally", and 21.7% that they "never" used one. No participant had taken a screening test during the last year and no participant had ever used a feminine condom.
Results
Intention to purchase and to use the condom (hypothesis 1 & 2)
The results indicate that the marking of the argumentative orientation is not without effect on the intentions to purchase and to use the condom referred to in the conclusion of the message (see Table 1). An ANOVA shows that the subjects' purchase intention is significantly higher in the "high marking" condition (M = 4.67 vs. 3.93; F(1,85) = 9.41; p <.003), whatever their knowledge of the disease (Gr.3 vs. Gr.1; F(1,83) = 5.05; p <.03 & Gr.4 vs. Gr.2; F(1,83) = 4.37; p <.04). As regards the intention to use it, it is also higher in the "high marking" condition (M = 4.0 vs. 3.47; F(1,85) = 5.32; p <.03), but a more detailed analysis shows that the difference does not reach significance in the "unknown disease" condition (Gr.3 vs. Gr.1; F(1,83) = 3.81; p <.06 & Gr.4 vs. Gr.2; F(1,83) = 1.73). The participants' intention to purchase the condom does not differ significantly whether they know the disease referred to in the message or not (M = 4.51 vs. 4.10; F(1,85) = 2.71; p = .11), and the same pattern of results is observed as regards the intention to use it (M = 3.91 vs. 3.55; F(1,85) = 2.42; p = .12).
Table 1 here
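As an illustration of the shape of this analysis (not the authors' actual code or data), a 2 (marking) x 2 (disease) between-subjects ANOVA on the purchase-intention ratings could be run as follows; the data frame below is entirely hypothetical and only the design mirrors the study.

```python
# Minimal sketch of a 2 (marking) x 2 (disease) between-subjects ANOVA on
# purchase intention. The data are hypothetical; only the design follows the study.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.DataFrame({
    "marking": ["low", "low", "high", "high"] * 6,
    "disease": ["known", "unknown"] * 12,
    "purchase": [4, 3, 5, 5, 3, 4, 6, 4, 4, 3, 5, 6,
                 3, 4, 5, 5, 4, 3, 6, 5, 4, 4, 5, 6],  # ratings on the 1-7 scale
})

model = smf.ols("purchase ~ C(marking) * C(disease)", data=df).fit()
print(anova_lm(model, typ=2))   # F tests for marking, disease, and their interaction
```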
Attitude towards the marketing of the condom (hypothesis 3)
The results reflect that the judgments regarding the marketing of the condom depend on the marking of the argumentative orientation (see Table 2). The ANOVA reveals that the participants in "high marking" condition judge this marketing more "necessary" (M = 1.35 vs. 0.66; F(1,85) = 4.79; p <.04), but a more detailed analysis shows that the difference is not significant in the "unknown disease" condition and only marginal in the "known disease" condition (Gr.4 vs. Gr.2; F(1,83) = 1.76 & Gr.3 vs. Gr.1; F(1,83) = 3.14; p = .07). In the same "high marking" condition, the marketing is also considered as more "relevant" (M = 1.43 vs. 0.71; F(1,85) = 5.88; p <.02), a more detailed analysis showing that the difference is only marginal whatever the knowledge of the disease (Gr.3 vs. Gr.1; F(1,83) = 2.76; p = .09 & Gr.4 vs. Gr.2; F(1,83) = 3,28; p = .07). As regards the third judgment (inessential vs.
essential), the same pattern of results emerges (M = 1.61 vs. 0.90; F(1,85) = 6.78; p <.02 / Gr.3 vs. Gr.1; F(1,83) = 3.57; p = .06 & Gr.4 vs. Gr.2; F(1,83) = 3.22; p = .07). From these judgments results a general attitude that is more favourable in the "high marking" condition (M = 1.46 vs. 0.76; F(1,85) = 6.06; p <.02), a more detailed analysis showing however that the difference is only marginal whatever the "knowledge of the disease" (Gr.3 vs. Gr.1; F(1,83) = 3.24; p = .07 & Gr.4 vs. Gr.2; F(1,83) = 2.83; p = .09). The participants' judgments about the commercialization of the condom do not differ significantly whether they know the disease or not (F<1), and from this pattern results a general attitude which is essentially the same (F<1).
The perceived communicative intention (hypothesis 4)
The results show that participants in the "high marking" condition express a higher level of approbation of the propositions regarding the speaker's communicative intention (see Table 3). The ANOVA reveals that they are significantly more in agreement with the idea that the author of the message intended to "alert the women about the risks they incur" (M = 5.59 vs. 4.05; F(1,85) = 87.09; p <.0001), whatever their knowledge of the disease (Gr.3 vs. Gr.1; F(1,83) = 40.23; p <.0001 & Gr.4 vs. Gr.2; F(1,83) = 46.95; p <.0001); "emphasize the seriousness of the current situation for the women" (M = 5.61 vs. 3.95; F(1,85) = 84.46; p <.0001), whatever their knowledge of the disease (Gr.3 vs. Gr.1; F(1,83) = 38.35; p <.0001 & Gr.4 vs. Gr.2; F(1,83) = 46.24; p <.0001); and "give the women a sense of responsibility" (M = 5.89 vs. 3.88; F(1,85) = 112.74; p <.00001), again whatever the knowledge of the disease (Gr.3 vs. Gr.1; F(1,83) = 54.65; p <.0001 & Gr.4 vs. Gr.2; F(1,83) = 58.10; p <.0001). The participants approve each of these propositions to the same extent whether they know the disease or not (F<1).
Table 3 here
The processing of the message contents (hypothesis 5)
Participants informed by a message characterized by high marking of the argumentative orientation show a lower memorization score for the message contents (see Table 4). The ANOVA reveals that the difference is significant (M = 2.52 vs. 3.39; F(1,85) = 20.31; p <.0001), whatever the knowledge of the disease (Gr.3 vs. Gr.1; F(1,83) = 9.94; p <.01 & Gr.4 vs. Gr.2; F(1,83) = 10.37; p <.01). On the other hand, this score does not differ whether the participants know the disease or not (F<1).
Table 4 here
Within each experimental condition and for each of the three propositions regarding the author's communicative intention, we calculated a correlation coefficient (i.e. Pearson's r) between the level of approbation for the proposition and the score of memorization of the message contents (see Table 5). The statistical analysis reveals a strong relationship between these two measures only in the "high marking" condition, which means that in this condition of argumentative orientation, the higher the approbation of the proposition, the weaker the memorization score.
Table 5 here
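The per-group correlations reported in Table 5 can be sketched as follows; the arrays are hypothetical and scipy's pearsonr stands in for whatever statistical software was actually used, and the group composition shown is purely illustrative.

```python
# Sketch of the per-group analysis behind Table 5: within each experimental group,
# correlate agreement with a communicative-intention item (1-7) with the
# memorization score (0-6). All values below are hypothetical.
from scipy.stats import pearsonr

groups = {
    "Group 1": {
        "intention_item": [4, 5, 3, 4, 5, 4, 3, 5],
        "memorization":   [3, 4, 3, 4, 3, 4, 3, 3],
    },
    "Group 3": {
        "intention_item": [6, 7, 5, 6, 7, 6, 5, 7],
        "memorization":   [3, 1, 4, 3, 2, 3, 4, 1],
    },
}

for name, data in groups.items():
    r, p = pearsonr(data["intention_item"], data["memorization"])
    print(f"{name}: r = {r:.2f}, p = {p:.3f}")
```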
Discussion
The main objective of this study was to support our claim that studies on persuasive communication have been undertaken until now within an Encoding/Decoding paradigm of the communication and that a pragmatic approach of the persuasive communication is not only possible but maybe also necessary, given that persuasion within the dual model is a matter of "communication". The main hypothesis was that a message characterized by the presence of some linguistic markers referring to the argumentative orientation would be less elaborated inasmuch as these argumentative markers would be perceived and processed in reception as some cues of what the speaker intended to mean through his message, which means that the recipient of message was supposed to generate some inferences centred on this communication intention and in so doing, to elaborate to a lesser extent the strict informative contents of the message. The empirical data support this general hypothesis. Indeed, the participants who were exposed to the message within which these linguistic markers were disseminated are significantly more in agreement with the propositions relative to the author's communicative intention. Moreover their memorization score of the message contents was lesser and finally this score was strongly correlated with their level of approbation for the items of the author's intended meaning so that the more they were in agreement with these particular items, the less they were in a position to restore correctly the informative contents of the message.
We suppose that in this "high marking" condition, while they were reading the message, recipients focused their attention on the argumentative markers and made use of what the cognitivist [START_REF] Dennet | The intentional stance[END_REF][START_REF] Dennet | True believers: The intentional strategy and why it works[END_REF]) called the "intentional strategy", reconstructing what the linguist Anscombre (1995) called the "deep meaning" of the sentence. Let us recall here that studies on the reading and interpretation of sentences have shown that comprehension results from two kinds of strategies which work together to ensure an accurate and rapid processing of information: "bottom-up" processing and "top-down" processing. Generally speaking, when the recipient activates "bottom-up" processing, he is supposed to extract the information from the message and to deal with it in a relatively complete and systematic fashion, with little recourse to higher-level knowledge. On the other hand, with "top-down" processing, the uptake of information is guided by the individual's prior knowledge, expectations and hypotheses, and the recipient (i.e. reader) is supposed to take in only just enough visual stimuli to test and confirm them [START_REF] Denhière | Comprendre un texte: Construire quoi? Avec quoi? Comment?[END_REF]Goodman, 1967;Smith, 1971). Referring to this distinction, we can hypothesize that participants who were asked to process the message with argumentative markers were more likely to activate "top-down" processing, and this very early in their understanding of the message. Let us recall too that theories that stress "top-down" processing hold that in most situations, the reader's activity consists in reconstructing the meaning of the text with the least possible time and effort, selectively using the fewest and most productive cues possible [START_REF] Adams | Les modèles de lecture[END_REF]Goodman & Gollasch, 1980;[START_REF] Smith | The role of prediction in reading[END_REF]. We can thus hypothesize that the linguistic markers were relevant for the recipient in his attempt to reconstruct what the author intended to mean. Moreover, we can hypothesize that the activation of these deductions (i.e. inferences) about the author's intended meaning, and their confirmation by the other linguistic markers which appear subsequently, had the consequence of stopping, or at least reducing, the cognitive effort the recipient was willing to dedicate to the processing of the informative contents conveyed by the message. In other words, once he has collected the cues which enable him to make an inference about the author's intended meaning, and once he is sufficiently confident in the relevance of this inference about the communicative intention, the recipient moves towards a lesser elaboration of the informative contents (i.e. a processing which is less "central/systematic").
This study has also shown that by stressing the argumentative orientation of the message, one can increase the subjects' intention to purchase a new condom referred to in the conclusion of the persuasive message, act favourably on the intention to use it during the next sexual intercourse, and finally produce a more favourable attitude toward its commercialization. To some extent, our study serves social policy goals in terms of prevention and thus concerns social and public service advertising more than commercial advertising [START_REF] Cossette | La publicité, déchet culturel[END_REF]Kotler, Roberto & Lee, 2002;Kotler & Zaltman, 1971;[START_REF] Sublet | Use of health communication and social marketing principles in planning occupational safety and health interventions[END_REF]. However, assigning such social importance to this experiment requires questioning the real efficacy of this linguistic strategy, in this case its capacity to transform such intentions and attitudes into effective behaviours and practices. Let us recall that according to the Elaboration Likelihood Model of persuasion, the greater the amount of thinking about the relevant information contained in the message (i.e. high cognitive elaboration), the more stable, the more resistant, and (above all) the more predictive of behaviour the attitude which results from this cognitive elaboration (Haugtvedt & Petty, 1992;Krosnick & Petty, 1995;Petty et al., 1995). Let us recall too that in our study, the stronger intentions regarding the purchase and the use of the condom on the one hand, and the more favourable attitude toward the marketing of the condom on the other hand, were registered in the condition where the elaboration of the informative contents of the message was deliberately reduced. Consequently, we can reasonably hypothesize that the discursive strategy tested here has only short-term effects, and moreover only on attitudes and not on behaviours.
Conclusion
We hope that this study supports our claim that a pragmatic approach of communication and language in studies on persuasive communication is appropriate. We are in agreement with [START_REF] Sperber | Relevance: Communication and cognition[END_REF] when they claimed that "to describe communication in terms of intentions and inferences seems appropriate from a psychological point of view" (p. 42).
This "Intentionalist paradigm" seems to us psychologically and cognitively more realistic than the one which has dominated until now studies on persuasion, that is the Encoding/Decoding paradigm. To refer to the pragmatic approach of communication is all the more relevant since the "code model" has underlain until now most studies on persuasion applied to the prevention of risky behaviour [START_REF] Baker | Persuasion theory and drug abuse prevention[END_REF]Petty, Baker & Gleicher, 1991;[START_REF] Petty | Persuasion theory and AIDS prevention[END_REF][START_REF] Verplanken | Persuasive communication of risk information: A test of cue versus message processing effects in a field experiment[END_REF]. Broadly speaking, public service advertising is a leading source of information about important health issues and therefore is targeted by those who aim to influence perceptions and behaviours. The main challenge for the designers of the communication is how best to compose messages in order to increase their efficacy [START_REF] Bertrand | Systematic review of the effectiveness of mass communication programs to change HIV/AIDS-related behaviors in developing countries[END_REF][START_REF] Myrhe | HIV/AIDS communication campaigns: Progress and prospects[END_REF]Palmgreen, Noar, & Zimmerman, 2007). And it is all the more a current challenge since the latest surveys in France have showed an "AIDS normalization process" resulting in a relaxation of preventive behaviors and a degradation of the image of the condom [START_REF] Beltzer | Les connaissances, attitudes, croyances et comportements face au VIH/SIDA en France[END_REF]. Given this context, this study may be somewhat useful for the designers of preventive ads who are concerned about discursive strategies which facilitate the cognitive elaboration of the message.
Table 1
Mean (M) and standard deviation (SD) as a function of the marking of argumentative orientation and the knowledge of the disease. (Cell values are not reproduced here.)

References (partial)
Goodman, K. S. (1967). Reading: A psycholinguistic guessing game. Journal of the Reading Specialist, 6, 126-135.
Goodman, K. S., & Gollasch, F. V. (1980). Word omission: Deliberate and non-deliberate. Reading Research Quarterly, 16, 6-31.
Grice, H. P. (1975). Logic and conversation. In P. Cole & J. Morgan (Eds.), Syntax and semantics 3: Speech acts (pp. 41-58). New York: Academic Press.
Grice, H. P. (1989). Studies in the Way of Words. Cambridge, MA: Harvard University Press.
Haugtvedt, C. P., & Petty, R. E. (1992). Personality and persuasion: Need for cognition moderates the persistence and resistance of attitude changes. Journal of Personality and Social Psychology, 63(2), 308-319.
Haugtvedt, C. P., Petty, R. E., & Cacioppo, J. T. (1992). Need for cognition and advertising: Understanding the role of personality variables in consumer behaviour. Journal of Consumer Psychology, 1(3), 239-260.
Haugtvedt, C. P., & Priester, J. R. (1997). Conceptual and methodological issues in advertising effectiveness: An attitude strength perspective. In W. D. Wells (Ed.), Measuring advertising effectiveness. London: Lawrence Erlbaum Associates.
Haugtvedt, C. P., & Strathman, A. J. (1990). Situational product relevance and attitude persistence. Advances in Consumer Research, 17, 766-769.
Smith, F. (1971). Understanding reading: A psycholinguistic analysis of reading and learning to read. New York: Holt, Rinehart & Winston.
Table 3
Mean (M) and standard deviation (SD) as a function of the marking of argumentative orientation and the knowledge of the disease. Columns: Low marking (known disease, unknown disease) and High marking (known disease, unknown disease); rows are the communicative-intention items, beginning with "To alert women about the risks they incur" (a). (Cell values are not reproduced here.)
Table 5
Correlations (Pearson's r) between the approbation of each proposition about the author's communicative intention and the score of memorization of the message contents

|                  | To alert women about the risks ... | To emphasize the seriousness ... | To give women a sense of responsibility |
|------------------|------------------------------------|----------------------------------|-----------------------------------------|
| Group 1 (n = 21) | .12 (n.s.)                         | -.19 (n.s.)                      | -.09 (n.s.)                             |
| Group 2 (n = 20) | -.07 (n.s.)                        | .03 (n.s.)                       | .05 (n.s.)                              |
| Group 3 (n = 23) | -.67 (p <.001)                     | -.44 (p <.04)                    | -.46 (p <.04)                           |
| Group 4 (n = 23) | -.56 (p <.01)                      | -.61 (p <.01)                    | -.44 (p <.04)                           |
The "unknown" disease was in fact a fictitious one and we have checked that all the subjects in this condition declared they did not know it.
Table 1 notes. Columns: Low marking vs. High marking, Known vs. Unknown disease. (a) The higher the mean, the stronger the intention to purchase the condom. (b) The higher the mean, the stronger the intention to use the condom.
Table 2 notes. Mean (M) as a function of the marking of argumentative orientation and the knowledge of the disease. (a) The higher the mean, the more "necessary", the more "relevant", and the more "essential" the marketing of the condom is judged. (b) The higher the mean, the more favourable the attitude towards the marketing of the condom (after addition of the scores observed for each scale).
halid: 04091668 | lang: en | domain: [info, math] | timestamp: 2024/03/04 16:41:20 | year: 2023 | url: https://hal.science/hal-04091668v2/file/GBDe.pdf

Daniel Goossens
email: [email protected]
Graphs of Bipartitions with Dimensions: a tool for interacting with virtual Boolean lattices. Working document
This document describes a Boolean knowledge representation and reasoning formalism called Graphs of Bipartitions. A Graph of Bipartitions is a knowledge storage framework that interacts with a virtual Boolean hypercube. It groups equivalent expressions, removes contradictory expressions, and memorizes logical implications using polynomially bounded reasoning. It thus maintains a uniqueness principle on continuously growing substructures. This formalism and its management operations have been applied to a demonstration of natural language interaction in a micro-universe.
This document formalizes this previously experimented proposition for managing Boolean knowledge. One important mechanism is the deduction of modulo 2 linear equations, in connection with the calculation of dimensions of the virtual hypercube. These equations gradually extend a linear system within the Boolean knowledge. Gaussian elimination combined with propagation of Boolean values constitutes a polynomially bounded reasoning, protected from any combinatorial explosion. It is complete locally to intertwined substructures, which are constantly expanding and potentially exponential in number, and on which it reduces to set operations on finite sets of dimensions.
Introduction
This working document deals with the problem of memorizing Boolean knowledge in a database while maintaining a principle of uniqueness: two different but equivalent Boolean expressions must be stored in the same place. We can also wish for a principle of complete classification: using Boolean implication as a primitive, two expressions x and y such that x ⇒ y must be stored so that there is a path of implications from x to y.
If these principles are maintained, the places where Boolean expressions are stored are vertices of a virtual hypercube. Two equivalent expressions are represented by the same vertex of the hypercube. The hypercube is partially represented by a directed graph whose arcs are implications. As the Boolean implication is insufficient to represent Boolean constraints between the vertices of this partial graph, it is completed by a system of linear equations modulo 2.
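To make the uniqueness and classification principles concrete, here is a small illustrative sketch (it is not the Graph of Bipartitions formalism itself, which precisely avoids this kind of exhaustive enumeration): over a fixed set of variables, each Boolean expression can be reduced to its truth table, so equivalent expressions collide on the same virtual vertex, and implication becomes an inclusion test between sets of satisfying assignments.

```python
# Illustrative only: truth tables as canonical "vertices" over a fixed variable set.
# Equivalent expressions get the same bitmask; x implies y iff every assignment
# satisfying x also satisfies y.
from itertools import product

VARS = ["a", "b", "c"]
ALL = (1 << (2 ** len(VARS))) - 1          # bitmask covering the 8 assignments

def truth_table(expr):
    """Bitmask of the assignments of VARS that satisfy a Python Boolean expression."""
    mask = 0
    for i, values in enumerate(product([False, True], repeat=len(VARS))):
        if eval(expr, {}, dict(zip(VARS, values))):
            mask |= 1 << i
    return mask

x = truth_table("a and (b or c)")
y = truth_table("(a and b) or (a and c)")  # distributed form of the same function
z = truth_table("a")

print(x == y)                              # True: equivalent expressions, same vertex
print(((x & ~z) & ALL) == 0)               # True: "a and (b or c)" implies "a"
```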
To maintain the principles, reasoning may be based on a complete Boolean satisfiability test. For example, when a Boolean expression is created, it must be compared for equivalence or implication with each stored expression. However, if the test is triggered too often, the storage time for new knowledge becomes prohibitive with the increase in the size of the database. We must either limit and compartmentalize the triggering of the complete test while maintaining the completeness of the reasoning, or do without it and accept that the principles of uniqueness and classification be complete only locally to substructures.
The incremental construction of the hypercube, by polynomially bounded operations, formalizes an intuitive notion of navigation in a virtual hypercube, and an intuitive notion of inter-classification of knowledge.
The context
The problem of maintaining a uniqueness principle on a knowledge base is ubiquitous in logic. Recognizing the same concept under different descriptions is a crucial understanding skill. Also, gathering the equivalent expressions in a single place of the virtual hypercube and maintaining the Hasse diagram of the implications between vertices, as one memorizes knowledge, simplifies all subsequent reasoning, unlike a solution which would content itself with accumulating the expressions without inter-classifying them.
In the Boolean world, the uniqueness principle concerns things as diverse and fundamental as distributive lattices, the hypercube, partial orders, propositional knowledge bases or the word problem on algebraic structures. It does not have a dedicated bibliography. On the other hand, we can cite the state of the art in generalist complete Boolean reasoning, namely SAT-solving.
Sat-solving [START_REF]Handbook of Satisfiability[END_REF] is currently the most efficient complete satisfiability test for Boolean reasoning. In the field of decision problem solving, in artificial intelligence, this general-purpose technique currently dominates other approaches, in particular local search strategies, classes of expressions of polynomial complexity, BDDs [START_REF] Bryant | Graph-based algorithms for boolean function manipulation[END_REF] and their successors, or order encodings by bit vectors (see for example [START_REF] Aït-Kaci | Efficient implementation of lattice operations[END_REF]). To manage the inescapable difficulty of reasoning at order 0, higher-order logical approaches often ultimately rely on a complete Boolean satisfiability test performed by a SAT-solver, for example [START_REF] Ramachandran | Compact propositional encoding of first-order theories[END_REF], [START_REF] Sebastiani | Handbook of Satisfiability, chapter 32. SAT Techniques for Modal and Description Logics[END_REF] or [START_REF] Giunchiglia | Sat-based answer set programming[END_REF]. However, this complete test is exponential in the worst case. In practice, it is too costly for the reasoning needed by the management of Boolean knowledge considered here. It should be used sparingly.
The inter-classification of Boolean expressions in a propositional knowledge base is analogous to the inter-classification of knowledge in [START_REF] Baader | The Description Logic Handbook: Theory, Implementation and Applications[END_REF] description logics, by subsumption links.
Main objectives
Knowledge storage and maintenance operations must be protected from any combinatorial explosion or too long calculation times. The main objective is to separate these operations, based on incomplete reasoning, from those oriented towards problem solving and based on a complete satisfiability test. It is then a question of determining conditions under which the Boolean reasoning is complete locally to substructures.
To obtain storing operations that verify the uniqueness principle via a complete Boolean reasoning, another objective is to reduce and compartmentalize the use of the complete satisfiability test.
A longer-term goal is the automation of understanding abilities based on symbolic reasoning, in micro-worlds that lend themselves to Boolean modeling.
Actual state
This document formalizes what was experimented in [START_REF] Goossens | Modélisation d'une forme simple de compréhension[END_REF], an incomplete and polynomially bounded Boolean reasoning, augmented here with a reasoning on the dimensions of a virtual hypercube. This reasoning aims to locally maintain the principles of uniqueness and complete classification on a Boolean knowledge base. This maintenance stores knowledge in Graphs of Bipartitions. A Graph of Bipartitions is a partial representation of a virtual hypercube. This maintenance of a partial representation of a virtual hypercube acts as a cache that memorizes the simplest and most frequent deductions.
A Graph of Bipartitions combines an implication graph and a system of modulo 2 linear equations on Boolean variables. The linear system is progressively enriched by deduction of new equations. These equations are deduced by reasoning on the dimensions of the virtual hypercube. This continually increases the deductive power of Gaussian elimination, an efficient polynomial reasoning and therefore immune to any combinatorial explosion. The particular role of the linear component in Boolean knowledge, since [START_REF] Stone | The representation of boolean algebras[END_REF], is evident in T.J. Schaefer's dichotomy theorem [START_REF] Thomas | The complexity of satisfiability problems[END_REF] and in its treatment by Gaussian elimination in SAT-solving [START_REF] Baumgartner | The taming of the (x)or[END_REF] .
The tools and reasoning described here are programmed in C++ and Objective-C under MAC OS. All the diagrams in this document were produced with the interactive editor. The Boolean reasoning presented here is a construction site that allows to study all kinds of combinations capable of providing a usable solution to manage Boolean knowledge.
The main result is the local completeness proof of a polynomially bounded Boolean reasoning, the propagation of valuations expressed as systems of modulo 2 linear equations. The Boolean reasoning that ensures the storage of knowledge in a Graph of Bipartitions gradually develops substructures, where the Boolean operators are reduced to set operations on finite sets of dimensions of the virtual hypercube. These substructures are intertwined and in potentially exponential number, therefore not enumerable. However, locally at each substructure, the Boolean reasoning is complete and polynomially bounded.
A demonstration of the use of Graphs of Bipartitions to model a simple understanding in the microuniverse of the intuitive geometry of elementary school level quadrilaterals, is presented in [START_REF] Goossens | Modélisation d'une forme simple de compréhension[END_REF]. This document formalizes the solutions experimented in this demonstration and adds the machinery related to the dimensions of the virtual hypercube.
Document outline
Paragraph 2 recalls some prerequisites and establishes a link between the dimensions of the hypercube and Boolean reasoning, which is used in paragraph 5. Paragraph 4 presents the Graphs of Bipartitions. Paragraph 5 presents the polynomial reasoning on Graphs of Bipartitions: it analyzes the spontaneous development of substructures where Boolean reasoning is locally complete, and it describes the computation of dimensions from linear equations and of linear equations from dimensions. Paragraph 6 presents propagation-like polynomial reasonings and demonstrates their completeness, locally to these substructures. Paragraph 7 cites some achievements and perspectives, and paragraph 9 concludes.
Preliminaries
Graphs of bipartitions combine a graph of implications and a system of modulo 2 linear equations over Boolean variables. They induce Boolean hypercubes. Therefore, some terminology from graph theory, modulo 2 linear equations and Boolean hypercubes is needed.
Graphs
A graph is a couple S, A where S is a set of vertices and A is a set of edges. Each edge of A is a pair of vertices of S. A chain from a vertex x to a vertex y is a finite sequence of consecutive edges connecting x to y. A directed graph is a graph where the edges, renamed arcs, are oriented. An arc from a vertex x to a vertex y is written x → y. A path from vertex s 1 to vertex s n is a finite sequence of arcs {s 1 → s 2 , s 2 → s 3 , . . . , s n-1 → s n }.
The Cartesian product [START_REF] Imrich | Topics in Graph Theory: Graphs and Their Cartesian Product[END_REF] of two graphs G = S g , A g and H = S h , A h is the graph G □ H = S, A , where:
S = S g × S h A = {{g 1 × h 1 , g 2 × h 2 } | g 1 ∈ S g ∧ g 2 ∈ S g ∧ h 1 ∈ S h ∧ h 2 ∈ S h ∧ (((g 1 = g 2 ) ∧ ({h 1 , h 2 } ∈ A h )) ∨ ((h 1 = h 2 ) ∧ ({g 1 , g 2 } ∈ A g )))}
Therefore, the vertices of G □ H are the elements of the Cartesian product of the vertices of G and H and two vertices of G □ H have an edge if and only if the vertices of which they are the products have an edge in G or H. The graph G □ H therefore contains as many copies of G as there are vertices in H and vice versa.
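To make the construction concrete, here is a minimal C++ sketch of the Cartesian product of two undirected graphs. The SimpleGraph type and the integer encoding of product vertices are assumptions made for this illustration; they are not the data structures of the implementation described later.

```cpp
#include <algorithm>
#include <set>
#include <utility>

// Illustrative graph type: vertices are 0..n-1, each edge is stored as an
// ordered pair (min, max).
struct SimpleGraph {
    int n = 0;
    std::set<std::pair<int, int>> edges;
};

// Cartesian product: vertex (g, h) of the product is encoded as g * H.n + h.
// Two product vertices are adjacent iff they agree on one coordinate and the
// other coordinates are adjacent in the corresponding factor.
SimpleGraph cartesianProduct(const SimpleGraph& G, const SimpleGraph& H) {
    SimpleGraph P;
    P.n = G.n * H.n;
    auto id   = [&](int g, int h) { return g * H.n + h; };
    auto edge = [](int a, int b)  { return std::make_pair(std::min(a, b), std::max(a, b)); };
    for (const auto& [g1, g2] : G.edges)          // one copy of G per vertex of H
        for (int h = 0; h < H.n; ++h)
            P.edges.insert(edge(id(g1, h), id(g2, h)));
    for (const auto& [h1, h2] : H.edges)          // one copy of H per vertex of G
        for (int g = 0; g < G.n; ++g)
            P.edges.insert(edge(id(g, h1), id(g, h2)));
    return P;
}
```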
Modulo 2 Linear Equations
Graphs of Bipartitions use systems of linear equations over the two-element field F 2 . The addition on F 2 is denoted ⊕ and corresponds to the Boolean connector XOR. Each linear equation on F 2 can be put in the form ( n i=1 v i ) = constant, where the v i are Boolean variables and constant ∈ {0, 1}. A system of equations is homogeneous if and only if constant = 0 for all equations. The ⊕ operation is associative and commutative. The variables can thus be rearranged freely in a polynomial. It also verifies for all x the identity x ⊕ x = 0. Monomials in an equation can be moved to either side of the = sign. For example, the equation x ⊕ y = 0 is the same as x = y. Also, the equation x ⊕ y = 1 is the same as x = ¬y, i.e. x ≠ y.
Adding two equations e1 = e2 and e3 = e4 gives the implied equation e1 ⊕ e2 = e3 ⊕ e4, which is also e1 ⊕ e2 ⊕ e3 ⊕ e4 = 0. This addition is the basic operation of Gaussian elimination for solving a system of equations. Gaussian elimination is complete for the deduction of the equations 0 = 0 (tautology), 1 = 0 (contradiction), x = 0 or x = 1 (valuation). It is also complete for the deduction of the equations x = y (equivalent vertices), which can also be in the form (x = V ) ∧ (y = V ), and for the equations x ≠ y (i.e. x ⊕ y = 1), which can be in the form (x = V ) ∧ (y = (V ⊕ 1)).
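As an illustration of this machinery, the following C++ sketch stores each equation as a bitset of variables plus a right-hand side, and performs Gaussian elimination by repeatedly adding equations. The names Xeq, MAXV and gaussianEliminate are assumptions made for the example, not the representation used by the actual implementation.

```cpp
#include <bitset>
#include <cstddef>
#include <vector>

// One equation: (xor of the variables whose bit is set) = rhs, over at most
// MAXV Boolean variables.
constexpr std::size_t MAXV = 64;
struct Xeq { std::bitset<MAXV> vars; bool rhs = false; };

// Adding two equations is the basic operation of Gaussian elimination.
Xeq add(const Xeq& a, const Xeq& b) { return { a.vars ^ b.vars, a.rhs != b.rhs }; }

// Reduce the system in place; returns false if the contradiction 1 = 0 appears.
bool gaussianEliminate(std::vector<Xeq>& sys) {
    std::size_t row = 0;
    for (std::size_t v = 0; v < MAXV && row < sys.size(); ++v) {
        std::size_t pivot = row;
        while (pivot < sys.size() && !sys[pivot].vars[v]) ++pivot;
        if (pivot == sys.size()) continue;            // no remaining equation uses variable v
        std::swap(sys[row], sys[pivot]);
        for (std::size_t r = 0; r < sys.size(); ++r)
            if (r != row && sys[r].vars[v]) sys[r] = add(sys[r], sys[row]);
        ++row;
    }
    for (const Xeq& e : sys)
        if (e.vars.none() && e.rhs) return false;     // an equation reduced to 1 = 0
    return true;
}
```

After elimination, equations with a single remaining variable give the valuations x = 0 or x = 1, and equations with two variables give x = y or x ≠ y, as listed above.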
The Boolean Hypercube Q n
The n-dimensional hypercube graph Q n has 2 n vertices. Each vertex represents one of the 2 n subsets of the set {1 . . . n}. The subset v x associated with a vertex x is its characteristic vector.
Each edge of the graph connects two vertices x and y such that v x and v y differ in a single element. Their symmetric difference is a singleton. This singleton is the atomic dimension of the edge. We write a ⊕ b for the symmetric difference of two sets a and b of dimensions, rather than a∆b, in accordance with modulo 2 linear equations. We orient the edge as the arc x → y if v x ⊂ v y and y → x otherwise. On the diagrams, the arc x → y is drawn by a line from the top of x's bounding rectangle to the bottom of y's bounding rectangle. The dimension of any pair of vertices {a, b} is the set v a ⊕ v b . We write it dim(a, b). The lower bound or inf of Q n is the vertex ⊥ such that v ⊥ = ∅. Its upper bound or sup is the vertex ⊤ such that v ⊤ = {1 . . . n}.
In the Boolean hypercube, all the chains between two vertices where each atomic dimension appears only once have the same length k and the same dimension. They correspond to the k! permutations of the sequence 1 . . . k . Consider three vertices a, b, c of
Q n . We have dim(a, c) = v a ⊕ v c = v a ⊕ v b ⊕ v b ⊕ v c = dim(a, b) ⊕ dim(b, c).
The dimension of a pair of vertices is therefore the sum ⊕ of the atomic dimensions of the edges on any chain between the two vertices. It is also the set of atomic dimensions of the edges on a chain where all the edges are of different dimensions. See figure 1. Two dimensions E and F are orthogonal, denoted E ⊥ F , if and only if E ∩ F = ∅. For more flexibility, we interpret in the following the n of Q n not as an integer but as a set of atomic dimensions. Q n is then the Boolean hypercube of dimension |n|, with 2 |n| vertices, whose arcs are labeled with the elements of the set {1 . . . n}, written n. For example, Q {1,3,5} represents a "copy" of Q 3 where arcs are labeled with the dimensions 1, 3 and 5. The Cartesian product of Q n and Q m is still written Q n+m but the operation + is the union of two sets n and m, which are necessarily disjoint.
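A minimal C++ sketch of this set view of dimensions is given below; characteristic vectors and dimension sets are stored as bitsets, and MAXD and the helper names are assumptions made for the illustration.

```cpp
#include <bitset>
#include <cstddef>

// Characteristic vectors and dimensions are subsets of at most MAXD atomic
// dimensions, stored as bitsets.
constexpr std::size_t MAXD = 32;
using CharVec = std::bitset<MAXD>;   // characteristic vector of a vertex
using DimSet  = std::bitset<MAXD>;   // a set of atomic dimensions

// dim(a, b) is the symmetric difference of the characteristic vectors.
DimSet dim(const CharVec& va, const CharVec& vb) { return va ^ vb; }

// Two dimensions are orthogonal iff they are disjoint.
bool orthogonal(const DimSet& e, const DimSet& f) { return (e & f).none(); }

// The chain rule of the text: dim(a, c) = dim(a, b) xor dim(b, c).
DimSet dimThroughB(const CharVec& va, const CharVec& vb, const CharVec& vc) {
    return dim(va, vb) ^ dim(vb, vc);                // equals dim(va, vc)
}
```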
By the definition of the dimension of a pair of vertices, two copies
Q dim(a,b) and Q dim(c,d) in Q n of a Boolean hypercube Q e , i.e. verifying dim(a, b) = dim(c, d) = e, also verify (v a ⊕ v b ) = (v c ⊕ v d ), and therefore ((v a = v b ) = (v c = v d ))
. This principle is implicit in the proof of proposition 1.
The vertices of Q n are Boolean variables and its arcs are implications between these variables. In a Boolean hypercube, the Boolean implication x → y between any two vertices x and y is equivalent to the fact that there is a path from x to y.
A Graph of Bipartitions, like any Boolean formalism, can be interpreted as a toolbox for navigating in a virtual Boolean hypercube. The objective here is to relate the dimensions of the virtual hypercube to a Boolean reasoning on a Graph of Bipartitions. To do this, we use the recursive definition of the Boolean hypercube Q n given in definition 1, which allows all possible decompositions into Cartesian products. Note: The hypercube Q 0 , or Q {} , reduced to 1 vertex, is not useful for defining Graphs of Bipartitions. The hypercube Q n+m contains 2 |m| copies of Q n and 2 |n| copies of Q m . The vertices of each copy of Q m are the 2 |m| copies of one of the 2 |n| vertices of Q n and symmetrically, the vertices of each copy of Q n are the 2 |n| copies of one of the 2 |m| vertices of Q m (Figure 2).
Dimensions and Boolean reasoning
The hypercube Q n is a Boolean lattice. That is, some subsets of its vertices are generators from which every other vertex may be represented as a Boolean expression (Figure 3). For any pair of vertices x and y, the equivalence "x and y have the same Boolean value", noted x = y, defines an equivalence relation on Q n : two vertices s and t are equivalent if and only if dim(s, t) ⊆ dim(x, y). The quotient hypercube by this equivalence relation is Q n\dim(x,y) (Figure 4). Proposition 1 uses these properties and, conversely, from the inclusion or disjointness of the dimensions of paths, we deduce Boolean implications. The proof is a simple translation between the set-theoretic language of dimensions and the Boolean language. The set ∅ corresponds to the Boolean value 0 (false) and the set n of dimensions of Q n to the value 1 (true). To the characteristic vector v s of vertex s corresponds the Boolean variable s.
Proposition 1. Let a, b, c, d be vertices of Q n such that a → b and c → d. Then:
(1) (dim(a, b) ⊆ dim(c, d)) ⇐⇒ ((¬a ∧ b) =⇒ (¬c ∧ d))
(2) (dim(a, b) ⊥ dim(c, d)) ⇐⇒ ((¬a ∧ b) =⇒ (c = d))
Proof. Let us prove the two properties separately:
To prove (1): by the definition of the dimension of a pair of vertices, (dim(a, b) ⊆ dim(c, d)) ⇐⇒ ((v a ⊕ v b ) ⊆ (v c ⊕ v d )). As a → b and c → d, (v a ⊕ v b ) = (v b \ v a ) and (v c ⊕ v d ) = (v d \ v c ). In Boolean language, (dim(a, b) ⊆ dim(c, d)) ⇐⇒ ((v b \ v a ) ⊆ (v d \ v c )) is written: (dim(a, b) ⊆ dim(c, d)) ⇐⇒ ((¬a ∧ b) =⇒ (¬c ∧ d)).
To prove (2): if dim(c, d) is disjoint from dim(a, b) then it is included in its complement: (dim(a, b) ⊥ dim(c, d)) ⇐⇒ (dim(c, d) ⊆ (n \ dim(a, b))) ⇐⇒ ((v c ⊕ v d ) ⊆ (n \ (v a ⊕ v b )))
We replace in (2):
((v c ⊕ v d ) ⊆ (n \ (v a ⊕ v b ))) ⇐⇒ ((¬a ∧ b) =⇒ (c = d)) (3)
If (¬a ∧ b) then v a = ∅, v b = n and (n \ (v a ⊕ v b )) = (n \ (∅ ⊕ n)) = (n \ n) = ∅. ((v c ⊕ v d ) ⊆ (n \ (v a ⊕ v b ))) becomes ((v c ⊕ v d ) ⊆ ∅) then ((v c ⊕ v d ) = ∅) and then (v c = v d ),
which is written in Boolean language c = d (c and d have the same Boolean value).
Similarly, (¬a ∧ b) is (v b \ v a ) = (n \ ∅) = n, which is written 1 (true) in Boolean language.
By replacement by these equivalences, the equivalence (3) becomes:
(c = d) ⇐⇒ (1 =⇒ (c = d)), i.e. (c = d) ⇐⇒ (c = d), which is a tautology. Otherwise, ((a → b) ∧ ¬(¬a ∧ b)) implies a = b, so dim(a, b) = ∅. (dim(a, b) ⊥ dim(c, d)) is true and (¬a ∧ b) is false. The equivalence (2) becomes (1 ⇐⇒ (0 =⇒ (c = d))), which simplifies to 1 ⇐⇒ 1.
From the equivalence (1) of proposition 1, we deduce:
(dim(a, b) = dim(c, d)) ⇐⇒ ((¬a ∧ b) ⇐⇒ (¬c ∧ d)) ⇐⇒ ((a = b) ⇐⇒ (c = d))
This last equivalence makes it possible to determine which implications have the same dimension in a Graph of Bipartitions. More generally, proposition 1 links a Boolean reasoning on the vertices of a hypercube to a set reasoning on the dimensions of the paths of this hypercube. It is exploited in paragraph 5 in the computation of the dimensions of the implications of a Graph of Bipartitions and the deduction of linear equations modulo 2.
The Graphs of Bipartitions
A Graph of Bipartitions (GB) is a substructure of the graph of a hypercube Q n . The GB induces this Q n , i.e. is completed in this Q n using Boolean operators. This idea starts in [START_REF] Goossens | Automatic Node Recognition in a Partitioning Graph: Restricting the Search Space While Preserving Completeness[END_REF], where Boolean expressions are represented with bipartitions as Boolean operators. In [START_REF] Goossens | Automatic Node Recognition in a Partitioning Graph: Restricting the Search Space While Preserving Completeness[END_REF], each bipartition x = y + z is equivalent to the Boolean expression (x = (y ∨ z)) ∧ ¬(y ∧ z). The bipartitions used until [START_REF] Goossens | Boolean reasoning with graphs of partitions. version longue du papier court "A Dynamic Boolean Knowledge Base[END_REF] become more abstract in [START_REF] Goossens | Modélisation d'une forme simple de compréhension[END_REF]. This makes it possible to recover the symmetry of the dual operators ∧ and ∨ and to considerably simplify the graphs.
Each bipartition is now a quadruple x, y, x ∨ y, x ∧ y . A bipartition is built with four implications and a linear equation modulo 2. It is a representation of the Boolean hypercube Q 2 , or of the tautology (x ⊕ y ⊕ (x ∧ y) ⊕ (x ∨ y)) = 0. See figure 6.
The vertices of the GB, called nodes, represent vertices of Q n . The arcs of the GB represent paths of Q n . Each arc is therefore an implication between its two vertices and its dimension is that of the path it represents.
For the implication between two nodes x and y of a GB, we must distinguish three cases:
1. x → y is deducible from the GB without a path of implications from x to y. We write it x ⇒ y. This case is possible when reasoning is incomplete.
2. There is a path of implications from x to y. We write it x → y.
3. The GB contains an arc x → y. We explicitly mention the "arc" or the "implication" x → y.
The dimensions of the implications of a GB and the characteristic vectors of its vertices are theoretical objects, too expensive to compute explicitly. One can, however, compute constraints of inclusion and disjunction between these dimensions. This computation can be done from a Boolean propagation, using the two equivalences of proposition 1. If the valuation (a = 0 ∧ b = 1) propagates (c = 0 ∧ d = 1), we can note that (dim(a, b) ⊆ dim(c, d)), and if it propagates (c = d), we can note that (dim(a, b) ⊥ dim(c, d)). This collected information can in turn be used to extend a valuation.
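A sketch of this collection of constraints could look as follows; the two callbacks are assumed to wrap the Boolean propagation of the valuation (a = 0, b = 1), and everything here is illustrative, not the interface of the actual implementation.

```cpp
#include <functional>
#include <map>
#include <utility>
#include <vector>

// Possible relation between dim(a, b) and dim(c, d) learned from propagation.
enum class DimRelation { Unknown, Included, Orthogonal };

// The callbacks report what the propagation of (a = 0, b = 1) forces on c -> d.
std::map<std::pair<int, int>, DimRelation> collectDimensionConstraints(
        int a, int b,
        const std::vector<std::pair<int, int>>& implications,
        const std::function<bool(int, int, int, int)>& forcesValuation,  // forces (c = 0, d = 1)
        const std::function<bool(int, int, int, int)>& forcesEquality) { // forces c = d
    std::map<std::pair<int, int>, DimRelation> relation;
    for (const auto& [c, d] : implications) {
        if (forcesValuation(a, b, c, d))
            relation[{c, d}] = DimRelation::Included;     // dim(a, b) included in dim(c, d)
        else if (forcesEquality(a, b, c, d))
            relation[{c, d}] = DimRelation::Orthogonal;   // dim(a, b) disjoint from dim(c, d)
        else
            relation[{c, d}] = DimRelation::Unknown;
    }
    return relation;
}
```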
Let a, b, c, d be four nodes of a GB. The constraint dim(a, b) = dim(c, d) is also written (v a ⊕ v b ) = (v c ⊕ v d ). As the nodes a, b, c, d represent the characteristic vectors v a , v b , v c , v d , this constraint is also the modulo 2 linear equation (a ⊕ b ⊕ c ⊕ d) = 0.
Definition 3. A Graph of Bipartitions (GB) is a triple N, I, S , where N is a set of nodes, I a set of implications between these nodes and S is a set of linear equations modulo 2 on these nodes. It is also the conjunction I ∧ S , where I is the conjunction of the implications of I and S is the conjunction of the equations of S. It is constructed using the rules:
1. The implication ⊥ → ⊤ is a GB. It is the triple {⊥, ⊤}, {⊥ → ⊤}, {} .
2. Given two nodes x and y of a GB G, such that x → y, then divide(x, y, G) is a GB.
3. Given two nodes x and y of a GB G, such that ¬(x → y), then cross(x, y, G) is a GB.
4. Given three nodes x, y and z of a GB G, such that x → y and y → z, then complement(x, y, z, G) is a GB.
The nodes ⊥ and ⊤ of the base case 1 are respectively the inf and the sup of the GB. They are conserved by the other construction rules.
The operation divide(x, y, G) adds to G the path (x → z) ∧ (z → y), where z is a new node. In other words, divide(x, y, G) = G ∧ (x → z) ∧ (z → y). The node z is not a Boolean function of the nodes of G. There is no need to check if it already exists or if it has to be connected by implications to other nodes in the GB. This operation is used to construct a graph of implications between nodes.
The operation cross(x, y, G) adds to G the bipartition x, y, a, b . The two added nodes a and b are respectively x ∨ y and x ∧ y. The operation complement(x, y, z, G) adds to G the bipartition y, c, z, x . The created node c is the complement of y between x and z, i.e. c = ((z ∧ ¬y) ∨ x). The operations cross and complement only add Boolean functions of the nodes of G.
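Here is a minimal C++ sketch of these construction operations on a deliberately simplified layout; node ids, the arc set and the equation list are assumptions made for the illustration, and the recognition of existing nodes and the classification steps of the real operations are omitted.

```cpp
#include <set>
#include <utility>
#include <vector>

// Simplified layout: nodes are integer ids, arcs (x, y) encode x -> y, and
// each linear equation stores the node ids whose xor is 0.
struct MiniGB {
    int nextNode = 0;
    std::set<std::pair<int, int>> arcs;
    std::vector<std::vector<int>> equations;

    int newNode() { return nextNode++; }
    void addArc(int x, int y) { arcs.insert({x, y}); }

    // divide(x, y): insert a fresh node z on a path x -> z -> y.
    int divide(int x, int y) {
        int z = newNode();
        addArc(x, z); addArc(z, y);
        return z;
    }

    // cross(x, y): add the bipartition <x, y, x OR y, x AND y>, i.e. four
    // implications and the equation x ^ y ^ (x OR y) ^ (x AND y) = 0.
    std::pair<int, int> cross(int x, int y) {
        int sup = newNode(), inf = newNode();
        addArc(x, sup); addArc(y, sup);
        addArc(inf, x); addArc(inf, y);
        equations.push_back({x, y, sup, inf});
        return {sup, inf};
    }

    // complement(x, y, z) with x -> y -> z: add the node c such that
    // y OR c = z and y AND c = x (bipartition <y, c, z, x>).
    int complement(int x, int y, int z) {
        int c = newNode();
        addArc(y, z); addArc(c, z);
        addArc(x, y); addArc(x, c);
        equations.push_back({y, c, z, x});
        return c;
    }
};
```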
The operations cross and complement
The operation cross constructs or recognizes the expressions x ∨ y and x ∧ y from two nodes x and y. The complement operation constructs or recognizes the local complement of a node x between two nodes y and z. These constructions are done by adding a bipartition to the GB. These operations eventually recognize the nodes already present and they classify the created nodes. They update the relations of inclusion and disjunction between the dimensions of the implications of GB. See figure 7.
Figure 7: The GB G2 is divide(0, 1, G1). G3 is divide(0, 1, G2). Despite appearances, G3 is not a representation of the square Q 2 because the node 1 is not x ∨ y and the node 0 is not x ∧ y. GBs are not partial order diagrams that statically indicate sups and infs. The GB G4 is cross(x, y, G3). The GB G5 is G4 with 6 local complements added. The GB G6 (whose 11 equations are hidden for more readability) is the completion of G5 by cross or complement. G6 is a representation of the hypercube Q 4 . G3 is therefore a partial representation of Q 4 .
Definition 4 (cross). Let G be a GB and let x and y be two nodes of G such that there is no path from x to y. cross(x, y, G) =
1. Build the bipartition x, y, a, b , where a and b are two new nodes. The bipartition makes a = x ∨ y and b = x ∧ y.
2. For each s such that (x → s) and (y → s), add the implication (a → s) to G. For each s such that (s → x) and (s → y), add the implication (s → b) to G.
3. Classify a and b in G.
Definition 5 (complement). Let G be a GB and let x, y and z be three nodes of G such that x → y and y → z. complement(x, y, z, G) =
1. Build the bipartition c, y, z, x .
2. Classify c in G.
Classifying a node n in a GB G consists in connecting it with implications to its most general implicants and its most precise implicates already constructed : Definition 6. Classify a node n in a GB G = For any node x of G such that (n ⇒ x) with no path from n to x, add the implication (n → x) to G. For any node x of G such that (x ⇒ n) with no path from x to n, add the implication (x → n) to G.
Delete, Merge, Forget
Some linear equations deduced from a non-valuated GB transform it. The equations x = 0 make x a contradictory node, which must be deleted. The equations x = y merge the nodes x and y into a single node. Also, a node that can be rebuilt if needed, may be forgotten.
After these operations modify the GB, it is necessary to update the dimensions of its implications. The complexity of these operations may be significant since they trigger Gaussian eliminations. It remains polynomially bounded because the number of nodes to delete or pairs of nodes to merge is bounded by the number of nodes of the GB.
Removing a conflicting node
To remove a conflicting node x:
1. Add x and its descendants in the graph of implications, to a queue of nodes to be deleted. Remove implications whose nodes are both in the queue.
2. Remove queued nodes.
3. If nodes have been deleted, solve the linear system, which may add nodes to the queue, and return to 2.
To remove a queued node x:
1. Remove implications x → y.
2. Simplify linear equations containing x, by replacing x with 0.
3. Free isolated node x.
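On the same kind of simplified layout, the per-node removal step might be sketched as follows; replacing x with 0 in a xor equation simply erases x from it, since adding 0 does not change a xor sum.

```cpp
#include <algorithm>
#include <set>
#include <utility>
#include <vector>

// Remove a queued node x: drop its arcs and simplify the equations.
void removeQueuedNode(int x,
                      std::set<std::pair<int, int>>& arcs,
                      std::vector<std::vector<int>>& equations) {
    for (auto it = arcs.begin(); it != arcs.end(); ) {
        if (it->first == x || it->second == x) it = arcs.erase(it);
        else ++it;
    }
    for (std::vector<int>& eq : equations)
        eq.erase(std::remove(eq.begin(), eq.end(), x), eq.end());
}
```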
Merging two equivalent nodes
To merge two equivalent nodes x and y:
1. Remove the implication x → y or y → x if it exists.
2. Replace y with x in implications y → n and n → y. If it creates a circuit in the graph of implications, remove the nodes from the circuit.
3. Replace y by x in linear equations containing y, simplify them and solve the linear system.
4. Free isolated node y.
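A sketch of the merge step on the same simplified layout is given below; the detection of circuits in the implication graph and the resolution of the linear system, required by the full procedure, are omitted here.

```cpp
#include <algorithm>
#include <set>
#include <utility>
#include <vector>

// Merge y into x: replace y by x in arcs and equations. In a xor equation,
// occurrences of x cancel in pairs (x xor x = 0).
void mergeNodes(int x, int y,
                std::set<std::pair<int, int>>& arcs,
                std::vector<std::vector<int>>& equations) {
    std::set<std::pair<int, int>> renamed;
    for (auto [a, b] : arcs) {
        if (a == y) a = x;
        if (b == y) b = x;
        if (a != b) renamed.insert({a, b});           // drop the degenerate arc x -> x
    }
    arcs = std::move(renamed);
    for (std::vector<int>& eq : equations) {
        std::replace(eq.begin(), eq.end(), y, x);
        auto occurrences = std::count(eq.begin(), eq.end(), x);
        eq.erase(std::remove(eq.begin(), eq.end(), x), eq.end());
        if (occurrences % 2 == 1) eq.push_back(x);    // an odd number of x's leaves one
    }
}
```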
Forgetting a reconstructible node
GBs can become dense. A forget operation that deletes a node and its links, which can be rebuilt if necessary, is a useful tool for controlling their size.
To forget a node n, we temporarily remove the implications i → n and n → s by replacing them with implications i → s, but without deleting n, which appears in linear equations. Then, we check if each valuation (i = 1) ∧ (n = 0) or (n = 1) ∧ (s = 0) is contradictory despite the absence of the implication i → n or n → s. If a single one is no longer contradictory, it means that we cannot reconstruct n and the hypercube induced by the GB is different. The node n may not be forgotten. Otherwise, n is reconstructible and the induced hypercube remains the same. We delete n and its implications are replaced by all the necessary implications between its direct implicants and implicates.
The reasoning on Graphs of Bipartitions
The Boolean completion of a GB
The cross and complement operations constitute a complete set of Boolean operators. The Boolean completion of a GB G with these operators is unique. It is a GB denoted by G n , which represents the hypercube Q n . Definition 8. The Boolean completion of a GB G is a GB noted G n , obtained from G by a finite sequence of applications of the operations cross and complement. G n is the fixed point, if it exists, of the cross and complement operations :
∀x, y nodes of G n , cross(x, y, G n ) = G n ∀x, y, z nodes of G n such that x → y ∧ y → z, complement(y, x, z, G n ) = G n
The cross and complement operations therefore do not change G n if the added nodes are recognized as already existing. This is always the case when the reasoning is based on a complete satisfiability test. Otherwise, the existence of the fixed point and its uniqueness must be proven on a case-by-case basis.
The GB G n is isomorphic to Q n , therefore to the powerset of the set of atomic dimensions of Q n . The set of atomic dimensions of Q n is called n. Each node x of G n represents a subset v x of n. The set v x is the characteristic vector of the node x: Definition 9. [characteristic vector] The characteristic vector of a node x of G n is the characteristic vector v x of the vertex of Q n represented by x. The characteristic vector of a node x of a GB G is the characteristic vector of x in G n .
G n verifies the uniqueness principle: For all x and y nodes of G n , (x = y) ⇐⇒ (v x = v y ). G n contains the nodes ⊥ and ⊤, which represent the vertices ⊥ and ⊤ of Q n . It contains an implication x → y for each arc x → y of Q n and 2 n -(n + 1) linear equations modulo 2, which define 2 n -(n + 1) nodes of G n from a basis of n + 1 nodes of G n . Some bases of n + 1 nodes linearly express the remaining 2 n -(n + 1) nodes. The basis shown in figure 8 is an example. The other possible bases are those that can be obtained by transforming the linear system with the operation of adding two equations. These are for example the n + 1 nodes of each of the n! paths from ⊥ to ⊤. In each of these bases, the other nodes are expressed with the single operation complement.
The 2 n -(n + 1) linear equations which define G n suffice to guarantee the fact that for any subset {x 1 . . . x k } of the nodes of G n whose sum ⊕ of the characteristic vectors cancels, the linear equation ( k i=1 x i ) = 0 is deducible from the linear system of G n . The number of variables of an equation is always even because all the equations of the linear system of G n and of a GB in general have an even number of nodes and this property is preserved on deducible equations. In the case where k is odd, the deducible equation contains the additional node ⊥, and v ⊥ = ∅.
Proposition 2. For any subset {x 1 . . . x k } of nodes of G n such that ( k i=1 v i ) = ∅, with k even and where the v i are the characteristic vectors of the x i , the equation ( k i=1 x i ) = 0 is deducible from the linear system of G n .
Proof. Each node x i is defined in G n by a linear equation x i = E i , where E i is a set of nodes each representing an atomic dimension of G n . The sum ⊕ of these k equations defining the x i is deducible from the linear system of G n . This is the equation ( k i=1 x i ) = ( k i=1 E i ). As ( k i=1 v i ) = ∅, the nodes representing the atomic dimensions have an even number of occurrences in the E i . So ( k i=1 E i ) = 0 and the deducible equation becomes ( k i=1 x i ) = 0.
Any GB G and its Boolean completion G n are equivalent, as Boolean expressions. This follows from the fact that the operations cross and complement only add Boolean functions of the nodes of G.
The deduction of linear equations
The cross and complement operations each add an equation to the linear system. These equations added to the implications are sufficient to construct all the Boolean expressions of a basis of nodes. However, the linear system thus obtained does not imply all the linear equations derivable from the entire GB, as illustrated in the figure 9.
Implications with equal dimensions
The equality dim(a, b) = dim(c, d) is rewritten (a ⊕ b) = (c ⊕ d). When we infer that two implications a → b and c → d of a GB have the same dimension, then the GB implies the equation (a ⊕ b ⊕ c ⊕ d = 0), which can be added to the linear system if it is not deducible from it.
Proposition 3. Let G be a GB. Let {(a 1 → b 1 ) . . . (a k → b k )} be implications of G, with mutually orthogonal atomic dimensions. Let c → d be an implication of G whose dimension is the set of dimensions of the a i → b i : dim(c, d) = ( k i=1 dim(a i , b i )). Then G =⇒ ((c ⊕ d) = ( k i=1 (a i ⊕ b i ))), i.e. ( k i=1 (a i ⊕ b i )) ⊕ (c ⊕ d) = 0.
Proof. By simple application of the definition of dim.
This deduction of linear equations progressively increases the expressive power of the linear system and its capacity of inferring new Boolean constraints via Gaussian elimination, which is polynomially bounded, and efficient in practice.
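A minimal sketch of the pairwise case of this deduction is shown below; the DimmedImplication record and the quadratic scan are assumptions made for the illustration, and the test of deducibility from the existing linear system is not shown.

```cpp
#include <bitset>
#include <cstddef>
#include <vector>

// An implication a -> b together with its computed dimension, stored as a set
// of atomic dimensions.
constexpr std::size_t MAXD = 32;
struct DimmedImplication { int a, b; std::bitset<MAXD> dim; };

// For every pair of implications with the same dimension, record the implied
// equation a ^ b ^ c ^ d = 0 (to be added only if it is not already deducible).
std::vector<std::vector<int>> equalDimensionEquations(
        const std::vector<DimmedImplication>& imps) {
    std::vector<std::vector<int>> equations;
    for (std::size_t i = 0; i < imps.size(); ++i)
        for (std::size_t j = i + 1; j < imps.size(); ++j)
            if (imps[i].dim == imps[j].dim)
                equations.push_back({imps[i].a, imps[i].b, imps[j].a, imps[j].b});
    return equations;
}
```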
Removing dimensions
New equations can also be deduced by using an equivalence relation which reduces the dimension of the GB.
If we remove atomic dimensions from a GB, we obtain a GB whose Boolean completion represents the hypercube of the remaining dimensions. The set E of deleted atomic dimensions defines an equivalence relation on the nodes of the GB. Two nodes x and y are equivalent if and only if (v x \ E) = (v y \ E). The nodes which are different only because of dimensions from E become equivalent under this relation.
For all E ⊆ n, we have
G n = G E □ G n\E .
The equivalence relation defined by the set E makes the nodes of each copy of G E in G n an equivalence class. If each class is reduced to its representative node, the copies of G n\E in G n reduce to G n\E (In figure 4, the copies of G E are the 8 blue squares, of dimension {1, 3}. The copies of G n\E are the 4 cubes of dimension {2, 4, 5}. Removing the dimensions {1, 3} gives the cube on the right). Each of the 2 |E| copies of G n\E in G n has as inf one of the 2 |E| nodes of G E . Each copy contains 2 |n\E| nodes and is isomorphic to G n\E (see figure 10).
Definition 10 (copy). Let E ⊆ n be a subset of the atomic dimensions of G n . The copy of G n\E associated with the node x of characteristic vector v x ⊆ E is the GB whose nodes represent the characteristic vectors of the form v x + F , for each F ⊆ (n \ E). The graph of implications of this copy is the subgraph of the graph of implications of G n induced by the nodes of the copy. The linear equations of the copy are the simplifications of the linear equations of G n by deleting the dimensions of E.
The equations of G n and those that can be deduced from it, simplified by the removal of some arbitrary atomic dimensions, are deducible from the linear system of G n . Each atomic dimension of G n has an even number of occurrences in each equation (see Figure 8). If we remove the atomic dimensions of a set E, we remove an even number of each dimension from each equation and the remaining dimensions are always an even number in each equation. Each simplified equation is therefore deducible from G n , by the proposition 2. Each implication a → b of a GB G defines an equivalence relation on the nodes of G. This relation removes the dimensions out of dim(a, b), i.e. the set of atomic dimensions of n \ dim(a, b), where n is the set of dimensions of G n . Let e : ( k i=1 x i ) = 0 be a linear equation deducible from G and simplified by this equivalence relation. Each x i belongs to a different equivalence class. Unlike G n , the equivalence classes of this relation on G are not Boolean completions. However, if each node x i of e is between a and b (a → x i and x i → b), then these nodes belong to the copy of G dim(a,b) in G n associated with v a . The characteristic vectors of the nodes of e all contain v a and are contained in v b . As the nodes of e are an even number, the v a disappear in the sum ⊕ and the equation deals with dimensions of G dim(a,b) , so it is deducible from G without the equivalence relation. If e is not deducible from the linear system of G, it can be added to it.
The GB of the dimensions of a GB
To each implication x → y of a GB, we associate a variable which represents the dimension v x ⊕ v y of the implication. The domain of this variable is the powerset of atomic dimensions of the induced hypercube. We then use a Boolean reasoning on the GB to calculate the relations of inclusion, disjunction and equality between these variables.
These variables are the nodes of a GB of dimensions, which is built separately. This GB is used to expand and accelerate the Boolean reasoning. The GB of dimensions is initialized to the implication ⊥ → ⊤ between two new nodes ⊥ and ⊤. Each new dimension variable v is inserted between ⊥ and ⊤ with the two implications ⊥ → v and v → ⊤. If v and w are two variables representing dimensions, the inclusion v ⊆ w is stored with an implication v → w. The disjunction v ⊥ w is stored with a bipartition v, w, v + w, ⊥ . The equality v = w causes the nodes v and w to be merged into a single node.
The nodes of the GB of dimensions which are implied only by the node ⊥, i.e. the minimal elements of the partial order of inclusion, are the atomic dimensions of the GB. These atomic dimensions are not those of the induced hypercube. They represent sets of atomic dimensions of the induced hypercube.
We calculate the relations of inclusion, disjunction and equality between the dimension variables with a Boolean reasoning based on proposition 1. It is assumed that the GB contains at least one implication of dimension d for each atomic dimension d. Definition 11. To calculate the dimension of an implication α → β of a GB G :
1. For each atomic dimension d of G, let x → y be an implication whose dimension is d.
If (¬x ∧ y) =⇒ (¬α ∧ β) then d ∈ dim(α, β). Else, if (¬x ∧ y) =⇒ (α = β) then d ⊥ dim(α, β).
2. Let E = {1 . . . n} be the set of atomic dimensions in dim(α, β) calculated in 1. We must look for the subsets of E whose dimensions are mutually orthogonal and which are equal to dim(α, β) : Let {(a 1 → b 1 ) . . . (a n → b n )} be a set of implications, of respective dimensions 1 . . . n. For each subset F = {1 . . . k} of E of mutually orthogonal dimensions such that (
k i=1 (a i = b i )) ⇐⇒ (α = β), the equation ( k i=1 (a i ⊕ b i ) ⊕ (α ⊕ β) = 0) is deducible from G (proposition 3
). If it is not deducible from the linear system alone by Gaussian elimination, it must be added to the system. For each of these F , the equation obtained makes that dim(α, β) = F . All these F are different expressions of the dimension of the implication α → β.
The step 2 looks for subsets F of E, of atomic implications of mutually orthogonal dimensions. This amounts to searching maximal cliques in the graph of the relation ⊥ on the set {1 . . . n}. Finding a maximum clique in a graph is an NP-complete problem. The following incomplete method is based on Gaussian elimination : Definition 12 (Calculation of linear equations). Let E = {1 . . . n} be a set of atomic dimensions belonging to the dimension of an implication α → β. Let {(a 1 → b 1 ) . . . (a n → b n )} be a set of implications, of respective dimensions 1 . . . n. To calculate the linear equations of the subsets F of E whose dimensions are mutually orthogonal,
If n = 1, F = E = {1}. We add the equation (a 1 ⊕ b 1 ⊕ α ⊕ β = 0).
Otherwise,
1. Temporarily construct an implication of dimension i for each i ∈ E, between α and β, for example by constructing two implications α → x and x → β and forcing the dimension of x → β to be i with an equation relating x → β to an implication of G of dimension i, as in figure 11.
2. focus(α, β, G)
3. Valuate α, Valuate β and all nodes x between α and β, i.e. such that α → x and x → β.
4. Solve the linear system and simplify the equations, in the presence of the equivalences added by focus but without simplifying via Boolean values.
5. Add to the linear system S of G any fully valued equation.
6. Remove any valuation and undo the construct.
7. Solve the linear system.
The operation focus(α, β, G) transforms into an equivalence each implication of G whose dimension is orthogonal to dim(α, β):
Definition 13 (focus). Let G = N, I, S be a GB. Let {α, β} ⊆ N such that α → β. focus(α, β, G) = (G ∧ (∀(x → y) ∈ I, if (dim(x, y) ⊥ dim(α, β)) then x = y))
It is a way to cancel some dimensions which are not present between two nodes α and β such that α → β (that are not dimensions of G dim(α,β) ), in order to focus the linear reasoning on this area and deduce possibly new equations, to add to the linear system. This is an application of the reasoning described in paragraph 5.2.2.
Characterizable GBs
A characterizable GB is a GB on which a uniqueness principle can be guaranteed : it is possible to associate a unique characteristic vector with each node. The atomic dimensions of a characterizable GB must be mutually orthogonal. If the GB has n atomic dimensions, it induces Q n . Each characteristic vector is then a subset of the atomic dimensions of Q n .
Figure 12 shows three simple examples of characterizable GBs. The characterize operation (definition 14) associates vectors to the nodes of a GB. It is a nondeterministic operation. It can associate several different vectors to the same node. It associates a unique characteristic vector to each node only when the GB is characterizable. Definition 14 (characterize). Characterizing the nodes of a GB G is defined as:
1. Pick a node n from G and set v n = ∅.
2. As long as there are modifications, (a) While it is possible, choose a node x such that v x is calculated and for any implication (x → y) or (y → x) of G such that dim(x, y) is a subset of the atomic dimensions of G, set v y = (v x ⊕ dim(x, y)).
(b) Solve the linear system using the characterized nodes as a basis. For each linear equation e : (x = ( k i=1 y i )) where only x is not characterized, set v x = ( k i=1 v yi ).
3. In order for the characteristic vectors to reflect the orientation of the GB, i.e. the set inclusion of the characteristic vectors, it is necessary to ensure that for any pair of nodes {x, y} such that there is a path from x to y, we have v x ⊂ v y and therefore dim(x, y) = (v y ⊕ v x ) = (v y \ v x ). For each implication (x → y), add the offset v x \ v y to a set E initially empty. Then replace v n by (v n ⊕ E), for each node n.
The step 2a propagates characteristic vectors wherever possible via implications whose dimensions are subsets of atomic dimensions. The step 2b is necessary because the graph of the implications alone of a characterizable GB can be unconnected (Figure 13).
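A C++ sketch of step 2a alone could look as follows; the DimmedArc record, the atomicSubset flag and the bound MAXD are assumptions made for the example, and steps 2b and 3 of the definition are not shown.

```cpp
#include <bitset>
#include <cstddef>
#include <optional>
#include <vector>

// Each implication carries its dimension and a flag saying whether that
// dimension is a set of atomic dimensions of the GB.
constexpr std::size_t MAXD = 32;
using DimSet = std::bitset<MAXD>;
struct DimmedArc { int x, y; DimSet dim; bool atomicSubset; };

// Propagate characteristic vectors along implications whose dimension is a
// set of atomic dimensions: v_y = v_x xor dim(x, y), in either direction.
void propagateCharacteristicVectors(const std::vector<DimmedArc>& arcs,
                                    std::vector<std::optional<DimSet>>& v) {
    bool changed = true;
    while (changed) {
        changed = false;
        for (const DimmedArc& arc : arcs) {
            if (!arc.atomicSubset) continue;
            if (v[arc.x] && !v[arc.y]) { v[arc.y] = *v[arc.x] ^ arc.dim; changed = true; }
            if (v[arc.y] && !v[arc.x]) { v[arc.x] = *v[arc.y] ^ arc.dim; changed = true; }
        }
    }
}
```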
The reasoning on characterizable GBs
A characterizable GB is a theoretical object. It needs a complete reasoning to detect all the characterizable GBs contained in a GB, for example based on a complete satisfiability test, exponential in the worst case. Moreover, any GB can contain an exponential number of interleaved characterizable GBs. It is therefore not possible to enumerate all those which can be detected and to construct the characteristic vectors of their nodes. What is possible is to define a polynomial reasoning on arbitrary GBs, which behaves as if the characteristic vectors were available locally to each characterizable GB satisfying certain properties. The Gaussian propagation, presented in the paragraph 6.3, is a complete reasoning locally to each characterizable GB, if this characterizable GB verifies the following 5 properties. In the rest of the paper, the term "characterizable GB" designates a GB whose graph of implications is connected and which has these 5 properties:
1. its atomic dimensions are mutually orthogonal, as calculated and stored in the GB of dimensions of G defined in paragraph 5.3.
2. its non-atomic dimensions, calculated by the definition 11, are sets of these atomic dimensions 3. G contains at least one implication of dimension d for each atomic dimension d of G
4. for each set of implications of G {a 1 → b 1 , . . . , a k → b k } such that ( k i=1 dim(a i , b i )) = ∅, the linear equation ( k i=1 (a i ⊕ b i )) = 0 is deducible from the linear system alone.
5. all chains of implications between two nodes have the same dimension.
A characterizable GB G of atomic dimensions D = {1, . . . , n} is completed in Q n by adding and classifying the missing vertices. To add the node s of characteristic vector v s ⊆ D, we check if it already exists. Otherwise, it is created and classified in such a way as to maintain the Hasse diagram structure of the inclusion relation on the nodes. This node s is a linear function of nodes already present in G since its characteristic vector is a set of atomic dimensions of G and there is at least one implication of each atomic dimension. If s is connected to a node t of G by an implication s → t or t → s by the operation classif y (Definition 6), its dimension v s ⊕ v t is always a set of atomic dimensions of G. This dimension is calculated by the operation which dimensions an implication (definition 11). By default, it retains the property 5. If this property is violated by a new construction, the propositions 4 and 5 indicate a method to restore it locally to all characterizable GBs.
Locally to a characterizable GB, Boolean reasoning is greatly simplified. Boolean operations are reduced to set operations on finite sets of atomic dimensions and they preserve the property of being characterizable.
Chains and dimensions
In a hypercube, the chains of implications between two vertices all have the same dimension. In a GB, on the other hand, the dimensions of the chains of implications between two nodes are possibly expressed differently.
To preserve the property 5 of a characterizable GB, a restorative equation (proposition 4) may be deduced, which removes the dimensions that differentiate two chains between two nodes (proposition 5). This restorative equation contains the nodes of a set of implications. If the dimensions of these implications are mutually orthogonal, the implications become equivalences.
Proposition 4 (restorative equation). Any linear equation
( n i=1 (a i ⊕ b i )) = 0 such that (∀i ∈ {1 . . . n}, a i → b i ) and (∀{i, j} ⊆ {1 . . . n}, (dim(a i , b i ) ⊥ dim(a j , b j ))) implies (∀i ∈ {1 . . . n}, a i = b i ).
Proof. The hypotheses:
h1 = (( n i=1 (a i ⊕ b i )) = 0), h2 = (∀i ∈ {1 . . . n}, a i → b i ), h3 = (∀{i, j} ⊆ {1 . . . n}, (dim(a i , b i ) ⊥ dim(a j , b j )))
Suppose a 1 ≠ b 1 . ((a 1 → b 1 ) ∧ (a 1 ≠ b 1 )) =⇒ (¬a 1 ∧ b 1 ).
By the equivalence (2) of the proposition 1, we have (∀i ∈ {2 . . . n}, a i = b i ).
By simplifying in the equation h1, we obtain a 1 = b 1 . Contradiction.
If n = 1, we have the equations x = 0 or x = 1, which value the node x. If n = 2, we have the equations x ⊕ y = 0 or x ⊕ y = 1, i.e. x = y or x ≠ y. If x → y, the equation x ≠ y automatically transforms into the valuation (x = 0) ∧ (y = 1).
The conjunction G ∧ val expresses the application of the valuation val on the GB G.
The valuation of a GB reduces the dimension of the hypercube it induces. For example, a linear equation x = y removes from the induced hypercube the dimensions included in dim(x, y). The valuation of a GB is a temporary modification that must possibly be undone.
Basic propagation
A valuation of a GB may imply linear equations that are not deducible from the GB itself. This is the role of propagation. We write α ⤳ β for the fact that β is deduced from α by propagation. The simplest propagation exploits implications and linear equations:
Definition 16 (propagation). The propagation of a valuation on a GB consists of applying the following rules wherever possible, until there are no more modifications:
1. (x → y ∧ x = 1) ⤳ (y = 1)
2. (x → y ∧ y = 0) ⤳ (x = 0)
3. ((x = e) and e evaluates to the constant c) ⤳ (x = c), where e is an expression ( n i=1 x i ) and the x i are valued variables.
Rules 1 and 2 propagate along implication paths. Rule 3 is triggered when a linear equation has a single unvalued variable. This propagation is analogous to the propagation called BCP on the clausal representation of Boolean knowledge [START_REF] Apt | Some remarks on boolean constraint propagation[END_REF]. It therefore has a linear complexity.
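A minimal C++ sketch of this propagation on a simplified layout (arc set, equation list, optional per-node values) is given below; it illustrates definition 16 and is not the actual implementation.

```cpp
#include <optional>
#include <utility>
#include <vector>

// Arcs (x, y) encode x -> y, each equation lists the node ids whose xor is 0,
// and val[n] is the current (possibly missing) value of node n.
// Returns false when a contradiction is reached.
bool propagateValuation(const std::vector<std::pair<int, int>>& arcs,
                        const std::vector<std::vector<int>>& equations,
                        std::vector<std::optional<bool>>& val) {
    bool changed = true;
    while (changed) {
        changed = false;
        for (const auto& [x, y] : arcs) {              // rules 1 and 2
            if (val[x] == true) {
                if (val[y] == false) return false;
                if (!val[y]) { val[y] = true; changed = true; }
            }
            if (val[y] == false) {
                if (val[x] == true) return false;
                if (!val[x]) { val[x] = false; changed = true; }
            }
        }
        for (const std::vector<int>& eq : equations) { // rule 3
            int unvalued = -1, missing = 0;
            bool parity = false;
            for (int n : eq) {
                if (!val[n]) { unvalued = n; ++missing; }
                else parity = parity != *val[n];
            }
            if (missing == 0 && parity) return false;  // equation reduced to 1 = 0
            if (missing == 1) { val[unvalued] = parity; changed = true; }
        }
    }
    return true;
}
```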
The basic propagation adds to this simple propagation the exploitation of the information compiled in the GB of the dimensions of a GB G, in accordance with proposition 1. For any implication a → b of G, if a and b have the same Boolean value, i.e. if a = b, any implication c → d of G such that dim(c, d) ⊆ dim(a, b) adds c = d to the current valuation. If a = 0 ∧ b = 1, for any implication c → d of G, if dim(c, d) ⊥ dim(a, b), we also add c = d and if dim(a, b) ⊆ dim(c, d), we add c = 0 ∧ d = 1 to the current valuation.
Gaussian propagation
Let G be a GB. When a valuation val is propagated over G, its linear system S can have partially or totally valued equations. This partially valued linear system can be solved by Gaussian elimination. Gaussian elimination is complete, locally to S, for the detection of the equations 0 = 0 (tautology), 0 = 1 (contradiction), x = 0, x = 1, x = y, x ≠ y. Also, it suffices to value a set of nodes to infer by Gaussian elimination any equation implied by S all of whose nodes belong to the set (it simplifies to 0 = 0), or all except one node (it simplifies to x = constant) or two (x = y or x ≠ y). Its complexity is in O(n 2 m), for n equations and m variables.
Gaussian propagation is a loop that alternates basic propagation and Gaussian elimination until there are no more changes. Since there are a maximum of m nodes to value, its complexity is in O(n 2 m 2 ).
Local completeness of Gaussian propagation
For each implication a → b of atomic dimension of a GB G, the Gaussian propagation of the valuation (a = 0 ∧ b = 1) values all the nodes of the characterizable GBs in G which contain a and b. It is the local completeness of the Gaussian propagation. This completeness avoids having to enumerate and construct the characteristic vectors of these interleaved GBs, which are potentially exponential in number, in order to have a complete Boolean reasoning locally to each characterizable GB. The implication graph of G then contains only implications x → y where x and y are valued or where x = y. As this graph is connected, all the nodes are valued by the basic propagation.
The probing propagation
Locally to a characterizable GB, the Gaussian propagation of the valuation of an implication of atomic dimension is complete. It is possible, at a higher cost, to make the propagation of any valuation complete, locally to these characterizable GBs. It suffices, once any valuation has been propagated, to test all the implications a → b not entirely valued and of atomic dimension. We propagate a ≠ b, that is to say (a = 0 ∧ b = 1), by Gaussian propagation, which is complete. If it is contradictory, we add a = b to the valuation; otherwise, a ≠ b remains possible. This probing propagation is complete locally to the characterizable GBs which contain the nodes of the propagated valuation.
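The probing loop itself is short. In the sketch below, the Gaussian propagation is abstracted as a callable that reports whether the trial valuation (a = 0, b = 1) is consistent; this callable and the list of implications to test are assumptions made for the illustration.

```cpp
#include <functional>
#include <utility>
#include <vector>

// Every implication a -> b that is not entirely valued is tested by
// propagating a = 0, b = 1; a contradiction forces a = b.
std::vector<std::pair<int, int>> probeImplications(
        const std::vector<std::pair<int, int>>& notFullyValued,
        const std::function<bool(int, int)>& consistentWithAZeroBOne) {
    std::vector<std::pair<int, int>> forcedEqual;
    for (const auto& [a, b] : notFullyValued)
        if (!consistentWithAZeroBOne(a, b))     // (a = 0, b = 1) is contradictory
            forcedEqual.push_back({a, b});      // therefore a = b must be added
    return forcedEqual;
}
```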
Proposition 7. In a characterizable GB, the probing propagation of any valuation is complete.
Proof. Let G be a characterizable GB and V al a valuation of G. Let x be a node of G such that (G ∧ V al) =⇒ (x = v), where v ∈ {0, 1}, but not deduced by the Gaussian propagation of V al. Let y be a node of G such that (y = v) ∈ V al. There is at least one because only a valuation containing a valued node v can imply x = v. In G, x and y are two different nodes. In G∧V al, x = y is deducible, but not by Gaussian propagation.
In the connected graph of implications of G, there is a chain of implications between x and y, some of which are equivalences in G ∧ V al. The probing propagation infers all these equivalences because the Gaussian propagation is complete on G (proposition 6). The sum ⊕ of the dimensions of the remaining implications is empty because x = y is deducible from G ∧ V al. By simplifying the equation e with these equivalences, we obtain the equation a 1 = b k , ie x = y. This equation is therefore derivable from the simplified linear system of G ∧ V al. Gaussian elimination is complete to deduce these equations. See figure 14 for an illustration.
Figure 14: This GB is characterizable. If it is contained in a larger GB, the probing propagation is complete locally to it. The calculation of the dimensions infers that its atomic dimensions {1, 2, 3, 5} are mutually orthogonal and that the dimension 4 is equal to {2, 3}. Note that the chains between x and y all have the same dimension {3, 5}. The valuation (y = 1) ∧ (z = 0) (green for 1 and red for 0) implies x = 1. The Gaussian propagation of this valuation does not deduce it. To deduce it, it suffices to test each atomic dimension by Gaussian propagation of an implication of this dimension. The implications of dimensions 3 or 5 become equivalences (in blue). There is an even number of the remaining dimensions in the two chains between x and y and therefore, their ⊕ sum is empty. The linear system, simplified by the equivalences, deduces x = 1 by Gaussian elimination. Note: it also derives the linear equation a ⊕ b = 1 (in purple), i.e. a = b, which differentiates the two equivalence classes in blue.
The probing propagation performs as many tests each triggering a Gaussian propagation as there are not entirely valuated implications. Its complexity is therefore in O(n 2 m 2 p), where p is the number of implications of the GB.
The complete classification
The characteristic vector v x of a node x of a GB is the characteristic vector of the corresponding vertex of the hypercube induced by the GB. It is a theoretical object, which is not available for the probing propagation but which allows to define a complete classification: Definition 17 (complete classification). A GB is completely classified if and only if for any pair of nodes {x, y},
((v x = v y ) ⇐⇒ (x = y)) ∧ ((v x ⊂ v y ) ⇐⇒ (x → y))
If a complete satisfiability test is used for the Boolean reasoning in all the construction operations of a GB, every GB is completely classified. If we use the probing propagation, we can only guarantee the complete classification locally to the characterizable GBs contained in a GB: Proposition 8. In a GB constructed using probing propagation, all characterizable GBs are completely classified.
Proof. By the proposition 7, the probing propagation is complete locally to each characterizable GB contained in a GB. Let x and y be two nodes of a characterizable GB contained in a GB G. If (v x ⊆ v y ) in the hypercube induced by the characterizable GB, whatever the order of construction of x and y, the classif y operation (definition 6) infers the implication x → y. If moreover (v x = v y ), the probing propagation of {x = 0, y = 1} by the operation which adds the implication x → y to G (definition 7) detects a contradiction, which produces the merging of the two nodes.
The complete propagation
The propagation can be based on the complete Boolean satisfiability test, performed by a SAT-solver. It is then complete.
To extend a partial valuation to all the nodes whose value it implies, we proceed as follows : for each node n not valued, we apply the complete satisfiability test on the conjunction of the graph, of the partial valuation and of n = 1 or n = 0, translated into conjunctive normal form. If the test detects a contradiction, the value of n is reversed. We take advantage of the graph of implications to limit these costly tests. If x → y and if x = 1 has been tested non-contradictory, it is useless to test y = 1, and dually for y = 0 and x = 0.
The complete propagation actually uses the SAT-solver Glucose [START_REF] Simon | The glucose sat-solver[END_REF], an extension of MiniSat [START_REF] Eén | An extensible sat-solver[END_REF]. In practice, the computation times of this complete propagation increase exponentially with the size of the graph.
Achievements and Prospects
The tools and reasoning described here are programmed in C++ and Objective-C under MAC OS. All the diagrams in this document were made with the interactive editor of GBs.
The GBs are a work in progress that makes it possible to study all kinds of combinations capable of providing a usable solution to manage Boolean knowledge. This project will still require a lot of experimentation to improve this management, optimize computation times and increase the deductive power of polynomial reasonings.
The construction of the GB of the dimensions of a GB or the probing propagation, for example, are polynomially bounded but in practice, they are expensive. One solution is to restrict to the equalities between dimension variables and the graph of the relation ⊥. Only the equations (a ⊕ b ⊕ c ⊕ d = 0) are added whenever it is inferred that two implications a → b and c → d have the same dimension. This solution uses Gaussian propagation, which itself uses basic propagation, which is less expensive than probing propagation.
The complete satisfiability test is too powerful for knowledge management tasks oriented towards modeling basic understandings. Used carelessly, it quickly becomes too costly. The computation time of a single test depends more on the type of circuit than on its size (some logic circuits are known to be difficult for the SAT test, like CNF formulations of the Pigeonhole principle, for example), but the computation times of the complete propagation, on the other hand, are systematically too high when the GBs reach a certain size, even if they are free of difficult sub-circuits. It is always possible to obtain more efficient and polynomially bounded solutions, from Gaussian propagation and reasoning about dimensions.
Boolean knowledge management should restrict to incomplete operations that are more efficient than complete reasoning based on a complete satisfiability test. This management has neither the capacity nor the vocation to solve the decision problems dealt with by SAT-solving. To solve a decision problem beyond the scope of management operations, specified in a partially valued GB, the GB is translated with its valuation into conjunctive normal form and a SAT-solver is used (actually the Glucose solver [START_REF] Simon | The glucose sat-solver[END_REF]). The complete satisfiability test remains an essential tool, to be triggered sparingly. We can for example trigger a single complete satisfiability test each time a node x ∨ y is created by the operation cross, if a unique sup n of x and y is present, to check if n = x ∨ y, and dually for x ∧ y.
Out of this work in progress emerged the natural language interaction program presented in [START_REF] Goossens | Modélisation d'une forme simple de compréhension[END_REF]. This program models a simple understanding capacity in the micro-world of elementary school level quadrilateral geometry. The geometric knowledge is entirely expressed in Boolean logic. The challenge behind this demonstration, partially accomplished, was to manage all the sentences expressible with the words of a mini-dictionary, as a human would. The cases of failure concern phenomena that are outside the scope of the intended understanding, such as credulity, revision of beliefs, metaphors, irony, etc., and also the limits of the expressive capacities of Boolean logic.
Acknowledgements
I thank Vincent Lesbros for his proofreading and corrections.
Conclusion
This document has presented and formalized Graphs of bipartitions, to manage Boolean knowledge, with the general objective to model understanding abilities based on symbolic reasoning.
This management is made of incomplete but efficient polynomial Boolean reasonings. These reasonings ensure a function of storage or inter-classification of Boolean knowledge, as close as possible to the induced Boolean hypercube. As knowledge is stored in the graph, these reasonings grow substructures where Boolean reasoning is complete and efficient. These intertwined substructures are potentially exponential in number. The reasoning is reduced, locally to each of these substructures, to set operations on finite sets of dimensions of the induced hypercube.
These substructures grow by deducing linear equations modulo 2 from reasoning on the dimensions. The resulting linear system is particularly useful because the associated polynomial reasoning, Gaussian elimination, is complete and efficient.
These advances in automatic Boolean knowledge management will benefit the modeling of simple forms of understanding, in natural language interaction on micro-universes described with Boolean knowledge, as illustrated in [START_REF] Goossens | Modélisation d'une forme simple de compréhension[END_REF]. These forms of understanding are based on symbolic reasoning, thus precise and rigorous. It is expected that they will resist approaches solely based on Large Language Models. LLMs bring noticeable progress in understanding natural language. Where purely symbolic approaches are rigid, too rigorous and difficult to apply on large volumes of knowledge, LLMs deal with approximate syntax and adequately model flexible interactions. However, they may fail even on intuitive knowledge focussing on structured topics. They will need to be combined with symbolic approaches and will come up against the same problems.
The language of Boolean expressions is very poor. It remains to understand how to lift the polynomial reasonings presented here, to higher orders. These reasonings are not based on an enumeration of models, in finite number at order 0 but infinite otherwise, contrary to the complete Boolean satisfiability test. This transition to higher orders is still a speculative but possible objective.
Figure 1: The Boolean hypercube Q 3 . Each vertex displays its characteristic vector. Each arc displays its atomic dimension. The chains between any two vertices where each atomic dimension appears only once, are the permutations of the sequence of dimensions of their arcs.
Definition 1 (The Boolean hypercube). 1. Q {1} is the one-dimensional Boolean hypercube named 1. It has 2 1 = 2 vertices ⊥ and ⊤ and an arc ⊥ → ⊤. v ⊥ = ∅ and v ⊤ = {1}. 2. If Q n and Q m are two Boolean hypercubes, with n ≠ ∅ and m ≠ ∅, the Cartesian product of Q n and Q m is the Boolean hypercube Q n+m . Each vertex s of Q n has 2 |m| copies in Q n+m . Each of these copies has as characteristic vector the union of v s with one of the 2 |m| subsets of the set of atomic dimensions of Q m . Each edge of Q n+m connects two vertices x and y such that v x ⊕ v y is a singleton. The arc is x → y if v x ⊂ v y and y → x otherwise.
Figure 2: The Boolean hypercube Q 5 is the Cartesian product of Q 3 and Q 2 . It contains 2 2 = 4 copies (in blue) of the cube Q 3 . These cubes are connected by 2 3 = 8 copies of the square Q 2 . Each square contains one vertex of each of the 4 cubes. The 4 vertices of a square are positioned identically in each of the cubes.
Figure 3: The vertices a and b of the square Q 2 on the left express the other two (c = a ∨ b and d = a ∧ b). The vertices d, a and c of the middle square Q 2 express the local complement b = ((c ∧ ¬a) ∨ d) of a between d and c. The vertices a, b and c of the cube Q 3 (on the right) express the other vertices, including (a ∨ b ∨ c), under the constraint (a ∨ b) = (a ∨ c) = (b ∨ c) = (a ∨ b ∨ c).
Figure 4: The Boolean hypercube Q 5 (on the left) contains 8 copies (in blue) of the square induced by the vertices a and b. Each square is an equivalence class of the relation defined by a = b. The cube (on the right) is the quotient of Q 5 by this equivalence relation. The dimensions 1 and 3 have been removed from Q 5 , to obtain the cube of the remaining dimensions 2, 4 and 5.
Figure 5: The dimension of the pair {a, b} is {1, 3}. The valuation (a = 0) ∧ (b = 1) "cancels" the dimensions 2, 4 and 5 everywhere in Q 5 and therefore transforms into equivalences all the arcs having these dimensions. Q 5 is the Cartesian product of Q {2,4,5} and Q {1,3} . Each copy of Q {2,4,5} in Q 5 has all its vertices equivalent. Vertices valued 1 are in green and vertices valued 0 in red.
We express for example s = a ∨ b and i = a ∧ b from four implication paths, from a to s, from b to s, from i to a and from i to b, and with the equation (a ⊕ s) = (i ⊕ b). The complement d of b between a and c, assuming a → b and b → c, is expressed as d = (a ⊕ b ⊕ c). The graph of implications between nodes and the modulo 2 linear equations make it possible to construct all the Boolean expressions of a basis of nodes. The basic building block of a GB is the bipartition:
Definition 2. A bipartition is a quadruple a, b, c, d , where a, b, c, d are Boolean variables verifying (a → c) ∧ (b → c) ∧ (d → a) ∧ (d → b) and a ⊕ b ⊕ c ⊕ d = 0. The constraints of the bipartition on its four variables a, b, c, d make c = a ∨ b and d = a ∧ b. From two variables a and b, a bipartition thus expresses both a ∨ b and a ∧ b. Also, the variable b is the local complement of a between c and d. See figure 6.
Figure 6: Unlike hypercubes (figure 3), the implications between the nodes of a GB are not enough to represent Boolean expressions. Linear equations are needed. The equation a ⊕ b ⊕ c ⊕ d = 0 in the two squares on the left is represented by the link consisting of a circle connected to the four variables of the equation. The dotted connection designates the defined variable of the equation in the solved system. These two squares on the left represent the bipartition a, b, c, d . The cube on the right contains 4 equations, which define linearly the nodes a, b, c and a ∨ b according to the basis of the nodes a.b, a.c, b.c and (a.b).c. The basis {a, b, c} is insufficient to define the other nodes linearly but sufficient to define them as Boolean expressions.
Adding an implication to a GB G must preserve the Hasse diagram structure of the implication graph. This operation recognizes existing nodes and removes redundant implications : Definition 7. add the implication (a → b) to a GB G = If a ∧ b is contradictory, definitely impose the constraint a = b on G. Otherwise, if there is no path from a to b, add a → b to the set of implications of G and remove redundant implications to maintain the Hasse diagram.
the nodes representing the atomic dimensions have an even number of occurrences in the E i . So (E 1 ⊕ · · · ⊕ E k ) = 0 and the deducible equation becomes (x 1 ⊕ · · · ⊕ x k ) = 0.
Figure 8: The GB G 4 represents Q 4 . The basis {1, 2, 3, 4, ∅} is in red. The node ∅ is the inf of G 4 and its sup is the node 1234. The 2^4 - (4 + 1) = 11 nodes outside the basis are defined by the 11 linear equations. The node 12 is defined by the equation 12 = 1 ⊕ 2 ⊕ ∅. The node 124 is defined by the equation 124 = 1 ⊕ 2 ⊕ 4. All equations have a number of variables which is even and greater than or equal to 4.
Figure 9: The reasoning on dimensions deduces a new linear equation. The operation cross(b, M ) on the left GB produces the middle GB, which has 3 linear equations. We then have ((a ∨ M ) ∧ (b ∨ M )) = M and ((a ∧ M ) ∨ (b ∧ M )) = M but this is not deducible linearly. Calculating the dimensions deduces an additional linear equation and produces the right GB, where the reasoning based on Boolean value propagation and Gaussian elimination is complete.
Figure 10: The GB G {1,2,3,4} , or G 4 , is the Cartesian product of G {1,2} and G {3,4} . Each implication displays its dimension. Each node displays its characteristic vector, in red. The 4 copies of G {1,2} (in blue) are the equivalence classes of the relation defined by the dimension set {1, 2}. The 4 copies of G {3,4} , of dimensions 3 and 4, are associated with each of the 4 subsets of {1, 2}. The copy of G {3,4} in bold is associated with the singleton 2.
If we remove the atomic dimensions of a set E, we remove an even number of occurrences of each dimension from each equation, so each remaining dimension still occurs an even number of times in each equation. Each simplified equation is therefore deducible from G n , by the proposition 2. Each implication a → b of a GB G defines an equivalence relation on the nodes of G. This relation removes the dimensions out of dim(a, b), i.e. the set of atomic dimensions of n \ dim(a, b), where n is the set of dimensions of G n . Let e : (x 1 ⊕ · · · ⊕ x k ) = 0 be a linear equation deducible from G and simplified by this equivalence relation. Each x i belongs to a different equivalence class.
Figure 11: To calculate the dimension of the implication alpha → beta in the case n > 1, it suffices to copy between alpha
Figure 12: Three characterizable GBs, whose dimensions are mutually orthogonal. The GB on the left is interpreted as a graph which stores the subsets of the set {a, b, c}. This set is a basis of atoms: (a ∧ b) = (a ∧ c) = (b ∧ c) = ∅. The middle graph is the dual construction, with the co-atom basis {a, b, c}: (a ∨ b) = (a ∨ c) = (b ∨ c) = 1. The graph on the right is a mixed construction, without a basis of atoms or co-atoms but whose dimensions are mutually orthogonal.
Figure 13: This GB is not characterizable. Dimensions 4 and 6 are orthogonal because they are on a common path but 4 and 3 are not orthogonal. The sub-GB containing the nodes {a, b, c, h, g, e} and their implications is characterizable. It contains the basis of mutually orthogonal dimensions {2, 4, 5, 6}, which makes it possible to characterize the nodes {a, b, c, h, g} and the node e is expressed linearly in this basis: e = a ⊕ c ⊕ g.
1. (x → y ∧ x = 1) ⟹ (y = 1)
2. (x → y ∧ y = 0) ⟹ (x = 0)
3. ((x = e) and e evaluates to the constant c) ⟹ (x = c)
For any implication a → b of G: if a and b have the same Boolean value, i.e. if a = b, then any implication c → d of G such that dim(c, d) ⊆ dim(a, b) adds c = d to the current valuation. If a = 0 ∧ b = 1, then for any implication c → d of G, if dim(c, d) ⊥ dim(a, b) we also add c = d, and if dim(a, b) ⊆ dim(c, d) we add c = 0 ∧ d = 1 to the current valuation.
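Rules 1 and 2 above amount to a fixpoint computation over the implication arcs; a minimal Python sketch (illustrative names only, without the dimension-based deductions) is:

```python
def propagate(implications, val):
    """Propagate 0/1 values through implications x -> y.
    implications: iterable of (x, y) pairs; val: dict node -> 0/1 (partial valuation)."""
    changed = True
    while changed:
        changed = False
        for x, y in implications:
            if val.get(x) == 1 and val.get(y) != 1:
                if val.get(y) == 0:
                    raise ValueError("contradiction on %r" % (y,))
                val[y] = 1
                changed = True
            if val.get(y) == 0 and val.get(x) != 0:
                if val.get(x) == 1:
                    raise ValueError("contradiction on %r" % (x,))
                val[x] = 0
                changed = True
    return val

# Example: a -> b, b -> c with a = 1 forces b = c = 1.
print(propagate([("a", "b"), ("b", "c")], {"a": 1}))
```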
Proposition 6 (local completeness). In a characterizable GB, the Gaussian propagation of the valuation (a = 0 ∧ b = 1) of any implication a → b of atomic dimension values all the nodes of the GB.
Proof. Let a → b be an implication of atomic dimension of a characterizable GB G, valued with (a = 0 ∧ b = 1). For any implication c → d of G:
If dim(c, d) = dim(a, b), the equation (a ⊕ b ⊕ c ⊕ d = 0) is deducible from the linear system of G by the property 4 of a characterizable GB. The valuation (a = 0 ∧ b = 1) simplifies this equation to c ⊕ d = 1, an equation for which the Gaussian elimination is complete. It is automatically transformed into the valuation (c = 0 ∧ d = 1) because c → d.
If c → d has a different dimension:
If it is an atomic dimension, dim(c, d) ⊥ dim(a, b) is noted in the GB of the dimensions of G and the basic propagation deduces c = d.
If it is a non-atomic dimension E = {d 1 . . . d k }, where the d i are mutually orthogonal atomic dimensions of G: by the property 3 of a characterizable GB, G contains a set {x 1 → y 1 , . . . , x k → y k } of implications of respective dimensions {d 1 . . . d k }. By the property 4, the equation e : ((x 1 ⊕ y 1 ) ⊕ · · · ⊕ (x k ⊕ y k )) = (c ⊕ d) is deducible from the linear system of G, even if the Gaussian elimination is not complete to deduce it in all cases.
If dim(a, b) ∈ E, one of the implications x i → y i has the same dimension as a → b. This implication verifies (x i ⊕ y i ) = (a ⊕ b). For the other x j → y j , the basic propagation deduces x j = y j . With these equalities, the equation e simplifies to (a ⊕ b) = (c ⊕ d). The valuation (a = 0 ∧ b = 1) simplifies it to (c ⊕ d) = 1. Gaussian elimination is complete to deduce these equations. It simplifies to (c = 0 ∧ d = 1).
Otherwise, dim(a, b) ⊥ E. From the valuation (a = 0 ∧ b = 1) and from the GB of the dimensions of G, the basic propagation deduces (x 1 = y 1 ) ∧ · · · ∧ (x k = y k ). The equation ((x 1 ⊕ y 1 ) ⊕ · · · ⊕ (x k ⊕ y k )) = (c ⊕ d) simplifies to c ⊕ d = 0, i.e. c = d, for which Gaussian elimination is complete.
Let e : ((a 1 ⊕ b 1 ) ⊕ · · · ⊕ (a k ⊕ b k )) = 0 be the equation where the a i → b i are the implications of the chain which are not equivalences in G ∧ V al. By the property 4 of a characterizable GB, e is derivable from the linear system of G because the sum ⊕ of its dimensions is empty. By ordering these implications a i → b i as they appear in the chain from x to y, we have in G ∧ V al the following equivalences: x = a 1 , b k = y and (∀i ∈ {1, . . . , k - 1}, b i = a i+1 ).
The same reasoning from any (a i ∧ b i ) leads to the same contradiction. So (h1 ∧ h2 ∧ h3) =⇒ (∀i ∈ {1 . . . n}, a i = b i ).
Proposition 5. In a characterizable GB, all the chains of implications between two nodes have the same dimension.
Proof. Let G be a characterizable GB. Let α and β be two nodes of G. Suppose that there are two implication chains in G between α and β, of dimensions E and F respectively. In other words, the dimension between α and β is calculated in two different ways. Let us show that E = F . Let {P, Q, R} be a tripartition of E ∪ F , with P = E ∩ F , Q = E \ F and R = F \ E.
The atomic dimensions in P , Q and R are all different and mutually orthogonal (property 1 of a characterizable GB). The dimensions of P are the only common dimensions of E and F . We have E = P + Q and F = P + R. By the property 3 of a characterizable GB, G contains at least one implication of dimension d for each atomic dimension
By the property 4 of a characterizable GB,
The sum of these two equations is : ((
Gaussian elimination is not complete for deducing a restorative equation, as described in proposition 4, when it is deducible from the linear system. As the proof of this proposition indicates, to check whether a characterizable GB satisfies the property 5, one must check for each of its implications a → b if the expression (a ∧ b) is contradictory, i.e. whether a = b is valid.
The propagation of valuations
The reasoning on the dimensions, which is used to deduce new equations and to classify the nodes created by the operations cross and complement, is based on Boolean reasoning. These reasonings are complete if we use a complete Boolean satisfiability test. To avoid the risks of combinatorial explosions, undesirable for knowledge base management operations, one can develop incomplete and polynomially bounded methods, which propagate constraints from valuations.
The Gaussian elimination is complete locally to what is expressed linearly from any basis of nodes of a GB. It is a linear reasoning that spontaneously detects linear equations that remove contradictory nodes and merge equivalent nodes.
To go further, we can trigger the propagation of a partial valuation of the GB. A valuation removes certain dimensions from the induced hypercube and allows deductions beyond the reach of linear reasoning.
Valuations
A valuation is a system of linear equations on the nodes of a GB: Definition 15. A valuation of a GB G is a set (or a Boolean conjunction) of linear equations modulo 2 of the form ((x 1 ⊕ · · · ⊕ x n ) = constant) with constant ∈ {0, 1}. The x i are nodes of G.
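Since a valuation in the sense of Definition 15 is just a linear system over GF(2), its propagation can be organised around a standard Gaussian elimination; the Python sketch below is only an illustration (each equation is represented as a set of nodes plus a right-hand bit, which is not necessarily the data structure of an actual implementation).

```python
def gauss_gf2(equations):
    """equations: list of (set_of_nodes, rhs) meaning XOR of the nodes = rhs (0 or 1).
    Returns a list of pivot rows (pivot, support, rhs) or raises on an inconsistency."""
    pivots = []                        # (pivot node, support set, rhs)
    for support, rhs in equations:
        support = set(support)
        for p, psup, prhs in pivots:   # eliminate the already-chosen pivots
            if p in support:
                support ^= psup
                rhs ^= prhs
        if not support:
            if rhs == 1:
                raise ValueError("inconsistent system: 0 = 1")
            continue                   # redundant equation
        pivots.append((min(support), support, rhs))
    return pivots

# x1 + x2 = 1 and x2 = 1 force x1 = 0 (sums are modulo 2).
print(gauss_gf2([({"x1", "x2"}, 1), ({"x2"}, 1)]))
```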
00410059 | en | [
"shs.eco"
] | 2024/03/04 16:41:20 | 2009 | https://shs.hal.science/halshs-00410059/file/chevallier_carbonrisk.pdf | Julien Chevallier
email: [email protected].
Energy Risk Management with Carbon Assets
Keywords: C61, G11, Q40 Mean-variance optimization, Portfolio frontier analysis, CAPM, CO 2, Carbon, Energy, Bonds, Equity, Asset Management, EU ETS, CERs
This article proposes a mean-variance optimization and portfolio frontier analysis of energy risk management with carbon assets, introduced in January 2005 as part of the EU Emissions Trading Scheme. In a stylized exercise, we compute returns, standard deviations and correlations for various asset classes from April 2005 to January 2009. Our central result features an expected return of 3% with a standard deviation < 0.06 by introducing carbon assets -carbon futures and CERs-in a diversified portfolio composed of energy (oil, gas, coal), weather, bond, equity risky assets, and of a riskless asset (U.S. T-bills). Besides, we investigate the characteristics of each asset class with respect to the alpha, beta, and sigma in the spirit of the CAPM. These results reveal that carbon, gas, coal and bond assets share the best properties for composing an optimal portfolio. Collectively, these results illustrate the benefits of carbon assets for diversification purposes in portfolio management, as the carbon market constitutes a segmented commodity market with specific risk factors linked to the EU Commission's decisions and the power producers' fuel-switching behavior.
Introduction
Carbon assets, created on January 1, 2005 as part of the European Union Emissions Trading Scheme (EU ETS)1 , present very peculiar characteristics which are worth of investigation for portfolio management purposes. The determinants of CO 2 prices are indeed linked to other energy markets (brent, gas, coal), and institutional events, as highlighted in previous literature [START_REF] Christiansen | Price determinants in the EU emissions trading scheme[END_REF], [START_REF] Kanen | Carbon Trading and Pricing[END_REF], [START_REF] Mansanet-Bataller | CO 2 Prices, Energy and Weather[END_REF], [START_REF] Alberola | Price drivers and structural breaks in European carbon prices 2005-2007[END_REF]). Among these fundamentals, the fuel-switching behaviour of power operators, and the amendments to the scheme brought by the European Commission are key to understand the factors that drive the underlying price changes of carbon assets [START_REF] Convery | The European Carbon Market in Action: Lessons from the First Trading Period[END_REF], Delarue et al. (2008), [START_REF] Ellerman | A Top-down and Bottom-up look at Emissions Abatement in Germany in response to the EU ETS[END_REF], [START_REF] Chevallier | Risk aversion and institutional information disclosure on the European carbon market: a case-study of the 2006 compliance event[END_REF]). Last but not least, carbon assets seem to exhibit a weak link with macroeconomic risk factors, be it with industrial production as a proxy of GDP (Alberola et al. (2009a), Alberola et al. (2009b)), or with stock and bond indices as a proxy of macroeconomic changes [START_REF] Chevallier | Carbon futures and macroeconomic risk factors: A view from the EU ETS[END_REF]). Thus, the investigation of the interrelationships between carbon assets and energy variables on the one hand, and with stock and bond variables on the other hand, appears of particular importance for asset management.
In this article, we develop a stylized exercise to investigate the characteristics of energy, weather, bond and equity assets in terms of diversification for portfolio management. Indeed, one of the key insights of asset management to present [START_REF] Bodie | Investments[END_REF], [START_REF] Berk | Corporate Finance[END_REF]) is that diversification can reduce risk substantially. The main logic behind composing a portfolio not only with bonds and equities, but also with energy commodities, is to achieve a lower level of risk. A diversified portfolio may achieve a lower level of risk, because its individual asset components do not always move together. Besides, diversification does not necessarily reduce expected return. By including relevant asset classes, the goal of portfolio management consists in raising the expected return.
The literature on portfolio management with carbon assets is still very sparse. [START_REF] Hasselknippe | Managing carbon risks: A commodities market perspective[END_REF] first developed commodities market perspectives with respect to managing carbon risks. [START_REF] Kristiansen | Carbon Risk Management[END_REF] detail the key factors in carbon pricing, the fuel-mix and electricity prices that are relevant to include in carbon risk management, based on several country studies. To our best knowledge, only [START_REF] Mansanet-Bataller | CO 2 Prices and Portfolio Management[END_REF] have investigated the empirical question between CO 2 prices and portfolio management. The authors investigate the properties of CO 2 prices for Phase I (2005-2007) and Phase II (2008-2012) of the EU ETS, coupled with other energy (brent, natural gas), and bond variables. Their main findings consist in showing that including CO 2 Phase I and Phase II prices can improve the investment opportunity set for an investor that initially invests in traditional assets.
Our results depart from previous literature (i) by providing insights into each class of asset's expected returns using the Capital Asset Pricing Model (CAPM, [START_REF] Merton | An Intertemporal Capital Asset Pricing Model[END_REF]); (ii) by applying the mean-variance optimization approach to a wider range of assets including energy, weather, bond and equity variables; and (iii) by extending the study period contained in our database from April 2005 to January 20092 . To investigate the properties of the new carbon asset class in terms of portfolio management, we adopt the basic framework of the CAPM and mean-variance optimization, knowing that many developments have occurred in the field3 .
More precisely, our regression analysis shows that carbon, gas, coal, and bonds assets appear particularly suitable for asset management. Our mean-variance optimization analysis shows that a global portfolio composed of energy (including carbon), weather, bond, equity risky assets and a riskless asset (U.S. T-Bills) may achieve a level of standard deviation < 0.06 for an expected return of 3%.
The composition of the globally diversified portfolio studied in this article unfolds as follows. Among carbon assets, we consider mainly futures carbon prices4 . We also retain Certified Emissions Reductions (CERs) credits5 . The reason behind this choice is that CERs may add diversification to a portfolio due to their fungibility with other international ETS than the EU ETS. Among energy assets, we select oil, natural gas, and coal prices. Among traditional assets, we retain bonds and equities. Finally, we choose to incorporate weather derivatives products in the composition of our portfolio, since they offer opportunities to hedge the risks attached to temperatures changes, and thus increases/decreases in CO 2 emissions, and appear as a complementary asset to carbon.
The remainder of the article is organized as follows. Section 2 presents the data used.
Section 3 examines asset management strategies with energy, weather, bond and equity variables. Section 4 details the optimal portfolio composition. Section 5 concludes.
Data
This section discusses the source of each time-series chosen for energy, weather, bond and equity variables, as well as the robustness checks implemented.
Source and descriptive statistics
The source of the data is Thomson Financial Datastream and Reuters, unless otherwise indicated. The various asset classes that we examine in this article are detailed in Tables 1 to 3 (see the Appendix) which provide the expected returns and standard deviations (Table 1); the correlation matrix (Table 2); and descriptive statistics (Table 3) for the energy, weather, bond and equity assets. The expected return and standard deviation for each asset class are used in Section 4 for the composition of the optimal portfolio. When each asset class is examined as part of a portfolio, we measure asset risk by the covariance between asset return and the return on the market portfolio.
Insert Figure 1 about here
The returns for energy and weather assets are displayed in Figure 1.
Insert Figure 2 about here
The returns for bond and equity assets are presented in Figure 2.
According to the matrix of cross-correlations between sector variables reported in Table 2, no simple correlation is over around 60% in absolute value. Since it is possible to have low correlations together with colinearity, we have investigated the presence of multicolinearity by computing the variance inflation factors of the explanatory variables. These calculations did not reveal serious problematic multicolinearities6 .
For carbon assets, it is worth emphasizing in Table 3 that the kurtosis coefficient is by far higher than 3, which is the value of the kurtosis coefficient for the normal distribution. This excess kurtosis denotes a high likelihood of outliers. Second, the skewness coefficient is different from zero and negative, which highlights the presence of asymmetry.
Let us detail in the next section the time-series used for each asset class considered in this article.
Energy, weather, bond and equity assets
For bonds, we retain the ECB 5-year Euro Benchmark Bond. For equities, we consider the Euronext 100 Price Index. Both bond and equity variables have been chosen for their ability to track changes in global market trends. For both of these variables, we use daily closing prices from December 31, 2004 to January 21, 2009. As is standard in the financial literature [START_REF] Bodie | Investments[END_REF], [START_REF] Berk | Corporate Finance[END_REF]), we have considered for the riskless asset daily closing prices on the one-month U.S. Treasury Bill (T-Bill) 10 from April 1, 2005 to January 21, 2009.
In the next section, we discuss several robustness checks implemented.
Sensitivity tests
The purpose of this section is to demonstrate that the results obtained in Sections 3 and 4 are not sensitive to the choice of the time-series for energy, weather, bond and equity assets.
As sensitivity tests, we have considered the ECX December 2009 contract for carbon prices, the ECX CER Futures for CERs prices, the European Energy Exchange (EEX) off-peak electricity price, the London brent crude oil price, the natural gas-Henry Hub price, the daily coal futures Month Ahead price CIF ARA, the ICE Weather Futures contract, the Bond Schatz and Bond Bulb treasury bills, the Dow Jones EuroSTOXX 50 Price Index, and the Standard & Poor's Euro Price Index. By implementing these alternative robustness tests, the results commented below were not materially affected11 .
7 This choice of a carbon futures contract for delivery during Phase II of the EU ETS is motivated by the erratic behaviour of carbon prices during Phase I due to the banking restrictions implemented between Phase I and Phase II (Alberola and Chevallier (2009)).
8 To ensure that all energy prices are traded with the same currency, we converted dollars to euros using the European Central Bank daily exchange rate (available at http://www.ecb.int ).
9 As for oil products, the prices of such weather derivatives contracts have been converted to euro using the €/$ exchange rate by the ECB.
10 Data for the T-Bill rate may be obtained at http://research.stlouisfed.org/fred2/
We discuss in the next section how to implement asset management strategies with the energy, weather, bond and equity assets contained in our database.
Asset management with energy, weather, bond and equity variables
This section reviews how to choose a portfolio composed of energy commodities, weather derivatives, bonds, and equities. We detail how expected returns are determined, and how they are related to energy risk management.
Expected excess return and market risk premium
Let N be the total number of risky assets. The excess return of asset n may be defined as:
R_n - R_f    (1)
with R n the return on the risky asset, and R f the return on the riskless asset.
Then, let us define the expected excess return of asset n as:
E(R_n) - R_f    (2)
where E(.) denotes the expected value of the asset's return.
Next, we define the market portfolio as the value-weighted portfolio of the N risky assets:
Σ_{n=1}^{N} P_n s_n    (3)
with P n the price of one asset share, and s n the total number of shares.
Similarly, the weight of asset n in the market portfolio is:
(P_n s_n) / (Σ_{n=1}^{N} P_n s_n)    (4)
Now, let R M be the return on the market portfolio. The market risk premium may be computed as:
E(R_M) - R_f    (5)
i.e. as the expected excess return of the market portfolio.
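A toy numerical illustration of eqs. (1)-(5) in Python, with made-up prices, share counts and expected returns (none of these numbers come from the article's data):

```python
import numpy as np

# hypothetical prices, shares outstanding and expected returns for 3 risky assets
P = np.array([20.0, 50.0, 10.0])         # price per share, P_n
s = np.array([1000, 200, 5000])          # number of shares, s_n
ER = np.array([0.04, 0.06, 0.03])        # E(R_n)
Rf = 0.01                                # riskless rate

cap = P * s                              # P_n * s_n
w = cap / cap.sum()                      # eq. (4): value weights
excess = ER - Rf                         # eq. (2): expected excess returns
ER_market = w @ ER                       # expected return on the market portfolio
premium = ER_market - Rf                 # eq. (5): market risk premium
print(w, excess, premium)
```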
Following these basic definitions, we recall in the next section how to measure various types of risks involved in portfolio management.
Systematic and idiosyncratic risks
Based on the CAPM12 [START_REF] Merton | An Intertemporal Capital Asset Pricing Model[END_REF]), when studying asset returns we may estimate the following regression equation:
R_n - R_f = α_n + β_n (R_M - R_f) + ε_n    (6)
where the variation of asset n may be decomposed into:
-the systematic risk β n (R M -R f ), i.e. the risk perfectly correlated with the market portfolio. This type of risk affects all assets, such as macroeconomic shocks affecting the economy.
-the idiosyncratic risk ε n , i.e. the risk uncorrelated with the market portfolio. This type of risk affects only one asset. For equities for instance, idiosyncratic risk corresponds to events affecting only a particular company or industry.
From there, we may derive three characteristics of an asset:
1. the beta measures the asset's sensitivity to market movements13 :
β_n = Cov(R_n, R_M) / V(R_M)    (7)
2. the alpha measures the asset's attractiveness; 3. the sigma is the standard deviation of ε n , i.e. the idiosyncratic risk.
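Given return histories, the three quantities can be estimated directly from eqs. (6)-(7); the following Python sketch regresses simulated excess returns on simulated market excess returns (all inputs are synthetic, not the article's series).

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000
rm = rng.normal(0.0004, 0.01, T)            # market excess returns (simulated)
eps = rng.normal(0.0, 0.02, T)              # idiosyncratic shocks
rn = 0.0001 + 0.8 * rm + eps                # asset excess returns, true beta = 0.8

beta = np.cov(rn, rm)[0, 1] / np.var(rm, ddof=1)   # eq. (7)
alpha = rn.mean() - beta * rm.mean()               # OLS intercept
sigma = np.std(rn - alpha - beta * rm)             # idiosyncratic volatility
print(round(alpha, 5), round(beta, 3), round(sigma, 4))
```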
In the next section, we provide an estimate of betas, alphas and sigmas for all types of risky assets considered as part of our globally diversified portfolio.
Regression analysis
Taking expectations of eq.( 6),
E(R_n) - R_f = α_n + β_n (E(R_M) - R_f)    (8)
we may get some insights on the alpha, beta and market risk premium of each asset class contained in our database.
To take into account the heteroskedasticity present in the excess returns of financial and energy assets alike, we implement the following ARCH(1,1) model [START_REF] Engle | Autoregressive Conditional Heteroscedast-icity with Estimates of the Variance of U.K. Inflation[END_REF]):
E(R_t) - R_f = α_0 + α_1 (E(R_{t-1}) - R_f) + β_n (E(R_M) - R_f) + ε_t
σ_t² = ω + φ ε²_{t-1}    (9)
with a Gaussian innovation distribution, as is standard in the financial economics literature [START_REF] Hamilton | Time Series Analysis[END_REF]). R t is the return on the asset price, R t-1 is a proxy for the mean of R t conditional on past information, and ε t is the error term. Eq. ( 9) is estimated by Quasi Maximum Likelihood (QML, [START_REF] Gourieroux | Pseudo maximum likelihood methods: Theory[END_REF]). The estimate covariance matrix is estimated using the BHHH matrix [START_REF] Berndt | Estimation and Inference in Nonlinear Structural Models[END_REF]).
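As a hedged illustration of the estimation step, the sketch below fits only the variance part of eq. (9) (an ARCH(1) on simulated residuals) by quasi maximum likelihood with a generic optimiser; it is a stylised stand-in, not the exact specification or software used in the article.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
T = 2000
eps = np.zeros(T)
sig2 = np.full(T, 0.2)
for t in range(1, T):                         # simulate an ARCH(1): omega = 0.2, phi = 0.5
    sig2[t] = 0.2 + 0.5 * eps[t - 1] ** 2
    eps[t] = np.sqrt(sig2[t]) * rng.standard_normal()

def neg_loglik(params, e):
    omega, phi = params
    if omega <= 0 or phi < 0:
        return np.inf
    s2 = omega + phi * np.concatenate(([e.var()], e[:-1] ** 2))
    return 0.5 * np.sum(np.log(s2) + e ** 2 / s2)   # Gaussian QML objective

res = minimize(neg_loglik, x0=[0.1, 0.1], args=(eps,), method="Nelder-Mead")
print(res.x)          # estimates should be close to (0.2, 0.5)
```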
Estimation results
Eq.( 8) is estimated for each asset class composing our portfolio of energy, weather, bond and equity variables using the ARCH modelling structure detailed in eq.( 9). Estimation results may be found in Tables 4 to 11 (see the Appendix).
We comment below Tables 4 to 11 with respect to the values of the alpha, beta, and sigma coefficients. However, it should be kept in mind that an asset's expected return depends on the asset's risk through the asset's beta (i.e. the systematic risk), and not through the asset's sigma (i.e. the idiosyncratic risk). The basic insight of the CAPM is indeed that the systematic risk -and not the idiosyncratic risk -is priced by the market. In other words, the relevant measure of risk is beta, and not the variance.
In Table 4 (see the Appendix), we notice that the alpha coefficient for carbon excess returns is statistically significant at the 1% level. This result highlights the carbon asset's attractiveness for the composition of the optimal portfolio. On the contrary, the carbon asset's sensitivity to market movements beta is not statistically significant. The sigma coefficient is equal to 0.61. This value reveals a medium level of idiosyncratic risk for carbon assets.
In Table 5 (see the Appendix), we observe that the alpha coefficient for natural gas assets is not statistically significant. The beta coefficient however is significant at the 1% level and negative. This result indicates that the excess returns on natural gas prices are negatively and statistically significantly correlated with market movements. This characteristic of natural gas products appears of particular importance for diversification purposes in portfolio management. The sigma coefficient is equal to 2.77, which corresponds to a high level of idiosyncratic risk.
In Table 6 (see the Appendix), none of the alpha or beta coefficients are statistically significant for the excess returns on the electricity variable. Besides, the level for the idiosyncratic risk coefficient sigma is very high (8.97). These results suggest that due to the well-known high level of peaks in the time-series of electricity prices [START_REF] Joskow | Competitive Electricity Markets and Investment in New Generating Capacity[END_REF]), this variable does not appear particularly suitable for asset management strategies compared to the raw prices of other energy sources, such as oil, gas and coal 14 .
In Table 7 (see the Appendix), we note that the alpha coefficient for coal excess returns is statistically significant at the 1% level, which underlines this asset's attractiveness for portfolio management. Besides, the beta coefficient is also significant at the 10% level and negative. As for the natural gas variable, this result shows that coal asset's sensitivity is negatively correlated with market movements, which is of interest for diversification purposes.
The sigma coefficient is equal to 0.87, which reveals a medium level of idiosyncratic risk.
In Table 8 (see the Appendix), none of the alpha or beta coefficients for oil excess returns are statistically significant. The level of idiosyncratic risk for oil assets is in the medium range, with a value of 1.46. Due to these characteristics, oil assets do not surprisingly appear as very suitable for portfolio management either 15 .
In Table 9 (see the Appendix), we note that neither the alpha nor the beta coefficients are statistically significant for weather excess returns. Besides, the level of idiosyncratic risk is very high, with a value of sigma equal to 5.20. Thus, we may conclude based on these results that derivatives products do not appear to share the required properties for diversification and increasing returns purposes in portfolio management 16 .
In Table 10 (see the Appendix), we observe that the alpha coefficient for bonds is not statistically significant. This result is not surprising, since bonds are primarily purchased for the security of investments, and thus not for their attractiveness in terms of returns. The beta coefficient is statistically significant at the 1% level and positive. In line with the central role played by national governments in monetary policy, this result illustrates the strong link between the bond market and movements in global equity and commodity markets. The sigma coefficient is low (0.07), which confirms bond asset's interest for pooling risks.
In Table 11 (see the Appendix), none of the alpha or beta coefficients appear statistically significant for the excess return of CERs. This result does not appear especially surprising, given the high level of risks attached to the delivery of CDM credits to project developers (IETA ( 2008)). Like carbon assets, CERs carry a medium level of idiosyncratic risk, with a value of the sigma coefficient equal to 0.41.
14 Nevertheless, we choose to keep this variable in the determination of our optimal portfolio in the next section, due to the central role played by power producers on the European emissions market, which greatly influence the determination of the carbon price (Delarue et al. (2008), [START_REF] Ellerman | A Top-down and Bottom-up look at Emissions Abatement in Germany in response to the EU ETS[END_REF]). 15 However, due to the clear link between petroleum consumption and GDP [START_REF] Lutz | The Economic Effects of Energy Price Shocks[END_REF]) on the one hand, and to the fact that oil products are the most traded assets among energy commodities [START_REF] Kang | Forecasting volatility of crude oil markets[END_REF]), we choose to keep oil assets in the composition of our portfolio in the next section. 16 As for electricity and oil, weather appears as an important determinant of the price of carbon assets [START_REF] Kanen | Carbon Trading and Pricing[END_REF], [START_REF] Alberola | Price drivers and structural breaks in European carbon prices 2005-2007[END_REF]). Thus, we choose to keep this variable in the composition of our globally diversified portfolio in the next section. Among all the energy, bond and equity assets, this regression analysis indicates that the carbon, gas, coal, and bond assets share the best properties in terms of (i) sensitivity to market movements, (ii) attractiveness, or (iii) level of idiosyncratic risk to enter the optimal composition of our portfolio 17 .
To analyse the interplay between energy, weather, bond and equity assets -which is the purpose of the stylized exercise developed in this article -we decide to include all of them in the composition of our portfolio in the next section.
Mean-variance optimization and the portfolio frontier
Following the review of the properties of energy, weather, bonds, and equities assets in Section 3, we detail in this section the optimal composition of the portfolio based on meanvariance optimization and portfolio frontier analysis.
Portfolio frontier with risky assets only
In this section, we explore how to choose the optimal portfolio composed of energy, weather, bond and equity assets. This question can be addressed in two steps:
1. Among all portfolios with a given expected return, which is the portfolio with the minimum variance? This first step will give us a set of portfolios, one for each expected return. This set is called the portfolio frontier (PF), whose elements are frontier portfolios.
2. Which is the best portfolio on the PF? This answer will depend on how we trade off risk and return, and on the level of risk aversion of a specific group of investors.
Using the historical data from April 2005 to January 2009 for energy, weather, bonds and equities variables as explained in Section 2, we consider below the optimization program of choosing the global portfolio. The statistical properties of returns for all classes of assets, including expected returns, sample means, standard deviations, and correlations may be found in Tables 1 to 3 (see the Appendix) 18 . Among all portfolios that have a given expected return (E), the optimization problem consists in choosing the portfolio with the minimum variance:
V(R) = Σ_{n=1}^{N} w_n² V(R_n) + 2 Σ_{n<m} w_n w_m Cov(R_n, R_m)
Σ_{n=1}^{N} w_n = 1
E(R) = Σ_{n=1}^{N} w_n E(R_n) = E    (10)
with w n , n={1, ..., N} the portfolio weights to minimize.
17 As the CAPM implies that an asset's expected return depends on risk only through beta, this conclusion shall be read especially in the light obtained for the beta coefficient of each asset class.
18 We should always keep in mind that mean-variance optimization is only as precise as these estimates are. While the estimates for standard deviations and correlations are generally quite precise, the estimates for expected returns are quite imprecise (i.e. historical data for bonds and equities over a 75-year sample report a standard error around 2.5% [START_REF] Bodie | Investments[END_REF], [START_REF] Berk | Corporate Finance[END_REF].)
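The programme in eq. (10) has a standard closed-form solution when short sales are allowed; the Python sketch below solves it with illustrative inputs (the expected returns and covariance matrix are invented, not the article's estimates).

```python
import numpy as np

def frontier_portfolio(mu, Sigma, target):
    """Minimum-variance weights for a target expected return (eq. 10), short sales allowed."""
    ones = np.ones(len(mu))
    Sinv = np.linalg.inv(Sigma)
    A = ones @ Sinv @ ones
    B = ones @ Sinv @ mu
    C = mu @ Sinv @ mu
    lam, gam = np.linalg.solve(np.array([[A, B], [B, C]]), np.array([1.0, target]))
    return Sinv @ (lam * ones + gam * mu)

# toy inputs (illustrative only)
mu = np.array([0.03, 0.05, 0.01])
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.01]])
w = frontier_portfolio(mu, Sigma, 0.03)
print(w, w.sum(), w @ mu, np.sqrt(w @ Sigma @ w))
```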
The portfolio frontier analysis with risky assets only is displayed in Figure 3 19 .
Insert Figure 3 about here
Assuming that we only care about mean and variance, we only need to consider portfolios on the PF. By comparing portfolios on the PF in Figure 3, we may observe that the optimal portfolio achieves a standard deviation < 0.1 for an expected return around 3%. This result illustrates the benefits of diversification to reduce idiosyncratic risk by adding energy and weather variables to usual bonds and equities variables, and more particularly by managing energy risk with a new class of carbon assets. This exercise thus demonstrates that diversification outside a group of assets is more effective in reducing risk20 .
We develop in the next section a slight variation of the mean-variance optimization program by including also a riskless asset.
Portfolio frontier with a riskless asset
In this section, all frontier portfolios are combinations of the riskless asset and the tangent portfolio (TP). We use the U.S. T-bills as the riskless asset. We only need to choose the weights of the risky assets, w n , n={1, ..., N}, given that the weight of the riskless asset is:
1 - Σ_{n=1}^{N} w_n    (11)
The variance of the riskless asset is indeed equal to zero, as is its covariance with all risky assets. The optimization problem in eq.( 10) needs to be rewritten as follows:
V(R) = Σ_{n=1}^{N} w_n² V(R_n) + 2 Σ_{n<m} w_n w_m Cov(R_n, R_m)
E(R) = Σ_{n=1}^{N} w_n E(R_n) + (1 - Σ_{n=1}^{N} w_n) R_f = E    (12)
The PF is delimited by the line linking the riskless asset with the TP21 . Thus, to determine the PF, we only need to solve the optimization problem, and to draw the line linking that portfolio to the riskless asset.
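With a riskless asset, the frontier is spanned by the tangency portfolio, whose weights are proportional to Σ⁻¹(μ - R_f·1); a short Python sketch on the same illustrative (invented) inputs as above:

```python
import numpy as np

def tangency_portfolio(mu, Sigma, rf):
    """Tangency portfolio weights: proportional to Sigma^{-1}(mu - rf)."""
    raw = np.linalg.solve(Sigma, mu - rf)
    return raw / raw.sum()

mu = np.array([0.03, 0.05, 0.01])
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.01]])
w_tan = tangency_portfolio(mu, Sigma, 0.005)
print(w_tan, w_tan @ mu, np.sqrt(w_tan @ Sigma @ w_tan))
```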
The portfolio frontier analysis with a riskless asset is presented in Figure 4.
Insert Figure 4 about here
By considering the line linking the riskless asset with the points on the hyperbola22 , we notice in Figure 4 that the optimal portfolio in this configuration allows achieving a standard deviation < 0.06 for an expected return around 3%. Thus, departing from the benchmark case in Section 4.2, the inclusion of a riskless asset such as the T-Bill rate allows minimizing the variance for the same level of expected return.
We are now able to answer carefully to the question "which portfolio to choose?".
Assuming that we care only about mean and variance, we should choose indeed a portfolio on the PF. Which portfolio depends then on how we trade off risk and return, i.e. on the level of risk aversion 23 . If we are very risk-averse, we should choose a portfolio closer to the riskless asset. If we are not very risk-averse, we should choose a portfolio closer to the TP, and even above the TP. For investors as a group, the demand will be a combination of tangent portfolio and riskless asset. These comments conclude our stylized exercise of portfolio management including bonds, equities, energy and weather variables, as well as a new class of carbon assets.
Concluding Remarks
This article provides a stylized exercise to investigate the diversification benefits that may be drawn from using carbon assets in portfolio management. Apart from traditional assets, there is a need on the carbon market to take into account the interrelationships with other energy markets, weather influences, and macroeconomic conditions, as shown in previous literature [START_REF] Christiansen | Price determinants in the EU emissions trading scheme[END_REF], [START_REF] Alberola | Price drivers and structural breaks in European carbon prices 2005-2007[END_REF], [START_REF] Chevallier | Carbon futures and macroeconomic risk factors: A view from the EU ETS[END_REF]). Thus, we introduce two types of carbon assets -carbon futures and CERs -among a global portfolio composed of energy commodities (oil, coal, gas), weather derivatives, bonds and equities. Our study period goes from April 2005 to January 2009. Our results show (i) that carbon, gas, coal and bond assets share the required properties in terms of betas to compose a globally diversified portfolio, and (ii) that a global portfolio with energy (including carbon), weather, bond, equity risky assets and a riskless asset (U.S. T-Bills) achieves a level of standard deviation < 0.06 for an expected return of 3%.
Collectively, these results provide insights into the benefits of introducing carbon assets for diversification purposes in portfolio management. Unlike other energy markets which exhibit a direct link with macroeconomic conditions, risk factors on the carbon market are mainly linked to power producers' fuel-switching behaviour and institutional decision changes by the EU Commission.
Finally, portfolio management with carbon assets is yet another attempt at eliminating idiosyncratic risk among a range of diversified investments, but not systematic risk, as the recent "credit crunch" crisis has shown the dependency of all types of assets to macroeconomic shocks.
RT COAL -0,027721224 -0,006438851 -0,062186799 1
RT OIL -0,001466019 -0,026542535 0,036280144 0,008111349 1 RT WEA 0,047632133 0,002645503 0,031059249 -0,087117043 0,021932909 1
RT CER -0,050862875 0,056893671 -0,022524169 -0,018719988 -0,087943419 -0,08034643 1
RT BOND -0,037261286 -0,017078898 -0,039683893 -0,042892797 0,003310712 0,05231719 -0,054683998 1
RT EUR 0,048495721 -0,031195733 0,017930111 0,026260248 0,044474849 -0,0001236 0,018837533 0,328404693 1
Figure 1: Returns for Energy and Weather Assets
Source: Thomson Financial Datastream and Reuters
Note: ECX DEC08 refers to the ECX carbon futures contract of maturity December 2008, Gas to the natural gas variable, Elec to the electricity variable, Coal to the coal variable, Oil to the crude oil variable, WEA to the weather derivatives contract, CER to Certified Emissions Reduction credits as defined above.
Figure 2: Returns for Bond and Equity Assets
Source: Thomson Financial Datastream and Reuters
Note: BOND refers to the bond variable, and EUR to the Euronext 100 price index as defined above.
Figure 4: Portfolio Frontier Analysis with a Riskless Asset
Note: T-Bill refers to the U.S. Treasury Bills which are used as a proxy of a riskless asset.
For carbon assets, we choose the carbon futures contract of maturity December 2008 7 traded on the European Climate Exchange (ECX) from April 22, 2005 to December 15, 2008, i.e. from the opening of the ECX market to the expiration date of the 2008 carbon futures contract. The carbon prices recorded in our database are daily closing prices in €. One European
Union Allowance (EUA) is equal to one tonne of CO 2 emitted in the atmosphere. For
diversification purposes, we also consider the price of secondary CERs credits, recorded as
daily closing prices by the Reuters CER Price Index from February 2, 2008 to January 21, 2009.
One secondary CER is equal to one EUA, and thus to the same CO 2 -equivalent.
For natural gas assets, we use the Zeebrugge Natural Gas Next Month price. For the
electricity price, we use the Electricity Powernext Baseload price. For coal prices, we use the
Coal Rotterdam futures. For oil products, we use the NYMEX Crude Oil Futures. Gas and
electricity prices are traded in €/MWh. Coal prices are traded in €/ton. Oil prices are traded in
$/barrel 8 . All energy prices recorded in our database are daily closing prices from January 1,
2005 to January 15, 2009.
For weather derivatives products, we consider the Climate Futures Eco Clean Energy
Index traded on the Intercontinental Exchange (ICE). The database contains daily closing prices
from July 13, 2007 to January 21, 2009 9 .
Table 1: Expected Return and Standard Deviation for Energy, Bond and Equity Assets
                 RT ECX DEC08   RT GAS        RT ELEC       RT COAL       RT OIL       RT WEA        RT CER        RT BOND        RT EUR
Expected Return  -0,0018        0,0338        -0,0211       0,0027        0,0053       0,4321        0,01702381    2,83286E-05    -0,161643059
Std Deviation    0,614890277    2,824325206   10,78246731   0,999584701   1,41804319   5,196337234   0,399817029   0,039726132    8,767931502
Note: RT stands for returns, Std Deviation for standard deviation, ECX DEC08 for the ECX carbon futures contract of maturity December 2008, Gas for the natural gas variable, Elec for the electricity variable, Coal for the coal variable, Oil for the crude oil variable, WEA for the weather derivatives contract, CER for the Certified Emissions Reduction valid under the Kyoto Protocol, Bond for the bond variable, and EUR for the Euronext 100 price index as defined above.
Table 2: Correlation Matrix between Energy, Bond and Equity Assets
Note: RT stands for returns, Std Deviation for standard deviation, ECX DEC08 for the ECX carbon futures contract of maturity December 2008, Gas for the natural gas variable, Elec for the electricity variable, Coal for the coal variable, Oil for the crude oil variable, WEA for the weather derivatives contract, CER for the Certified Emissions Reduction valid under the Kyoto Protocol, Bond for the bond variable, and EUR for the Euronext 100 price index as defined above.
RT_BOND RT_CER RT_COAL RT_ECX RT_ELEC RT_EUR RT_GAS RT_OIL RT_TBILL RT_WEA
Mean 0.000963 0.017024 0.029840 -0.001818 -0.039664 0.085487 0.035882 0.014558 -0.001253 0.432105
Median 0.000000 0.000000 0.040000 0.020000 -1.091000 0.510000 -0.050000 -0.038158 0.000000 0.000000
Maximum 0.120000 1.530000 4.560000 3.650000 39.71300 28.39000 22.65000 11.77371 0.750000 17.00000
Minimum -0.120000 -1.310000 -3.740000 -7.400000 -34.05300 -32.65000 -21.55000 -12.84609 -0.490000 -15.59999
Std. Dev. 0.035708 0.399817 0.867270 0.614890 10.52693 8.149984 2.782299 1.468422 0.063300 5.196337
Skewness 0.071808 0.561290 -0.117488 -2.244994 0.667781 -0.417292 0.668721 -0.008155 0.956086 0.015606
Kurtosis 3.355982 5.053116 6.386975 29.79303 5.177692 4.645799 18.30428 14.43857 35.80764 3.820959
Table 3: Descriptive Statistics for Energy, Bond and Equity Asset Returns
Note: RT stands for returns, Std Deviation for standard deviation, ECX DEC08 for the ECX carbon futures contract of maturity December 2008, Gas for the natural gas variable, Elec for the electricity variable, Coal for the coal variable, Oil for the crude oil variable, WEA for the weather derivatives contract, CER for the Certified Emissions Reduction valid under the Kyoto Protocol, Bond for the bond variable, and EUR for the Euronext 100 price index as defined above.
Dependent Variable: E(R ECX )-R f
Coefficient Std. Error
α 1 0.233822*** 0.018746
α 0 0.030971*** 0.015919
β n -0.001871 0.002132
Variance Equation
ω 0.206975*** 0.008719
φ 0.517981*** 0.029673
Diagnostic Tests
R-squared 0.010182
Adjusted R-squared 0.005920
ε n 0.614371
Log likelihood 772.3144
Durbin-Watson stat 2.173891
AIC 1.664485
SC 1.690392
F-statistic 0.049384
ARCH Test 0.463494
Q(20) 32.826
Table 4: CAPM Regression Results for the ECX DEC08 carbon futures contract with an ARCH(1,1) model
Note: Bollerslev-Wooldridge robust standard errors. AIC refers to the Akaike Information Criterion, SC refers to the Schwarz Criterion, Q(20) refers to the Ljung-Box Q Statistic with a maximum number of lags of 20. The value for the F-Statistic is the p-value.
Dependent Variable: E(R GAS )-R f
Coefficient Std. Error
α 1 -0.148277*** 0.019587
α 0 -0.049021 0.036290
β n -0.015294*** 0.0001
Variance Equation
ω 3.641193*** 0.083878
φ 0.976279*** 0.058441
Diagnostic Tests
R-squared 0.011125
Adjusted R-squared 0.006867
ε n 2.775468
Log likelihood 2159.681
Durbin-Watson stat 1.927542
AIC 4.635291
SC 4.661198
F-statistic 0.034141
ARCH Test 0.437272
Q(20) 31.948
Table 5: CAPM Regression Results for the Natural Gas Variable with an ARCH(1,1) model
Note: Bollerslev-Wooldridge robust standard errors. AIC refers to the Akaike Information Criterion, SC refers to the Schwarz Criterion, Q(20) refers to the Ljung-Box Q Statistic with a maximum number of lags of 20. The value for the F-Statistic is the p-value.
Dependent Variable: E(R ELEC )-R f
Coefficient Std. Error
α 1 -0.193808*** 0.027463
α 0 -0.024556 0.252310
β n -0.005041 0.033414
Variance Equation
ω 52.71211*** 2.026832
φ 0.385487*** 0.051454
Diagnostic Tests
R-squared 0.288642
Adjusted R-squared 0.282395
ε n 8.973819
Log likelihood 3270.581
Durbin-Watson stat 1.876152
AIC 7.129523
SC 7.176718
F-statistic 0.000000
ARCH Test 0.386772
Q(20) 43.851
Table 6: CAPM Regression Results for the Electricity Variable with an ARCH(1,1) model
Note: Bollerslev-Wooldridge robust standard errors. AIC refers to the Akaike Information Criterion, SC refers to the Schwarz Criterion, Q(20) refers to the Ljung-Box Q Statistic with a maximum number of lags of 20. The value for the F-Statistic is the p-value.
Dependent Variable: E(R COAL )-R f
Coefficient Std. Error
α 1 -0.017292 0.025569
α 0 0.069475*** 0.018898
β n -0.003782* 0.002333
Variance Equation
ω 0.420828*** 0.016661
φ 0.541561*** 0.059253
Diagnostic Tests
R-squared 0.032222
Adjusted R-squared 0.007541
ε n 0.875754
Log likelihood 1125.123
Durbin-Watson stat 2.040838
AIC 2.419964
SC 2.445871
F-statistic 0.000000
ARCH Test 0.149836
Q(20) 32.806
Table 7: CAPM Regression Results for the Coal Variable with an ARCH(1,1) model
Note: Bollerslev-Wooldridge robust standard errors. AIC refers to the Akaike Information Criterion, SC refers to the Schwarz Criterion, Q(20) refers to the Ljung-Box Q Statistic with a maximum number of lags of 20. The value for the F-Statistic is the p-value.
Dependent Variable: E(R OIL )-R f
Coefficient Std. Error
α 1 -0.053927** 0.026455
α 0 -0.048182 0.038928
β n 0.004600 0.005541
Variance Equation
ω 1.304628*** 0.074932
φ 0.392536*** 0.052151
Diagnostic Tests
R-squared 0.010448
Adjusted R-squared 0.006188
ε n 1.467020
Log likelihood 1604.130
Durbin-Watson stat 2.157620
AIC 3.445675
SC 3.471582
F-statistic 0.044520
ARCH Test 0.998397
Q(20) 26.224
Table 8: CAPM Regression Results for the Oil Variable with an ARCH(1,1) model
Note: Bollerslev-Wooldridge robust standard errors. AIC refers to the Akaike Information Criterion, SC refers to the Schwarz Criterion, Q(20) refers to the Ljung-Box Q Statistic with a maximum number of lags of 20. The value for the F-Statistic is the p-value.
Dependent Variable: E(R WEA )-R f
Coefficient Std. Error
α 1 0.131258** 0.060792
α 0 0.352982 0.258217
β n -0.030425 0.027680
Variance Equation
ω 22.38411*** 1.890043
φ 0.165212** 0.077231
Diagnostic Tests
R-squared 0.008427
Adjusted R-squared 0.002178
ε n 5.205682
Log likelihood 1155.894
Durbin-Watson stat 2.039692
AIC 6.126092
SC 6.178038
F-statistic 0.052922
ARCH Test 0.861886
Q(20) 11.509
Table 9: CAPM Regression Results for the Weather Variable with an ARCH(1,1) model
Note: Bollerslev-Wooldridge robust standard errors. AIC refers to the Akaike Information Criterion, SC refers to the Schwarz Criterion, Q(20) refers to the Ljung-Box Q Statistic with a maximum number of lags of 20. The value for the F-Statistic is the p-value.
Dependent Variable: E(R BOND )-R f
Coefficient Std. Error
α 1 0.213099*** 0.011935
α 0 0.000901 0.001663
β n 0.001360*** 0.000140
Variance Equation
ω 0.002116*** 0.000123
φ 0.785840*** 0.048804
Diagnostic Tests
R-squared 0.017500
Adjusted R-squared 0.013270
ε n 0.070768
Log likelihood 1282.670
Durbin-Watson stat 2.167503
AIC 2.735911
SC 2.710004
F-statistic 0.002505
ARCH Test 0.110491
Q(20) 32.9337
Table 10: CAPM Regression Results for the Bond Variable with an ARCH(1,1) model
Note: Bollerslev-Wooldridge robust standard errors. AIC refers to the Akaike Information Criterion, SC refers to the Schwarz Criterion, Q(20) refers to the Ljung-Box Q Statistic with a maximum number of lags of 20. The value for the F-Statistic is the p-value.
Dependent Variable: E(R CER )-R f
Coefficient Std. Error
α 1 0.124443*** 0.050585
α 0 0.004425 0.026743
β n 0.001311 0.002119
Variance Equation
ω 0.104057*** 0.009300
φ 0.445578*** 0.118872
Diagnostic Tests
R-squared 0.007763
Adjusted R-squared 0.008371
ε n 0.410895
Log likelihood 118.7581
Durbin-Watson stat 2.044370
AIC 0.986121
SC 1.056349
F-statistic 0.074957
ARCH Test 0.324762
Q(20) 19.467
Table 11: CAPM Regression Results for the CER contract with an ARCH(1,1) model
Note: Bollerslev-Wooldridge robust standard errors. AIC refers to the Akaike Information Criterion, SC refers to the Schwarz Criterion, Q(20) refers to the Ljung-Box Q Statistic with a maximum number of lags of 20. The value for the F-Statistic is the p-value.
The EU Emissions Trading Scheme (EU ETS) was established in 2003 by the Directive 2003/87/EC, and launched for a trial period from 2005 to 2007. Phase II now covers 2008-2012, while the functioning of the scheme has been confirmed at least until 2020 with the end of Phase III.
[START_REF] Mansanet-Bataller | CO 2 Prices and Portfolio Management[END_REF] consider a time-period going from April 2005 to January 2008.
More particularly, we choose not to investigate the properties of carbon assets through multifactor models such as the Intertemporal CAPM (ICAPM), and arbitrage-based models (Arbitrage Pricing Theory, APT).
This choice is motivated by the non-reliable behaviour of carbon spot prices due to banking restrictions implemented between 2007 and 2008(Alberola and Chevallier (2009)).
Clean Development Mechanism (CDM) projects, introduced according to the article 12 of the Kyoto Protocol (UNFCCC (2000)), may generate Certified Emissions Reductions (CERs) credits for compliance in the EU ETS during 2008-12. The import limit is equal to 1.6 billion tonnes of offsets being allowed into the EU ETS from 2008-2020, i.e. an absolute maximum of 50% of the effort will be achievable through the CDM, coupled with quality criteria.
To conserve space, these results are not reproduced in the article and may be obtained upon request to the authors.
To conserve space, sensitivity tests are not reproduced in the article, and may be obtained upon request to the authors.
The assumptions underlying the CAPM may be summarized as follows: there are N risky asset and a riskless asset, short sales are costless, investors care only about mean and variance, investors have the same beliefs, and investors have an one-period planning horizon.
Holding all else equal, if the return on the market portfolio is higher by 1%, then the return on asset n is higher by β n .
Note that allowing short sales -selling an asset that we do not own -will only result in expanding the PF, as is standard in the portfolio management literature[START_REF] Bodie | Investments[END_REF],[START_REF] Berk | Corporate Finance[END_REF]).
Note this comment applies as long as diversification within a group of assets allows reducing, and eventually eliminating, idiosyncratic risk. However, it cannot eliminate systematic risk.
The basic insight here is that at the market equilibrium demand equals supply, and in particular the TP coincides with the market portfolio.
In this context, the portfolio frontier is indeed represented by the line with the steepest slope.
[START_REF] Chevallier | Risk aversion and institutional information disclosure on the European carbon market: a case-study of the 2006 compliance event[END_REF] demonstrate that the level of risk aversion is higher on the carbon market than on equity markets. This result is due to the high level of institutional uncertainty on this emerging commodity market. The authors however point out that the values for risk aversion on the carbon market should progressively converge to the values found on equity markets, as the formation of anticipations becomes more homogeneous among market operators. |
04100660 | en | [
"math.math-at",
"info.info-cg",
"math.math-co",
"math.math-gn",
"math.math-mp",
"math.math-nt",
"phys.hthe"
] | 2024/03/04 16:41:20 | 2023 | https://hal.science/hal-04100660/file/Sopin.pdf | Generalized graph complexes: the splitting into a complete graph
"Traditional" graph complexes 1.Explanations for the IHX-relation
If we consider a 3-regular graph G (every vertex has valency 3) and we collapse an edge e then the obtained graph G e has exactly one 4-valent vertex. As the figure below shows, there are exactly two other graphs whose edge collapse leads to the graph G e .
Figure 1: The numbers represent the four parts of the graph that get connected at e
The only axiom in the definition of a Lie algebra, in addition to the bilinearity and skewsymmetry of the Lie bracket, is the Jacobi identity. Picturing the Lie bracket as a rooted Y-tree with two inputs and one output, the Jacobi identity can be encoded by the next figure: The above explains results of Maxim Kontsevich [1 -2] in his study of the formality conjecture, which introduced the most basic graph complexes. These complexes are formed by vacuum Feynman diagrams of a topological field theory (alternatively, it governs the deformation theory of E n operads in algebraic topology [START_REF] Thomas Willwacher | Kontsevich's graph complex and the Grothendieck-Teichmüller Lie algebra[END_REF]).
Dirk Kreimer et al. [START_REF] Kreimer | Quantization of gauge fields, graph polynomials and graph cohomology[END_REF] showed how gauge theory amplitudes can be generated using only a scalar field theory with cubic interaction, i.e. all graphs relevant in gauge theory can be generated from the set of all 3-regular graphs by means of operators that label edges and cycles.
Definition
Let G be a connected graph. Let V (G), E(G) denote its set of vertices, and edges, and denote by h G : the number of loops, or genus, of G, e G : the number of edges of G, deg N G = e G -N h G : the degree of G for a fixed non negative integer N . An orientation on G is an element
η ∈ ( e G Z E(G) ) × .
If the edges of G are denoted by e 1 , . . . , e n , where n = e G , then an orientation is equal to either e 1 ∧ • • • ∧ e n or its negative. Thus, an orientation is simply an ordering of the edges up to the action of even permutations (interchanging any two edges reverses the orientation). As proved in [START_REF] Conant | On a theorem of Kontsevich[END_REF], this definition of orientation is equivalent to the definition given by Maxim Kontsevich.
Let GC n be the abstract vector space over Q spanned by equivalence classes of pairs (G, o), where G is a connected non empty graph with n vertices and o is the orientation of G.
GC = n GC n . The differential d : GC n -→ GC n-1 is given by d(G, o) = e∈E(G) (G/e, o G/e ),
where G/e is the graph obtained from G by contracting the edge e and o G/e is the induced orientation obtained by moving the edge e in the first place and then removing it.
By an edge contraction, we mean an operation of the following form (see also Figure 1). The dual differential d* acts by vertex splitting:
d * (G) = v∈V (G) split(G, v),
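A rough sketch of the differential d with the sign convention described above: the orientation is encoded as an ordered list of edges, and moving the contracted edge to the first place contributes (-1) to the power of its position (illustrative code; graphs with loops are simply skipped when contracting).

```python
def differential(ordered_edges):
    """d(G, o) as a list of (sign, contracted ordered edge list).
    ordered_edges: list of edges (u, v); the list order encodes the orientation."""
    terms = []
    for i, (u, v) in enumerate(ordered_edges):
        if u == v:
            continue                        # loops are not contracted
        sign = (-1) ** i                    # move edge i to the front, then delete it
        rest = ordered_edges[:i] + ordered_edges[i + 1:]
        relabel = lambda x: u if x == v else x
        contracted = [(relabel(a), relabel(b)) for a, b in rest]
        terms.append((sign, contracted))
    return terms

triangle = [(0, 1), (1, 2), (0, 2)]
for sign, g in differential(triangle):
    print(sign, g)
```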
where split(G, v) is the operation that replaces the vertex v by two vertices connected by an edge and reconnects edges that were connected to the vertex v to the two new vertices in all possible ways.
A feature characterizing the cohomology of GC is that it depends on the parity of N in the degree of a graph G [START_REF] Thomas Willwacher | Kontsevich's graph complex and the Grothendieck-Teichmüller Lie algebra[END_REF][START_REF] Willwacher | Little disks operads and Feynman diagrams[END_REF].
The cohomologies of the various graph complexes are highly related. Obviously, the graph complexes with disconnected graphs are symmetric products of the complexes of connected graphs. Moreover, it was shown in [START_REF] Thomas Willwacher | Kontsevich's graph complex and the Grothendieck-Teichmüller Lie algebra[END_REF] that adding the trivalence condition changes the cohomology of the graph complexes only by a list of known classes, and the omission of graphs with tadpoles does not change the cohomology further. Notice also that graphs with multiple edges always have sign-reversing automorphism (flipping two parallel edges) and they vanish modulo defining relationships.
Although graph complexes are simple objects easy to define, their cohomology is still largely unknown [START_REF] Thomas Willwacher | Kontsevich's graph complex and the Grothendieck-Teichmüller Lie algebra[END_REF].
Remark 1. The finite type invariants in knot theory can be recast in terms of "hairy graphs" (ordinary graphs with external legs, "hairs"). When all vertices are trivalent, this gives rise to the Vassiliev invariants [START_REF] Kontsevich | Feynman Diagrams and Low-Dimensional Topology[END_REF].
Generalization
The complex $(GC, d^*)$ suggests a generalization in which vertex splitting (i.e. insertion of the complete graph on two vertices) is replaced by insertion of the complete graph on $k \ge 1$ vertices (a complete graph is a graph in which each pair of vertices is connected by an edge; the case $k = 1$ means adding a loop, an edge that connects a vertex to itself; the case $k = \infty$ is also meaningful). Let $m_k$ be the corresponding map. Remark 4. The blowup of a graph is obtained by replacing every vertex with a finite collection of copies so that copies of two vertices are adjacent if and only if the originals are. If every vertex is replaced with the same number of copies, the resulting graph is called a balanced blowup; balanced blowups come from complete bipartite graphs (a graph whose vertices can be partitioned into two subsets such that no edge has both endpoints in the same subset, and every possible edge between the two subsets is present).
Proposition 1. For any $i, j \ge 2$, $[m_i, m_j] = (j - i)\, m_{i+j}$.
Proof. This follows from the definition of $m_k$ and the symmetry of the complete graph (any permutation of its vertices is an automorphism). Note that the complete graph on $k$ vertices has $k(k-1)/2$ (a triangular number) undirected edges.
Remark 5. Witt algebra?
Note that $m_1, m_2, m_3, m_4$ generate everything.
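For comparison with Remark 5, recall the defining bracket of the Witt algebra; the relation in Proposition 1 has exactly the same shape (whether the span of the $m_k$ really closes into a Witt-type algebra is the question the remark leaves open):

```latex
% Witt algebra: the derivations L_i = -x^{i+1} d/dx of C[x, x^{-1}] satisfy
[L_i, L_j] = (j - i)\, L_{i+j},
\qquad\text{to be compared with}\qquad
[m_i, m_j] = (j - i)\, m_{i+j}\quad (i, j \ge 2).
```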
Remark 6. Contracting an edge of a graph does not change the Euler characteristic as both the vertex number and the edge number decrease by one.
In analogy with edge contraction, let us use a vertex contraction:
$\delta(G, o) = \sum_{v \in V(G)} (G/v,\, o_{G/v}),$
where $G/v$ is the graph obtained from G by deleting the vertex v (a vertex-deleted subgraph is a subgraph formed by deleting exactly one vertex) and $o_{G/v}$ is the induced orientation.
Remark 7. The Reconstruction Conjecture in graph theory asks: are graphs uniquely determined by their vertex-deleted subgraphs? It has been shown by Béla Bollobás [START_REF] Bollobás | Almost every graph has reconstruction number three[END_REF] that almost all graphs are reconstructible, i.e. the probability that a randomly chosen graph on n vertices is not reconstructible goes to 0 as n tends to infinity. Check also the paper [START_REF] Stanley | Reconstruction from vertex-switching[END_REF] of Richard Stanley about switching-reconstructible graphs (for a vertex x ∈ V(G), the graph G_x, obtained from G by deleting all edges incident to x and adding edges joining x to every vertex not adjacent to x in G, is called a vertex-switching, i.e. to switch a vertex of a graph is to exchange its sets of neighbours and non-neighbours; Richard Stanley proved that a graph on n vertices is switching-reconstructible if n ≢ 0 mod 4).
There is a stronger version of the conjecture. Set Reconstruction Conjecture: any two graphs on at least four vertices with the same sets of vertex-deleted subgraphs are isomorphic.
If the Set Reconstruction Conjecture is true, then δ has an inverse.
Reconstruction Conjecture is true if all 2-connected graphs (a connected graph is called 2-connected, if the induced vertex-deleted subgraph for every vertex is connected) are reconstructible. The conjecture has been verified for a number of infinite classes of graphs, for example: regular graphs (a regular graph is a graph where each vertex has the same number of neighbors), trees, circle graphs (a circle graph is the intersection graph of a chord diagram, i.e. it is an undirected graph whose vertices can be associated with a finite system of chords of a circle such that two vertices are adjacent if and only if the corresponding chords cross each other), outerplanar graphs (a finite graph is outerplanar if and only if it does not contain a subdivision of the complete graph K 4 or of the complete bipartite graph K 2,3 ), maximal planar graphs (all faces (including the outer one) are then bounded by three edges).
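As a small illustration of Remark 7 (our own sketch, not from the paper), the following Python code builds the deck of vertex-deleted subgraphs with networkx and checks whether two graphs have the same deck up to isomorphism; the function names are ours, and the brute-force matching is only meant for very small graphs.

```python
import networkx as nx

def deck(G):
    """Multiset (list) of vertex-deleted subgraphs of G."""
    cards = []
    for v in list(G.nodes()):
        H = G.copy()
        H.remove_node(v)
        cards.append(H)
    return cards

def same_deck(G1, G2):
    """True if the two decks agree as multisets up to isomorphism."""
    cards2 = deck(G2)
    for c1 in deck(G1):
        match = next((i for i, c2 in enumerate(cards2) if nx.is_isomorphic(c1, c2)), None)
        if match is None:
            return False
        cards2.pop(match)                 # each card may be used only once
    return len(cards2) == 0

if __name__ == "__main__":
    P4 = nx.path_graph(4)                 # path on 4 vertices
    S3 = nx.star_graph(3)                 # star with 3 leaves (4 vertices)
    print(same_deck(P4, P4.copy()))       # True: same graph, same deck
    print(same_deck(P4, S3))              # False: different decks, as expected
```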
Proposition 2. For any $k > 2$, $[\delta, m_k] = k\, m_{k-1}$. Moreover, we have $[\delta, m_1] = \delta$ and $[\delta, m_2] = 2\,|V(G)|\cdot \mathrm{id}$,
where $|V(G)|$ is the cardinality of the vertex set.
Proof. Rearranging terms in $\delta \circ m_k$.
Remark 8. The notion of a strong homotopy Lie (or L ∞ ) algebra is well-known in algebraic homotopy theory, where it originated. It is obtained by allowing for a countable family of multilinear antisymmetric operations of all arities n ≥ 1, constrained by a countable series of generalizations of the Jacobi identity known as the L ∞ identities. This notion admits specializations indexed by subsets S ⊆ N of arities and which are defined by requiring vanishing of all products of arities not belonging to S. This leads to the notion of L S algebra. The case S = {n}, when only a single product of arity n is non-vanishing, recovers the notion of n-Lie algebras, see [START_REF] Calin | Strong Homotopy Lie Algebras, Generalized Nahm Equations and Multiple M2-branes[END_REF] for more details.
Figure 2: The Jacobi identity
Figure 3: The IHX relation (so called because of its appearance)
Figure 4: An edge contraction
Remark 2. Graph complexes give special classes in diffeomorphism groups of spheres: Maxim Kontsevich's characteristic classes are invariants of framed smooth fiber bundles with homology sphere fibers [START_REF] Kontsevich | Feynman Diagrams and Low-Dimensional Topology[END_REF][START_REF] Watanabe | Some exotic nontrivial elements of the rational homotopy groups of Dif f (S 4 )[END_REF]. Lie-decorated graph complexes describe the cohomology of the automorphisms of a free group [2][START_REF] Arone | Graph-complexes computing the rational homotopy of high dimensional analogues of spaces of long knots[END_REF].
Remark 3. See [START_REF] Bar-Natan | On the Vassiliev knot invariants[END_REF].
Figure 5: Complete graphs on n vertices, for n between 1 and 5
Remark 9. Let $m_k^l$ denote the corresponding map in which vertex splitting is replaced by insertion of the complete graph on $k \ge 1$ vertices with $l$ multiple edges. Hence Proposition 2, using the differential $d$ from Subsection 1.2 (edge contraction), admits an extrapolation.
Proposition 3. Let $\Delta^{k}_{k-2} = [\delta, [\dots, [\delta, m_k]\dots]]$, i.e. $(k-2)$ iterated brackets, for any $k > 2$; then for any $l \ge 0$, $\Delta^{k}_{k-2} \circ \Delta^{k+l}_{k+l-2} = 0$.
Proof. It follows from $[\delta, m_k] = k\, m_{k-1}$ in Proposition 2. Note that $m_2 \circ m_2 = 0$ [1 -2].
Remark 10. There is a connection with the study of multiple zeta values and the Kashiwara-Vergne conjecture [START_REF] Brown | Mixed Tate motives over Z[END_REF][14][START_REF] Schneps | Double Shuffle and Kashiwara-Vergne Lie algebras[END_REF].
Fausto Galetto
email: [email protected]
Appeal to the Scientific Community: let's take care of Quality of published papers
Keywords: Quality of Methods, Quality Education, Peer Review, Methods for Quality, Rational Manager, Quality Tetralogy, Intellectual Honesty, Reliability Integral Theory
They say that good papers are published only in "Peer Reviewed Trusted Journals (PRTJ)", while low quality papers are published in the "Predatory Publishing Journals". Here we use some cases to show that this is not true, because the quality of papers depends on the quality of the authors, in the same way that the quality of teaching depends on the quality of professors. It seems that Peer Reviewers and Editors did not take care of the quality of published papers, because they missed the Quality of Methods (Deming, Juran, Gell-Mann, Shewhart, Einstein, Galilei). This is widespread in documents about Confidence Intervals and Control Charts [especially for Rare Events, where the data are assumed to follow the Weibull (or exponential) distribution]. Since the authors are generally professors, it is important to see both sides of the "publishing medal": authors and professors. Software packages (JMP, Minitab, SAS, …) compute wrong Control Limits and do not find that processes are Out Of Control: this causes costs of dis-quality. The cases analysed here are from PRT Journals and teaching documents.
Introduction
In any activity we need to analyse information; information can be given either by words or by numbers (data provided by a measurement system). The analysis of information is done by suitable methods devised by competent scholars, working either in Universities or in Companies. Often the methods are presented in papers published in Magazines and Journals or in books. Journals are broadly divided into two categories: 1) the Good and Reputed Journals and 2) the so-called "Predatory publishing" {Open Access Journals which publish papers and ask for a fee for that [named APC (Article Processing Charge) or a similar acronym]}.
The author, in his working life as a Manager in Companies, as a Professor in Universities and as a Consultant, had the opportunity to read many wrong papers about data analysis.
For this reason, we titled this paper "Appeal to the Scientific Community: let's take care of Quality of published papers", because people have the right to know the truth about the "Quality of published papers".
Unfortunately, the Universities, as Einstein said («An Academic career poses a person in an embarrassing position, asking him to produce a great number of scientific publications…»), ask researchers for publications if they want to become professors; several times, when the author asked applicants (for professorship) "Why did you write such a statement…", he got answers such as "My colleague wrote that…", "I found it in Wikipedia" or "I read it in that book…": in spite of their incompetence, they became professors!
We will show some fundamental ideas about the analysis of data collected in scientific experiments, based on the Scientific way of reasoning through Mathematics, Probability Theory and Statistics.
We start with the data in Table 1, gathered in an experiment. A scholar must extract the best information from them. If the reader goes to the three "forums" [1-3], iSixSigma, Academia.edu and ResearchGate, and looks at the discussions there (about statistical subjects), he can find a lot of people unable to correctly analyse the data in Table 1, writing wrong ideas on Probability and Statistics Methods. These ideas are not in line with the concepts provided in the documents [START_REF] Deming | Out of the Crisis[END_REF][START_REF] Deming | The new economics for industry, government, education[END_REF][START_REF] Shewhart | Economic Control of Quality of Manufactured Products[END_REF][START_REF] Shewhart | Statistical Method from the Viewpoint of Quality Control[END_REF][START_REF] Juran | Quality Control Handbook[END_REF][START_REF] Galetto | Quality in Higher Education Courses[END_REF][START_REF] Galetto | Hope for the Future: Overcoming the DEEP Ignorance on the CI (Confidence Intervals) and on the DOE (Design of Experiments[END_REF][START_REF] Dore | Introduzione al Calcolo delle Probabilità e alle sue applicazioni ingegneristiche[END_REF][START_REF] Mood | Introduction to the Theory of Statistics[END_REF][START_REF] Rao | Linear Statistical Inference and its Applications[END_REF][START_REF] Rozanov | Processus Aleatoire[END_REF][START_REF] Ryan | Statistical Methods for Quality Improvement[END_REF].
Suppose that a scholar asks himself the following questions:
(1) Do the two samples provide the same information about the process? (2) Are the two samples distributed in the same way? (3) Does Sample 2 show an improvement over Sample 1? To answer, the scholar either uses his intuition or uses a method. Since intuition can be fallacious, he must use a sound method.
Let's see the steps he has to take:
(i) The data $x_{ij}$ (numbers, $i=1,2$; $j=1,2,\dots,10$) are to be considered the "determinations" of the Random Variables (RV) $X_{ij}$.
(ii) Each RV $X_{ij}$ has a pdf (probability density function) which depends on $k$ "parameters" $\theta_m$, $m=1,2,\dots,k$, which characterise the pdf.
(iii) The "parameters" can be (a) the "determinations" of a RV $\Theta$, distributed with a "known" pdf, or (b) "unknown" quantities (real numbers) with a known type of pdf, or (c) the pdf can be "completely unknown".
(iv) An "estimation function" is sought to find the "best" value to be attributed to the parameter $\theta$, for cases (a) or (b).
(v) For case (c), and for "Complete Samples" (such as those in Table 1), Probability and Statistics provide the ways to find "estimation functions" of the Mean (of the "Complete Samples"), of the Standard Deviation (of the "Complete Samples"), and of other Moments (of the "Complete Samples"), …; case (c) is the easiest situation for computing the "estimation functions"
for "Complete Samples". In spite of that we find professors teaching wrong ideas. Notice the following wrong attached statement (excerpt 1) taken from a course on Quality Management, where Professors of Politecnico (of the Quality engineering Group) suggest Montgomery books to students. Any good student knows that the formula holds for any distribution and any sample size n: the Central Limit Theorem does not have any importance for that, BUT QEG professors do not know that!!! Excerpt 1 Wrong statement (from a course on Quality Management at Politecnico of Turin) [given by the QEG (Quality Engineering Group)] Remember: that formula holds for any distribution and any "Complete Samples" of sample size n.
For both cases (a) and (b) we have a tool, based on the Likelihood Function (formula 1), for "Complete Samples":
$L(\theta \mid D) = \prod_{i,j} f(x_{ij}; \theta) \qquad (1)$
The "estimation function" of the "best" value to be attributed to the parameter $\theta$ is different for the two cases.
For the case (a), let's see the figure 1, depicting the Bayesian method for estimation, because it is based on the Bayes' Theorem
Figure 1. The Bayesian method for estimation
In this case, the "parameter" is the "determination" of the RV , distributed with a "known" pdf , named "a priori (or Prior)" pdf, related to the scholar "a priori (or Prior)" knowledge (or experience). The test data D{x ij , i=1, 2; j=1,2, …10} in table 1 provide the Likelihood that is "mixed", via the Bayes' Theorem, with the Prior pdf: the "a posteriori (or Posterior)" pdf is computed; from that we can compute the Mean which estimates the mean value (a real number) of the Posterior pdf. From the Posterior) pdf we can compute two quantities (real numbers) and such that there is the "stated" probability , as in the formula (2)
(2)
The interval , named Credibility Interval, is a numerical interval which has a stated probability of comprising the RV , related to the "parameter" .
For example, we could assume that the pdf of the data of Table 1 is the exponential, where the parameter is the failure rate $\lambda$:
$f(x;\lambda) = \lambda\, e^{-\lambda x}, \quad x \ge 0 \qquad (3)$
We can reparametrize (3) with $\theta = 1/\lambda$ and assume $\theta$ (the mean) as the parameter.
To use the Bayesian estimation we must define the "known" Prior pdf, either in terms of $\lambda$ or in terms of $\theta$; if, from Prior knowledge, we use a Prior with a specified value a for its hyperparameter (formula 4),
we can compute the Mean of the Posterior pdf and the Credibility Intervals, both for Sample 1 and Sample 2.
We can then compare the means and the Credibility Intervals and take decisions.
It is clear that using another Prior pdf we get a different estimation and a different Credibility Interval. Therefore, two scholars get different estimations and different Credibility Intervals from the same data. On the other hand, with the data of Table 1 and without assuming any distribution, we can compute the means and the standard deviations (sd) of both samples and decide only through our intuition. A method [published in Engineering and Applied Sciences, Vol. 2, No. 3, 2017] was invented by the author as the way to analyse both books and papers, because only very few people have been carefully considering the Quality of the Methods (e.g., Deming, Juran): professors, researchers, managers, scholars and students have been learning wrong ideas in the Quality field; there is a worldwide-used book with many wrong concepts, e.g. D. C. Montgomery falls into contradiction! He spreads wrong concepts on Quality [START_REF] Montgomery | Introduction to Statistical Quality Control[END_REF].
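As a hedged numerical sketch of the Bayesian route described above (the data and the Gamma prior with its hyperparameters are illustrative choices of ours, not the paper's Table 1 or its formula (4)), the conjugate Gamma-exponential update and the resulting Credibility Interval can be computed as follows:

```python
import numpy as np
from scipy import stats

# Illustrative "times" standing in for one sample of Table 1 (NOT the real data).
x = np.array([12.0, 35.0, 7.0, 50.0, 21.0, 9.0, 61.0, 14.0, 28.0, 40.0])

# Assumed prior on the failure rate lambda: Gamma(shape=a0, rate=b0).
a0, b0 = 1.0, 30.0

# Exponential likelihood  =>  conjugate posterior: Gamma(a0 + n, rate = b0 + sum(x)).
a_post = a0 + x.size
b_post = b0 + x.sum()
posterior = stats.gamma(a=a_post, scale=1.0 / b_post)

lam_mean = posterior.mean()                    # posterior mean of lambda
theta_mean = b_post / (a_post - 1.0)           # posterior mean of theta = 1/lambda (valid for a_post > 1)
lam_lo, lam_hi = posterior.ppf([0.05, 0.95])   # 90% Credibility Interval for lambda

print(f"posterior mean of lambda = {lam_mean:.4f}, of theta = {theta_mean:.1f}")
print(f"90% Credibility Interval for lambda: ({lam_lo:.4f}, {lam_hi:.4f})")
print(f"90% Credibility Interval for theta : ({1/lam_hi:.1f}, {1/lam_lo:.1f})")
```

Running the same update with a different Prior (other a0, b0) immediately shows the point made above: the estimation and the Credibility Interval change with the Prior.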
The Theory of Confidence Intervals
This section is written to explain the method of approach to the problem of estimation especially designed for all cases in which the Prior distribution of the "parameter" to be estimated is not known: the parameter is then treated as an unknown constant and not as a RV.
Since we cannot use the Bayes Theorem we must find a new way for the estimation.
It is important that we do that because, after almost one century since the first ideas on the CI (Confidence Intervals), they are still not understood [START_REF] Galetto | Quality in Higher Education Courses[END_REF][START_REF] Galetto | Hope for the Future: Overcoming the DEEP Ignorance on the CI (Confidence Intervals) and on the DOE (Design of Experiments[END_REF][START_REF] Dore | Introduzione al Calcolo delle Probabilità e alle sue applicazioni ingegneristiche[END_REF][START_REF] Mood | Introduction to the Theory of Statistics[END_REF][START_REF] Rao | Linear Statistical Inference and its Applications[END_REF]. It is dramatic to see that many books and papers do not give the correct Theory of CIs!
The Scientific road to building the CIs (Confidence Intervals) can be found in [START_REF] Galetto | Hope for the Future: Overcoming the DEEP Ignorance on the CI (Confidence Intervals) and on the DOE (Design of Experiments[END_REF][START_REF] Dore | Introduzione al Calcolo delle Probabilità e alle sue applicazioni ingegneristiche[END_REF][START_REF] Mood | Introduction to the Theory of Statistics[END_REF][START_REF] Rao | Linear Statistical Inference and its Applications[END_REF][START_REF] Galetto | Affidabilità Teoria e Metodi di calcolo[END_REF][START_REF] Galetto | Affidabilità Prove di affidabilità: distribuzione incognita, distribuzione esponenziale[END_REF][START_REF] Galetto | Qualità. Alcuni metodi statistici da Manager[END_REF][START_REF] Galetto | Gestione Manageriale della Affidabilità[END_REF][START_REF] Galetto | Manutenzione e Affidabilità[END_REF][START_REF] Galetto | Reliability and Maintenance, Scientific Methods, Practical Approach[END_REF][START_REF] Galetto | Reliability and Maintenance, Scientific Methods, Practical Approach[END_REF][START_REF] Galetto | Affidabilità per la manutenzione Manutenzione per la disponibilità[END_REF][START_REF] Galetto | Statistical Process Management[END_REF]. Here we show the various steps.
To make the presentation very easy we consider only one "parameter" and the exponential pdf, with mean $\theta$: $f(x;\theta) = (1/\theta)\, e^{-x/\theta}$, $x \ge 0$ (3b). We consider the data of Sample 1 in Table 1.
Here are the steps (related to Sample 1):
A. We name $D = \{x_{1j};\ j=1,2,\dots,10\}$ the set of the data to be considered for the estimation of the parameter $\theta$, with each RV $X_{1j}$ following the pdf (probability density function) (3b).
B. We write the Likelihood Function (formula 1) for the "Complete Sample 1": $L(\theta \mid D) = \theta^{-10}\exp\!\big(-\sum_{j} x_{1j}/\theta\big)$ (1b).
C. We find the Maximum of $L(\theta \mid D)$ by setting $\mathrm{d}\ln L/\mathrm{d}\theta = 0$ (5), whose solution is the quantity $\hat\theta = \frac{1}{10}\sum_{j} x_{1j}$ (6).
D. $t_{obs} = \sum_{j} x_{1j}$ is the determination of a RV T, sum of the RVs $X_{1j}$, $T = \sum_{j} X_{1j}$ (7), whose pdf can be derived, via Probability Theory [START_REF] Galetto | Hope for the Future: Overcoming the DEEP Ignorance on the CI (Confidence Intervals) and on the DOE (Design of Experiments[END_REF][START_REF] Dore | Introduzione al Calcolo delle Probabilità e alle sue applicazioni ingegneristiche[END_REF][START_REF] Mood | Introduction to the Theory of Statistics[END_REF][START_REF] Rao | Linear Statistical Inference and its Applications[END_REF][START_REF] Galetto | Affidabilità Teoria e Metodi di calcolo[END_REF][START_REF] Galetto | Affidabilità Prove di affidabilità: distribuzione incognita, distribuzione esponenziale[END_REF][START_REF] Galetto | Qualità. Alcuni metodi statistici da Manager[END_REF][START_REF] Galetto | Gestione Manageriale della Affidabilità[END_REF][START_REF] Galetto | Manutenzione e Affidabilità[END_REF][START_REF] Galetto | Reliability and Maintenance, Scientific Methods, Practical Approach[END_REF][START_REF] Galetto | Reliability and Maintenance, Scientific Methods, Practical Approach[END_REF][START_REF] Galetto | Affidabilità per la manutenzione Manutenzione per la disponibilità[END_REF][START_REF] Galetto | Statistical Process Management[END_REF], from that of the sum of the 10 RVs $X_{1j}$.
E. We write the Probability statement for the RV T, which depends on the parameter $\theta$: $P(L \le T \le U \mid \theta) = 1 - \alpha$ (8), where L and U are two numbers and $\alpha$ is a stated probability that the RV T is out of the probability interval L ------ U [since the exponential distribution is often used for "life data" (Time To Failure), T is named the "Total Time on Test"].
F. From (8) we derive the equivalent Probability statement $P\big(\theta_L(T) \le \theta \le \theta_U(T)\big) = 1 - \alpha$ (9), where we see the random probability interval $\theta_L(T)$ ----- $\theta_U(T)$ that covers (includes) the unknown "true" value of the parameter $\theta$; see figure 2, where the abscissa is the parameter and the ordinate is the RV T.
G. We can define the two probability intervals L ------ U and $\theta_L(T)$ ----- $\theta_U(T)$ before any collection of the data D.
H. When we compute the number $t_{obs}$, named the "observed total time on test", the random interval $\theta_L(T)$ ----- $\theta_U(T)$ becomes the "real" interval $\theta_L$ -------- $\theta_U$, a "determination (of the Random Interval)", as shown in figure 3; notice that the result holds also for Uncomplete Samples of n items and r failures (there are n-r ties).
I. The "real" interval $\theta_L$ -------- $\theta_U$, "determination (of the Random Interval)", is named the Confidence Interval (CI), and has the property that, in the long run, $(1-\alpha)\cdot 100$% of the CIs cover the unknown "true" value of the parameter $\theta$.
J. The probability that $\theta_L \le \theta \le \theta_U$ is either 1 or 0, because $\theta$ is not a RV.
We informed the authors and the Journals who published wrong papers; we wrote various letters to the Editors: they have not been published so far; Editors cannot acknowledge their errors. The same happened with Minitab: so people continue taking wrong decisions…
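Steps E–I lead, for a complete exponential sample, to the classical chi-square form of the Confidence Interval (2T/θ follows a chi-square distribution with 2n degrees of freedom). The sketch below is our own illustration of that form, with made-up data standing in for Sample 1:

```python
import numpy as np
from scipy.stats import chi2

def exponential_mean_ci(x, confidence=0.90):
    """Exact CI for the exponential mean theta from a complete sample.

    Uses 2*T/theta ~ chi-square with 2n degrees of freedom,
    where T is the observed total time on test."""
    x = np.asarray(x, dtype=float)
    n, T = x.size, x.sum()
    alpha = 1.0 - confidence
    theta_low = 2.0 * T / chi2.ppf(1.0 - alpha / 2.0, df=2 * n)
    theta_up = 2.0 * T / chi2.ppf(alpha / 2.0, df=2 * n)
    return T / n, (theta_low, theta_up)

if __name__ == "__main__":
    sample1 = [12.0, 35.0, 7.0, 50.0, 21.0, 9.0, 61.0, 14.0, 28.0, 40.0]   # illustrative only
    theta_hat, (lo, up) = exponential_mean_ci(sample1, confidence=0.90)
    print(f"theta_hat = {theta_hat:.1f},  90% CI = ({lo:.1f}, {up:.1f})")
```

In repeated sampling the procedure covers the true θ in 90% of cases (step I), while for one observed interval the coverage statement is either true or false (step J).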
The "ocean full of errors by…" Dovoedo and Chakraborti, "Boxplot-based Phase I Control Charts for Time Between Events", QREI, Kumar, Rakitzis, Chakraborti, Singh (2022), "Statistical design of ATS-unbiased charts with runs rules for monitoring exponential time between events", CS-TM, Jones, Champ, "Phase I control charts for times between events", QREI, Fang, Khoo, Lee, "Synthetic-Type Control Charts for Time-Between-Events Monitoring", PLoS ONE, Kumar, Chakraborti, "Improved Phase I Control Charts for Monitoring Times Between Events" QREI, Dovoedo "Contribution to outlier detection methods: Some Theory and Applications", (found online, 2021, March), Liu, Xie, Sharma, "A Comparative Study of Exponential Time Between Event Charts", QT&QM, Frisén, "Properties and Use of the Shewhart Method and Followers", SA, Woodall "Controversies and Contradictions in Statistical Process Control", JQT, Kittlitz "Transforming the exponential for SPC applications". JQT, Schilling, Nelson "The effect of non-normality on the control limits of X charts", JQT, Woodall "The use of control charts in health-care and public health surveillance", JQT, Xie, Goh, Kuralmani, "Statistical Models and Control Charts for High-Quality Processes", (Boston, MA: Kluwer Academic Publisher, 2002), Xie, Goh, Ranjan, "Some effective control chart procedures for reliability monitoring", RE&SS, Xie, "Some Statistical Models for the Monitoring of High-Quality Processes", Boston, chapter 16 in the book Engineering Statistics (Pham Editor): Springer-Verlag, Zhang, Xie, M., Goh, "Economic design of exponential charts for time between events monitoring", IJPR, Zhang, Xie, Goh "Design of exponential control charts using a sequential sampling scheme", IIE Transactions, Zhang, Xie, Goh, Shamsuzzaman "Economic design of time-between-events control chart system", CIE, Santiago, Smith, "Control charts based on the Exponential Distribution", QE, Nasrullah, Aslam, "Design of an EWMA adaptive control chart using MDS sampling", JSMS, Balamurali, Aslam, "Variable batch-size attribute control chart", JSMS. On September 2022, the author looked (in the web) for other TBE papers and books to see their way of dealing with "Rare Events" Control Charts; he copied 77 pages of documents (several from Consultants) and downloaded 32 papers (Open Source). Several Journals asked from 15 $ to 60 $, to download a paper. 
The Open Source are: "Control Chart: Charts for monitoring and adjusting industrial processes", "TOOL #6 -XBar & R Charts", "Integrating Quality Control Charts with Maintenance", "A Brief Literature Review", "Paper SAS4040-2016, "Improving Health Care Quality with the RAREEVENTS Procedure Bucky Ransdell, SAS Institute Inc.", "Performance Criteria for Evaluation of Control Chart for Phase II Monitoring", "(Thesis) A Comparative Study of Control Charts for Monitoring Rare Events in Health Systems Using Monte Carlo Simulation", "A study on the application of control chart in healthcare", "Control Charts for Monitoring the Reliability of Multi-State Systems", "Part 7: Variables Control Charts2, "A Control Chart for Gamma Distribution using Multiple Dependent State Sampling", "A Variable Control Chart under the Truncated Life Test for a Weibull Distribution", "Plotting basic control charts: tutorial notes for healthcare practitioners", "Appendix 1: Control Charts for Variables Data -classical Shewhart control chart", TRUNCATED ZERO INFLATED BINOMIAL CONTROL CHART FOR MONITORING RARE HEALTH EVENTS", "Comparison of control charts for monitoring clinical performance using binary data", "A numberbetween-events control chart for monitoring finite horizon production processes", "Rare event research: is it worth it?", "Quality Improvement Charts: An implementation of statistical process control charts for R", "Control Chart Overview", "Statistical Process Control Monitoring Quality in Healthcare", "A Control Chart for Exponentially Distributed Characteristics Using Modified Multiple Dependent State Sampling", "Synthetic-Type Control Charts for Time-Between-Events Monitoring", "A systematic study on time between events control charts", "Lifestyle Management through System Analysis Monitor Progress", "Multivariate Time-Between-Events Monitoring -An overview and some (overlooked) underlying complexities", "A Comparison of Shewhart-Type Time-Between-Events Control Charts Based on the Renewal Process", "Control Charts for Monitoring Time-Between-Events-and-Amplitude Data", "How to Measure Customer Satisfaction Seven metrics you need to use in your research".
The "ocean full of errors by…" Dovoedo and Chakraborti, "Boxplot-based Phase I Control Charts for Time Between Events", QREI, Kumar, Rakitzis, Chakraborti, Singh (2022), "Statistical design of ATS-unbiased charts with runs rules for monitoring exponential time between events", CS-TM, Jones, Champ, "Phase I control charts for times between events", QREI, Fang, Khoo, Lee, "Synthetic-Type Control Charts for Time-Between-Events Monitoring", PLoS ONE, Kumar, Chakraborti, "Improved Phase I Control Charts for Monitoring Times Between Events" QREI, Dovoedo "Contribution to outlier detection methods: Some Theory and Applications", (found online, 2021, March), Liu, Xie, The case depicted in the figure 3 can be described as follows: the failure of electronic components while in use can be regarded as events of Poisson type with intensity 1/q which is to be determined from experiment. A convenient sample size n being chosen, n items are put into service simultaneously and kept in operation until exactly r items have failed.
If you go to the three forums [1-3] you find the same situation: scholars do not know the Theory.
The "NHSTP" (null hypothesis significance testing procedure)
This point too is largely debated, with wrong ideas as well. The road shown in section 2 can be depicted as in figure 4.
J. Juran, at the 1989 EOQC Conference in Vienna, highlighted the content of the paper [START_REF] Galetto | Quality of Quality Methods is important[END_REF] about the importance of the Quality of the methods for making quality: the paper shows that only good methods are crucial for suitable decision making.
Since the data are unfortunately always variable we must consider all the uncertainties because they have consequences on our decisions: we face "decision-making under uncertainty".
That's why, before carrying out any testing, we must think about the following points:
I. We define the pdf (probability density function) followed by the RVs $X_{1j}$ ($j$ to be determined), for the estimation of the parameter $\theta$.
II. We state the so-called "Null Hypothesis" $H_0 = \{\theta = \theta_0\}$, with a chosen risk $\alpha$ (risk of 1st type) that $H_0$ is Rejected while it is actually true, and an "Alternative Hypothesis" $H_1$ that is wrongly considered to be true [$H_1$ can take different forms, such as $H_1 = \{\theta \ne \theta_0\}$, $H_1 = \{\theta > \theta_0\}$, $H_1 = \{\theta < \theta_0\}$].
III. …
There are various drawbacks….. I mention only a FIRST point:
"The confidence coefficient of a confidence interval derives from the procedure which generated it. It is therefore helpful to differentiate a procedure (CP) from a confidence interval: an X% confidence procedure is any procedure that generates intervals that cover θ in X% of repeated samples, and a confidence interval is a specific interval generated by such a process. A confidence procedure is a random process; a confidence interval is observed and fixed." The statement "A confidence procedure is a random process" is nonsense. It is the same type of error as "Statistical significance … is a sample statistic"………….
The above ideas are connected with a concept, wrongly understood by many people (in the three forums [1-3]), the p-value.
NOBODY can know the true values of the mean $\mu$ and of the variance $\sigma^2$ of the Distribution of the data: one can only estimate the parameters $\mu$ and $\sigma^2$ from their estimators. Consider the tail area beyond the test statistic, which is a Random Variable because the estimator is a Random Variable (written as S before). This RV, taking values between 0 and 1, is uniformly distributed by definition.
The «Test Statistic» s, given the data D, is the "determination (=estimate)" of the estimator S; the corresponding tail area becomes a number p, with 0 ≤ p ≤ 1 (12), that is named the p-value: it is the probability of Rejection of the null hypothesis $H_0$, given the data D.
The number p, 0 ≤ p ≤ 1, tells us how much we can believe in $H_0$, given the collected data D; all the "possible" numbers p, which are the determinations of the RV P, are uniformly distributed. See figure 6: RS1 is the Random Sample of the RVs related to Sample 1 (sample size $n_1$, and $g_1$ failures); RS2 is the Random Sample of the RVs related to Sample 2 (sample size $n_2$, and $g_2$ failures). For Table 1, $n_1 = g_1 = 10$ and $n_2 = g_2 = 10$.
Many incompetent scholars confuse the RVs related to the Probability Statements with the Confidence Statements (confidence intervals and p-values).
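As one hedged illustration of how a p-value answers question (3) for two complete exponential samples (this is the standard F-ratio argument, not necessarily the author's preferred route, and the data are again made up), the ratio of the two sample means follows an F distribution with (2n1, 2n2) degrees of freedom under H0: theta1 = theta2:

```python
import numpy as np
from scipy.stats import f

def exp_two_sample_pvalue(x1, x2):
    """p-value for H0: theta1 = theta2 against H1: theta1 > theta2,
    for two complete exponential samples (F-ratio of the sample means)."""
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    n1, n2 = x1.size, x2.size
    f_obs = x1.mean() / x2.mean()            # ~ F(2*n1, 2*n2) under H0
    return f.sf(f_obs, dfn=2 * n1, dfd=2 * n2)

if __name__ == "__main__":
    sample1 = [12.0, 35.0, 7.0, 50.0, 21.0, 9.0, 61.0, 14.0, 28.0, 40.0]    # illustrative
    sample2 = [30.0, 55.0, 18.0, 72.0, 41.0, 25.0, 90.0, 33.0, 60.0, 47.0]  # illustrative
    # A small p-value would support "Sample 2 shows an improvement" (larger mean life).
    print(f"p-value = {exp_two_sample_pvalue(sample2, sample1):.3f}")
```

Under $H_0$ the p-value so computed is uniformly distributed, exactly as stated above.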
Individual Control Charts (I-CC) and Exponentially distributed data
The Theory of Control Charts (CC) is shown in Shewhart's books [START_REF] Shewhart | Economic Control of Quality of Manufactured Products[END_REF][START_REF] Shewhart | Statistical Method from the Viewpoint of Quality Control[END_REF] and was very much appreciated by Deming [START_REF] Deming | Out of the Crisis[END_REF][START_REF] Deming | The new economics for industry, government, education[END_REF] and Juran [START_REF] Juran | Quality Control Handbook[END_REF]. The CCs are the "thermometer for measuring the fever of a Process", using the Control Limits [CLs: LCL (Lower CL) and UCL (Upper CL)] to see whether the Process is either "In Control (IC)" or "Out Of Control (OOC)" [it has "fever"]. CCs are the tool for assessing the "health" of the process: CCs are a statistical tool for monitoring the "measurable output" of a Process. The "measurable output" (measures on the products produced) can be viewed as a "Stochastic Process X(t)", ruled by a probability density for any set of n "Random Variables (RV)" $X(t_1), X(t_2), \dots, X(t_n)$, considered at the "time instants" $t_1, t_2, \dots, t_n$ of the Stochastic Process X(t). In many applications the data plotted (on the CC) are the means $\bar{x}_i$, determinations of the RVs $\bar{X}_i$, i=1, 2, ..., n (n = number of samples), computed from the data $x_{ij}$, j=1, 2, ..., k (k = sample size); the $x_{ij}$ are the determinations of the RVs $X_{ij}$ at very close instants $t_{ij}$, j=1, 2, ..., k; the RVs $\bar{X}_i$ are assumed to follow a normal distribution because (Central Limit Theorem) they are the means of samples of sample size k each; usually k=5. For each RV $\bar{X}_i$, mean of the process (at time $t_i$), mean of the RVs $X_{ij}$, j=1, 2, ..., k, we assume here that it is distributed as $N(\mu, \sigma^2/k)$: this is the assumption of W. A. Shewhart on page 278 of his book [START_REF] Shewhart | Economic Control of Quality of Manufactured Products[END_REF], justified on page 289. The mean of all the RVs $\bar{X}_i$ is indicated by $\bar{\bar{X}}$ and its determination is named the "grand mean" and indicated by $\bar{\bar{x}}$.
When the process is OOC (it has "fever") we say that it is operating in the presence of assignable causes of variation.
The Individual Control Charts (I-CC) have sample size k=1; see figure 7. The "grand mean" $\bar{\bar{x}}$, in this case, becomes the mean $\bar{x}$. To compute the CLs (LCL and UCL) we are forced to use the differences between consecutive points; we compute the n-1 moving ranges $MR_i = |x_{i+1} - x_i|$, i=1, …, n-1 (n = total number of data), and then we can use the usual formulae for Normally distributed data.
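For reference, this is the textbook moving-range computation of the normal-theory limits just mentioned (our own sketch, with illustrative data); the whole point of the following sections is that these limits are not valid for exponentially distributed TBE data:

```python
import numpy as np

def imr_limits(x):
    """Classical Individuals chart limits: xbar +/- 3*MRbar/d2, with d2 = 1.128 for n = 2.
    Valid only under the normality assumption."""
    x = np.asarray(x, dtype=float)
    mr = np.abs(np.diff(x))                  # the n-1 moving ranges
    xbar, mrbar = x.mean(), mr.mean()
    half_width = 3.0 * mrbar / 1.128         # 3/d2 = 2.66, the usual constant
    return xbar - half_width, xbar, xbar + half_width

if __name__ == "__main__":
    data = [23.0, 261.0, 87.0, 7.0, 120.0, 14.0, 62.0, 47.0, 225.0, 71.0]  # illustrative TBE-like values
    lcl, center, ucl = imr_limits(data)
    print(f"LCL = {lcl:.1f}, centre = {center:.1f}, UCL = {ucl:.1f}  (note: LCL < 0 for skewed data)")
```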
What do the scholars who do not have the right Theory do? They transform the data in order to have the "transformed data" Normally distributed.
Before using any transformation, any scholar should see if it is suitable, because, as said by [START_REF] Deming | Out of the Crisis[END_REF][START_REF] Deming | The new economics for industry, government, education[END_REF], "Management need to grow-up their knowledge because experience alone, without theory, teaches nothing what to do to make Quality" and "The result is that hundreds of people are learning what is wrong. I make this statement on the basis of experience, seeing every day the devastating effects of incompetent teaching and faulty applications." and, moreover,
"It is necessary to understand the theory of what one wishes to do or to make."
To show how to compute the Control Limits for I-CC we use the data (Table 2) on the times between failures of the air conditioning equipment of a Boeing 720 airplane. Had we chosen a higher CL we would have larger CIs. Since the value 1 is comprised in the CI of the Weibull shape parameter, we can assume the exponential pdf, with a 20% risk of being in error.
In this case we estimate the Mean Time To Failure MTTF=59.60.
To find the Control Limits, LCL and UCL, of the I-CC we need a suitable Theory.
We looked for it by reading several documents [found in the literature and on the Web…] and got the "ocean full of errors by …" (notice that one of the authors is very well known; he has more than 7000 Citations! Does that mean that he wrote good papers? Absolutely not!). All the papers in the above "ocean …" have the same problem: wrong CLs; all the authors confound the concepts, by stating that LCL and UCL (which actually are Confidence Limits!) are the limits L and U of the Probability Interval.
The suitable theory is RIT [START_REF] Galetto | Affidabilità Teoria e Metodi di calcolo[END_REF][START_REF] Galetto | Affidabilità Prove di affidabilità: distribuzione incognita, distribuzione esponenziale[END_REF][START_REF] Galetto | Qualità. Alcuni metodi statistici da Manager[END_REF][START_REF] Galetto | Gestione Manageriale della Affidabilità[END_REF][START_REF] Galetto | Manutenzione e Affidabilità[END_REF][START_REF] Galetto | Reliability and Maintenance, Scientific Methods, Practical Approach[END_REF][START_REF] Galetto | Reliability and Maintenance, Scientific Methods, Practical Approach[END_REF][START_REF] Galetto | Affidabilità per la manutenzione Manutenzione per la disponibilità[END_REF][START_REF] Galetto | Statistical Process Management[END_REF].
TBE data following the Exponential distribution are dealt with taking r=1 (each plotted point is a single time between events).
From Deming [START_REF] Deming | The new economics for industry, government, education[END_REF]: "The result is that hundreds of people are learning what is wrong. I make this statement on the basis of experience, seeing every day the devastating effects of incompetent teaching and faulty applications." and, moreover, "It is necessary to understand the theory of what one wishes to do or to make."
Why are the authors in the "ocean full of errors by …" so ignorant? Because they did not and still do not know the Theory.
They were (and are) "led into temptation and delivered into evil" by the (inapplicable) formulae, which are applicable only to normal data and not to exponential data, in spite of the wrong statements (Excerpt 2) and of the Peer Reviewers' incompetent statements "limits… typical in the vast literature…" and "The problem … exponential distribution is well-defined and solved". They wrongly use the Probability Interval L ------ U as though it were the interval LCL ------ UCL, and they put the estimates, either $\hat q$ or $1/\hat q$, of the parameters in place of the parameters, q or λ = 1/q. See Excerpt 2: "the temptation and the evil". Notice that an author of the "ocean …" is Associate Editor of … Now, the author is at risk. IF the Peer Reviewers (PRs) are taken from the "ocean …" they will not acknowledge the errors and could think the "strange statements (above)" and, hence, the paper would get the following evaluation: "Your manuscript is unsuitable for publication in …. I have attached comments at the bottom of this email. … Thank you for considering …. I hope the outcome will not discourage you from the submission of future manuscripts." "The associate editor (notice, perhaps the one with more than 7000 Citations!) has stated that the work lacks sufficient novelty statistically in terms of theory and methods."
Therefore, (1) the readers can know the wrong methods that have "sufficient novelty statistically in terms of theory and methods" (see the "ocean …", > 7000 Citations!) (2) but, on the contrary, they cannot know the right methods that prove how many incompetents published wrong papers, which diverted, are diverting and will divert people from learning scientific methods.
That's the reality we are confronted with… And nobody but F. Galetto seems to care about it. See, e.g., …
Peer Reviewers and readers (and Editors, as well) should practice "metanoia" [START_REF] Deming | The new economics for industry, government, education[END_REF]
and remember his statement "The result is that hundreds of people are learning what is wrong. I make this statement on the basis of experience, seeing every day the devastating effects of incompetent teaching and faulty applications."
Now we see that Reliability Integral Theory (RIT) [START_REF] Galetto | Affidabilità Teoria e Metodi di calcolo[END_REF][START_REF] Galetto | Affidabilità Prove di affidabilità: distribuzione incognita, distribuzione esponenziale[END_REF][START_REF] Galetto | Qualità. Alcuni metodi statistici da Manager[END_REF][START_REF] Galetto | Gestione Manageriale della Affidabilità[END_REF][START_REF] Galetto | Manutenzione e Affidabilità[END_REF][START_REF] Galetto | Reliability and Maintenance, Scientific Methods, Practical Approach[END_REF][START_REF] Galetto | Reliability and Maintenance, Scientific Methods, Practical Approach[END_REF][START_REF] Galetto | Affidabilità per la manutenzione Manutenzione per la disponibilità[END_REF][START_REF] Galetto | Statistical Process Management[END_REF][START_REF] Galetto | Quality of Quality Methods is important[END_REF] solves the problem of computing the Control Limits for Control Charts, especially for I-CC_TBE, exponentially distributed data.
Since the data are exponentially distributed, the ranges are also exponentially distributed (Galetto books), and the ranges chart too is OOC. We see that actually the process is "Out Of Control".
All the wrong methods in the "ocean full of errors by …." (see the Excerpt 2, with 10 authors, and one with > 7000 Citations!) cannot find that actually the process is "Out Of Control".
Using the exponential pdf we can plot the I-CC as in figure 8. Figure 8 shows the Times Between Failures of the air conditioners and the LCL (symbol LCL_G_Exp), when the data are considered exponentially distributed: the Process is OOC (4 points below the LCL).
Let q [the MTTF of any unit] be the unknown parameter to be estimated and $t_{obs}$ the "known" observed determination of the RV Total Time on Test T. Let's fix the Confidence Level CL = 1-α; the Lower Limit of the MTTF is $q_L$ and the Upper Limit of the MTTF is $q_U$: we have to solve the two equations (14) [START_REF] Galetto | Affidabilità Teoria e Metodi di calcolo[END_REF][START_REF] Galetto | Affidabilità Prove di affidabilità: distribuzione incognita, distribuzione esponenziale[END_REF][START_REF] Galetto | Qualità. Alcuni metodi statistici da Manager[END_REF][START_REF] Galetto | Gestione Manageriale della Affidabilità[END_REF][START_REF] Galetto | Manutenzione e Affidabilità[END_REF][START_REF] Galetto | Reliability and Maintenance, Scientific Methods, Practical Approach[END_REF][START_REF] Galetto | Reliability and Maintenance, Scientific Methods, Practical Approach[END_REF][START_REF] Galetto | Affidabilità per la manutenzione Manutenzione per la disponibilità[END_REF][START_REF] Galetto | Statistical Process Management[END_REF], where the unknown variables are $q_L$ and $q_U$ (see fig. 9). Formula (14) is the reliability of a stand-by system ("associated to the reliability test, with exponential pdf") of n units for the interval 0 ----- t, with MTTF of each unit equal to q; it is also the Operating Curve [START_REF] Galetto | Affidabilità Teoria e Metodi di calcolo[END_REF][START_REF] Galetto | Statistical Process Management[END_REF]. One angular coefficient is related to α/2 and n, while the other is related to 1-α/2 and n. Thus, we have two lines passing through the origin O [with linear scale, in figures 9 and 10]; putting q = q_0, the two lines intercept the vertical segment L ---- U (probability interval), which has probability γ = 1-α that the "time to failure", Random Variable T, of any unit [vertical axis named "Total Time on Test" (for figure 9), because we consider all the data] is in the interval L ---- U when q = q_0. The angle depends on the values α/2, 1-α/2 and n (the number of data) [START_REF] Galetto | Affidabilità Teoria e Metodi di calcolo[END_REF][START_REF] Galetto | Statistical Process Management[END_REF]. With the known quantity $t_{obs}$ [observed determination of the RV T], we can draw the horizontal line intersecting the two lines through the origin: the abscissas of the intersections are the two numbers $q_L$ and $q_U$, depending on the values α/2, 1-α/2 and n: $q_L$ and $q_U$ are the Lower limit and the Upper limit of the CI of the MTTF of each unit, with CL = 1-α. It is evident, for any intelligent person, that the two segments L ---- U (vertical) and $q_L$ ---- $q_U$ (horizontal) are two different intervals, with clearly different meanings and obviously different lengths: $q_U - q_L \ne U - L$! All the documents known to the author make this BIG ERROR: they confound the segment L ---- U, a Probability segment, with the segment $q_L$ ---- $q_U$, which is a Confidence segment [START_REF] Galetto | Affidabilità Teoria e Metodi di calcolo[END_REF][START_REF] Galetto | Affidabilità Prove di affidabilità: distribuzione incognita, distribuzione esponenziale[END_REF][START_REF] Galetto | Qualità. Alcuni metodi statistici da Manager[END_REF][START_REF] Galetto | Gestione Manageriale della Affidabilità[END_REF][START_REF] Galetto | Manutenzione e Affidabilità[END_REF][START_REF] Galetto | Reliability and Maintenance, Scientific Methods, Practical Approach[END_REF][START_REF] Galetto | Reliability and Maintenance, Scientific Methods, Practical Approach[END_REF][START_REF] Galetto | Affidabilità per la manutenzione Manutenzione per la disponibilità[END_REF][START_REF] Galetto | Statistical Process Management[END_REF]! See the "ocean full of …" and Excerpt 2 (10 authors).
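A simple simulation (our own, with arbitrary illustrative settings, not the author's RIT computation) shows the practical consequence of the distinction just made: the "plug-in" limits obtained by inserting the estimate $\hat q$ into the probability-interval formula for a single exponential observation do not keep the nominal tail probabilities, precisely because a probability interval for T and a confidence statement about q are different objects:

```python
import numpy as np

rng = np.random.default_rng(1)
q_true, n, alpha, n_sim = 60.0, 10, 0.05, 200_000      # arbitrary illustrative settings

samples = rng.exponential(q_true, size=(n_sim, n))      # many Phase-I samples
q_hat = samples.mean(axis=1)                             # estimate of q from each sample
lcl = -q_hat * np.log(1.0 - alpha / 2.0)                 # "plug-in" probability limits
ucl = -q_hat * np.log(alpha / 2.0)
x_new = rng.exponential(q_true, size=n_sim)              # genuinely in-control new observations

print(f"nominal tail probability per side : {alpha / 2:.4f}")
print(f"observed below-LCL rate           : {np.mean(x_new < lcl):.4f}")
print(f"observed above-UCL rate           : {np.mean(x_new > ucl):.4f}")  # visibly inflated
```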
RIT solves the case (TBE CC). We have to look at figure 10 (similar to figure 9, but with different lines). Now one angular coefficient K is related to α/2 and 1 (the single datum), while the other is related to 1-α/2 and 1. As before we have two lines passing through the origin O [with linear scale, in figure 10]; at q = q_0 the two lines intercept the vertical segment L ---- U (probability interval), which has probability γ = 1-α that the "time to failure", Random Variable T, of any unit [vertical axis named "Time on Test", because we consider the single data] falls in it. The angle depends on the values α/2, 1-α/2 and the sample size 1 [START_REF] Galetto | Affidabilità Teoria e Metodi di calcolo[END_REF][START_REF] Galetto | Affidabilità Prove di affidabilità: distribuzione incognita, distribuzione esponenziale[END_REF][START_REF] Galetto | Qualità. Alcuni metodi statistici da Manager[END_REF][START_REF] Galetto | Gestione Manageriale della Affidabilità[END_REF][START_REF] Galetto | Manutenzione e Affidabilità[END_REF][START_REF] Galetto | Reliability and Maintenance, Scientific Methods, Practical Approach[END_REF][START_REF] Galetto | Reliability and Maintenance, Scientific Methods, Practical Approach[END_REF][START_REF] Galetto | Affidabilità per la manutenzione Manutenzione per la disponibilità[END_REF][START_REF] Galetto | Statistical Process Management[END_REF].
Acting in figure 10 (as done for figure 9), with the known quantity "mean observed time to failure" $\bar{t}$ [observed determination of the RV $\bar{T}$], we can draw the horizontal line intersecting the two lines through the origin: the abscissas of the intersections are the two numbers LCL and UCL, depending on the two chosen values α/2, 1-α/2 and 1 (the single datum). As a matter of fact, in the I-CC, the CLs LCL and UCL must be consistent with the "individual" times to failures: we want to analyse whether they are significantly different from the "true mean q", estimated by the "mean observed time to failure" $\bar{t}$. Therefore the CLs are the values satisfying the two equations (15), for any single unit [START_REF] Galetto | Affidabilità Teoria e Metodi di calcolo[END_REF][START_REF] Galetto | Statistical Process Management[END_REF] ("associated to the reliability test, with exponential pdf") of 1 unit for the interval 0 ----- t, similar to (14) with n replaced by 1; so we have 20 CIs [all equal], given $\bar{t}$ and CL = 1-α [0.9973]; remember that in this case k = 1 (sample size): I-CC! It is also the Operating Curve [START_REF] Galetto | Affidabilità Teoria e Metodi di calcolo[END_REF][START_REF] Galetto | Statistical Process Management[END_REF].
All that (above) holds when β = 1. The same ideas, though not the same formulae, can be used when the data are "Weibully" distributed.
Individual Control Charts (I-CC) and Weibully distributed data.
In the previous section we assumed that the shape parameter β had the value β = 1, that is, the pdf was exponential.
The same ideas, though not the same formulae, can be used when the distribution of the data is Weibull (formula 13). Now we must estimate the two parameters β (shape) and η (scale): with the data of Table 2, we find the estimates $\hat\beta$ and $\hat\eta$; both estimates are determinations of two estimators, which have their own distributions to be found.
The variability of the two estimators $\hat\beta$ and $\hat\eta$ has an important effect on the Control Limits of the I-CC.
To find the Control Limits of the I-CC two roads can be followed: 1) we assume that the estimator $\hat\beta$ has a "very small variability" (a very strong assumption!) and then we consider only the value $\hat\beta$, and 2) we use the distribution of $\hat\beta$.
The first road (case) is shown in the figure 11.
The estimated value $\hat\beta$, "assumed as a constant parameter", moves the LCL down. It is very easy to compute it because the transformation $y = x^{\hat\beta}$ transforms the Weibull pdf into the exponential pdf: so the previous theory applies to the transformed data and variables.
Figure 12. Surface of versus
The effect of the variability of the estimator $\hat\beta$ (case 2) on the I-CC is very dramatic.
Since the distribution of the RV $\hat\beta$ is not easily found, we have to revert to simulations. Doing that, the result is that the LCL is very near 0 and therefore can be assumed to be LCL = 0. Consequently, there is no OOC in the Individual Control Chart when we consider the variability of the estimator of the shape parameter β. The Minitab and JMP software do not find that! Using the Theory in [START_REF] Galetto | Affidabilità Teoria e Metodi di calcolo[END_REF][START_REF] Galetto | Affidabilità Prove di affidabilità: distribuzione incognita, distribuzione esponenziale[END_REF][START_REF] Galetto | Qualità. Alcuni metodi statistici da Manager[END_REF][START_REF] Galetto | Gestione Manageriale della Affidabilità[END_REF][START_REF] Galetto | Manutenzione e Affidabilità[END_REF][START_REF] Galetto | Reliability and Maintenance, Scientific Methods, Practical Approach[END_REF][START_REF] Galetto | Reliability and Maintenance, Scientific Methods, Practical Approach[END_REF][START_REF] Galetto | Affidabilità per la manutenzione Manutenzione per la disponibilità[END_REF][START_REF] Galetto | Statistical Process Management[END_REF] and simulations we have LCL = 0: figures 12 and 13 show "intuitively" the reason. Other distributions besides the Normal one can have the same problems.
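To illustrate "road 1" above (treating the estimated shape as a fixed constant, which the text warns is a strong assumption), a possible Python sketch fits a two-parameter Weibull by maximum likelihood, maps the data with y = x^beta_hat so that they become (approximately) exponential, and then maps simple limits back. The data and function names are illustrative; for brevity the sketch uses plain plug-in quantiles on the transformed scale rather than the RIT limits of the previous section, so it only demonstrates the transformation step, and it deliberately ignores the variability of beta_hat, which is exactly the weakness discussed above.

```python
import numpy as np
from scipy.stats import weibull_min

def road1_weibull_limits(x, alpha=0.0027):
    """Road 1: fix beta at its MLE, use y = x**beta (approximately exponential),
    compute plug-in quantile limits for y, then map them back to the x scale."""
    x = np.asarray(x, dtype=float)
    beta_hat, _, eta_hat = weibull_min.fit(x, floc=0)    # two-parameter Weibull MLE
    y = x ** beta_hat                                     # ~ exponential if beta_hat were exact
    mean_y = y.mean()
    lcl_y = -mean_y * np.log(1.0 - alpha / 2.0)           # plug-in quantiles, illustration only
    ucl_y = -mean_y * np.log(alpha / 2.0)
    return beta_hat, eta_hat, lcl_y ** (1.0 / beta_hat), ucl_y ** (1.0 / beta_hat)

if __name__ == "__main__":
    fake_tbe = weibull_min.rvs(1.3, scale=60.0, size=30, random_state=7)   # synthetic, NOT Table 2
    b, e, lcl, ucl = road1_weibull_limits(fake_tbe)
    print(f"beta_hat = {b:.2f}, eta_hat = {e:.1f}, LCL = {lcl:.2f}, UCL = {ucl:.1f}")
```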
Conclusion.
We recall some important ideas about the Control Charts and the Individual Control Charts (I-CC).
We are going to show some new cases about I-CC: we consider papers from the "ocean…"; we do not show here their data; we show only the right I-CC. Consider the figure 14 and compare with the figure 4.
Before carrying out any Control Chart analysis we must think about the following points:
(i) For Control Charts the "Null Hypothesis" is $H_0$ = {Process "In Control"}, with a chosen risk of 1st type α = 0.003 that $H_0$ is Rejected (and we declare the process OOC) while it is actually IC, and an "Alternative Hypothesis" $H_1$ = {Process "Out Of Control"} that is wrongly not found (there are OOCs) with a stated probability β (risk of 2nd type).
(ii) We must define the "probabilistic model" (the pdf) followed by the RVs $X_j$, for finding the "Statistic of the CC".
(iii) From the Theory we derive the statistic S (via the "elaboration formula", a RV depending on the pdf of the RVs $X_j$) and the "Control Limits LCL, UCL", which bound the "Acceptance Region" [complementary to the Rejection Region, the critical region C].
(iv) β is the probability of deciding that $H_0$ = {Process "In Control"} is true when actually it is OOC, with probability 1-β (the power) that we decide that the Process is OOC when it is OOC.
(v) When both risks are fixed, before the test, we can compute the sample size n needed to satisfy both; we name D = {$x_j$; j=1,2, …, n} the set of the data to be considered.
(vi) Then we collect m subsequent empirical samples $D_i$ = {$x_{ij}$; i=1, 2, …, m; j=1, 2, …, k} ("Complete Samples"), which allow us to compute the LCL and UCL and to decide about $H_0$.
(vii) We compute the sample quantity s, determination of the RV S; if s belongs to the Critical Region C (either s < LCL or s > UCL) we reject $H_0$ = {Process "In Control"} and we claim that the process is OOC; otherwise we "accept" (do not reject) $H_0$.
Figure 14: out-of-control kinds, UCL and LCL
The above points have dramatic consequences for Individual Control Charts, where k=1 and m<30.
We had the opportunity to use the software JMP (in Italian).
We analysed the case of Table 2 and we got the proof that JMP does not know the true concepts; see figure 15 and compare it with figure 11… Notice that JMP does not consider the variability of the RV $\hat\beta$.
LCL and UCL are not in agreement with the Theory (fig. 14). Meditate… Use RIT: the n=30 TBE can be considered as the "transition times" (failure times, exponentially distributed) between the states of a stand-by system of 30 units. We get fig. 16 by solving the two equations. Comparing figures 16 and 17, it becomes very clear that the CC from "Improved Phase…" presents 5 errors about OOC.
Reader, could you think that "Improved Phase… " is scientific and this paper is not? How can the CC from "Improved Phase…" be good?
Simulations (five million!) show that only < 5% of the computations are correct… We agree with those authors that "Further work is necessary on the OOC performance of these charts": the further Work must be to STUDY (see Deming!). We ask the reader: do you think that these findings are not supported by Theory and Methods?
2 nd case: the paper "Control charts … Exponential Distribution", published in QE is no better.
The authors find the process IC: actually it is OOC (fig. 18).

3rd case: the paper "Some effective … for reliability monitoring", published in RE&SS, by qualified authors Xie, Goh, Ranjan. Again WRONG Control Limits! See figure 19.

Both methods, the "t1 Chart" and the "ATS-unbiased t1 Chart", from the paper "Statistical design of ATS-unbiased …" provide wrong Control Limits (see Table 3, where we used the same scale of Kumar et al.). They say: "It can be observed that … detects a signal at the 67th point." The 67th point is < 0.63; obviously it is also < 1.835 (the LCL of Galetto). A very strange conclusion is drawn: "… ATS unbiased t2-chart gives an OOC for the first time at the 36th point, while using the modified scheme… the chart detects an OOC, at the 30th point." and "Because the ATS-unbiased t1-chart … OOC at the 67th point whereas the ATS unbiased t2-chart … OOC at the 30th point, the example supports … that monitoring the times to every 2nd event (r>1) can speed up the detection of shifts in the process parameter." It is not clear (to us) why the authors also write: "ATS unbiased t2-chart gives an OOC for the first time at the 36th point, while using the modified scheme, the chart detects an OOC at the 30th point." IF there is an OOC at the "36th point" (for the non-modified scheme) and there is an OOC at the "30th point" (for the modified scheme), why is the "22nd point" < "30th point" NOT an OOC? The authors do not tell us. The Peer Reviewers and the editor did not find that!

At this point, it should be clear that several Journals have been going on publishing wrong papers on I-CC_TBE [Individual Control Charts for TBE (Time Between Events)] data, exponentially distributed. Does the reader now think that the statement "The problem of monitoring TBE that follow an exponential distribution is well-defined and solved. I do not agree that 'nobody could solve scientifically the cases'" has to be considered a scientific idea? Absolutely not! This is due to lack of knowledge of the Sound Theory of the CC. The errors are in papers published by reputed Journals, written by reputed authors and analysed by reputed Peer Reviewers, who did not find the errors; moreover, they were and are read by reputed readers, who did not find the errors (see the "ocean full of errors …").
Nonetheless, they are wrong… A true disaster. Their (wrong) formulae are used by the Minitab software [and also by JMP (seen before), SixPack, SAS, …]. The users of such software take wrong decisions based on the "wrong formulae"… Worse, "the Software Management", informed of the errors, did not take any Corrective Action: a very good attitude towards Quality! Those Journals publishing wrong papers on CC for "rare events" should, for future research about CC, accept the letters sent to their Editors and provide them to their Peer Reviewers, to avoid costly errors and decisions: the letters have not yet been published; the papers are wrong and obviously the Editors cannot acknowledge that. They did not use metanoia [START_REF] Deming | The new economics for industry, government, education[END_REF]. That is the big real problem: big errors of "well reputed" people cause a lot of danger, and nobody (known to the author …), but the author (FG), takes care of teaching the students to use their own brain in order not to be poisoned by incompetents.
How many Statisticians, Professors, Certified Master Black Belts, practitioners, workers, students, all over the world, learned, are learning and will learn wrong methods, and took and will take wrong decisions? If the reader considers that the author asked many [>>50] Statisticians, Certified Master Black Belts and Minitab users (you can find them in various forums such as ResearchGate, iSixSigma, Academia.edu, Quality Digest, … and in several Universities) and nobody could solve scientifically the cases, he has the dimension of the problem. The author hopes that the Peer Reviewers of this paper have better knowledge than the discussants (in the various forums and in the "ocean full of errors …"), otherwise he risks being passed off…
In spite of all these proofs, the discussant who suggested the paper of J. Smith did not believe the evidence (see the "ocean …"). He raised the objection that it could happen only by chance: he believed only in simulations (as do all who do not know Theory)! After ten million simulations, F. Galetto found that the T Charts (in Minitab, in JMP and in all the wrong papers) were wrong 93.3% of the times! We think that it should be enough…
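The kind of check described here is easy to repeat in spirit: generate in-control exponential TBE data, apply the Normal-theory Individual (I-MR) chart limits, and count how often at least one false out-of-control signal appears. The sketch below is only an illustration of the mechanism, with an assumed sample size and MTTF; it is not the author's ten-million-run study and does not reproduce the 93.3% figure.

```python
import numpy as np

def xmr_limits(x):
    """Classical Shewhart I-MR limits: mean +/- 3 * MRbar / d2, with d2 = 1.128."""
    mr = np.abs(np.diff(x))
    sigma_hat = mr.mean() / 1.128
    return x.mean() - 3 * sigma_hat, x.mean() + 3 * sigma_hat

rng = np.random.default_rng(1)
n, reps, false_signal = 30, 10000, 0
for _ in range(reps):
    x = rng.exponential(scale=60.0, size=n)      # in-control exponential TBE (assumed MTTF)
    lcl, ucl = xmr_limits(x)
    if np.any(x < lcl) or np.any(x > ucl):
        false_signal += 1
print("runs with at least one false OOC signal:", false_signal / reps)
```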
The author, many times, with his documents , tried to compel several scholars to be scientific (from Galetto 1998 to Galetto 2022, in the References): he did not have success. Only Juran appreciated the author's ideas when he mentioned the paper "Quality of methods for quality is important" at the plenary session of EOQC Conference, Vienna (Galetto 1989). He always asked his students to use their own Intelligence, in order to avoid being poisoned by incompetents. He helped them with his papers presented at the HEI (Higher Education Institutions) Conferences since 1998. We saw that data need to be analysed with suitable methods devised on the basis of Scientific Theory and not on methods in fashion, in order to generate the correct CC. RIT [START_REF] Galetto | Affidabilità Teoria e Metodi di calcolo[END_REF][START_REF] Galetto | Affidabilità Prove di affidabilità: distribuzione incognita, distribuzione esponenziale[END_REF][START_REF] Galetto | Qualità. Alcuni metodi statistici da Manager[END_REF][START_REF] Galetto | Gestione Manageriale della Affidabilità[END_REF][START_REF] Galetto | Manutenzione e Affidabilità[END_REF][START_REF] Galetto | Reliability and Maintenance, Scientific Methods, Practical Approach[END_REF][START_REF] Galetto | Reliability and Maintenance, Scientific Methods, Practical Approach[END_REF][START_REF] Galetto | Affidabilità per la manutenzione Manutenzione per la disponibilità[END_REF][START_REF] Galetto | Statistical Process Management[END_REF] is able to deal with many distributions and then usable for many types of data and make Quality Decisions about Quality matters. We showed various cases (from books and papers) where errors were present due to the lack of knowledge of a Sound Theory of Control Charts and of RIT. In order to show the several wrong ideas and methods related to financial and business considerations about quality in several books (not given in the references) we would need at least 80 more pages in this paper: we, obviously, cannot do that.
Therefore we ask the readers to look at some of the author's documents.
Figure 2. The two Probability Intervals connected to the RV T (Total Time on Test).
Many scholars do not know the Theory; you find them in the forums [1-3] and in many papers, in good and respected Journals, such as the ones in the following "Ocean full of errors by…".
Figure 3. The Confidence Interval and the Probability Interval [for the RV T (Total Time on Test)].
Figure 5. Deming's ideas about wrong scholars.
The wrong ideas are presented in the paper (THEORETICAL REVIEW) written by five authors (R. Morey, R. Hoekstra, J. Rouder, M. Lee, E.-J. Wagenmakers), "The fallacy of placing confidence in confidence intervals", published in Psychon Bull Rev (2016) 23:103-123. There are various drawbacks; I mention only a FIRST point. The authors write: "The confidence coefficient of a confidence interval derives from the procedure which generated it. It is therefore helpful to differentiate a procedure (CP) from a confidence interval: an X% confidence procedure is any procedure that generates intervals covering θ in X% of repeated samples, and a confidence interval is a specific interval generated by such a process. A confidence procedure is a random process; a confidence interval is observed and fixed." The statement "A confidence procedure is a random process" is nonsense. It is the same type of error as "Statistical significance … is a sample statistic".
The above ideas are connected with a concept wrongly understood by many people (in the three forums [1-3]): the p-value. NOBODY can know the true values of the mean and of the variance σ² of the Distribution of the data: one can only estimate them from the data, and the estimators are Random Variables! For simplicity we consider the previous case of a single parameter; the ESTIMATOR of the parameter is a RV, with its own density (pdf).
Figure 6. The process of generation of the p-value.
Excerpt 2 (typical wrong formulae picked from the "ocean full of errors…"): "Statistical Process Control Applied at Level Crossing Incidents", by O. Abdallah et al., International Journal of Sciences: Basic and Applied Research, 2023 (10 authors, and one with > 7000 Citations!). The Deming statements are in order for Excerpt 2: "Management need to grow up their knowledge because experience alone, without theory, teaches nothing what to do to make Quality" and "The result is that hundreds of people are learning what is wrong."
Figure 8. Individual Control Chart of the Time Between Failures of the air conditioners. Notice that k = 1 (sample size).
Figures 9 and 10. Confidence Interval of the MTTF, and …
Figure 11. Individual Control Chart of the Time Between Failures of the air conditioners, considering either the value 1 or 0.85 for the shape parameter. Notice that k = 1 (sample size). See also the section Conclusion.
Figure 13. Curves of … versus … (abscissa) and …
We think we have shown the problems arising with I-CC for Weibull distributed data.
Figure 14. The flow-chart of steps for building a control chart.
Figure 15. …
Figure 16. [Excerpt] Control Chart from "Improved Phase… for Monitoring TBE".
Figure 17. Control Chart, by RIT, for "Improved Phase… for Monitoring TBE" data; vertical axis logarithmic; UCL is > 100.
Figure 18. Control Chart of the Minitab authors' paper data (Urinary); vertical axis logarithmic. RIT used (F. Galetto).
Figure 19. Control Chart of Xie et al. TBF data; vertical axis logarithmic. RIT used (F. Galetto).
Figure 20. Necessity.
Figure 21. Intelligence vs "common sense".
The author (figures 20, 21) has always been fond of Quality in his activity; for that reason, he wrote several papers and books showing scientific methods versus many wrong methods and presented them at several national and international Conferences: he wanted to diffuse Quality (from Galetto 1989 to Galetto 2022, in the References). The truth sets you free!
Table 1. Data gathered in an experiment: two samples of size 10. Higher values are better.
Sample 1:  286  948  536  124  816  729    4  143  431    8
Sample 2: 2837  596   81  227  603  492 1199 1214 2831   96
Journals that have wrong methods for computing the Confidence Interval are: Journal of Quality Technology, Kluwer Academic Publisher, Reliability Engineering & System Safety, International Journal of Production Research, IIE Transactions, Computers and Industrial Engineering, Quality and Reliability Engineering International, Quality Engineering, …
Table 2. Time between failures of air conditioners on a Boeing 720 airplane. Higher values are better.
23 261 87 7 120 14 62 47 225 71
246 21 42 30 5 12 120 11 3 14
71 11 14 11 16 90 1 16 52 95
The data are not "Normally Distributed"; therefore we cannot use the Shewhart Theory. Fitting a Weibull pdf (13) to the data of Table 2, we find the estimates of the shape and of the scale parameters, each with its Confidence Interval, with Confidence Level CL = 80%.
A very good result for a Peer Reviewed paper! The two Peer Reviewers did not know the Theory. "It is necessary to understand the theory of what one wishes to do or to make." [Deming]
4th case: in the paper "Statistical design of ATS-unbiased … time between events", we find a new wrong case, copied from Santiago and Smith (2013): the CLs are wrong. According to the Kumar et al. computations, the CLs are LCL = 31.36 and UCL = 1943.22, quite different from those of Santiago & Smith. The cause is not explained by the authors… It is interesting what we find with RIT. See Table 3.
Table 3. Comparison of results from the paper "Statistical design of ATS…" and RIT
Type of Method                             LCL     UCL       Comment
N. Kumar et al. "t1 Chart"                 0.63    2093.69   Both LCL and UCL lower than the Scientific ones
N. Kumar et al. "ATS-unbiased t1 Chart…"   31.36   1943.22   LCL 17 times higher than the Scientific one and UCL 24% of the Scientific one
F. Galetto RIT                             1.835   7940.01   Scientific
TRUTH is ALWAYS the TRUTH even though NO ONE believes it; a LIE is ALWAYS a LIE even if EVERYONE believes it. (Galileo Galilei)
Idiocy, Ignorance, Incompetence |
04100852 | en | [
"spi.meca.mefl",
"info.info-mo"
] | 2024/03/04 16:41:20 | 2023 | https://hal.science/hal-04100852/file/manuscript_V2.pdf | Kevin Schmidmayer
email: [email protected]
Luc Biasiori-Poulanges
Geometry effects on the droplet shock-induced cavitation
Keywords:
Introduction
Liquid droplets are well known to experience aerodynamic fragmentation upon interaction with a plane shock wave [START_REF] Guildenbecher | Secondary atomization[END_REF][START_REF] Theofanous | Aerobreakup of newtonian and viscoelastic liquids[END_REF]. More recently, it has been shown that in the very early stages of the shock-droplet interactionlong before the droplet starts to deform and break up -the growth of cavitation bubbles inside the droplet may occur [START_REF] Sembian | Plane shock wave interaction with a cylindrical water column[END_REF][START_REF] Kyriazis | Modelling cavitation during drop impact on solid surfaces[END_REF][START_REF] Biasiori-Poulanges | Shockinduced cavitation and wavefront analysis inside a water droplet[END_REF][START_REF] Biasiori-Poulanges | A phenomenological analysis of droplet shock-induced cavitation using a multiphase modeling approach[END_REF]. Such a process is likely to occur in a large spectrum of applications, as a desired or adverse effect, ranging from raindrop impact on aircraft [START_REF] Ando | Effects of polydispersity in bubbly flows[END_REF], to combustion and detonation of multiphase mixtures [START_REF] Meng | Numerical simulations of the early stages of high-speed droplet breakup[END_REF], through ink-jet printing or liquid jet-based physical cleaning [START_REF] Okorn-Schmidt | Particle cleaning technologies to meet advanced semiconductor device process requirements[END_REF][START_REF] Tatekura | Observation of water-droplet impacts with velocities of o (10 m/s) and subsequent flow field[END_REF], to name but a few. Because the experimental characterisation of the shock-induced cavitation within a droplet is particularly challenging [START_REF] Biasiori-Poulanges | Multimodal imaging for intra-droplet gas-cavity observation during droplet fragmentation[END_REF], recent research efforts have focused on the development of models and numerical methods to account for phase change or gas phase growth within the liquid phase. Using a model incorporating phase change, Kyriazis et al. [START_REF] Kyriazis | Modelling cavitation during drop impact on solid surfaces[END_REF] recently simulated the high-speed droplet impact on a solid substrate as experimented by Field et al. [START_REF] Field | The effects of target compliance on liquid drop impact[END_REF]. Laying the groundwork for the numerical simulation of shock-induced droplet cavitation, authors successfully showed the ability of such models to simulate the growth of bubbles. Comparing the numerical results with the experimental observations however displayed a large overestimation in the size of the bubble cloud. This is a direct consequence of the thermodynamic-equilibrium assumption which corresponds to an instantaneous equilibrium of pressures, temperatures, velocities and chemical potentials. Indeed, this approach, analogous to infinite relaxation rates for the pressures, temperatures, velocities and chemical potentials, enables the instantaneous expansion of the gas phase when subjected to a tensile wave [START_REF] Saurel | Modelling phase transition in metastable liquids: application to cavitating and flashing flows[END_REF][START_REF] Pelanti | A mixture-energyconsistent six-equation two-phase numerical model for fluids with interfaces, cavitation and evaporation waves[END_REF]. Very recently, Forehand et al. 
[START_REF] Forehand | A numerical assessment of shock-droplet interac-tion modeling including cavitation[END_REF] modelled and studied shock-induced cavitation in cylindrical and spherical droplets and were able to well capture the initial wave dynamics. However, their investigation also did not align with the experiments conducted by Sembian et al. [START_REF] Sembian | Plane shock wave interaction with a cylindrical water column[END_REF] on cylindrical droplets regarding the activity of the bubble cloud. As a result, the authors' conclusions only suggest that, like cylindrical droplets and like previous studies suggested, cavitation can occur for spherical droplets.
Within the context of heterogeneous nucleation, we recently introduced, assessed and validated a thermodynamically well-posed multiphase numerical model accounting for phase compression and expansion, which relies on a finite pressure-relaxation rate formulation [START_REF] Biasiori-Poulanges | A phenomenological analysis of droplet shock-induced cavitation using a multiphase modeling approach[END_REF]. Upon validation, we exploited the model to describe the phenomenology of the shock-induced cavitation at relatively low shock-wave Mach number (1 < M < 3) and for a cylindrical droplet, for which experimental results have already been reported [START_REF] Sembian | Plane shock wave interaction with a cylindrical water column[END_REF][START_REF] Field | The effects of target compliance on liquid drop impact[END_REF]. The bubblecloud activity was for the first time successfully captured. Accordingly, to our knowledge, there are currently no experimental or numerical studies available in the literature that examine the impact of cavitation events on interface disruption and atomization for cylindrical or spherical droplets, regardless of whether they present low or high Mach numbers.
Consequently, we extend herein our previous work through parametric simulations aiming to evaluate for the first time the geometry effects on the droplet shockinduced cavitation process. These geometry effects include the shape of the transmitted wave front, which is related to the ratio of the shock speed to the droplet sound speed, and the droplet geometry (cylindrical versus spherical). Based on the transmitted wavefront geometry, two cavitation regimes have been identified and the transition has been characterised. On more applied aspects, we also investigate the influence of the bubble cloud 1 on the interface disruption and compare the results against the pure liquid droplet test case. A parallel with the technique of effervescent atomization is eventually presented. Note that this work only considers heterogeneous cavitation, i.e., no phase change. The droplet initially contains pre-existing nuclei modelled as a liquid-gas mixture. Considering the difference in the acoustic impedance between both phases, such a modelling enables to simulate each phase response, within the mixture, to compression and expansion effects.
2 Problem description
2.1 Phenomenology
The shock-induced cavitation within a liquid droplet is initiated with the interaction of the shock wave with the droplet at time t = 0. Upon interaction, the shock is transmitted to the droplet, while part of the incident shock is diffracted around the droplet. It results in a compression wave propagating within the droplet in the stream direction. We refer to this wave as the transmitted wavefront or transmitted sound wave, denoted TSW. As a consequence of the large water-to-air acoustic impedance ratio, the TSW reflects at the droplet interface as a converging expansion wave. This first internal reflection of the TSW is denoted TSWr. As it propagates within the droplet, the TSWr generates low pressure regions in the internal flow field which, under some conditions, result in the cavitation and growth of bubbles forming a cloud. This bubble cloud eventually collapses and generates a spherical shock wave (CiS, for collapse-induced shock) originating from the cloud center. Upon reaching the droplet interface, the CiS similarly reflects as an expansion wave which, under some conditions, may also result in the cavitation and growth of bubbles. Note that when the expansion wave reflects at the droplet interface, it transforms into a compression wave. In this work, we focus on the effects of the TSW geometry on the shock-induced cavitation phenomenology. The parametric equations of the TSW read [START_REF] Biasiori-Poulanges | Shockinduced cavitation and wavefront analysis inside a water droplet[END_REF]

$$x_M = \left[c_l t - n R_d (1 - \cos\alpha)\right] \cos(\theta - \alpha) - R_d \cos\alpha,$$
$$y_M = \left[c_l t - n R_d (1 - \cos\alpha)\right] \sin(\theta - \alpha) + R_d \sin\alpha, \quad (1)$$

where n is the water-to-air sound speed ratio c_l/c_g and R_d is the droplet radius. The incident and refraction angles, α and θ, are related by the fundamental law of refraction, sin θ = n sin α. More details can be found in [START_REF] Biasiori-Poulanges | Shockinduced cavitation and wavefront analysis inside a water droplet[END_REF][START_REF] Biasiori-Poulanges | A phenomenological analysis of droplet shock-induced cavitation using a multiphase modeling approach[END_REF].
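The front described by Eq. (1) can be traced directly from these relations. The sketch below samples the incidence angle and returns the (x_M, y_M) points of the TSW at a given time; c_l and R_d are the values quoted in this paper, c_g is the standard atmospheric value, and the routine itself is only an illustrative reconstruction, not the code used for the simulations.

```python
import numpy as np

def tsw_front(t, R_d=11e-3, c_l=1512.0, c_g=343.0, n_pts=200):
    """Points (x_M, y_M) of the transmitted wavefront (Eq. 1) at time t.
    n = c_l / c_g is the water-to-air sound-speed ratio; sin(theta) = n sin(alpha)."""
    n = c_l / c_g
    # only incidence angles with n*sin(alpha) <= 1 refract into the droplet
    alpha_max = np.arcsin(min(1.0, 1.0 / n))
    alpha = np.linspace(0.0, alpha_max, n_pts)
    theta = np.arcsin(n * np.sin(alpha))
    r = c_l * t - n * R_d * (1.0 - np.cos(alpha))
    x = r * np.cos(theta - alpha) - R_d * np.cos(alpha)
    y = r * np.sin(theta - alpha) + R_d * np.sin(alpha)
    return x, y

# e.g. the front 2 microseconds after the shock first touches the droplet
x, y = tsw_front(2e-6)
```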
Problem dimensions
The Mach number M of the shock wave, the Weber number We, and the Reynolds number Re are defined as
$$M = \frac{U_s}{c}, \qquad We = \frac{\rho U^2 d_0}{\sigma} \qquad \text{and} \qquad Re = \frac{\rho U d_0}{\mu}. \quad (2)$$
U_s is the incident shock wave velocity, c is the gas sound speed in the pre-shocked state, ρ is the density of the post-shocked gas, U is the post-shocked gas velocity, μ is the dynamic viscosity of the gas, σ is the surface tension coefficient and d_0 is the diameter of the droplet.
Herein, we report high values of We, ranging from ∼10^4 to ∼10^6, and Re, from ∼10^6 to ∼10^7, for Mach numbers from 1.6 to 6. This indicates that the inertial forces dominate the surface tension and the viscous forces, respectively. These effects are therefore neglected in our modelling.
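For reference, the post-shocked gas state entering Eq. (2) follows from the normal-shock (Rankine-Hugoniot) relations for an ideal gas. The sketch below evaluates We and Re over a few Mach numbers; the surface tension and viscosity values are standard textbook figures for air and water, assumed here for illustration rather than taken from the simulations.

```python
import math

def post_shock_air(M, p1=101325.0, rho1=1.204, gamma=1.4):
    """Normal-shock relations: post-shock pressure, density and induced gas velocity."""
    c1 = math.sqrt(gamma * p1 / rho1)
    p2 = p1 * (1.0 + 2.0 * gamma / (gamma + 1.0) * (M**2 - 1.0))
    rho2 = rho1 * (gamma + 1.0) * M**2 / ((gamma - 1.0) * M**2 + 2.0)
    u2 = 2.0 * c1 / (gamma + 1.0) * (M - 1.0 / M)   # gas velocity behind the shock (lab frame)
    return p2, rho2, u2

def weber_reynolds(M, d0=22e-3, sigma=0.072, mu=1.8e-5):
    """We and Re of Eq. (2), with assumed water surface tension and air viscosity."""
    p2, rho2, u2 = post_shock_air(M)
    We = rho2 * u2**2 * d0 / sigma
    Re = rho2 * u2 * d0 / mu
    return We, Re

for M in (1.6, 2.4, 6.0):
    print(M, weber_reynolds(M))
```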
In order to facilitate the description of the shock-induced cavitation within a droplet, we use dimensionless parameters. Unless otherwise specified, non-dimensionalization of the space and time variables, L and T, is done using the initial droplet diameter d_0 = 22 mm and the sound speed in water c_l:

$$\bar{L} = \frac{L}{d_0} \qquad \text{and} \qquad \bar{T} = \frac{T c_l}{d_0}, \quad (3)$$

where the overbar denotes a non-dimensional quantity. Note that the time shown in the figures starts from the first interaction of the shock wave with the interface. We also define the dimensionless volume

$$V^* = \frac{V_c}{V_d}, \quad (4)$$

where V_c = ∑_i α_{g,i} V_i and V_d are the volumes of the bubble cloud and of the droplet, respectively. α_g is the volume fraction of gas within the droplet and V_i is the volume of the i-th cell. Cylindrical and spherical droplet volumes are identical at the beginning of the simulation to allow for a better comparison. Since the initial radius is the same in both configurations, the depth of the cylindrical droplet is adapted in order to match the volume of the spherical droplet.
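In practice V* is a weighted sum over the cells occupied by the droplet; a minimal sketch, with hypothetical array names, is:

```python
import numpy as np

def cloud_volume_fraction(alpha_g, cell_volumes, in_droplet, V_droplet):
    """V* = sum_i alpha_g,i V_i / V_d over the cells flagged as inside the droplet."""
    V_cloud = np.sum(alpha_g[in_droplet] * cell_volumes[in_droplet])
    return V_cloud / V_droplet
```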
Numerical modelling
We use herein the modelling proposed by Biasiori-Poulanges & Schmidmayer [START_REF] Biasiori-Poulanges | A phenomenological analysis of droplet shock-induced cavitation using a multiphase modeling approach[END_REF] which is a slightly modified version of the modelling proposed by Schmidmayer et al. [START_REF] Schmidmayer | Modelling interactions between waves and diffused interfaces[END_REF] to simulate the compression and expansion of each phase within the liquid-gas mixture, while ignoring phase change. The modification is only related to the form of the pressure-relaxation terms (right-hand side). The details of this modelling are provided below for the selfconsistency of the paper.
Governing equations
The thermodynamically well-posed, pressure- and temperature-disequilibrium, multi-component flow model conserves mass, momentum and total energy. It reads for N phases

$$\frac{\partial \alpha_k}{\partial t} + \mathbf{u} \cdot \nabla \alpha_k = \delta p_k, \qquad \frac{\partial \alpha_k \rho_k}{\partial t} + \nabla \cdot (\alpha_k \rho_k \mathbf{u}) = 0,$$
$$\frac{\partial \rho \mathbf{u}}{\partial t} + \nabla \cdot (\rho \mathbf{u} \otimes \mathbf{u} + p \mathbf{I}) = 0, \qquad \frac{\partial \alpha_k \rho_k e_k}{\partial t} + \nabla \cdot (\alpha_k \rho_k e_k \mathbf{u}) + \alpha_k p_k \nabla \cdot \mathbf{u} = -p_I \, \delta p_k, \quad (5)$$

where α_k, ρ_k, p_k and e_k are the volume fraction, density, pressure and internal energy of each phase, respectively, and for which k indicates the phase index. The mixture density and pressure are

$$\rho = \sum_{k=1}^{N} \alpha_k \rho_k \qquad \text{and} \qquad p = \sum_{k=1}^{N} \alpha_k p_k, \quad (6)$$

while the mixture total energy is

$$E = e + \tfrac{1}{2} \lVert \mathbf{u} \rVert^2, \quad (7)$$

where e is the mixture specific internal energy

$$e = \sum_{k=1}^{N} Y_k \, e_k(\rho_k, p_k). \quad (8)$$

In (8), e_k(ρ_k, p_k) is defined via an equation of state (EOS) and Y_k are the mass fractions

$$Y_k = \frac{\alpha_k \rho_k}{\rho}. \quad (9)$$

Herein, we consider two-phase mixtures of gas (g) and liquid (l), in which the gas is modelled by the ideal-gas EOS

$$p_g = \rho_g (\gamma_g - 1) e_g, \quad (10)$$

and the liquid is modelled by the stiffened-gas (SG) EOS

$$p_l = \rho_l (\gamma_l - 1) e_l - \gamma_l \pi_{\infty,l}, \quad (11)$$

where γ and π_∞ are model parameters [START_REF] Métayer | Elaborating equations of state of a liquid and its vapor for two-phase flow models[END_REF]. Herein, similarly to [START_REF] Biasiori-Poulanges | A phenomenological analysis of droplet shock-induced cavitation using a multiphase modeling approach[END_REF], we use γ_g = 1.4, γ_l = 2.35 and π_∞,l = 10^9. The interfacial pressure is defined as

$$p_I = \frac{\sum_{k}^{N} p_k \sum_{j \neq k}^{N} z_j}{(N-1) \sum_{k}^{N} z_k}, \quad (12)$$

where z_k = ρ_k c_k and c_k are the acoustic impedance and speed of sound of the phase k, respectively. For the pressure-relaxation terms between the phases, following Biasiori-Poulanges & Schmidmayer [START_REF] Biasiori-Poulanges | A phenomenological analysis of droplet shock-induced cavitation using a multiphase modeling approach[END_REF], δp_k reads

$$\delta p_k = \mu \, \alpha_k \sum_{j \neq k}^{N} \alpha_j (p_k - p_j). \quad (13)$$

Note that μ is a finite constant parameter which can be selected in the ]0, ∞] range. However, for a given mixture and flow regime, only one value within this range accurately reproduces the physics. This value changes from one configuration to another and must be determined by comparison with appropriated experimental data. Since pressures are in disequilibrium here, the total energy equation of the mixture is replaced by the internal-energy equation for each phase. Nevertheless, the conservation of the mixture total energy can be written in its usual form

$$\frac{\partial \rho E}{\partial t} + \nabla \cdot \left[ (\rho E + p) \mathbf{u} \right] = 0. \quad (14)$$

We note that (14) is redundant when the internal energy equations are also computed. However, in practice, we include it in our computations to ensure that the total energy is numerically conserved, and thus preserve a correct treatment of shock waves.

Based on the hyperbolic study, the mixture speed of sound, also called frozen speed of sound, is derived as

$$c^2 = \sum_{k=1}^{N} Y_k c_k^2, \quad (15)$$
which is found to be in agreement with previously reported expression [START_REF] Saurel | Simple and efficient relaxation methods for interfaces separating compressible fluids, cavitating flows and shocks in multiphase mixtures[END_REF]. We also recall that the model is in velocity equilibrium, respects the second law of thermodynamics and is hyperbolic with eigenvalues either equal to u or u ± c, where u is the velocity in the x-direction.
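The closure relations (10), (11) and (15) are simple enough to evaluate directly. The sketch below uses the parameter values quoted above, together with the usual ideal-gas and stiffened-gas phase sound speeds, c_g² = γ_g p/ρ_g and c_l² = γ_l (p + π_∞,l)/ρ_l; evaluating the liquid expression at atmospheric pressure recovers the ≈ 1512 m/s water sound speed used later in the setup.

```python
import math

GAMMA_G, GAMMA_L, PI_INF_L = 1.4, 2.35, 1.0e9   # values used in the paper

def e_gas(rho, p):            # ideal gas, Eq. (10) inverted for the internal energy
    return p / (rho * (GAMMA_G - 1.0))

def e_liq(rho, p):            # stiffened gas, Eq. (11) inverted for the internal energy
    return (p + GAMMA_L * PI_INF_L) / (rho * (GAMMA_L - 1.0))

def c_gas(rho, p):
    return math.sqrt(GAMMA_G * p / rho)

def c_liq(rho, p):
    return math.sqrt(GAMMA_L * (p + PI_INF_L) / rho)

def frozen_sound_speed(Y, c):
    """Mixture (frozen) speed of sound, Eq. (15): c^2 = sum_k Y_k c_k^2."""
    return math.sqrt(sum(Yk * ck**2 for Yk, ck in zip(Y, c)))

# e.g. water with a 1e-6 volume fraction of air nuclei at 1 atm
rho_l, rho_g, p, a_g = 1028.0, 1.204, 101325.0, 1e-6
rho_mix = (1 - a_g) * rho_l + a_g * rho_g
Y = [a_g * rho_g / rho_mix, (1 - a_g) * rho_l / rho_mix]
c = [c_gas(rho_g, p), c_liq(rho_l, p)]
print(frozen_sound_speed(Y, c))   # close to 1512 m/s
```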
Numerical method
We numerically solve Eq. ( 5) using a splitting procedure between the left-hand-side terms associated with the flow and the right-hand-side terms associated with our relaxation procedure.
The left-hand-side terms are solved by an explicit finite-volume Godunov scheme where, to ensure the conservation of total energy, a procedure correcting the non-conservative terms of the internal-energy equations is required and it uses the mixture total-energy relation [START_REF] Pelanti | A mixture-energyconsistent six-equation two-phase numerical model for fluids with interfaces, cavitation and evaporation waves[END_REF]. The method corrects the total energy before the relaxation procedure, during the flux computation of the hyperbolic step, and therefore allows finite or infinite relaxations [START_REF] Schmidmayer | Modelling interactions between waves and diffused interfaces[END_REF].
The relaxation terms (system of ordinary differential equations) are integrated with a first-order, explicit, Euler scheme with time-step subdivisions [START_REF] Schmidmayer | Modelling interactions between waves and diffused interfaces[END_REF]. The number of subdivisions is adapted at each time step to verify the volume-fraction and pressure constraints. During this procedure, if the pressures are completely relaxed, i.e. a unique pressure for all phases, we terminate the Euler scheme and we perform from the initial state an infiniterelaxation procedure [START_REF] Saurel | Simple and efficient relaxation methods for interfaces separating compressible fluids, cavitating flows and shocks in multiphase mixtures[END_REF] to guarantee a unique pressure and better estimate the solution. This also assures a faster computation.
A second-order-accurate MUSCL scheme with two-step time integration is used [START_REF] Schmidmayer | ECOGEN: An open-source tool for multiphase, compressible, multiphysics flows[END_REF], where the first step is a predictor step for the second, and the usual piece-wise linear MUSCL reconstruction [START_REF] Toro | Riemann solvers and numerical methods for fluid dynamics[END_REF] with the monotonized central (MC) [START_REF] Van Leer | Towards the ultimate conservative difference scheme III. Upstream-centered finitedifference schemes for ideal compressible flow[END_REF] slope limiter is used for the primitive variables.
In order to resolve the wide range of spatial and temporal scales of wavefronts and interfaces, an adaptive mesh refinement technique is employed [START_REF] Schmidmayer | Adaptive Mesh Refinement algorithm based on dual trees for cells and faces for multiphase compressible flows[END_REF]. The cell i is refined when the following criterion is fulfilled
$$\frac{\lvert X_{Nb(i,j)} - X_i \rvert}{\min\left(X_{Nb(i,j)}, X_i\right)} > \epsilon, \quad (16)$$
where X is a given flow variable. The criterion is tested for all neighboring cells, denoted by the subscript Nb(i, j), where the j-th cell is the corresponding neighbor of the i-th cell. The threshold is conservatively set to ε = 0.02. The above refinement criterion is tested for density, velocity, pressure and volume fraction and refines the cell if the criterion is fulfilled for any of these variables. In addition, neighboring cells of refined cells are also refined to prevent oscillations as well as loss of precision. This modelling is implemented in ECOGEN [START_REF] Schmidmayer | ECOGEN: An open-source tool for multiphase, compressible, multiphysics flows[END_REF], which has been validated, verified and tested for finiterelaxation rate in various setups such as droplet shockinduced cavitation, gas bubble dynamics problems, including free-space and near-wall bubble collapses, and liquid-gas shock tubes. Using infinite-relaxation rate, it has also been validated for surface-tension problems as well as column and droplet breakup due to high-speed flow (see, e.g., [START_REF] Biasiori-Poulanges | A phenomenological analysis of droplet shock-induced cavitation using a multiphase modeling approach[END_REF][START_REF] Schmidmayer | Modelling interactions between waves and diffused interfaces[END_REF][START_REF] Dorschner | On the formation and recurrent shedding of ligaments in droplet aerobreakup[END_REF] and references therein).
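The refinement test of Eq. (16) is easy to reproduce; the sketch below applies it to one cell given the values of a flow variable in that cell and in its neighbours. It is a simplified, structured-grid stand-in for the dual-tree implementation of the code, and it uses absolute values in the denominator as an extra assumption to keep the toy version robust for signed variables such as the velocity.

```python
def needs_refinement(x_cell, x_neighbours, eps=0.02):
    """Eq. (16)-style test: refine if the relative jump to any neighbour exceeds eps."""
    for x_nb in x_neighbours:
        denom = min(abs(x_nb), abs(x_cell))   # absolute values: an assumption of this sketch
        if denom > 0.0 and abs(x_nb - x_cell) / denom > eps:
            return True
    return False

def flag_cell(cell_vars, nb_vars, eps=0.02):
    """Test the criterion for density, velocity, pressure and volume fraction."""
    return any(needs_refinement(cell_vars[k], [nb[k] for nb in nb_vars], eps)
               for k in ("rho", "u", "p", "alpha"))
```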
Computational setup
The calibration of the pressure-relaxation rate, µ, was done by Biasiori-Poulanges & Schmidmayer [START_REF] Biasiori-Poulanges | A phenomenological analysis of droplet shock-induced cavitation using a multiphase modeling approach[END_REF] against the experiment of Sembian et al. [START_REF] Sembian | Plane shock wave interaction with a cylindrical water column[END_REF], where a Mach 2.4 planar shock wave interacts with a cylindrical water droplet (column) of 22 mm in diameter. In this experiment, the growth of a bubble cloud has been imaged. The calibrated pressure-relaxation rate was found to be µ = 3.5 for an initial air volume fraction in water of α g = 10 -6 . This corresponds to the pre-existing nuclei in non-purified water. We recall that considering the difference in the acoustic impedance between both phases, the modelling enables to simulate each phase response, within the mixture, to compression and expansion effects, i.e. heterogeneous cavitation (without phase change). µ was again validated against a second experiment of Sembian et al. [START_REF] Sembian | Plane shock wave interaction with a cylindrical water column[END_REF] with M = 1.75, for which no bubble cloud has been recorded. Note that the results presented herein are based on the assumption that this calibration on the cylindrical droplet of Sembian et al. also works for spherical droplets with the same initial purity (nucleus concentration). Regardless, the phenomenology would be qualitatively unaffected as it is mostly governed by the internal wave dynamics and geometry effects. Computational setup corresponding to a planar shock wave interacting with a water droplet. The four blue segments aligned on the droplet vertical axis show the locations of the probes used to plot the pressure profiles in Fig. 3.
The two-dimensional (2D) computational setup, corresponding to a planar shock wave interacting with a water droplet, is shown in Fig. 1, where the x-axis is the axis of symmetry on which the center of the droplet of radius R d = 11 mm is located. Cylindrical axi-symmetry is used to model the spherical droplet while it is disabled to model the cylindrical droplet. Simulations are performed in a [6R d × 3R d ] rectangular computational domain. A symmetric boundary condition is applied to the bottom side of the computational domain, and non-reflective boundary conditions are imposed to the remaining boundaries. The droplet is initially located at the center, and is assumed to be in mechanical equilibrium with the surrounding air. The initial droplet is resolved by 100 cells per diameter. Adaptive mesh refinement (AMR) composed out of two grid levels, leading to 400 cells per diameter, and adapted to follow the flow discontinuities is used. The AMR level is selected based on the analysis of the grid sensitivity [START_REF] Biasiori-Poulanges | A phenomenological analysis of droplet shock-induced cavitation using a multiphase modeling approach[END_REF]. The shock wave is initialized inside the domain, and travels from left to right in air at atmospheric conditions. For the incident shock Mach number M, the initial flow field is determined from the Rankine-Hugoniot jump relations using a downstream density of 1.204 kg/m 3 and a 1 atm pressure. The water has a density of 1028 kg/m 3 . This was calculated to agree with the sound speed in water calculated from the experimental observations of Sembian et al. [START_REF] Sembian | Plane shock wave interaction with a cylindrical water column[END_REF] (≈ 1512 m/s), when using the Eq. 11.
In addition, note that a 3D simulation has been carried out on a coarser mesh to assess the quality of the 2D axi-symmetric approximation and only marginal differences were observed, arising from differences in numerical dissipation. Indeed, differences should emerge for longer observation times where surface tension and viscosity play a role. Hence, for computational saving reasons, 2D axi-symmetric simulations are only performed herein to represent a spherical droplet.
Further, note that the computation of the bubble cloud volume in the axi-symmetric configuration takes into account the volume of each cell, V i , with a cylindrical revolution. In order to assess the phenomenological differences between the cylindrical and the spherical droplet, we take the case where the incident shock wave travels at Mach 2.4.
Results and Discussion
The first phenomenological difference concerns the wave dynamics. The geometrical difference influences the shape and amplitude of the reflected shock wave from the impingement of the incident shock wave on the droplet, and, more importantly, of the transmitted sound wave (TSW) within the droplet. Fig. 2 depicts these shape differences with the help of numerical schlieren (i.e., the exponential of the negative, normalized density gradient [START_REF] Pishchalnikov | High-speed video microscopy and numerical modeling of bubble dynamics near a surface of urinary stone[END_REF][START_REF] Quirk | On the dynamics of a shock-bubble interaction[END_REF]). We observe that the reflected shock wave is delayed in the spherical droplet case. This behavior is explained by the fact that the reflected shock wave travels spherically (i.e. with two curvature angles), while it is cylindrically reflected (i.e. with one curvature angle) in the case of the column. Regarding the TSW, almost no influence of the curvature angles is observed by only looking at the schlieren images. Indeed, the positions of the waves are matching between the two cases. However, pressure profiles presented in Fig. 3 show that the TSW, which is a compression wave, has a weaker amplitude in the case of the droplet. This indicates that the additional curvature angle induces a weaker energy transmission of the incident shock wave towards the interior of the droplet. We also note that the initial compression jump of the TSW is followed by a slowly decreasing pressure. Then the TSWr, which are rarefaction waves, focus and generate a sudden drop in pressure. This pressure drop is observed for the column in Fig. 3 but not for the droplet. This difference is explained by a peculiar phenomenon happening in the case of the droplet. Fig. 4 presents pressure fields in two configurations, pure water and water with initial nuclei, for both the column and the droplet. In the case of pure water, cavitation is absent. Hence, the peculiar event is easier to elucidate. During the focus of the TSWr ( t = 1.22 and 1.25), we observe higher amplitudes of negative pressures for the droplet. Although the amplitude of the TSW is weaker for the droplet, this phenomenon is expected since the additional curvature angle induces a stronger focus of the TSWr. At t = 1.29, the peculiar phenomenon appears for the droplet case. Instead of having primarily one low-pressure point (tension) as for the column, we ob-serve two low-pressure points with short but sufficient distance from one another to generate compression in between. This compression then travels and counterbalances the rarefaction wave propagating in direction of the center of the droplet ( t = 1.32). For this reason, in opposition of the column case, no sudden pressure drop is observed at the center of the droplet but rather a sudden compression ( t = 1.56). Note that the compression is also observed after the rarefaction wave for the column, although with lower pressures. When nuclei are initially present, this phenomenon is weakened since a significant part of the energy is absorbed by the gas phase to generate the bubble cloud.
The second phenomenological difference concerns the bubble cloud resulting from these wave dynamics. In fact, in the case of the spherical droplet, one could think that this more pronounced curvature would induce a stronger focus of the TSWr and therefore a greater cavitation phenomenon. However, contrary to appearances, although the focus is stronger, it appears that the combination, of (i) the lower transmitted energy within the droplet with (ii) the peculiar wave-dynamics phenomenon explained above, counterbalances this focus. This results, as shown in Fig. 5, in a smaller volume of bubble cloud for the droplet compared to the column. Notably, the maximum bubble-cloud volume for the droplet is ≈ 23% of the one for the column. Further, note that the bubble clouds of both configurations start to grow at the same time and with very similar growth rates. However, the growth phase is longer for the column case. and the volume fraction of gas, respectively. We observe that the TSWr are more mitigated for the droplet, and the dynamics and position of the cloud is captured. For the droplet, the bubble cloud is no longer present from the image at t = 1.48. One can also note that the shock wave which could be emitted by the collapse of the bubble cloud (CiS) is not perceptible in the case of the droplet, whereas it is observed, although weak, for the column. Consequently, these phenomenological differences, here depicted for a Mach number of 2.4, indicate that cavitation is in general less likely to occur in the spherical droplet case. The aim of the next section is therefore to confirm it and to evaluate the critical Mach number for shock-induced cavitation.
Critical Mach for cavitation appearance
In order to assess the critical Mach number presenting the first signs of cavitation appearance, it is necessary to define an indicator offering more information than a simple threshold on the volume of the gas phase. In consequence, we consider the relative variation of volume between two consecutive simulations:

$$\delta V^*_{max} = \frac{V^*_{max,p} - V^*_{max,p-1}}{V^*_{max,p-1}}, \quad (17)$$
where V * max is the maximum bubble-cloud volume reached during the simulation. Subscript p denotes the evaluated Mach-number point and p -1 is therefore the preceding point (e.g., p for the simulation at M = 1.8 and p -1 for the simulation at M = 1.6). Fig. 7 presents V * max and δV * max from M = 1.6 to 3. Results from the simulations (markers) are completed by splines. The first observation is that the cloud volume is significantly larger for all Mach numbers in the case of the column compared to the droplet, but for below M = 1.8 where no cavitation is detected in both configurations. From δV * max , we note for the column (droplet) that the volume is increased by 64% and 264% (48% and 179%) at M = 1.8 and M = 2 (M = 2 and M = 2.2), respectively. This indicates that cavitation is very slightly starting at M = 1.8 for the column (M = 2 for the droplet). Since it started from a very small gas volume composed of only nuclei, we do not consider it sufficient to call it a cavitation phenomenon. However, based on the sudden and significant increase in volume between M = 1.8 and M = 2 for the column (M = 2 and M = 2.2 for the droplet), we can consider cavitation to first appear within this range, which is represented with a shaded area for each configuration.
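The onset criterion of Eq. (17) can be applied mechanically to the series of maximum cloud volumes; a minimal sketch (the jump threshold is a free parameter, not a value taken from the paper) is:

```python
def relative_variation(v_max):
    """delta V*_max between consecutive Mach-number points, Eq. (17).
    Assumes strictly positive entries (a residual nuclei volume is always present)."""
    return [(v_max[i] - v_max[i - 1]) / v_max[i - 1] for i in range(1, len(v_max))]

def onset_interval(mach, v_max, jump=1.0):
    """First interval (M_{p-1}, M_p) where the relative increase exceeds `jump`."""
    for i, d in enumerate(relative_variation(v_max), start=1):
        if d > jump:
            return mach[i - 1], mach[i]
    return None
```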
Moreover, vertical lines represent the first observations of collapse-induced shock waves (CiS). One can note that the CiS are observed for larger Mach numbers than for the first cavitation observations: M = 2.3 and M = 2.5 for the column and droplet, respectively. In fact, the smaller bubble-cloud volume combined with the milder pressure field around the cloud didn't allow for a shock wave to be emitted at the end of the collapse of the cloud. In addition, as expected, a CiS is observed for a lower Mach number for the column compared to the droplet. However, one could have expected similar cloud volume for the first CiS apparition between both configurations, but they present V * ≈ 3.2x10 -5 and V * ≈ 1.6x10 -5 , respectively. This is explained by the fact that the cloud covers the entire thickness of the column and cylindrically collapses, whereas it presents higher concentration of bubbles/gas content and spherically collapses for the droplet.
In conclusion, we can confirm the indication of the previous section, i.e. cavitation is less likely to occur in the droplet case.
High Mach numbers
The influence of the Mach number on the internal pressure field is twofold: (i) the transmitted energy increases with M, and (ii) the geometry of the transmitted front changes from concave to convex. In the context of shock-induced cavitation, the two effects combine and affect the growth of the gas phase in the droplet. As shown in the bottom graph of Fig. 8, V * max is constant and equals to zero for 1.0 < M ≲ 2.2. This is due to incident shock waves not strong enough to generate cavitation upon reflection and amplification. At high Mach number, M ≳ 4.5, V * max exhibits a linear behavior, while for 2.2 ≲ M ≲ 4.5 stands out as a transitional region. This latter region is a direct consequence of the change in the geometry of the transmitted wavefront (see top graph, Fig. 8) from concave to convex with an infinite radius of curvature, R → ∞, appearing for M = 4.38. This critical Mach number corresponds to a shock speed equals to the sound speed in water, n = c l /u s = 1. Consequently, the fundamental law of refraction reduces to sin θ = sin α, where α and θ are the incident and refraction angles, respectively. According to the geometrical ray acoustics, the transmitted rays to the droplet are then aligned with the incident rays and propagate at the same speed which results in a plane wavefront. For n > 1 the acoustic rays diverge and propagate faster than the shock wave outside, which draw a spherically diverging (concave) shock. Conversely, n < 1 results in a spherically converging (convex) shock. The shape of the TSW as a function of the shock-wave Mach number is shown in Fig. 9. The linear trend reported at high Mach numbers are consistent with the small variation in R (see the dotted-dashed blue line, bottom graph of Fig. 8). The dimensionless volume V * max depends on the pressure induced by the expansion wave which corresponds to the first reflection of the transmitted wave (TSWr). The intensity of the expansion wave is related to its focusing, which depends on the shape of the TSW and the geometry of the reflector, here the droplet back face. For two TSW with relatively same R, the amplification rate during the expansion wave focusing is nearly equal since the shape of the TSW and the geometry of the reflector are both conserved. Consequently, the only transmitted energy acts on V * max , which approximately linearly increases over M ≳ 6. For large variation in R, that is for 3 ≲ M ≲ 5, the amplification rate strongly varies as the geometrical configuration for the TSWr significantly changes 2 . Fig. 10 shows the variation of the time-dependent gas volume within the droplet. It reports both an increase in V * over M as well as a phase shift in the temporal location of the maximum gas volume experienced by the droplet. Both the increase in the volume and the phase shift are related to the M-dependent transmitted energy, as discussed previously with Fig. 8. Dissociating the role of the TSW geometry on the phase shift from the contribution of the energy transmitted is not straightforward. However, we here assume the TSW shape to not drive the phase shift as the time required to complete the focusing is fixed by the non-deviated ray propagating along the droplet axis.
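The transition can be located directly from the speed ratio: the transmitted front is concave for n = c_l/u_s > 1, plane for n = 1 and convex for n < 1. A minimal check, with an assumed pre-shock air sound speed of 343 m/s (hence a critical Mach number of about 4.4, close to the 4.38 quoted above), is:

```python
def tsw_regime(M, c_air=343.0, c_water=1512.0):
    """Curvature regime of the transmitted wavefront versus the incident shock Mach number."""
    n = c_water / (M * c_air)          # ratio of water sound speed to incident shock speed
    if n > 1.0:
        return "concave (diverging)"
    if n < 1.0:
        return "convex (converging)"
    return "plane"

M_critical = 1512.0 / 343.0            # ~4.41 with these assumed values
print(M_critical, tsw_regime(2.4), tsw_regime(6.0))
```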
Bubble-cloud influence on the droplet interface
In order to study the potential influence of the bubble cloud on the droplet interface and its consequence on the atomization process, we focus on the highest Mach number: 6. Indeed, we observed the largest cloud volume for this Mach number and it is therefore expected to observe its greatest influence on the droplet dynamics, at least at the back of the droplet where the cloud is located.
² Note that the present numerical model does not account for the ionization of air. At low hypersonic regimes, the air temperature increases as the molecular bonds of the air molecules increase their vibration. In the high hypersonic regimes, the high temperature of air (> 2000 °C) results in the atom dissociation of the oxygen molecules. During the ionization process, atoms lose electrons to form a plasma which could strongly affect the aerodynamic and hydrodynamic processes. The critical Mach number for which ionization should occur is not clear, but most probably around 8.
To assess this influence, we compare in Fig. 11 the results given by a simulation with a droplet of water containing initial nuclei to the results given by a simulation
involving pure water (no bubble-cloud appearance). At t = 0.79, results are almost identical, in particular in regard of the position and strength of the waves. Later, at t = 1.74, we observe the bubble cloud for the simulation containing initial nuclei. Note that the TSW reflections are completely absorbed by the cloud during its formation. Whereas waves continue to propagate for the pure-water case with potential perturbation of the interface at the front of the droplet. The cloud then collapses until t ≈ 3.37 where it reaches its minimum volume and emits a shock wave (CiS). At this time, we observe a shift of the position of the back of the droplet between the two cases. In fact, the cloud during its collapse pulls the back of the droplet. During the rebound phase, this shift is reduced until almost no difference is observed and it remains as it is a posteriori. Indeed, only a small activity of the cloud is afterwards noticed. This leads to the assessment of the global disruption of the interface caused by the cloud. At t = 5.09, from Fig. 11 with the schlieren or from Fig. 12 where interface contours are superposed, only marginal differences of interface dynamics are observed. In other words, the bubble cloud, although significantly impacting the wave activity, did not impact the overall deformation and disruption of the interface known to be mainly governed by the interface instabilities such as Kelvin-Helmholtz or Rayleigh-Taylor instabilities [START_REF] Dorschner | On the formation and recurrent shedding of ligaments in droplet aerobreakup[END_REF]. This indicates that the cloud activity, for this unique impulse scenario (a unique incident shock wave), does not play a role within the atomization process. Note that similar conclusions were observed for M = 3, therefore they are not presented here. In addition, although using a different numerical modelling, this result is consistent with the previous work of Nykteri & Gavaises [START_REF] Nykteri | Droplet aerobreakup under the shear-induced entrainment regime using a multiscale two-fluid approach[END_REF] mentioning that the fragmentation of a 1.9 mm droplet exposed to a M = 2.64 shock wave is not altered by the presence of the bubble cloud. When relating this result to effervescent atomization, this seems counter intuitive. Effervescent atomization is a method of atomization that involves inserting a small amount of gas into the liquid before it is atomized. This technique leads to significant improvements in performance in terms of smaller drop sizes and/or lower injection pressures [START_REF] Sovani | Effervescent atomization[END_REF]. As a result, one could have guessed that the presence of the bubble cloud would also have led to improved atomization. Thereby, in order to better mimic such technique, we propose an additional test case where a bubble is initially placed within the droplet. We assume that this bubble could have been voluntarily placed there or simply created during a previous pulse V * max (incident shock wave). Hence, we choose the volume of the bubble to equal the maximum volume of the bubble cloud of the previous simulation with nuclei. At M = 6, V * max ≈ 3.865x10 -3 , this leads to a bubble of initial radius R b ≈ 1.726 mm, to compare with the 11 mm-radius droplet. We place this bubble approximately at the location of the bubble-cloud collapse point, i.e. 8.5 mm from the center of the droplet, on the right. Fig. 
13 presents a comparison for different time instants between a droplet with and without an initial bubble inside the droplet. The incident shock wave propagates once again at M = 6 and the water contains initial nuclei. At t = 0.88, we observe especially TSW reflection (rarefaction wave) from the impingement of TSW (compression wave) on the bubble. This TSW-bubble interaction induces the collapse of the bubble, the later occurring until t ≈ 3.2. At t = 1.31, the bubble cloud is formed in both configurations. In the case with the bubble, the cloud surrounds the bubble with a small layer of almost pure water separating the bubble and the cloud. Note that the outer shape of the cloud is similar in both cases. At t = 3.2, the cloud and the bubble have collapsed and CiS are observed. In addition, the formation of a thin jet is taking place at the back of the droplet. Indeed, similarly to bubble collapse near rigid or free surfaces or bubble collapse induced by a shock wave [START_REF] Blake | Cavitation bubbles near boundaries[END_REF][START_REF] Lauterborn | Physics of bubble oscillations[END_REF][START_REF] Supponen | Scaling laws for jets of single cavitation bubbles[END_REF], the non-spherical collapse of the present bubble induces a jet, here directed toward the back of the droplet. This jet is more prominent at t = 5.09. Whereas, also here, the cloud activity starts to fade. Fig. 14 shows the contours at the same latest time and allows a better observation of the jet but also of the well-known shape of a bubble jetting. Note that we measure an average speed of the jet of ≈ 107 m/s over a duration of t ≈ 2.06 (30 µs), with jet speed starting approximately 40% above this value and slowly decreasing over time. In comparison to the case without the initial bubble, we note a significant difference of the droplet contour at the back of the droplet where the jet takes place, but almost none elsewhere. This perturbation of the interface from the back is expected to play a key role in the atomization process and will certainly increase its speed. However, the complete assessment for longer times is out of the scope of the paper and implies to take into account viscous and surface tension effects, while also computing 3D simulations since axisymmetry breaks for longer times [START_REF] Dorschner | On the formation and recurrent shedding of ligaments in droplet aerobreakup[END_REF][START_REF] Meng | Numerical simulation of the aerobreakup of a water droplet[END_REF]. Furthermore, this problem is a toy problem and it would make more sense to invest time and resources on real-world applications where multiple pulses are encountered and may result in enhanced atomization due to these cavitation events. In many applications, successive shocks may be transmitted to the droplet. It typically happens when a shock wave is reflected at one or multiple walls, under the droplet exposure to a pulsed source, or when the droplet-shock interaction is followed by the droplet high-speed impact. This latter example occurs during supersonic flights, when raindrops interact with shock waves (e.g., the detached bow shock) before impacting the aircraft structure and cause rain erosion damage [START_REF] Meng | Numerical simulations of the early stages of high-speed droplet breakup[END_REF].
One should note that we also carried out a simulation where the bubble was placed at the center of the droplet, which resulted in marginal differences with respect to interface disruption. This indicates that the change in the gas-to-water volume ratio (≈ 0.3865%) is not the main reason why interface disruption differs. Hence, the position of the cloud is important, although this position is, most of the time, not practically controlled and is directly linked to the shape and size of the droplet.
Conclusion
Shock-induced cavitation within a column and within a droplet has been presented, with phenomenological differences arising, which constitutes the first studied geometry effect. We observed that the energy transmitted within the droplet is smaller than within the column. This, combined with a peculiar wave dynamics in which a compression wave appears between two low-pressure points (tension), indicates that cavitation is less likely to occur for a droplet. The latter was then confirmed over a large Mach-number range. In addition, the critical Mach number for cavitation appearance was found to be between M = 1.8 and M = 2 for the column, and between M = 2 and M = 2.2 for the droplet.
The second geometry effect is linked to the shape of the transmitted front, which changes from concave to convex when increasing the Mach number. The transition between the two appears for M = 4.38, corresponding to a shock speed equalling the sound speed in water. This gives rise to two regimes of cavitation: an exponentially (M < 4.38) and a linearly (M > 4.38) increasing bubble-cloud volume.
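For orientation (the exact sound speeds depend on the equations of state adopted in the model, so the values below are assumed typical ones), this threshold follows from requiring the incident shock speed to match the sound speed in water:

M_cr · c_air = c_water, i.e. M_cr ≈ 1482 / 338 ≈ 4.38.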
Finally, the study of the bubble-cloud influence on the droplet interface concluded that, counter-intuitively, the cloud has only a marginal effect on the disruption of the interface, and therefore on the atomization process, when the droplet is subjected to a single pulse (a unique shock wave). However, experimenting with a toy model, where a bubble is initially present within the droplet, strongly suggests that the presence of cavitation events within the droplet will enhance the atomization process when the droplet is subjected to multiple pulses. This result draws parallels with the technique of effervescent atomization. Ultimately, if time and resources are invested in real-world applications where multiple pulses are encountered, three-dimensional, viscous and surface-tension effects should certainly be added to the modelling to assess interface disruption at longer times and therefore to assess atomization performance.
Figure 1. Computational setup corresponding to a planar shock wave interacting with a water droplet. The four blue segments aligned on the droplet vertical axis show the locations of the probes used to plot the pressure profiles in Fig. 3.
4.1 Column versus droplet
4.1.1 Phenomenological differences for Mach 2.4
Figure 2. Comparison of the wave dynamics before complete convergence of the rarefaction waves. Incident shock wave propagates at M = 2.4. Schlieren is presented for both the column and the droplet.
Figure 3.
Fig. 6 presents, at longer time instants, the internal structure with regard to the wave dynamics and the bubble-cloud activity, through the numerical schlieren.
Figure 4. Comparison of the pressure fields within the column and the droplet. Results are presented for simulations with pure water and for water with initial nuclei for an incident shock wave propagating at M = 2.4. Note that for visualization purposes, the colorbar does not start at the minimum observed pressure; therefore, small regions of low pressure might be saturated in dark blue.
Figure 5. Evolution of the bubble-cloud volume for the column and the droplet. The incident shock wave propagates at M = 2.4.
Figure 6. Comparison of the internal structure between the column and the droplet until the bubble-cloud collapse. The incident shock wave propagates at M = 2.4. The numerical schlieren is displayed and overlaid with the volume fraction of gas (white-to-blue colormap), with an opacity function to render translucent surfaces.
Figure 7. Evolution and variation (δ) of the maximum bubble-cloud volume, V*max, for a shorter range of Mach number. The shaded areas represent Mach-number ranges where we observed the first signs of cavitation appearance. Dash-dotted vertical lines represent the first observation of a collapse-induced shock wave (CiS). The coloring of the shaded areas and of the vertical lines is consistent with the column and droplet configurations.
Figure 8. The top graph shows the variation of the transmitted wavefront curvature over M, where R is the radius of curvature of the wavefront and R* = R/Rd. The dashed red line locates the critical Mach number M = 4.38 from which the wavefront is plane. For lower M, the transmitted wave is spherically diverging and for higher M it is spherically converging. The bottom graph shows the effect of the Mach number on V*max.
Figure 9. Shape of the transmitted wavefront (TSW) as a function of the shock-wave Mach number when the on-axis transmitted ray reaches the droplet center.
Figure 10. Growth of the gas phase within the droplet as time proceeds and as a function of the Mach number.
Figure 11. Comparison over time between a pure-water droplet and a droplet of water containing initial nuclei. Incident shock wave propagates at M = 6. Schlieren is presented.
Figure 12.
Figure 13. Comparison over time between a droplet with and without an initial bubble inside the droplet. Incident shock wave propagates at M = 6 and the water contains initial nuclei. Schlieren is presented.
Figure 14.
Note that in this work, the "shock-induced bubble cloud" terminology stands for the liquid-gas mixture.
Acknowledgment
Authors acknowledge the financial support from the ETH Zurich Postdoctoral Fellowship program.
Data availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Author Contributions |
04100946 | en | [
"stat",
"info"
] | 2024/03/04 16:41:20 | 2023 | https://hal.science/hal-04100946/file/Labour_market_transformation_Work_in_progress_2023.pdf | Milena-Jael Silva
email: [email protected]
French labour market transformation in the smart cities era expansion: New smart service systems -New job skills policy development ?
Keywords: Smart City, labour market transformation, workforce requirements, job skills policy development, service industries (art, culture, tourism, transport, health...), innovation, neo-Schumpeterian
According to Schumpeter, every new socio-economic cycle occurs as a result of new kinds of technology that transform society by offering novelties such as consumer products/services, production methods, standards, markets, types of industrial organisation, academic programmes, job skills and jobs. Each long business cycle, or Kondratiev wave, has four phases: 1) depression, 2) recovery, 3) prosperity or expansion, and 4) recession. The current Smart City era, which several researchers have related to "the sixth Kondratiev wave", is associated with smart technologies (e.g. big data, Internet of things). Nowadays, the sixth Kondratiev wave is moving from the recovery phase to the expansion phase. To our knowledge, there is little academic literature that analyses this expansion phase and the transformation of job skills and jobs in the current Smart City labour market. Data was collected through qualitative methods (i.e. job interviews, observations, career fairs), as well as from Emploi Public, LinkedIn and Pôle Emploi. The author analysed the data through qualitative content analysis, quantitative text mining and topic modelling. This research proposes a framework offering a way to highlight and analyse the labour market transformation, which would be useful for creating public policy and anticipating the change of workforce skills in service industries such as art, culture, tourism and health, among others.
Introduction
A Smart City's principal goal is the medium- and long-term improvement of the quality of life in a territory. According to Cocchia (2014, p. 13), the "concept of Smart City embraces several definitions depending on the meanings of the word "smart": intelligent city, knowledge city, ubiquitous city, sustainable city, digital city, etc. Many definitions of Smart City exist, but no one has been universally acknowledged yet."
This concept refers to a variable geometry complex phenomenon of an urban service system transformation through Information and Communications Technology (ICT) centred service systems, and non-ICT-centred service systems, configured ad-hoc according to given needs [START_REF] Silva-Morales | Comprendre la transformation institutionnelle et structurelle d'un système de service public urbain qui devient smart: une approche néo-schumpétérienne pour comprendre l'innovation technologique et institutionnelle dans les systèmes de service[END_REF]. In this sense, an urban agglomeration must design and implement their urban service system transformation projects based on their local context and their needs in the short, medium, and long term. Smart City projects articulate other related initiatives at local, regional, and national/global levels [START_REF] Piro | Information centric services in Smart Cities[END_REF][START_REF] Mosannenzadeh | A case-based learning methodology to predict barriers to implementation of smart and sustainable urban energy projects[END_REF]. [START_REF] Bannister | ICT, public values and transformative government: A framework and programme for research[END_REF] have defined transformation as "a change that creates a recognisable and significant difference between the prior and the posterior state of the transformed entity ". These authors argue that in the context of governance, transformation can take the form of new institutions (i.e., norms, standards) a new way of working, or an innovative new public service system. In this sense, we point out a lack of adequate theoretical and methodological frameworks to analyse the process of a labour market transformation in a public urban service system becoming smart. This research addresses this issue through the following questions:
How should the current labour market transformation associated with territorial service systems becoming 'smart' be highlighted? What knowledge, skills, abilities and roles are needed to contribute to a service system's transformation and to job skills policy development for service industries?
Background
Subsection 2.1 presents an institutional approach to policy development, subsection 2.2 presents a Kondratiev wave approach to analyse service system transformation, while subsection 2.3 presents studies about labour market transformation in various contexts.
Policy development, pluralism and institutional complexity: service innovation logics multiplicity for new smart service systems
For [START_REF] Greenwood | Institutional entrepreneurship in mature fields: The big five accounting firms[END_REF], the perspective of institutional logics emerges as a means of explaining institutional change. According to these authors, the different levels of analysis representing individuals, organisations, industries and society at the micro, meso and macro levels can be linked through institutional logics. The institutional logic approach comes from the seminal paper of [START_REF] Friedland | Bringing society back in: Symbols, practices and institutional contradictions[END_REF], which suggests the presence of a dominant logic within the existing overlapping institutional orders in contemporary Western societies, for instance the family, religion, the bureaucratic state, and also the market. For these authors, this central logic concerns a set of material practices and symbolic constructions which constitute the underlying principles of every one of those institutional orders, available to organisations and individuals (p. 232). They also suggest that the institutional logic of capitalism is the accumulation, and also the commodification, of human activity.
The concept of institutional logic was presented by [START_REF] Alford | Powers of theory: Capitalism, the state, and democracy[END_REF] to examine how the conflicting logics of capitalism, bureaucracy and democracy shape the formation of the modern state. Between the years 1990 and 2000, some seminal works acknowledged institutional logics as what characterises the content and also the meaning of institutions [START_REF] Friedland | Bringing society back in: Symbols, practices and institutional contradictions[END_REF][START_REF] Haveman | Structuring a theory of moral sentiments: Institutional and organizational coevolution in the early thrift industry[END_REF][START_REF] Thornton | Institutional logics and the historical contingency of power in organizations: Executive succession in the higher education publishing industry, 1958-1990[END_REF][START_REF] Scott | Institutional change and healthcare organizations: From professional dominance to managed care[END_REF]. They make it possible to link several levels of analysis concerning an inter-institutional system: a) the societal level, b) the level of the field and c) the level of individuals [START_REF] Friedland | Bringing society back in: Symbols, practices and institutional contradictions[END_REF]. Thornton & Ocasio (1999, p. 804) define institutional logics as "the socially constructed, historical patterns of material practices, assumptions, values, beliefs, and rules by which individuals produce and reproduce their material subsistence, organise time and space, and provide meaning to their social reality."
Institutional pluralism refers to contexts where actors are confronted with a variety of more or less complementary institutional logics allowing cooperation or competition. On the other hand, institutional complexity refers more specifically to how individual and collective actors face and respond to the conflicting demands associated with different logics [START_REF] Ocasio | Advances to the Institutional Logics Perspective[END_REF].
Institutional complexity is a variation of institutional theory that emphasises multiple, sometimes concurrent, logics and complex organisational fields in which organisations can have multiple responses and feedback cycles [START_REF] Thornton | The institutional logics perspective : a new approach to culture, structure, and process[END_REF]. Although most of the work after [START_REF] Friedland | Bringing society back in: Symbols, practices and institutional contradictions[END_REF] continued to analyse the presence of multiple logics, [START_REF] Thornton | Institutional Logics[END_REF] identified two main research strategies which have been used in recent research:
• Research which considers the way in which a logic pre-dominates in a specific field, being a "dominant" logic in this field, influencing the behaviour of social actors.
• Research which considers the "coexistence" of multiple logics over long periods, without a predominance of one logic over the others.
Studies in the first strategy investigate how one dominant logic becomes "replaced" by another, or disappears, over a given period. The second line of studies suggests a longitudinal analysis of the "coexistence" of multiple logics in a particular field. The researcher must, however, concentrate on how elements of various institutional logics coexist within a specific period. For [START_REF] Besharov | Multiple institutional logics in organizations: Explaining Their Varied Nature and Implications[END_REF], it is taken as given that multiple institutional logics present a theoretical puzzle and that various, often conflicting, logics characterise organisational fields. In practice, an institutional logic can help, delay or even prevent different processes of change [START_REF] Järvenpää | Collective identity, institutional logic and environmental management accounting change[END_REF]. According to [START_REF] Busco | Sustaining multiple logics within hybrid organizations: Accounting, mediation and the search for innovation[END_REF], institutional studies have also observed that multiple and heterogeneous logics struggle to persevere over time, as they are altered by dynamic tensions between different power and interest groups which simultaneously advocate partitioned logics. In this context of multiple institutional logics, and in line with Silva-Morales (2017), we define a smart and inclusive service system as follows:
A system that allows the co-creation of public, economic or sustainable value and the management of multi-sectoral services such as health, education, mobility and transport, employment, urban resilience, culture, tourism, sport, leisure, in an interoperable, user-context sensitive way.
It constitutes a complex adaptive system allowing the combination of individual and collective service logics such as (Table 1):
• Individual service logics, for example:
-The customer-dominant logic of service [START_REF] Heinonen | A Customer-Dominant Logic of Service[END_REF]; the customer and employee logics [START_REF] Kingman-Brundage | Service logic": achieving service system integration[END_REF].
• Collective service logics, for example:
-The goods-dominant logic [START_REF] Mont | Clarifying the concept of product -service system[END_REF][START_REF] Manzini | A strategic design approach to develop sustainable product service systems: Examples taken from the 'environmentally friendly innovation' Italian prize[END_REF][START_REF] Vargo | Evolving to a New Dominant Logic for Marketing[END_REF][START_REF] Tukker | Eight types of product-service system: Eight ways to sustainability? Experiences from suspronet[END_REF][START_REF] Miles | Patterns of innovation in service industries[END_REF]; the public service logic [START_REF] Osborne | The SER-VICE Framework: A Public-service-dominant Approach to Sustainable Public Services[END_REF]; Osborne (2018); the service-dominant logic of the market [START_REF] Vargo | Evolving to a New Dominant Logic for Marketing[END_REF][START_REF] Lusch | Service Innovation: A Service-Dominant-Logic perspective[END_REF][START_REF] Vargo | Institutions and Axioms: An Extension and Update of Service-Dominant Logic[END_REF]; and the technical logic [START_REF] Kingman-Brundage | Service logic": achieving service system integration[END_REF].
The digital transformation of an urban service system becoming smart, through the hybridisation or combination of several service logics, can be studied through the following constructs: complex service system, adaptive service system, multi-level service system, human-centred service system, and product-service system (PSS).
In the case of a public service system, the complexity of the co-creation process increases because it is necessary to consider the whole population, in particular those vulnerable to exclusion. Technical, societal and organisational challenges in public service have been highlighted by [START_REF] Bloch | Public sector innovation-From theory to measurement[END_REF]. To create data- and smart-technology-driven smart service systems, it is necessary to have job skills in data science, deep learning, embedded systems, satellite remote sensing, urban resilience and management, among others. Table 1 shows some institutional logics involved in the job skills policy development co-creation process.
Socio-economic change: the Kondratiev waves approach
The Russian economist Nikolai Kondratiev proposed the idea of waves, or socio-economic cycles, in the 1930s [START_REF] Schumpeter | Business Cycles. A theoretical, Historical, and Statistical Analysis of the Capitalist Process[END_REF][START_REF] Schumpeter | Capitalism , Socialism, democracy[END_REF]. According to Kondratiev, societal changes have been connected with the development of long socio-economic cycles lasting roughly 40-60 years.
Long cycles are called "K waves" (Wilenius, 2014, p. 36). [START_REF] Wilenius | Seizing the x-events. the sixth k-wave and the shocks that may upend it[END_REF] report that each wave is driven by a specific technological innovation generating a dominant technological paradigm" [START_REF] Dosi | Technological paradigms and technological trajectories[END_REF]. There may be shorter or faster cycles.
According to Wilenius & Casti (2015, p. 336), Schumpeter's research on socio-economic cycles was in line with Marx's historical approach (1887, 1893, 1894). The financial crisis of 2007-2009, also called the subprime crisis, marked the end of the 5th Kondratiev wave and the emergence of the 6th Kondratiev wave around the year 2010. Currently, we are in the expansion phase of the 6th Kondratiev wave. According to these authors, the waves always begin with technological innovations that penetrate the economic and social systems, leading to prolonged economic recovery and a constant increase in productivity. Service-dominant logic (SDL). The SDL appeared for the first time in 2004 [START_REF] Vargo | Evolving to a New Dominant Logic for Marketing[END_REF], claiming that management sciences have been dominated for a long time by a "Dominant Logic of Goods or Products". According to [START_REF] Orlikowski | the Algorithm and the Crowd: Considering the Materiality of Service Innovation[END_REF], this product logic is not relevant for the analysis and understanding of the service economy. These researchers (p. 203) argue that service logic replaces the old product logic. The SDL has evolved since 2004 and has moved from the concept of co-production to the concept of co-creation by proposing two new fundamental principles [START_REF] Vargo | On value and value cocreation: A service systems and service logic perspective[END_REF]. In the SDL, the term "services" in the plural implicitly designates the output units, whilst the term "service" refers to collaborative processes where skills/resources are used for the benefit of another entity [START_REF] Lusch | Service Innovation: A Service-Dominant-Logic perspective[END_REF]. In 2015, the SDL and its fundamental principles were transposed from the field of marketing to the field of information systems management, in the form of a conceptual framework [START_REF] Lusch | Service Innovation: A Service-Dominant-Logic perspective[END_REF]. The SDL has received several criticisms and we agree with the criticism of [START_REF] Hietanen | Against the implicit politics of service-dominant logic[END_REF]. These authors criticise the SDL for its tacit neo-liberalism. In this context, and as indicated by [START_REF] Hietanen | Against the implicit politics of service-dominant logic[END_REF], we consider the SDL as the logic of market service, focused on economic value. Public service logic (PSL)
For [START_REF] Osborne | The SER-VICE Framework: A Public-service-dominant Approach to Sustainable Public Services[END_REF]; [START_REF] Osborne | From public service-dominant logic to public service logic: are public service organizations capable of co-production and value co-creation[END_REF] the PSL and its implications for public services are in their infancy. [START_REF] Osborne | From public service-dominant logic to public service logic: are public service organizations capable of co-production and value co-creation[END_REF] argued for a separation of the PSL (Public Service Logic) from the SDL for the following reasons:
• PSL requires considering co-production and co-creation of "public" value.
• SDL seeks to explore how to leverage value creation in private-sector service companies for customer retention and profitability. However, for PSL, in public services, the repetition of service is likely to be seen as a sign of service failure rather than success.
Customer dominant logic (CDL)
Study of the human dimension, which can contribute to new insights on the concepts of human-centred service innovation and human-centred service system [START_REF] Heinonen | A Customer-Dominant Logic of Service[END_REF]. Service logic and service system. Service logic describes how and why a unified service system works. It is a set of organising principles that govern the service experiences of customers and employees (Kingman-Brundage et al., 1995, p. 21).
Customer logic
The client's logic which is the underlying reason that drives the behaviour of the client according to their needs and wants, which are often unpredictable. Customers often have expectations regarding their service experiences (Parasuraman et al., 1991, cited by (Kingman-Brundage et al., 1995, p. 23)). The client's logic signals the client as a consumer and as a co-producer of the service. It can be addressed by exploring the question of "What is the client trying to do and why?" (Kingman-Brundage et al., 1995, p. 24).
Technical logic
The technical logic is the "motor" of the service's operation. According to these authors, the technical logic is congruent with the concept of service, creating value for customers. However, when the technical logic is isolated from the concept of service, it creates mutual disappointment for clients and employees and "takes on a life of its own". This logic stems from hard and soft technologies, from policies and business rules. For these authors, the technical logic addresses a question repeatedly asked by the employee and the customer: What is my role and how do I perform it? (Kingman-Brundage et al., 1995, p. 24).
Employee logic
Employee logic is the fundamental individualistic reason that motivates the behaviour of employees. This logic gives rise to unpredictable and conflicting service performance, particularly in cases where work methods are vague and workers are constrained to design their own work processes (Kingman-Brundage et al., 1995, p. 25). Concerning Germany, Dengler & Matthes (2018) estimated that around 47% of German employees were working in a replaceable occupation in 2013 when whole occupations are assumed to be automatable. In any case, by applying a more direct assessment, only some tasks can be supplanted by computers or computer-controlled machines according to programmable rules (task-based approach). In this context, they found that only 15% of German employees were at risk of being supplanted by automation. These authors argue that even this 15% of work would not necessarily be eliminated by the digital transformation, because the automation probabilities only considered technical feasibility. It is possible that these authors implicitly included the effect of the German welfare state, as opposed to the study on the US, where social welfare is not a significant issue for policymakers. Also, they concluded that more research is required regarding the future development of employment, and that appropriate policies need to be proposed to support the adjustment to technological changes.
Actual labour market transformation in Russia
Method
Research on change or transformation is useful for theory and practice because it must explore contexts, content and processes as well as their interactions over time (Musca, 2006, p. 153). Data was collected through recruitment interviews, fairs, and job search platforms. This data collection allowed us to observe, interact with the set of actors, "speak" the same language, and decipher how members of the urban service ecosystem are becoming smart to understand the reality of the phenomenon studied [START_REF] Weick | Sensemaking in organizations[END_REF]. The participation in these events allowed us to collect data as well as to distinguish the specificity of the transformation of the labour market in the context of an urban service system becoming smart.
Qualitative data collection
Job interview and career fair data collection
Data collection was performed between June 2018 and August 2019. It was mostly dependent on the "Smart City" job offers or career fairs that took place during this period. The platforms were LinkedIn, Emploi Public and Pôle Emploi. In this way, the first author was able to apply for various job offers in both the public and private sectors.
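As an illustration only (this is not the authors' actual procedure, and the folder layout and field names below are hypothetical), job-offer texts gathered from such platforms could be organised as follows for later content analysis:

import json
import pathlib

records = []
for path in pathlib.Path("collected_offers").glob("*.json"):  # hypothetical folder of saved offers
    offer = json.loads(path.read_text(encoding="utf-8"))
    records.append({
        "platform": offer.get("platform", ""),      # e.g. "Pole Emploi", "LinkedIn"
        "sector": offer.get("sector", ""),          # public or private
        "title": offer.get("title", ""),
        "description": offer.get("description", ""),
        "collected_on": offer.get("date", ""),
    })

# Keep only offers that explicitly mention Smart City themes.
smart = [r for r in records if "smart city" in r["description"].lower()]
print(f"{len(smart)} Smart City offers out of {len(records)} collected")

Such a structure makes it straightforward to feed the offer descriptions into the qualitative coding and text-mining steps described in the following subsections.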
Data analysis
As shown in Figure 1, we analysed the data by looking for common themes using an inductive approach [START_REF] Gioia | Seeking Qualitative Rigor in Inductive Research[END_REF]. The process of interpreting empirical data allowed relationships between theories, concepts and empirical data to be highlighted. The first-order codes were grouped into more theoretical notions (theoretical categories). Then, we condensed the theoretical categories into more general dimensions (aggregate dimensions).
Preliminary results and discussion
In this article, we have tried to provide an initial response to the call of [START_REF] Frank | Toward understanding the impact of artificial intelligence on labor[END_REF]. These authors pointed to the need to create tools that map the dependencies between skills and job requirements. Smart urban service systems need the hybridisation or combination of several individual and collective service logics (e.g. goods-dominant logic [START_REF] Mont | Clarifying the concept of product -service system[END_REF][START_REF] Manzini | A strategic design approach to develop sustainable product service systems: Examples taken from the 'environmentally friendly innovation' Italian prize[END_REF][START_REF] Vargo | Evolving to a New Dominant Logic for Marketing[END_REF][START_REF] Tukker | Eight types of product-service system: Eight ways to sustainability? Experiences from suspronet[END_REF][START_REF] Miles | Patterns of innovation in service industries[END_REF], public service logic [START_REF] Osborne | The SER-VICE Framework: A Public-service-dominant Approach to Sustainable Public Services[END_REF][START_REF] Osborne | From public service-dominant logic to public service logic: are public service organizations capable of co-production and value co-creation[END_REF], service-dominant logic of the market [START_REF] Vargo | Evolving to a New Dominant Logic for Marketing[END_REF][START_REF] Lusch | Service Innovation: A Service-Dominant-Logic perspective[END_REF][START_REF] Vargo | Institutions and Axioms: An Extension and Update of Service-Dominant Logic[END_REF], and technical logic (Kingman-Brundage et al., 1995)).
First-order codes
• We are the French leader in the optimisation of the waste collection market. Our objective is to help cities develop the Smart City on the subject of the collection at voluntary collection points.
• Enterprises bring together complementary expertise, solutions for transport and mobility infrastructures, and innovative services for regions and their citizens, with their Smart City offers (intelligent public lighting, urban video protection, urban equipment, urban hyper-vision). • Besides, you have significant experience in the public sector, and knowledge of information systems, data and Smart City tools are additional assets for the mission.
• Located in France (HQ), Canada and China, with an international R&D program on global connectivity and Smart City planning. "Software as a service" to simulate, integrate and visualise any data (simulations, sensors, administrations, experts and citizen).
• Do you like the challenges of data? Do you know how to animate, support and mobilise around the subjects you carry? Do you also master data analysis (databases, statistics, geographic information systems) and have notions in a computer language (R / Matlab / SQL / Python / HTML)? Are you as comfortable in the strategic as in the operational? Main activities: develop and animate Smart City procedures to become a connected, sustainable and inclusive city; pilot and lead projects; elaboration of technical specifications and studies; participation in project committees.
• Customer-dominant logic of service [START_REF] Heinonen | A Customer-Dominant Logic of Service[END_REF] and customer, employee logics [START_REF] Kingman-Brundage | Service logic": achieving service system integration[END_REF].
Theoretical categories: • Goods-dominant logic [START_REF] Mont | Clarifying the concept of product -service system[END_REF][START_REF] Manzini | A strategic design approach to develop sustainable product service systems: Examples taken from the 'environmentally friendly innovation' Italian prize[END_REF][START_REF] Vargo | Evolving to a New Dominant Logic for Marketing[END_REF][START_REF] Tukker | Eight types of product-service system: Eight ways to sustainability? Experiences from suspronet[END_REF][START_REF] Miles | Patterns of innovation in service industries[END_REF].
• Technical logic (Kingman-Brundage et al., 1995).
• Service-dominant logic of the market [START_REF] Vargo | Evolving to a New Dominant Logic for Marketing[END_REF][START_REF] Lusch | Service Innovation: A Service-Dominant-Logic perspective[END_REF][START_REF] Vargo | Institutions and Axioms: An Extension and Update of Service-Dominant Logic[END_REF]). • Proposed solutions to estimate the future of work (Frank et al., 2019, p. 6534).
• Public service logic [START_REF] Osborne | The SER-VICE Framework: A Public-service-dominant Approach to Sustainable Public Services[END_REF][START_REF] Osborne | From public service-dominant logic to public service logic: are public service organizations capable of co-production and value co-creation[END_REF]. • Service-dominant logic of the market [START_REF] Vargo | Evolving to a New Dominant Logic for Marketing[END_REF][START_REF] Lusch | Service Innovation: A Service-Dominant-Logic perspective[END_REF][START_REF] Vargo | Institutions and Axioms: An Extension and Update of Service-Dominant Logic[END_REF]). • Proposed solutions to estimate the future of work (Frank et al., 2019, p. 6534).
• Public service logic [START_REF] Osborne | The SER-VICE Framework: A Public-service-dominant Approach to Sustainable Public Services[END_REF][START_REF] Osborne | From public service-dominant logic to public service logic: are public service organizations capable of co-production and value co-creation[END_REF]. From the interview data, it can be seen that by far the greatest demand is for high-level technical skills, rather than for skills aligned with the customer-dominant logic of service [START_REF] Heinonen | A Customer-Dominant Logic of Service[END_REF] or customer/employee logics [START_REF] Kingman-Brundage | Service logic": achieving service system integration[END_REF]. In this way, we have seen that the point of view of institutional logics [START_REF] Thornton | The institutional logics perspective : a new approach to culture, structure, and process[END_REF] conceptualises society as an interinstitutional system in which logics are characterised by socio-cultural differentiation, fracture and inconsistency. As detailed by Thornton & Ocasio (2008, p. 119), institutional logics do not arise from organisational fields but rather from interinstitutional systems, being instantiated and adopted in organisational fields, markets, industries and public or private organisations. The perspective of institutional logics is a meta-theoretical framework for examining the interrelationships between institutions, people and organisations in social systems (Thornton et al., 2012, p. 2). As pointed out by [START_REF] Thornton | Institutional Logics[END_REF], four dimensions common to all institutional logics are 1) sources of collective identity; 2) determinants of power and status; 3) social classification and categorisation systems, and 4) the allocation of attention. Although the dimensionality of institutional logics is at the centre of the perspective, these dimensions remain provisional; that is, the particular dimensions that are relevant to different contexts or empirical studies may vary. It is up to individual researchers to justify the existence of logics and their relevant dimensions in the context of a study. The Community of Agglomeration X was born from the union of several communities. We are currently looking for a digital project manager. You will intervene in all subjects around digital policy and innovation: public data, Smart City and digital uses.
You will also be in charge of the administration of our open data platform.
Aligned with a technical logic [START_REF] Kingman-Brundage | Service logic": achieving service system integration[END_REF] or a goods-dominant logic [START_REF] Mont | Clarifying the concept of product -service system[END_REF][START_REF] Manzini | A strategic design approach to develop sustainable product service systems: Examples taken from the 'environmentally friendly innovation' Italian prize[END_REF][START_REF] Vargo | Evolving to a New Dominant Logic for Marketing[END_REF][START_REF] Tukker | Eight types of product-service system: Eight ways to sustainability? Experiences from suspronet[END_REF][START_REF] Miles | Patterns of innovation in service industries[END_REF], it is essential to carry out technology forecasting by studying the medium- and long-term implications of the technology adopted from an ethical and societal point of view.
Map the dependencies between smart skills and job requirements in the short- and long-term transformation of service industries (culture, health...)
As technology and institutional orders change, institutions at other levels change, and so do the different institutional logics associated with them. Institutional logics are historically contingent and evolve over time, both at the level of the institutional orders of society and at other levels of analysis. According to [START_REF] Hietanen | Against the implicit politics of service-dominant logic[END_REF], the SDL represents a tacit neo-liberalism of capitalist ideology, including broader perspectives of "markets" as networks of interactions, or "as complex configurations and systems". Also, these authors pointed out that these concepts, stemming from institutional theory, the theory of practice and actor theory, presently allow the SDL to consider institutions -rules, standards, meanings, symbols, practices and collaboration and, more generally, institutional arrangements -focused on economic value. A smart service system is based on collective service logics of general interest, combined with individual logics, for the creation of public, social and sustainable value related to inhabitants' well-being and their environmental impacts at the local, national and global levels, in both the short and the long term. It is necessary to create tools that map the dependencies between skills and job requirements through ML and NLP to capture the latent structure of the labour market [START_REF] Frank | Toward understanding the impact of artificial intelligence on labor[END_REF] as technology and institutional orders change.
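As a purely illustrative sketch of this idea (it is not the pipeline used in this study; file names, column names and parameter values are hypothetical), latent skill and role themes could be extracted from a corpus of job-offer descriptions with a standard topic model:

import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

offers = pd.read_csv("job_offers.csv")        # hypothetical corpus, one offer per row
texts = offers["description"].fillna("")      # hypothetical column holding the ad text

# Bag-of-words representation; a French stop-word list would normally be supplied here.
vectorizer = CountVectorizer(max_df=0.95, min_df=2)
dtm = vectorizer.fit_transform(texts)

# Fit a small LDA model to expose latent skill/role themes in the ads.
lda = LatentDirichletAllocation(n_components=8, random_state=0)
lda.fit(dtm)

# Print the ten most salient terms of each latent topic.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-10:][::-1]]
    print(f"Topic {k}: {', '.join(top)}")

Tracking how such topics and their prevalence shift over successive collection periods is one way of following the latent structure of the labour market as technology and institutional orders change.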
Design strategies to improve workforce skills
Smart service system co-creation can focus on changes in the behavioural patterns of users and institutions through social innovation. The deployment of initiatives integrates a transformational process of institutional arrangements (e.g. standards, rules, APIs) and implements a public urban multiservice platform. It is necessary to combine various resources and strategies to anticipate the transformation of the labour market within a public service system in the short, medium and long term, identifying future talent gaps and social exclusion problems. Through cooperation between actors via the public service logic [START_REF] Osborne | The SER-VICE Framework: A Public-service-dominant Approach to Sustainable Public Services[END_REF][START_REF] Osborne | From public service-dominant logic to public service logic: are public service organizations capable of co-production and value co-creation[END_REF] and the service-dominant logic of the market [START_REF] Vargo | Evolving to a New Dominant Logic for Marketing[END_REF][START_REF] Lusch | Service Innovation: A Service-Dominant-Logic perspective[END_REF][START_REF] Vargo | Institutions and Axioms: An Extension and Update of Service-Dominant Logic[END_REF], solutions can be proposed to estimate the future of work (Frank et al., 2019, p. 6534).
Evaluate the labour market transformation according to public service and sustainable logic
It is necessary to evaluate periodically, in order to put into practice strategies that reduce any kind of discrimination and that progress towards the resolution of social and environmental problems, in accordance with the Smart City objectives. In this sense, the public service logic [START_REF] Osborne | The SER-VICE Framework: A Public-service-dominant Approach to Sustainable Public Services[END_REF][START_REF] Osborne | From public service-dominant logic to public service logic: are public service organizations capable of co-production and value co-creation[END_REF] may contribute to improving our understanding of sustainable and public value co-creation.
Conclusion
The results of this research support the idea, put forward by Dengler & Matthes (2018) and Balsmeier & Woerter (2019), that more research and solutions are needed regarding the future development of employment. Consequently, it is necessary to take into account the needs, characteristics, job skills and data sources of citizens who are vulnerable to exclusion from the labour market, in a post-pandemic context that increases artificial-intelligence-based services.
This development integrates new value systems, new social practices and new organisational cultures. During the past 200 years, there were four phases involving an economic crisis [START_REF] Schumpeter | Business Cycles. A theoretical, Historical, and Statistical Analysis of the Capitalist Process[END_REF] characterising each K wave: "prosperity or expansion phase, recession, depression, recovery". His idea was that the succession of these phases was much more important than any other type of business cycle, and that it is mainly dependent on psychological training that can lead to euphoria during periods of success and, inversely, to gloom during a turnaround. The recession would be the natural result of the rapid expansion that precedes it. The Russian researcher Togan-Baranowsky had in fact already linked the phases of expansion and withdrawal to investment movements. For Wilenius & Casti (2015, p. 2), in general, each cycle encompasses the following events: a) New technology industries emerge, replacing old products or services. b) A new economic boom spreads with the rise of commercial markets. c) New value systems, norms, and rules begin to dominate, governing public debate and planning. d) A new organisation culture emerges, based on the new dominant technology. e) New professions emerge, adding new workforce requirements. Points (a), (c) and (d) above have been analysed by the longitudinal research of Silva-Morales (2017) in the public service system transformation context of the French Smart City strategy. The present work focuses on point (e), concerning the emergence of new professions and workforce requirements for the expansion of the 6th Kondratiev wave in a Smart City era in France. Concerning the 6th wave, Wilenius
Frank et al. (2019) identified several barriers to estimating the future of work in the labour market, for example the lack of high-quality data on the nature of future work, and a deficient understanding of how technologies interact with broader economic dynamics and institutional mechanisms. For them, overcoming these barriers requires enhancements within the longitudinal and spatial data collection and examination process. These authors suggest the need to create tools that map the dependencies between skills and job requirements: machine learning (ML) and natural language processing (NLP) could help to capture the latent structure of any labour market transformation, in order to strengthen labour forecasts and the capacity of policymakers to reply to real-time labour trends in a given territory. On the other hand, [START_REF] Frey | The future of employment: How susceptible are jobs to computerisation?[END_REF] analysed the transformation of the United States labour market from another perspective. For these authors, algorithms for massive data are now rapidly entering domains that used to rely on pattern recognition and could easily replace labour in a wide range of non-routine cognitive tasks. Their inquiry addresses the question of "how susceptible are current jobs to these technological developments?" They developed a novel methodology to estimate the probability of computerisation for 702 occupations in the United States, distinguishing between high-, medium- and low-risk occupations, depending on their probability of automation. According to their results, about 47% of total employment in the United States is in the high-risk category. Such occupations may be automated relatively soon, perhaps over the following decade or two. Their findings imply that, as technology progresses, low-skilled workers will perform tasks that are not susceptible to computerisation and that require creative and social intelligence. In that context, for workers to win the race, they must acquire creative and social skills. But what about those who cannot, or what if everybody has the necessary skills?
2.3.2 Actual labour market transformation in Switzerland
Balsmeier & Woerter (2019) analysed the labour market in Switzerland in light of the current digitalisation of the workplace. These authors examine how the adoption of new technologies influences the creation and destruction of jobs within the Swiss context. According to these authors, the tasks that run the risk of being automated were performed by low-skilled employees. In contrast, most of the new tasks that emerge from the adoption of digital technologies complemented the highly skilled workforce. These authors have shown that greater investment in digitalisation is related to higher employment of highly qualified workers and a decrease of low-skilled workers. For these researchers, this change was driven almost completely by companies that utilise machine-based digital technologies, i.e. robots or the Internet of things. Combined with a decline in occupations for low-skilled workers, inequality within the population is likely to increase, which poses a critical challenge for public institutions and policymakers.
Table 3: Current barriers to estimating the future of work in the U.S., along with proposed solutions. Source: Frank et al. (2019, p. 6534).
- ...taxonomies: connect susceptible skills to new technology; improve the temporal resolution of data collection; use data from career web platforms.
- Limited modelling of resilience: explore out-of-equilibrium dynamics; identify workplace skill interdependencies; connect skill relationships to worker mobility; relate worker mobility to economic resilience in cities; explore models of resilience from other academic domains.
- Places in isolation: labour dependencies between places (e.g. cities); identify skill sets of local economies; identify the heterogeneous impact of technology across places; use intercity connections to study national economic resilience.
2.3.3 Actual labour market transformation in Germany
Dengler & Matthes (2018) analysed for the first time the potential for job replacement in Germany. This work transposes, and improves on, the research of Frey and Osborne (2017) on the transformation of the labour market from the United States to Germany. Unlike those authors, Dengler & Matthes (2018) assume that advances in computing cannot replace a complete occupation, but only some tasks. In the German case, when assuming that whole occupations are replaceable by computers or computer-controlled machines (occupation-level approach), they obtained results similar to those of Frey & Osborne (2017).
[START_REF] Antonov | Development and current state of urban labour markets in russia[END_REF] has presented an investigation of the main stages of labour market transformation in Russia. His research analyses for the first time the state of the labour market of all cities in the country, across a broad range of organisations, based on information from the Federal Tax Service (FTS). For this author, since 2010 there has been a transformative process of redistribution of labour resources between the primary, secondary and tertiary sectors in the cities of Russia. This author argues that the transformation processes of local labour markets were challenging to investigate due to deficient or lacking data quality, as also noted by [START_REF] Frank | Toward understanding the impact of artificial intelligence on labor[END_REF]. Both works point out that the monitoring and analysis of the state of the labour market in cities require the development of new tools. In the Russian case, the lack of relevant statistical information at the municipal level on labour markets from the Federal Statistical Service of the Russian Federation impedes a more in-depth understanding. The financial statistics of the Federal Fiscal Service of the Russian Federation remain the only source of information covering all territories.
Job interviews were conducted for several Smart City recruitment processes, either in person, via phone, or over Skype. Each meeting took between 15 and 90 minutes. The first three interviews were open. Questions were purposefully open-ended to guide the general discussion and allow for follow-up questions. We began by asking about their career trajectory and exploring the LinkedIn profile of job interviewers. Afterwards, a semi-structured interview protocol was created to distinguish the degree of maturity of the Smart City project, the skills, and the number of people comprising the work team, as well as to avoid "showcase projects".
These tools should map the dependencies between skills and job requirements through machine learning and natural language processing to capture the latent structure of the labour market. There are some technical challenges. As noted by [START_REF] Frank | Toward understanding the impact of artificial intelligence on labor[END_REF] and Antonov (2019), the processes of transformation of local labour markets are challenging to investigate due to incomplete or deficient data quality. The data showed us an increase in the number of job offers for the implementation of smart service systems in the context of Smart City initiatives. As highlighted by Dengler & Matthes (2018), appropriate policies need to be proposed to support the adjustment of the labour market to technological changes. In this context, based on our data, we agree with Balsmeier & Woerter (2019) that a decrease in jobs for low-skilled workers and an increase of inequality within the population are likely to occur, which poses a significant challenge for public institutions and policymakers. Nevertheless, in the case of French public organisations, very demanding and poorly paid job offers indicate a possible misunderstanding of the different roles in data-driven technical work. The co-creation process of smart urban service systems (e.g. complex service system, adaptive service system, multi-level service system, human-centred service system, product-service system, smart service system) driven by individual service logics (e.g. customer-dominant logic of service [START_REF] Heinonen | A Customer-Dominant Logic of Service[END_REF], customer and employee logics (Kingman-Brundage et al., 1995)) allows defining the technical specifications. Smart urban service systems also need the hybridisation or combination of several individual and collective service logics, as noted above.
Figure 1: Data structure analysis.
On the other hand, French public administrations generally do not have an internal technical development team.In this sense, some small public structures accustomed to technical outsourcing skills might think that hiring one or two people with technical skills is enough to develop and deploy a public service as shows:The Direction of Digital Planning is a small structure of 4 [non-technical skilled] people where you will have the opportunity to participate in various projects. The Digital Department offers a position to enable you to carry out innovative projects, in a current context marked by the creation of the new commune Y. It supports all of the business departments in their approach to developing tools, dematerialisation and optimisation of their processes. The present article aims to examine how the actual labour market transformation concerning territorial service system becoming 'smart' be highlighted? What knowledge, skills, abilities and roles are needed to con-tribute to service system's transformation and job skills policy development for services industries ? From our preliminary data analysis (Figure 1), we propose a prospective framework (Figure 2) to forecasting the labour market transformation in the current smart cities and smart territories era expansion. A medium and long-term perspective forecast, this French labour market transformation in the smart cities era expansion needs to solve a piece of a giant puzzle that involves several research fields and service sectors as culture, tourism, transport and health. 5.1. Define a communal vision to co-create smart services according to territory needs A smart service system is a complex set of interoperable data and digital public service sub-systems (e.g. health, education, mobility and transport, employment, urban resilience, culture, tourism, sport, leisure). Policymakers have to lead the definition of a shared vision adapted from the given territorial context and citizens' needs in terms of smart services. It is essential to take into account the needs of citizens/users in line with a customer-dominant logic of service
Figure 2: Towards highlighting the French labour market transformation in the Smart City era: service system transformation and job skills policy development for service industries (culture, tourism, health, transport...).
5.2 Determine a comprehensive view of smart technologies for service industries (culture, health...)
Smart service systems are based on traditional ICT and emerging smart technologies such as Big Data, open data and IoT. In this sense, an adaptive service system is sensitive to the context and geographical position. Public co-creation spaces such as Fab labs (i.e. experimentation spaces) could be used to explain the different technologies to citizens. The verbatims below illustrate some of the main characteristics of the job and the kind of smart service system to co-create: Data, as for the economic sector, is a significant challenge for the territories to improve, design services for citizens, facilitate and simplify public services, and strengthen citizens' confidence in public action. It will be a question of bringing expertise on the data in the implementation of the digital strategy of Community of Agglomeration X. The stake will also be to animate the mutualised data within the inter-municipal structure and the 23 communes. We imagine and carry innovative partnerships around data, the connected city and artificial intelligence in connection with the skills of the agglomeration and the municipalities.
In line with Dengler & Matthes (2018) and Balsmeier & Woerter (2019), we need more research and solutions regarding the future development of employment in order to create appropriate policies that support the adjustment to technological changes. This research also contributes to the neo-Schumpeterian longitudinal research of Silva-Morales (2017) in a French context. Emerging smart services are associated with higher degrees of employment of highly qualified workers and a reduction of (opportunities for) low-skilled workers. We need to take into account, as much as possible, the needs and characteristics of citizens vulnerable to exclusion from the labour market and the workforce. This work contributes to the existing knowledge of labour market transformation analysis. Developing and implementing a workforce inclusiveness strategy that guarantees access to the workforce will determine successful long-term Smart City road-map strategies. The same technology could allow us to avoid future catastrophic scenarios in which a decrease in employment for low-skilled workers intensifies inequality within the population. Inclusive smart services co-creation poses a significant challenge for services industries (art, culture, health...), public institutions, policymakers, data science, Smart City and service design research.

7. Future work

In order to identify relevant topics of French labour market transformation, future quantitative research can scrape data from job platforms such as Emploi Public, LinkedIn and Pôle Emploi to analyse new jobs in services industries (art, culture, tourism, ...) through machine learning, deep learning and text mining analysis. A minimal sketch of such a text-mining step is given below. Also, other researchers will be able to study whether the current COVID-19 pandemic accelerates the whole process of labour market transformation, as well as its social fallout.
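As an illustration of the kind of text-mining analysis suggested here, the sketch below counts skill-related terms across a corpus of job advertisements. The file name, the skill vocabulary and the use of pandas and scikit-learn are illustrative assumptions, not part of the original study.

```python
# Minimal sketch (assumptions: job ads already scraped into a CSV with a
# 'description' column; the skill vocabulary below is illustrative only).
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer

SKILL_TERMS = ["open data", "big data", "iot", "machine learning",
               "gis", "data protection", "smart city"]

ads = pd.read_csv("job_ads.csv")  # hypothetical scraped corpus
vectorizer = CountVectorizer(vocabulary=SKILL_TERMS, ngram_range=(1, 3))
counts = vectorizer.fit_transform(ads["description"].fillna(""))

# Frequency of each skill term across the whole corpus of advertisements.
freq = pd.Series(counts.sum(axis=0).A1,
                 index=vectorizer.get_feature_names_out())
print(freq.sort_values(ascending=False))
```

Such term frequencies could then feed clustering or topic-modelling steps to map emerging skill profiles over time.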
Table 1: Some service innovation logics concerning smart service systems policy development. Source: Silva-Morales (2017) from literature review.
Institutional logics | Definitions
Service-dominant logic (SDL) |
Table 2: Labour market transformation risk.
Labour market transformation context | Relevant risk
United States | About 47% of the whole employment in the United States is in the high-risk category (Frank et al., 2019).
Switzerland | The tasks at risk of being automated were those performed by low-skilled employees (Balsmeier & Woerter, 2019).
Germany | 15% of the work would not be disposed of due to the digital transformation, because the automation probabilities only considered technical feasibility (Dengler & Matthes, 2018).
Russia | Lack of relevant statistical information at the municipal level; the Federal Statistical Service of the Russian Federation impedes a more in-depth understanding
Table 4: Data collection on job interviews.
Table 5: Data collection on career fairs.
Career fair | Date | Place | Career fair description
PhDTalent Career Fair 2018 | October 5, 2018, from 09:00 to 18:00 | Le CENTQUATRE-Paris | The PhDTalent Career Fair is the largest career fair dedicated to PhDs in Europe. In 2018, for its seventh edition, 120 companies were present to recruit PhDs regardless of their discipline.
DataJob 2018 | November 22, 2018, from 9:00 to 19:00 | Le Carrousel du Louvre, Paris | DataJob is a meeting room of data specialist trades, dedicated to artificial intelligence and data profiles.
Salon de la data 2019 | September 10, 2019, from 9:00 to 19:00 | Cité des Congrès, Nantes |
Figure 2 (framework components): Highlighting labour market transformation in the AI era expansion; Define a communal vision to co-create smart services according to territory needs; Determine a comprehensive view of smart technologies and AI; Determine and design strategies to improve workforce skills for services industries (culture, health...); Evaluate the labour market transformation; Map the dependencies between smart and AI skills for short- and long-term job market transformation.
Milena-Jael Silva
email: [email protected]
Pluralist service logics to ecological service system innovation: An integrative framework based on text mining, network analytic and qualitative analysis
Keywords: innovation, service logics, service systems, institutional pluralism and complexity, network analytic, text mining, sociology of science
This article proposes an integrative framework for service logics pluralism and complexity in service research. It presents a longitudinal review of innovation, service logics and service systems research between 1986 and 2015. The article starts with a discussion of the institutional limits of the service-dominant logic of the market, which has served as a theoretical foundation for service science. We use this as a starting point to analyse other latent service logics and the different ways in which service systems and service innovation have been conceptualised, in order to propose a pluralist and inclusive research framework. The authors perform eight iterations of a ten-step methodological process combining quantitative and qualitative methods based on text mining, network analytic, and grounded theory analysis, to capture latent research topics, theories and authors over time. The cleaned corpus allowed us to examine 796 English articles, published in 277 peer-reviewed journals indexed in WoS and Scopus, synthesising the intellectual structure of these areas of research between 1986 and 2015. Drawing on quantitative and qualitative analysis, this paper offers several contributions. From a methodological point of view, the article provides a ten-step approach to a longitudinal quantitative-qualitative intellectual structure analysis. From a conceptual point of view, this research highlights: 1) a longitudinal view of the intellectual and conceptual structure of innovation, service logics and service systems research, with relevant topics, theories, and authors over time; 2) an integrative framework spanning the individual, organisational and field levels for a new service research perspective based on the pluralism, complexity and co-existence of service logics, beyond a service-dominant logic (SDL) of the market perspective. This research points out some ambiguities in the SDL literature concerning the institutional perspective. The paper concludes with five future research directions for integrating institutional pluralism into service system innovation research.
Introduction
Over the last decades, a considerable amount of research has been done on digital innovation, service innovation, service logics and service systems. Researchers from different disciplines have studied these research themes from various points of view: marketing [START_REF] Gummesson | The emergence of the new service marketing: Nordic School perspectives[END_REF][START_REF] Ostrom | Service Research Priorities in a Rapidly Changing Context[END_REF]Vargo et al., 2008); innovation management [START_REF] Maglio | Innovation and Big Data in Smart Service Systems[END_REF][START_REF] Sundbo | Management of Innovation in Services[END_REF][START_REF] Sundbo | Innovation as a loosely coupled system in services[END_REF]; public management [START_REF] Osborne | The SERVICE Framework: A Public-service-dominant Approach to Sustainable Public Services[END_REF][START_REF] Osborne | From public service-dominant logic to public service logic: are public service organizations capable of coproduction and value co-creation[END_REF]; information systems [START_REF] Barrett | Service Innovation in the Digital Age : Key Contributions and Future Directions[END_REF][START_REF] Eaton | Distributed tuning of boundary resources: the case of Apple's iOS service system[END_REF][START_REF] Lusch | Service Innovation: A Service-Dominant-Logic perspective[END_REF]; operations management [START_REF] Spohrer | Service Science, Management, Engineering, and Design (SSMED)[END_REF] and economics [START_REF] Gallouj | Innovation in services: a review of the debate and a research agenda[END_REF]. Previous research clarified that monodisciplinary approaches have various well-known limitations in the study of service systems. For instance, Kingman-Brundage et al. (1995, p. 21) remind us that: "The stovepipe world of academia seems to offer them few useful models; it continues to turn out single-focus articles -on services marketing, operations, or human resource management -that rarely capture the complexity of actual service systems".
On this basis, interdisciplinary research is required to understand the multidimensional and complex phenomenon of innovation in service systems [START_REF] Chae | An evolutionary framework for service innovation: Insights of complexity theory for service science[END_REF][START_REF] Chae | A complexity theory approach to IT-enabled services (IESs) and service innovation: Business analytics as an illustration of IES[END_REF], by considering not only a dominant logic but also the complexity and pluralism of the institutional logics approach [START_REF] Thornton | Institutional Logics[END_REF]. Although early research on the phenomenon of innovation and transformation in service systems, such as service science [START_REF] Chesbrough | A research manifesto for services science[END_REF][START_REF] Maglio | Service systems, service scientists, SSME, and innovation[END_REF][START_REF] Maglio | Fundamentals of service science[END_REF] and service-dominant logic [START_REF] Vargo | Evolving to a new dominant logic for marketing[END_REF] (SDL), aroused great excitement, there are few longitudinal analyses of change or transformation in service systems over time, including analyses of the various service logics at the individual and collective levels [START_REF] Silva-Morales | Understanding the institutional and structural transformation of an urban public service system that is becoming smart: a neo-Schumpeterian approach to understanding technological and institutional innovation in service systems[END_REF].
As Greenwood & Suddaby (2006, p. 31) remind us, a theoretical elaboration or refinement is a process in which one contrasts preexisting understandings with observed phenomena to extend the existing theory. In this context, the critical research of [START_REF] Hietanen | Against the implicit politics of service-dominant logic[END_REF] about the theoretical refinement of SDL [START_REF] Vargo | Institutions and axioms: an extension and update of service-dominant logic[END_REF], which adopts systemic and institutional elements from institutional theory, has argued that SDL represents markets, rather than marketing; capitalist ideology rather than the "science of customer persuasion". For [START_REF] Hietanen | Against the implicit politics of service-dominant logic[END_REF], SDL is now not limited to "marketing" but includes broader perspectives of "markets" as networks of interactions, "complex systems configurations" or multiservice platforms [START_REF] Lusch | Service Innovation: A Service-Dominant-Logic perspective[END_REF]. They also point out that several concepts, stemming from institutional, practice and actor theories, now allow SDL to consider institutions (rules, standards, meanings, symbols, practices and collaboration) and, more generally, institutional arrangements, as well as long-term transition, transformation or change processes. Nevertheless, SDL has integrated the institutional perspective somewhat too lightly, without taking into account: 1) epistemological/ontological aspects of the theoretical combination process and 2) theoretical/conceptual challenges of the combined theories. Also, service research to date has tended to focus on a dominant service logic rather than on the co-existence of pluralist service logics.
From an epistemological and ontological point of view, SDL is not clear on how it has been combining and transposing concepts and theories [START_REF] Ketokivi | Renaissance of case research as a scientific method[END_REF][START_REF] Avenier | Finding one's way around various methodological guidelines for doing rigorous case studies: A comparison of four epistemological frameworks[END_REF][START_REF] Fisher | Using Theory Elaboration to Make Theoretical Advancements[END_REF], for example:
• Specify the epistemological framework in which the integration was carried out.
• Interpret the theory according to the target epistemological framework. For example, if a theory has been developed in critical realism and the transposition places the theoretical refinement in constructivism or interpretativism, the generative mechanisms will be interpreted as organising principles of our knowledge (they describe how we understand things and not necessarily how the world works).
• Check the compatibility of the theories to combine. Example of incompatibility: one is based on an atomistic vision of the world, the other on a relational vision.
• Finally, argue the merits of how theories have been combined.
If we consider only the theoretical/conceptual challenges of combining SDL with institutional theory [START_REF] Vargo | Institutions and axioms: an extension and update of service-dominant logic[END_REF], SDL does not discuss the ambiguous aspects that the various institutional theory approaches mobilise, for example, epistemological and methodological differences:
• Institutional theory has its theoretical foundations in sociology, economics, political science and history [START_REF] Olsson | Subversion in Institutional Change and Stability: A Neglected Mechanism[END_REF]. The seminal works on institutional theory appeared around the first half of the 20th century with old institutionalism, a prototheory with five determining characteristics [START_REF] Peters | Institutional theory in political science: The new institutionalism[END_REF]: 1) legalism; 2) structuralism; 3) holism; 4) history; 5) normative analysis.
• In the 1970s, the work of [START_REF] Meyer | Institutionalized Organizations: Formal Structure as Myth and Ceremony[END_REF] contributed to institutional theory by pointing out that institutional environments are often pluralistic; therefore, institutional arrangements are often incompatible.
• Neoinstitutionalism was born in the early 1980s. [START_REF] Olsson | Subversion in Institutional Change and Stability: A Neglected Mechanism[END_REF] synthesises four currents of neoinstitutionalism:
-Normative neoinstitutionalism. It studies how the norms and values embedded in political institutions shape the behaviour of individuals. Institutions have their dynamics, and change is understood to be linked to rules, following standard processes of interpretation, learning and adaptation. The logic of action is an adaptive behaviour.
-Sociological neoinstitutionalism, which has significant similarities with normative institutionalism, was inspired by the work of Max Weber to emphasise how institutions and their resulting myths influence the process of bureaucratisation [START_REF] Meyer | Institutionalized Organizations: Formal Structure as Myth and Ceremony[END_REF]. These authors argue that rationalisation and bureaucratisation have changed in society, resulting from a process in which organisations have become more similar, without necessarily becoming more efficient [START_REF] Dimaggio | The Iron Cage Revisited: Institutional Isomorphism and Collective Rationality in Organizational Fields[END_REF]. Sociological institutionalism has a broad interest in institutions, including intra-organisational and interorganisational studies (institutionalised fields), private and public institutions, and symbolic and material aspects. The concept of institutional isomorphism constitutes its founding basis. A central perspective within sociological institutionalism is to see institutional change at the macro level as resulting from the adaptations of organisations to their environments (imitation, diffusion and isomorphism). Institutional arrangements deeply shape the structuring of organisational fields. A source of controversy in sociological institutionalism relates to the way change is understood in the "paradox of the integrated agency".
-Rational choice neoinstitutionalism. Economic theory, notably the institutionalism of rational choice and the idea of induced exogenous change, forms the basis of this approach.
-The specificity of historical neoinstitutionalism is to centre on diachronic changes in behaviour and rules. It arose to deal with institutional (diachronic) change. It presents itself in two versions: 1) simple historical institutionalism, which focuses on synchronic differences, and 2) complex historical institutionalism, which allows an analysis of social life in all its complexity. It is the most structural version of all the neoinstitutionalist currents, with path-dependency as a key concept. It studies how the choices made on the institutional design of government systems influence the future decision-making of individuals. Complex historical institutionalism tries to explain why and how change takes place during long periods of stability, following a path-dependency logic.
• The institutional work perspective of [START_REF] Lawrence | Institutions and Institutional Work[END_REF] highlights the dynamics of the processes of institutionalisation/deinstitutionalisation and legitimation/delegitimation of rules, norms and practices.
• The institutional logics perspective, with the paradox of the integrated agency, which asks "how can actors change institutions if their actions, their intentions and their rationality are conditioned by the very institution they want to change?" (Holm, 1995, p. 398); society as an interinstitutional system; the material and cultural foundations of the institutions; multi-level institutions; and historical contingency [START_REF] Thornton | Institutional Logics[END_REF][START_REF] Thornton | The institutional logics perspective: a new approach to culture, structure, and process[END_REF][START_REF] Casasnovas | Constructing organizations as actors: Insights from changes in research designs in the study of institutional logics. In Agents, Actors, Actorhood: Institutional Perspectives on the Nature of Agency[END_REF].
• The institutional pluralism perspective is a variation of the institutional theory. This variation refers to contexts where actors are confronted with several more or less complementary institutional logics allowing cooperation or competition [START_REF] Dunn | Institutional Logics and Institutional Pluralism: The Contestation of Care and Science Logics in Medical Education, 1967-2005[END_REF][START_REF] Yu | Institutionalization in the Context of Institutional Pluralism: Politics as a Generative Process[END_REF].
• The institutional complexity perspective, another variation of institutional theory, which refers to how individual and collective actors face and respond to the conflicting requirements associated with different logics [START_REF] Lee | Filtering Institutional Logics: Community Logic Variation and Differential Responses to the Institutional Complexity of Toxic Waste[END_REF][START_REF] Smith | Institutional complexity and paradox theory: Complementarities of competing demands[END_REF][START_REF] Ocasio | Advances to the Institutional Logics Perspective[END_REF]. Institutional complexity emphasises multiple, sometimes concurrent, logics and complex organisational fields in which organisations can have multiple responses and feedback cycles [START_REF] Thornton | The institutional logics perspective: a new approach to culture, structure, and process[END_REF].
Although SDL fails to consider institutional pluralism and the co-existence of service logics, this article provides new insights into service logics co-existence and a pluralist perspective on service system innovation research. In this sense, analysing service system innovation and service logics research through this institutional theory approach offers a significant opportunity to rethink and reorganise service system innovation and service logics research. The term "service innovation" will be used in its broadest sense to refer to the process of creating value to improve or generate a new service. In the present study, "service system" refers to an adaptive and complex system allowing the co-creation of value for the provision of multisectoral services (for example, health, education, mobility and transport, employment, urban resilience, culture, tourism, sport, leisure) and the combination of resources according to institutional arrangements. Service systems involve the co-existence of several individual and collective service logics for value creation or destruction. This article adopts the definition of institutions of Friedland & Alford (1991, p. 243) as "patterns of human activity by which individuals and organisations produce and reproduce their material subsistence and organise time and space. They are also symbolic systems, ways of ordering reality, and thereby rendering the experience of time and space meaningful". Also, this research adopts the institutional logics definition of Thornton & Ocasio (1999, p. 804) as "the socially constructed, historical patterns of material practices, assumptions, values, beliefs, and rules by which individuals produce and reproduce their material subsistence, organise time and space, and provide meaning to their social reality".
From an epistemological point of view, this article is positioned within the interpretativism approach [START_REF] Avenier | Finding one's way around various methodological guidelines for doing rigorous case studies: A comparison of four epistemological frameworks[END_REF], coherent with the "becoming ontology" for a longitudinal analysis of complex transformation processes [START_REF] Tsoukas | Complex Thinking, Complex Practice: The Case for a Narrative Approach to Organizational Complexity[END_REF][START_REF] Langley | Process studies of change in organization and management: Unveiling temporality,activity, and flow[END_REF]. Winter et al. (2006, p. 643) consider that "the becoming ontology emphasises the process, verbs, activities and dynamics of constitution or evolution of entities. A becoming ontology requires constant questioning of the conceptual categories considered as fixed". De Vaujany (2015) suggests that "relational ontology" is more in keeping with the becoming ontology. Orlikowski & Scott (2008, p. 456) argue that in a relational ontology "social and material are intrinsically inseparable". According to Orlikowski (2009, p. 134), relational ontology does not favour humans or technologies and does not consider them as separate realities and, for Cecez-Kecmanovic et al. (2014, p. 809) and Orlikowski (2009, p. 128):
"the hypotheses which underlie a relational ontology are different from those which "substantialist ontology" which has long characterised and at the same time limited research. A "substantialist ontology" supposes that social / human objects and materials / techniques exist separately as distinct and autonomous entities which interact and influence other entities ". Indeed, a relational ontology breaks with the separation of entities (for example, people, technology) by making the primary unit of analysis the transaction rather than its constituent elements.
The nature of the phenomena implies that they are tangled and emerging. It is important to note that entanglement is not about adding relationships to connect ontologically distinct entities. The concept of entanglement means that the "things" that makeup reality are never separated before they have interacted; instead, they are always intertwined constitutively".
On this theoretical, epistemological and methodological basis, this article addresses the following questions:
From a longitudinal analysis of digital innovation, service innovation and service system research, what service logics are latent in the academic literature (English journal articles)? How has this intellectual structure evolved?
In order to address these questions, this paper is divided into five parts. Section 2 presents: 2.1) a gathering of fragmented knowledge from multiple disciplines concerning innovation, service logics and service system research; 2.2) the relationships between institutional pluralism, institutional complexity and institutional change. Section 3 describes the ten-step methodology. The fourth section presents the findings of the research, highlighting the evolution of the intellectual structure over three periods. Section 5 presents the integrative framework. Finally, the paper presents some future research directions, limitations, and conclusions.
Background
2.1. Gathering fragmented knowledge from multiple disciplines concerning innovation, service logics and service system research
In a comprehensive study of service innovation research evolution, [START_REF] Carlborg | The evolution of service innovation research: a critical review and synthesis[END_REF] reported that the first serious discussion about a theory of service innovation emerged during the 1980s with the seminal work of [START_REF] Barras | Towards a theory of innovation in services[END_REF]. Although the notion of service innovation has been studied and characterised by different disciplines and different authors with various points of view, some investigations [START_REF] Miles | Twenty Years of Service Innovation Research[END_REF][START_REF] Snyder | Identifying categories of service innovation: A review and synthesis of the literature[END_REF][START_REF] Barrett | Service Innovation in the Digital Age : Key Contributions and Future Directions[END_REF][START_REF] Carlborg | The evolution of service innovation research: a critical review and synthesis[END_REF] consider that service innovation is a commonly used but unclear concept that is challenging to define precisely. There is no consensus in the academic community on what constitutes the boundary between the notions of "service innovation", "innovation in services", "new service development (NSD)", and "service design". These terms are often used interchangeably. In other words, researchers consider some of the concepts mentioned previously as synonyms; others, as distinct. This lack of clarity in the definition and the boundaries of service innovation across disciplines makes it challenging to implement interdisciplinary research [START_REF] Carlborg | The evolution of service innovation research: a critical review and synthesis[END_REF][START_REF] Snyder | Identifying categories of service innovation: A review and synthesis of the literature[END_REF]. According to Mendes et al. (2017, p. 185) and Carlborg et al. (2014, p. 374), several early studies in service research used "new service development" and "service innovation" interchangeably, for example [START_REF] Menor | New service development: areas for exploitation and exploration[END_REF] and [START_REF] Droege | Innovation in services: present findings, and future pathways[END_REF]. [START_REF] Mendes | Uncovering the structures and maturity of the new service development research field through a bibliometric study (1984-2014)[END_REF] showed that the NSD concept is closer to the marketing and service management domains. It involves a process for new service offerings that encompasses several stages from idea generation to launch in a market. Furthermore, the author has emphasised that service innovation is a larger and more complex notion. It includes any change that transforms one or more service attributes. Moreover, Carlborg et al. (2014, p. 386) argue that for future research, it is essential to distinguish "innovation in service firms" from "service innovation".
Regarding the digital dimension, [START_REF] Huang | IT-Related Service: A Multidisciplinary Perspective[END_REF] discussed that recent research uses the notions "digital innovation" and "IT-enabled service innovation" as synonyms. Compared with other researchers, [START_REF] Barrett | Service Innovation in the Digital Age : Key Contributions and Future Directions[END_REF] have regrouped the notions "service innovation", "digital innovation", and "IT-enabled service innovation" indistinctly. According to these authors, digital innovation research has emerged as a new discourse in the field of information systems, beyond the usual study of technology adoption and diffusion. In this context, digital innovation is defined as new combinations of digital and physical components for creating new products [or services] [START_REF] Yoo | Research Commentary -The New Organizing Logic of Digital Innovation: An Agenda for Information Systems Research[END_REF]. It differs from other forms of innovation mainly through generativity and digital characteristics such as modularity [START_REF] Tilson | Research Commentary -Digital Infrastructures: The Missing IS Research Agenda[END_REF][START_REF] Yoo | Computing in Everyday Life: A Call for Research on Experiential Computing[END_REF]. Through digital innovation, a company can extend its boundaries to markets connected to digital networks beyond its organisation [START_REF] Lyytinen | Research Commentary: The Next Wave of Nomadic Computing[END_REF]. [START_REF] Barrett | Service Innovation in the Digital Age : Key Contributions and Future Directions[END_REF] suggest that digital innovation is gradually relocating from the periphery to the centre of research on service innovation in the digital age, particularly involving smart service systems. They argue for service innovation in the digital age as an umbrella notion. Although previous research has differentiated "innovation in service" and "innovation in service industries", these differences have been questioned in recent years. Vargo et al. (2008) remind us that each economic exchange is essentially an exchange of services, where ICT has played a fundamental role as a resource that can be combined to create new opportunities for the exchange of services. Service innovation in the digital age [START_REF] Barrett | Service Innovation in the Digital Age : Key Contributions and Future Directions[END_REF] refers to the possible combination of digital technologies with other resources so that service systems are able to process received information to offer adaptive services to different contexts and audiences.
Except for the conference communication of [START_REF] Brust | Servicedominant logic and information systems research: a review and analysis using topic modeling[END_REF], which only considers SDL and lacks data triangulation (data collection based only on the Web of Science database), no reviews were found dealing with institutional pluralism and the complexity of co-existing service logics. Concerning the service system literature, few researchers have focused on the analysis of its evolution. [START_REF] Hsu | A bibliometric study of SSME in information systems research[END_REF] as well as [START_REF] Sakata | Bibliometric analysis of service innovation research: Identifying knowledge domain and global network of knowledge[END_REF] conducted bibliometric studies, but have not attempted to analyse the evolution of the intellectual structure.
Moreover, [START_REF] Beuren | Product-service systems: A literature review on integrated products and services[END_REF] emphasised that in future studies, researchers should delve into the environmental and social aspects, beyond the economic aspect, of integrated products and service systems. Lindhult et al. (2018, p. 478) conclude that: "crossbreeding the system, complexity, and innovation fields in research and management of a systemic logic for innovation are still in its infancy and deserve much more attention".
2.2. Institutional pluralism, institutional complexity and institutional change: service logics pluralism, innovation and service systems

For [START_REF] Greenwood | Institutional entrepreneurship in mature fields: The big five accounting firms[END_REF], the perspective of institutional logics emerges as a means of explaining institutional change. They suggest that the different levels of analysis representing individuals, organisations, industries and society at the micro, meso and macro levels can be linked through institutional logics [START_REF] Friedland | Bringing society back in: Symbols, practices and institutional contradictions[END_REF]. These authors indicated the existence, in contemporary Western societies, of a dominant logic in each of the existing institutional orders, such as the church, the family and the market. These dominant logics, or institutional logics, represent a set of material practices and symbolic constructions which form the fundamental organisational principles of each of these institutional orders. For example, the institutional logic of capitalism would be the accumulation and commercialisation of human activity. The concept of institutional logic was introduced by [START_REF] Alford | Powers of Theory: Capitalism, the state, and democracy[END_REF] to analyse how the conflicting logics of capitalism, bureaucracy and democracy shape the formation of the modern state. Between 1990 and 2000, some seminal works pointed out institutional logics as what defines the content and the meaning of institutions [START_REF] Friedland | Bringing society back in: Symbols, practices and institutional contradictions[END_REF][START_REF] Haveman | Structuring a theory of moral sentiments: Institutional and organizational coevolution in the early thrift industry[END_REF][START_REF] Thornton | Institutional logics and the historical contingency of power in organizations: Executive succession in the higher education publishing industry, 1958-1990[END_REF][START_REF] Scott | Reviewed Work: Institutional Change and Healthcare Organizations: From Professional Dominance to Managed Care[END_REF]. They make it possible to link several levels of analysis concerning an inter-institutional system: a) the societal, b) the field and c) the individual level [START_REF] Friedland | Bringing society back in: Symbols, practices and institutional contradictions[END_REF][START_REF] Casasnovas | Constructing organizations as actors: Insights from changes in research designs in the study of institutional logics. In Agents, Actors, Actorhood: Institutional Perspectives on the Nature of Agency[END_REF].
Institutional pluralism focuses on contexts where actors are confronted with a variety of more or less complementary institutional logics allowing cooperation or competition. On the other hand, institutional complexity refers more specifically to how individual and collective actors face and respond to the conflicting demands associated with different logics [START_REF] Ocasio | Advances to the Institutional Logics Perspective[END_REF]. Institutional complexity is a variation of institutional theory that emphasises multiple, sometimes concurrent, logics and complex organisational fields in which organisations can have multiple responses and feedback cycles [START_REF] Thornton | The institutional logics perspective: a new approach to culture, structure, and process[END_REF]. Although most of the work after [START_REF] Friedland | Bringing society back in: Symbols, practices and institutional contradictions[END_REF] continued to recognise the existence of multiple logics in reality, a rapid analysis of studies on institutional logics [START_REF] Thornton | Institutional Logics[END_REF] identified two main ways of approaching them, corresponding to two research categories:
• Research which considers how a logic predominates in a specific field, being a "dominant" logic in this field and serving as a guide for the behaviour of social actors. For example, [START_REF] Thornton | Institutional logics and the historical contingency of power in organizations: Executive succession in the higher education publishing industry, 1958-1990[END_REF] and [START_REF] Reay | The recomposition of an organizational field: Health care in Alberta[END_REF].
• Research which considers the "co-existence" of multiple logics for long periods, without there necessarily being a predominance of one logic over the others. For example, [START_REF] Dunn | Institutional Logics and Institutional Pluralism: The Contestation of Care and Science Logics in Medical Education, 1967-2005[END_REF] and [START_REF] Goodrick | Constellations of Institutional Logics: Changes in the Professional Work of Pharmacists[END_REF].
In the first case, the aim is to explore how one dominant logic in a specific period has been "replaced" by another or has disappeared. In the second case, the aim is to analyse co-existence across different periods in a specific field. However, the researcher must focus on how elements from different institutional logics coexist within a particular period. The researcher's interpretations must also consider the social and historical context at different times. For [START_REF] Besharov | Multiple institutional logics in organizations: Explaining Their Varied Nature and Implications[END_REF], multiple institutional logics present a theoretical puzzle. Various logics, often conflicting, characterise organisational fields.
In practice, an institutional logic can help, delay or even prevent different processes of change [START_REF] D'adderio | Artifacts at the centre of routines: performing the material turn in routines theory[END_REF]Järvenpää & Länsiluoto, 2016;[START_REF] Casasnovas | Constructing organizations as actors: Insights from changes in research designs in the study of institutional logics. In Agents, Actors, Actorhood: Institutional Perspectives on the Nature of Agency[END_REF]. Institutional studies have also pointed out that multiple and heterogeneous logics struggle to persevere over time, as they are modified by dynamic tensions between different power and interest groups which simultaneously advocate separate logics [START_REF] Busco | Sustaining multiple logics within hybrid organizations: Accounting, mediation and the search for innovation[END_REF].
Recalling that this research is based on an approach of multiple co-existing institutional logics, actors can be located in different social fields, acting according to their interests and contributing in their own way to the gradual transition from one logic to another, in particular because organisations maintain alignment with societal changes [START_REF] Wright | Wielding the willow: Processes of institutional change in English county cricket[END_REF]. This pluralist approach allows answering questions such as: how does the balance between different logics evolve in pluralist fields; how do the different actors (public, private and citizens) negotiate different logics and, in so doing, contribute to maintaining the balance between the logics within the different fields and levels of analysis (for example, societal, collective and individual)?
According to [START_REF] Friedland | Bringing society back in: Symbols, practices and institutional contradictions[END_REF], institutional logics arise from the institutional orders of the interinstitutional system. For these authors, institutional logics can be defined at different levels of analysis, for example, global systems, societies, institutional fields or domains and organisations. While societal logics permeate other levels, institutional logics at different levels are not only variants or combinations of societal logics; they are also shaped by local variations and cultural adaptations at each level. An institutional logic at a particular level of analysis can be identified insofar as we recognise an institution associated with this level and the organisational principles of this institution have a certain unifying coherence, but not a complete one, because institutional logics are subject to internal contradictions. For [START_REF] Thornton | The institutional logics perspective: a new approach to culture, structure, and process[END_REF] institutional logics are the organising principles of institutions, where institutions can be identified at different levels of analysis, including the institutional orders of society. These authors suggest the need to analyse how logics at other levels of analysis are influenced by societal logics. Institutional logics are historically contingent and evolve and change over time: where institutional orders change, institutions at other levels change. Research is needed to examine historical changes in institutional logics at several levels of analysis. The research of [START_REF] Lee | Filtering Institutional Logics: Community Logic Variation and Differential Responses to the Institutional Complexity of Toxic Waste[END_REF] shows how different types of community logics filter field-level logics when the actors of the organisation interpret them. These authors emphasise the need to analyse the role of community logics, particularly how logics are articulated at different levels of analysis. They explored the conditions under which the manifestations of logics at different levels (for example, the individual, the organisation, the region, the nation-state) become more salient and more powerful. [START_REF] Scott | Reviewed Work: Institutional Change and Healthcare Organizations: From Professional Dominance to Managed Care[END_REF] have examined how professional, government and managerial logics at the societal level are shaping the transformation of the organisational health domain in San Francisco. Their research shows the transition from a dominant professional logic to the co-existence of three logics without any one dominating. This interpretation differs from that of [START_REF] Vargo | Evolving to a new dominant logic for marketing[END_REF], who emphasise a dominant logic. For some authors, this is surprising because most studies claim to be "descendants" of [START_REF] Friedland | Bringing society back in: Symbols, practices and institutional contradictions[END_REF], for whom organisational fields are always subject to multiple logics. In this sense, institutional complexity, as defined by [START_REF] Ocasio | Advances to the Institutional Logics Perspective[END_REF], refers to the experience of organisations confronted with incompatible prescriptions of multiple institutional logics.
Also, for these authors, pluralism describes a multiplicity of institutional logics, and complexity implies the involvement of incompatibility and tensions between logics. These authors point out that complexity is particularly acute when logics are defended by actors who are not powerful enough to dominate the field but whose influence nevertheless restricts the actions of others. The above considerations are compatible with the way we see heterogeneity in the degrees of institutional pluralism, and the related experiences of complexity, in all fields and organisations. In fields where a single logic is dominant or where institutional settlements lead to relatively stable adjustments between the logics, institutional complexity is likely to be latent for most organisations. While scholars recognise the increasing prevalence of multiple logics within organisations, research offers conflicting perspectives on their implications, causing confusion and inhibiting deeper understanding. [START_REF] Besharov | Multiple institutional logics in organizations: Explaining Their Varied Nature and Implications[END_REF] proposed a framework that delineates several types of logic multiplicity within organisations. Their framework classifies types of logic multiplicity within organisations as contested, aligned, estranged or dominant, in terms of "degree of compatibility" and "degree of centrality", and explains how field, organisational, and individual factors influence these two dimensions.
• Degree of centrality:
-High: multiple logics are core to organisational functioning.
-Low: one logic is core to organisational functioning; other logics are peripheral.
• Degree of compatibility:
-High: logics provide compatible prescriptions for action.
-Low: logics provide contradictory prescriptions for action.

Also, [START_REF] Ocasio | Advances to the Institutional Logics Perspective[END_REF] pointed out four forms of change in institutional logics: 1) assimilation, which concerns the incorporation of external elements into existing logics; 2) development, which refers to endogenous reinforcement; 3) expansion, which involves changes from one field to another; and 4) contraction, which refers to a decrease in the range of a logic. From phenomenological institutionalism and the sociology of knowledge, Casasnovas & Ventresca (2019, pp. 135-136) identify two research design trends for institutional logics analysis:
• A shift in research design from field-level studies to organization-specific contexts, where conflicts are prominent in the organization,
• and a shift in the conception of logic transitions, originally from one dominant logic to another, then more attention to co-existence or blending of logics.
Based on these research trends, these authors proposed a typology of four available research designs for institutional logics studies that marks a changed conception of organisations as actors, highlighting differences in levels of analysis and logics struggles. Their findings point to changed conceptions of actors in institutional analysis.
Research method
We used an inductive approach (Bandara et al., 2015, p. 169) because our research goal is to inductively highlight the underlying themes and theories on service logics, service systems and service innovation and to delineate how the intellectual structure has evolved. The raw and processed data are available in a GitHub repository. Except for researcher triangulation, which was not performed due to the enormous amount of time required for this type of qualitative and quantitative analysis combining several datasets, this research used four types of triangulation to increase research validity (Murray, 1999, p. 194): 1) data triangulation: this article uses two sources of primary raw data, a) the ISI WoS and b) the Scopus bibliometric databases, plus secondary data in the form of PDF journal articles; 2) methodological triangulation, an attempt to improve validity by combining various techniques in one study; 3) theoretical and 4) analytical triangulation. In line with the methodological triangulation, Figure 1 presents the methodological approach combining quantitative and qualitative analysis.
The quantitative phase is based on bibliometric methods, network analytic, text mining and science mapping techniques. On the other hand, the qualitative phase is based on grounded theory with thematic analysis [START_REF] Gioia | Seeking Qualitative Rigor in Inductive Research[END_REF][START_REF] Wolfswinkel | Using grounded theory as a method for rigorously reviewing literature[END_REF]. The methodological process takes into consideration the recommendations of [START_REF] Wolfswinkel | Using grounded theory as a method for rigorously reviewing literature[END_REF] and [START_REF] Rowe | What literature review is not: diversity, boundaries and recommendations[END_REF]. The following paragraphs explain in detail the tasks required in each of the ten steps.
3.1. Step 1: Define research design, research questions, methods and tools

According to Zupic & Čater (2014, p. 11), research design undergoes three stages: 1) define the research questions; 2) choose appropriate methods to answer these research questions (e.g., bibliometric methods, text mining, science mapping, qualitative thematic analysis, or a combination of methods); 3) select appropriate tools. For this research, we have selected the following tools: VantagePoint for text mining; NVivo for content analysis; Mendeley for bibliographic management; Ucinet and Gephi for network analysis. In order to answer the research questions presented in the introduction, we have combined several methods: intelligent bibliometric methods based on text mining; network analytic; science mapping; and qualitative content analysis. These are defined below.
Intelligent bibliometrics and intellectual structure approaches
According to [START_REF] Zupic | Bibliometric methods in management and organization[END_REF], the bibliometric method offers a vast potential for mapping the scope of different domains. They stress that science mapping with bibliometric methods is useful to support researchers to understand the field's structure and to introduce quantitative rigour into traditional qualitative literature reviews.
Their study provides evidence that in the future bibliometric methods will become the third significant approach besides the traditional qualitative literature reviews and meta-analyses. Furthermore, these authors argue that while a traditional qualitative analysis gives us depth with a reduced number of documents, bibliometric methods allow us to rigorously manage a vast number of records resulting in a graphical description of the knowledge base or the intellectual structure of a research area. For Shafique (2013), the term "knowledge base" refers to the ideas, perspectives, approaches, theories, and methods used to create new knowledge in a specific scientific field. This author argues that "intellectual structure" refers to a set of attributes of the knowledge base that can provide an organised and fundamental understanding of a scientific field or research topic [START_REF] White | Author cocitation: A literature measure of intellectual structure[END_REF][START_REF] Culnan | Mapping the Intellectual Structure of MIS, 1980-1985: A Co-Citation Analysis[END_REF][START_REF] Ramos-Rodríguez | Changes in the intellectual structure of strategic management research: a bibliometric study of theStrategic Management Journal, 1980-2000[END_REF][START_REF] Pilkington | The evolution of the intellectual structure of operations management-1980-2006: A citation/co-citation analysis[END_REF][START_REF] Zhang | Parallel or intersecting lines? intelligent bibliometrics for investigating the involvement of data science in policy analysis[END_REF]. Also, [START_REF] Vogel | The Visible Colleges of Management and Organization Studies: A Bibliometric Analysis of Academic Journals[END_REF] suggests that the intellectual structure of a field encompasses research traditions, disciplinary composition, evolutionary patterns over time, related research topics, and theories over time. An intellectual structure provides compelling evidence of appearance, transformation, drift, differentiation, fusion, implosion, the revival of research traditions, or invisible colleges [START_REF] Kuhn | The structure of scientific revolutions[END_REF][START_REF] Vogel | The Visible Colleges of Management and Organization Studies: A Bibliometric Analysis of Academic Journals[END_REF].
Likewise, analysing the intellectual structure also means uncovering the existence of scientific schools and college networks known as "invisible colleges" (De Solla [START_REF] De Solla Price | Collaboration in an invisible college[END_REF][START_REF] Vogel | The Visible Colleges of Management and Organization Studies: A Bibliometric Analysis of Academic Journals[END_REF]. The research of [START_REF] Culnan | Mapping the Intellectual Structure of MIS, 1980-1985: A Co-Citation Analysis[END_REF] and [START_REF] Hassan | Distilling a body of knowledge for information systems development[END_REF] uses these methods to distil a body of knowledge of the information systems field. The investigation of [START_REF] Love | Reflections on Information Systems Journal's thematic composition[END_REF] utilised latent semantic analysis (LSA) and cluster analysis to identify information systems research themes through a quantitative literature review and a mapping of the history of the IS field. Moreover, [START_REF] Shibata | Detecting emerging research fronts based on topological measures in citation networks of scientific publications[END_REF] have developed a method to detect new emerging topics by combining co-citation and cluster analysis. For [START_REF] Cobo | Scimat: A new science mapping analysis software tool[END_REF], bibliometric approaches concern two types of procedures: performance analysis and science mapping. While performance analysis evaluates scientific actors (e.g., countries, universities, departments, researchers) to study the impact of their scientific activities, science mapping allows longitudinal or temporal analysis to capture structural changes of scientific research over time.
Although several bibliometric methods were studied (e.g., citation, bibliographic coupling, co-citation, co-author, co-word), the authors have decided to use: 1) co-citation analysis [START_REF] Small | Citation structure of an emerging research area on the verge of application[END_REF] and 2) co-word analysis [START_REF] Callon | From translations to problematic networks: An introduction to co-word analysis[END_REF]. According to previous literature, the output of co-citation analysis returns a set of groups that represent the intellectual basis of a research field, whereas co-word analysis maps the strength of association within textual data into what can be understood as semantic or conceptual groups, hence representing the research topics studied in a research field. This shows that the two methods complement each other.
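To make the co-word idea concrete, the sketch below builds a simple term co-occurrence count from per-article keyword lists. The toy data and the plain-Python implementation are assumptions made for illustration, since in the study this step was carried out with VantagePoint.

```python
# Minimal sketch of the co-word step (toy keyword lists only).
from itertools import combinations
from collections import Counter

articles_keywords = [
    ["service innovation", "service-dominant logic", "value co-creation"],
    ["service system", "service innovation", "institutional logics"],
    ["service-dominant logic", "service system", "value co-creation"],
]

cooccurrence = Counter()
for keywords in articles_keywords:
    for a, b in combinations(sorted(set(keywords)), 2):
        cooccurrence[(a, b)] += 1  # association strength between two terms

for (a, b), n in cooccurrence.most_common():
    print(f"{a} <-> {b}: {n}")
```

The resulting pair counts can be read as the weighted edges of a co-word network, whose clusters approximate the research topics of the field.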
The combination of several bibliometric methods allows investigating different units of analysis of the collected data (e.g., word/concept/keyword or term extracted from a title, abstract, or document's body; document; author; journal; reference; institution from affiliation, and country from affiliation) [START_REF] Cobo | Science Mapping Software Tools: Review, Analysis, and Cooperative Study AmongTools[END_REF][START_REF] Zupic | Bibliometric methods in management and organization[END_REF][START_REF] Ñoz | Analysing the scientific evolution of egovernment using a science mapping approach[END_REF]. Besides, while a longitudinal co-word analysis allows studying literature evolution, a longitudinal co-citation analysis can help to study intellectual structure continuity.
In this sense, this research performs intelligent bibliometrics based on the VantagePoint text mining tool. According to Zhang et al. (2020, p. 5), VantagePoint offers intelligent bibliometric streaming data analytics and machine learning techniques to identify complicated relationships and trace potential topic changes in a dynamic scenario.
For these authors, this tool allows highlighting the development and application of intelligent models for recognising patterns in bibliometrics; they entitle this cross-disciplinary direction intelligent bibliometrics. They suggest that intelligent models could be any computational models incorporating advanced data analytic approaches or artificial intelligence techniques, such as optimisation, streaming data analytics, network analytics, fuzzy systems, and various machine learning techniques such as neural networks, which extend the scope of traditional bibliometrics.
Social network theory and network analytic
Network theory is a basis for uncovering an intellectual structure through bibliometric approaches [START_REF] Borgatti | On Network Theory[END_REF][START_REF] Borgatti | A Graph-theoretic perspective on centrality[END_REF][START_REF] Borgatti | The Network Paradigm in Organizational Research: A Review and Typology[END_REF][START_REF] Borgatti | A relational view of information seeking and learning in social networks[END_REF]. [START_REF] Wasserman | Social Network Analysis: Methods and Applications[END_REF] and [START_REF] Parkhe | New frontiers in network theory development[END_REF] argue that the social network perspective focuses on multilevel relationships among social entities (e.g., persons, groups, organisations, nation-states, cities, websites, scholarly publications). [START_REF] Lazega | Réseaux sociaux et structures relationnelles[END_REF] has observed that the analysis of social networks helps to recognise the structural properties of social groups, one of the contributions of sociology. It facilitates the reconstruction of a system of interdependencies to describe how the system influences the behaviour of its members.
The systematic application of mathematical or statistical models, graph theory, and linear algebra to relational data is the basis of social network analysis. As the interdependencies that can exist between social system components are complex, network analysis can synthesise their structure by offering a simplified representation of this complex social system. These authors point out that this brings significant added value from the sociology of science for understanding how collective action creates academic change. Furthermore, network analysis has developed several measurement techniques [START_REF] Borgatti | A Graph-theoretic perspective on centrality[END_REF][START_REF] Borgatti | On Network Theory[END_REF][START_REF] Wasserman | Social Network Analysis: Methods and Applications[END_REF] to describe the relational structures of networks (e.g., structural equivalence, cohesion, centrality, and autonomy). In this sense, we have chosen the degree centrality measure for analysing the "authors by research themes/theories" networks over time, which identifies the central authors of a research domain.
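As a minimal illustration of this centrality step, the sketch below computes degree centrality on a small 2-mode authors-by-themes graph. The edges are toy data and the use of networkx is an assumption for illustration; the actual measures were computed with UCINET and visualised with Gephi.

```python
# Minimal sketch of degree centrality on a 2-mode authors-by-themes network.
import networkx as nx

edges = [
    ("Vargo", "service-dominant logic"),
    ("Lusch", "service-dominant logic"),
    ("Maglio", "service science"),
    ("Vargo", "service systems"),
    ("Maglio", "service systems"),
    ("Gallouj", "innovation in services"),
]

g = nx.Graph()
g.add_edges_from(edges)

# Authors (or themes) tied to many counterparts obtain a high degree
# centrality and are read as central to the research domain in that period.
for node, score in sorted(nx.degree_centrality(g).items(),
                          key=lambda kv: -kv[1]):
    print(f"{node}: {score:.2f}")
```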
Step 2: Interdisciplinary data collection
In order to gather fragmented knowledge from multiple disciplines and increase research validity, this research triangulated the raw dataset collection from two databases. Search criteria were defined iteratively by debating and contrasting perspectives with researchers from different disciplines. Following [START_REF] Leslie | Web of Science, Scopus and Google Scholar: A content comprehensiveness comparison[END_REF] and [START_REF] Carlborg | The evolution of service innovation research: a critical review and synthesis[END_REF], the starting point for data collection was the work of [START_REF] Barras | Towards a theory of innovation in services[END_REF], and the total study period was from 1986 to 2015. Also, in line with [START_REF] Mendes | Uncovering the structures and maturity of the new service development research field through a bibliometric study (1984-2014)[END_REF], we considered only peer-reviewed articles in English. The first author exported the final (iteration 8) WoS and Scopus search equations (see appendix .1), and the raw datasets are available in a repository.
Step 3: Data merge
Taking into account data triangulation (WoS and Scopus data sources), we refined data collection consistency and reliability through eight iterations. The last dataset collection allowed us to merge a corpus of 2439 articles, 1004 derived from WoS and 1435 from Scopus. We used the VantagePoint text mining tool to merge the data with the appropriate import filters: "Scopus Filter" and "WoS Filter".
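For readers who prefer a scriptable view of this step, the minimal pandas sketch below shows the same merge-and-deduplicate idea under assumed field names (title, year, doi) and invented records; the merge in this study was performed with VantagePoint's "WoS Filter" and "Scopus Filter", not with this code.

```python
import pandas as pd

# Invented records standing in for WoS and Scopus exports; real export fields differ.
wos = pd.DataFrame([
    {"title": "Towards a theory of innovation in services", "year": 1986, "doi": "10.0000/example-a"},
    {"title": "Service logic: achieving service system integration", "year": 1995, "doi": None},
])
scopus = pd.DataFrame([
    {"title": "Towards a Theory of Innovation in Services", "year": 1986, "doi": "10.0000/example-a"},
    {"title": "Evolving to a new dominant logic for marketing", "year": 2004, "doi": None},
])

wos["database"], scopus["database"] = "WoS", "Scopus"
merged = pd.concat([wos, scopus], ignore_index=True)

# Deduplicate: prefer the DOI when present, otherwise a normalised title + year key.
fallback = (merged["title"].str.lower().str.replace(r"\W+", " ", regex=True).str.strip()
            + "_" + merged["year"].astype(str))
merged["key"] = merged["doi"].fillna(fallback)
deduplicated = merged.drop_duplicates(subset="key").drop(columns="key")
print(len(wos), len(scopus), len(deduplicated))  # 2 2 3
```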
Step 4: Data cleaning
Data cleaning and preprocessing was a long but necessary step to validate data consistency, especially when merging a dataset corpus from several databases for data triangulation.
The cleaned corpus was divided into three periods: P1 (1986-1995), P2 (1996-2005), and P3 (2006-2015). The P1-corpus is composed of 12 articles, the P2-corpus of 57, and the P3-corpus of 727. Figure 2 presents the total corpus of 796 articles after data cleaning. Term variants were grouped with a thesaurus, improving the quality of the data [START_REF] Kongthon | A Text Mining Framework for Discovering Technological Intelligence to Support Science and Technology Management[END_REF]. For example, the set of constructs S-D Logic, Service-Dominant Logic, SDL, and S-D L have been grouped in the same category. Subsequently, Latent Semantic Analysis (LSA) provides a truncated representation of the original structure that improves the accuracy of the subsequent quantitative data analysis by reducing the adverse effects of synonymy and polysemy [START_REF] Evangelopoulos | Latent Semantic Analysis: five methodological recommendations[END_REF]. The LSA algorithm is based on cosine similarity, a measure of cohesion between a forthcoming article and the centroid of all existing topics, assigning the article to the most similar topic according to the Euclidean distance between the article and the centroid of its assigned topic (Zhang et al., 2020, p. 5) [START_REF] Porter | Research profiling: Improving the literature review[END_REF]. A minimal sketch of this normalisation and LSA step is given further below.
2. The continuous graph layout algorithm "ForceAtlas2" from Gephi [START_REF] Jacomy | Forceatlas2, a continuous graph layout algorithm for handy network visualization designed for the gephi software[END_REF], to analyse the authors-by-themes adjacency correlation matrix in periods P1, P2, and P3 (see the ForceAtlas2 network analytics section in the appendices); a sketch of how such a matrix can be exported for Gephi is given after this list.
3. The global network cohesion, Freeman's degree centrality, and 2-mode centrality measures applied to the authors-by-themes/topics matrix in P1, P2, and P3 with UCINET (see appendices).
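The ForceAtlas2 layout itself was computed inside Gephi; the sketch below only illustrates how an authors-by-themes correlation matrix could be turned into a weighted graph and exported in GEXF, a format Gephi reads. The labels, values, and threshold are assumptions made for the example.

```python
import networkx as nx

# Hypothetical adjacency (correlation) matrix between authors and themes for one period.
labels = ["Author A", "Author B", "service innovation", "value co-creation"]
matrix = [
    [0.0, 0.0, 0.8, 0.1],
    [0.0, 0.0, 0.2, 0.9],
    [0.8, 0.2, 0.0, 0.0],
    [0.1, 0.9, 0.0, 0.0],
]

G = nx.Graph()
G.add_nodes_from(labels)
threshold = 0.3  # arbitrary cut-off: keep only the stronger associations
for i, src in enumerate(labels):
    for j in range(i + 1, len(labels)):
        if matrix[i][j] >= threshold:
            G.add_edge(src, labels[j], weight=matrix[i][j])

# Gephi reads GEXF directly; the ForceAtlas2 layout is then applied inside Gephi.
nx.write_gexf(G, "authors_by_themes_P1.gexf")
```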
In the results (section 4), only (a) the MDS of FA, with VantagePoint longitudinal data visualisation and interpretation, will be presented. The other data visualisations and measures are presented in the appendices.
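The sketch below gives a minimal, assumed version of the thesaurus normalisation and LSA assignment described above: variants of a construct are mapped to one canonical label, a truncated SVD of the tf-idf matrix builds the LSA space, and a forthcoming article is matched to its closest existing document by cosine similarity. The abstracts and the number of components are invented for illustration; the actual processing was done in VantagePoint.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Thesaurus step: map spelling variants of a construct onto one canonical label
# (the S-D Logic variants mirror the example in the text; abstracts are invented).
thesaurus = {"s-d logic": "service-dominant logic", "sdl": "service-dominant logic"}

def normalise(text):
    out = text.lower()
    for variant, canonical in thesaurus.items():
        out = out.replace(variant, canonical)
    return out

abstracts = [normalise(t) for t in [
    "SDL and value co-creation in service ecosystems",
    "Reverse product cycle and process innovation in services",
    "S-D logic, resource integration and service systems",
]]

# LSA: a truncated SVD of the tf-idf matrix gives a low-rank representation that
# reduces the adverse effects of synonymy and polysemy.
vectoriser = TfidfVectorizer()
tfidf = vectoriser.fit_transform(abstracts)
svd = TruncatedSVD(n_components=2, random_state=0)
doc_vectors = svd.fit_transform(tfidf)

# Match a forthcoming article to the most similar existing document by cosine similarity.
new_doc = svd.transform(vectoriser.transform([normalise("Value co-creation in smart service systems")]))
similarities = cosine_similarity(new_doc, doc_vectors)
print("most similar existing document:", int(np.argmax(similarities)))
```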
3.8.
Step 8: Qualitative analysis based on selective content analysis and inductive grounded theory

By considering Zupic & Čater (2014, p. 30):
"bibliometric methods are no substitute for extensive reading and synthesis. Bibliometrics can reliably connect publications, authors, or journals; identify research substreams, and produce maps of published research, but it is up to the researcher and their knowledge of the field to interpret the findings -which is the hard part".
To interpret the data preprocessed in the previous steps, the selected articles were downloaded, imported into the Mendeley reference management software, and analysed with the qualitative analysis software NVivo. We performed: (a) an analytical reading [START_REF] Boell | A hermeneutic approach for conducting literature reviews and literature searches[END_REF][START_REF] Bandara | Achieving Rigor in Literature Reviews: Insights from Qualitative Data Analysis and Tool-Support[END_REF] supporting a longitudinal qualitative analysis based on NVivo selective coding [START_REF] Wolfswinkel | Using grounded theory as a method for rigorously reviewing literature[END_REF]; and (b) an inductive grounded theory analysis [START_REF] Gioia | Seeking Qualitative Rigor in Inductive Research[END_REF], compatible with the interpretative epistemological paradigm and the becoming ontology hypotheses. As shown in the data structures in Figures 3, 4, and 5, and following [START_REF] Gioia | Seeking Qualitative Rigor in Inductive Research[END_REF], the first-order codes were grouped into more theoretical categories (second-order themes). Then, the theoretical categories were grouped into more general dimensions (aggregate global dimensions or categories).
Grounded theory articulation was performed iteratively in steps 9 and 10, in order to propose an analytical service logics framework based on institutional pluralism [START_REF] Reay | Qualitatively capturing institutional logics[END_REF][START_REF] Ocasio | Advances to the Institutional Logics Perspective[END_REF][START_REF] Casasnovas | Constructing organizations as actors: Insights from changes in research designs in the study of institutional logics. In Agents, Actors, Actorhood: Institutional Perspectives on the Nature of Agency[END_REF] and institutional complexity [START_REF] Smets | From practice to field: A multilevel model of practice-driven institutional change[END_REF][START_REF] Thornton | The institutional logics perspective: a new approach to culture, structure, and process[END_REF][START_REF] Smith | Institutional complexity and paradox theory: Complementarities of competing demands[END_REF]. The process of interpreting the empirical data allowed us to highlight relationships between theories, concepts, and the collected data.
3.9.
First-order codes
"Customer logic is the underlying rationale that drives the customer's behaviour, based on his needs and wants. Typically, it interjects realities into service process that are foreign to the process itself (Storbacka, 1992), and whose impact is often unpredictable. Customers often have expectations about anticipated service experiences (Parasuraman et al., 1991): about the role of contact employees, about how the service is to be provided, and about how service outcomes will compare with those of other providers (McCallum and Harrison, 1985). A customer often "scripts" future service encounters in his mind, imagining a "coherent sequence of events . . . involving him either as participant or as observer" (Abelson, 1976). Two scripts drive customer logic: customer as consumer and customer as coproducer. As consumer, the customer asks, "How can I get what I want? (Gr önroos, 1984;Johnston and Lyth, 1991;Parasuraman et al., 1985). As coproducer, he asks, "What is my role, and how do I perform it?" (Bowen, 1986;Bowen and Schneider, 1985;Kelley et al., 1990;Lovelock and Young, 1979;Mills et al., 1983). Customer logic can be surfaced by exploring the question, "What is the customer trying to do, and why?..." In Kingman-Brundage et al. (1995, p. 24). "We propose that marketing should start considering CD logic as the next step towards an in-depth understanding of customer experience. This means that the ultimate outcome of marketing should not be the service but the customer experience and the resulting value-in-use for customers in their particular context. "The service logic model proposed in this article delineates the organizing principles that govern a service system, and presents a way of fostering integration through design of the interactions that link the key processes of seamless servicecustomer, technical and employee logics..." In Kingman-Brundage et al. (1995, p. 20). "A service logic describes how and why a unified service system works. It is a set of organizing principles which govern the service experiences of customers and employees. Only after the logic of a service system has been made explicit does the system become amenable to management control, mainly through the activities of service system design... Such principles, or "logics", are often competitive and hence divisive among themselves. Some of an organization's departments are dominated by a sales logic; others by a bureaucratic-legal logic; still others by an industrial logic (Gummesson, 1993;Schlesinger and Heskett, 1991;Zuboff, 1988)... In contrast, the service logic is integrative and collaborative..." In Kingman-Brundage et al. (1995, p. 21). "Service logic propositions. Proposition 1: Service logic analysis of customer, technical and employee logics, performed against operating demands implied by the service concept, reveals the organizing principles that govern the services experiences of customers and employees..." In Kingman-Brundage et al. (1995, p. 35) First-order codes "The distinctive context and nature of public (compared to private) service and services.
Global dimension: Individual service logics
There is thus an element of value creation negotiation across the stakeholders for any particular public service that is unfamiliar to the majority of for-profit firms. In this case the value creation relationship is not a simple dyadic one but is rather dependent upon relationships between the user, a network of PSOs, and possibly also their family and friends... public service users also inhabit the dual role of being both the users of public services and citizens who may have a broader, societal interest, in the outcomes of public services. This is entirely unknown ground for-profit firms and is an issue that has been explored by Pestoff (2006) and Strokosch and Osborne (2016), amongst others..." In Osborne (2018, p. 2-3). "Public Service Logic challenges the product-dominant assumptions of the New Public Management (NPM) about the nature and management of public service delivery... , the focus here has been primarily upon 'value' as welfare outcomes and personal well-being. The co-creation of value as capacity to change and develop has not been explored sufficiently. " Osborne (2020)
Theoretical categories
• Public service logic [START_REF] Osborne | The SERVICE Framework: A Public-service-dominant Approach to Sustainable Public Services[END_REF][START_REF] Osborne | From public service-dominant logic to public service logic: are public service organizations capable of coproduction and value co-creation[END_REF][START_REF] Osborne | Co-production and the co-creation of value in public services: A perspective from service management[END_REF][START_REF] Osborne | Public Service Logic: Creating Value for Public Service Users, Citizens, and Society Through Public Service Delivery[END_REF]. • Research designs in the study of institutional logics [START_REF] Besharov | Multiple institutional logics in organizations: Explaining Their Varied Nature and Implications[END_REF][START_REF] Casasnovas | Constructing organizations as actors: Insights from changes in research designs in the study of institutional logics. In Agents, Actors, Actorhood: Institutional Perspectives on the Nature of Agency[END_REF]. "Technical logic is the "engine" of service operation.
Global dimension: Collective service logics
Impersonal and objective, it comprises the basic principles that govern service production. When technical logic is consistent with the service concept, it generates outcomes valued by customers. When divorced from the service concept, technical logic takes on a life of its own to the mutual dissatisfaction of customers and employees alike. It derives from hard and soft technologies, from relevant law, and from corporate policy, rules and regulations. Technical logic's effect on service process is generally predictable, but the logic itself is normally implicit (Zuboff, 1988)... Technical logic can be surfaced by asking, "How are service outcomes produced, and why?..." In Kingman-Brundage et al. (1995, p. 24-25). "The term "product-service systems" (PSSs) has been defined as "a marketable set of products and services capable of jointly fulfilling a user's need..." In Mont (2002, p. 238).
First-order codes
"The service-centered view can be stated as follows: 1. Identify or develop core competences, the fundamental knowledge and skills of an economic entity that represent potential competitive advantage..." In Vargo & Lusch (2004, p. 5). "We offer a broadened view of service innovation-one grounded in service-dominant logic-that transcends the tangible-intangible and producer-consumer divides that have plagued extant research in this area..." In Lusch & Nambisan (2015, p. 155). "We criticize this notion by contrasting their views on commodity value with Marxist and post-Marxist literatures, finding SDL ill-equipped to understand consumer culture, but also continuing to propagate simplistic and misguided views of "value" in commodity markets. We conclude by challenging SDL's suitability as a candidate for all-encompassing social theorizing because of its tacit neoliberalism. SDL continues to provide a logic without political economy in its incapability to question its relationship with culture and inequalities of power..." In Hietanen et al. (2017, p. 1,15) "Traditionally, many people have considered products separately from services. However, recent years have seen the 'servitization' of products and the 'productization' of services. Morelli... sees 'servitization' as the evolution of product identity based on material content to a position where the material component is inseparable from the service system. Similarly, 'productization' is the evolution of the services component to include a product or a new service component marketed as a product. The convergence of these trends is the consideration of a product and a service as a single offering -a PSS. This is consistent with Wong... who sees a PSS as fitting into a spectrum where pure products are at one end and pure services at the other." In Baines et al. (2007, p. 4) • Technical logic [START_REF] Zuboff | Automatefin-fonnate: The two faces of intelligent technology[END_REF][START_REF] Kingman-Brundage | Service logic": achieving service system integration[END_REF][START_REF] Silva-Morales | L'Innovation des Services Publics à l'aide de Tics dans le contexte des Smart Cities[END_REF][START_REF] Mikhaylov | Artificial intelligence for the public sector: opportunities and challenges of cross-sector collaboration[END_REF][START_REF] Reichstein | Deep learning and process understanding for data-driven earth system science[END_REF][START_REF] Dresp-Langley | Seven properties of self-organization in the human brain[END_REF][START_REF] Vecoven | Introducing neuromodulation in deep neural networks to learn adaptive be-haviours[END_REF]. • Research designs in the study of institutional logics [START_REF] Besharov | Multiple institutional logics in organizations: Explaining Their Varied Nature and Implications[END_REF][START_REF] Casasnovas | Constructing organizations as actors: Insights from changes in research designs in the study of institutional logics. In Agents, Actors, Actorhood: Institutional Perspectives on the Nature of Agency[END_REF].
Theoretical categories
• Service-dominant logic of the market [START_REF] Vargo | Evolving to a new dominant logic for marketing[END_REF][START_REF] Lusch | Service Innovation: A Service-Dominant-Logic perspective[END_REF][START_REF] Vargo | Institutions and axioms: an extension and update of service-dominant logic[END_REF][START_REF] Hietanen | Against the implicit politics of service-dominant logic[END_REF].
• Research designs
in the study of institutional logics [START_REF] Besharov | Multiple institutional logics in organizations: Explaining Their Varied Nature and Implications[END_REF][START_REF] Casasnovas | Constructing organizations as actors: Insights from changes in research designs in the study of institutional logics. In Agents, Actors, Actorhood: Institutional Perspectives on the Nature of Agency[END_REF].
• Goods / Products dominant logic of PSS [START_REF] Mont | Clarifying the concept of product-service system[END_REF][START_REF] Manzini | A strategic design approach to develop sustainable product service systems: Examples taken from the 'environmentally friendly innovation' Italian prize[END_REF][START_REF] Tukker | Eight types of product-service system: eight ways to sustainability? Experiences from SusProNet[END_REF], from the industrial era until today...
• Research designs
in the study of institutional logics [START_REF] Besharov | Multiple institutional logics in organizations: Explaining Their Varied Nature and Implications[END_REF][START_REF] Casasnovas | Constructing organizations as actors: Insights from changes in research designs in the study of institutional logics. In Agents, Actors, Actorhood: Institutional Perspectives on the Nature of Agency[END_REF].

Global dimension: Societal service logics

4.1. P1 intellectual structure (1986 backwards-1995)

Table 1 shows an overview of the ten journals of the P1 corpus.
Inside corpus P1, [START_REF] Barras | Towards a theory of innovation in services[END_REF], [START_REF] Miles | Services in the new industrial economy[END_REF], [START_REF] Kingman-Brundage | Service logic": achieving service system integration[END_REF][START_REF] Buzzacchi | Technological regimes and innovation in services: the case of the Italian banking industry[END_REF] are the four most cited articles. In addition, [START_REF] Khan | Models for distinguishing innovative and noninnovative small firms[END_REF], [START_REF] Smith | Computer Simulation Applications in Service Operations : A Case Study from the Leisure Industry[END_REF][START_REF] Latour | Cultural Anchoring in the Service Sector[END_REF], [START_REF] Kerkhof | Improved customer service using new information technology as an enabling tool[END_REF], [START_REF] Evangelista | Measuring innovation in services[END_REF], [START_REF] Beltramini | High technology salespeople's information acquisition strategies[END_REF], [START_REF] Metcalfe | Technology systems and technology policy in an evolutionary framework[END_REF], and [START_REF] Barras | Interactive innovation in financial and business services: The vanguard of the service revolution[END_REF] complete the P1 corpus. As shown in Table 1, most of P1 corpus articles were published in marketing and innovation journals.
4.1.1. P1 (1986-1995) theoretical point of view
The results of P1 corpus analysis indicate that the following theories have strongly influenced the intellectual structure in this period:
• The Schumpeterian approach.
• The Barras' reverse product cycle.
• The evolutionary theory of technology policy.

The Schumpeterian approach (Schumpeter et al., 1934, p. 66) highlights many features of service innovation, such as the strong presence of organisational innovations, the involvement of multiple actors in innovation processes, and the importance of knowledge codification in leading innovation.
From the Schumpeterian perspective, innovation research is confronted with the problem of mixing activities or domains.
Therefore, conceptual reinforcement of service-oriented innovation studies is needed to build a bridge based on a broad and conceptually sound perspective of innovation that encompasses both products and services.
Barras' reverse cycle model [START_REF] Barras | Towards a theory of innovation in services[END_REF][START_REF] Barras | Interactive innovation in financial and business services: The vanguard of the service revolution[END_REF] adopted the three innovation cycle phases proposed by Abernathy & Utterback (1978) to build a theory of innovation in services. [START_REF] Barras | Towards a theory of innovation in services[END_REF] argues that the innovation process adopts the form of an "inverted product cycle" with a process innovation preceding a product innovation because services adopt products developed by the manufacturing sector. Products then stimulate the change in the service provision resulting in process innovation.
This type of change also stresses the central role of new technologies.

From the point of view of Metcalfe's (1995) evolutionary theory of technology policy, the successful economic development of a country requires forging a link between the ability to acquire, absorb, disseminate, and implement technology and a national system of innovation.
According to this author, for policy purposes, the degree of connection between these different dimensions is at the core of technology policy.
P1 (1986-1995) factor analysis
The decomposition of the P1 corpus via NLP techniques highlights 496 terms that distil a synthesis of knowledge on service innovation and service system research for period P1. A thesaurus grouped the 496 terms extracted from the corpus into 20 driving themes/theories. Globally, the P1 intellectual structure reflects the early development of ICTs. In this period, technologies influenced service industries and the earliest service innovation theories. For example, Barras' model of the "reverse product cycle" has often been cited as the earliest service innovation theory [START_REF] Barras | Towards a theory of innovation in services[END_REF][START_REF] Carlborg | The evolution of service innovation research: a critical review and synthesis[END_REF][START_REF] Toivonen | Service Innovation: Novel Ways of Creating Value in Actor Systems[END_REF]. In fact, in this period, there was a tendency to transpose to services the theories developed within the industrial era tradition. The P1 factor matrix is available in appendix .2. The four factors representing the P1 corpus intellectual structure will be presented in the following paragraphs.
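To give a sense of what such a factor decomposition involves, the sketch below runs a small factor analysis on an invented document-by-theme occurrence matrix with scikit-learn: the loadings indicate which themes vary together across articles, which is how driving themes are grouped into factors. The themes, counts, and number of factors are illustrative assumptions; the factors reported in this article were produced with VantagePoint's factor mapping, not with this code.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Invented document-by-theme occurrence matrix (rows: articles, columns: themes).
themes = ["service innovation", "reverse product cycle", "technology policy", "service system"]
X = rng.poisson(lam=[2, 1, 0.5, 1], size=(40, len(themes))).astype(float)

# Extract a small number of latent factors; the loadings indicate which themes
# move together across articles, i.e. the "driving themes" of a factor.
fa = FactorAnalysis(n_components=2, random_state=0).fit(X)
loadings = fa.components_  # shape: (n_factors, n_themes)

for i, factor in enumerate(loadings, start=1):
    top = sorted(zip(themes, factor), key=lambda kv: -abs(kv[1]))[:2]
    print(f"Factor {i}:", top)
```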
The "innovation in services industries factor" is linked to the Schumpeterian theory and the "reverse product cycle" theory. Figure 6b shows that this factor brings together the following themes: Schumpeterian, generation of new types of services, reverse product cycle, consumer/customer/user, service/sector, innovation in services, adoption and diffusion. Furthermore, Figure 6b shows that the "innovation in services industries factor" is connected to two other factors: "service innovation"
and "information technology".
Besides, in this factor, [START_REF] Miles | Services in the new industrial economy[END_REF] deals with the "adoption and diffusion of technology" in service companies. In this sense, the "service innovation" factor is associated with the adoption and diffusion themes and highlights the works of Miles (1993) and [START_REF] Barras | Towards a theory of innovation in services[END_REF]. The "service innovation/service logic" factor brings together the themes of service logic, service system, and service innovation.
Also, in this factor, [START_REF] Smith | Computer Simulation Applications in Service Operations : A Case Study from the Leisure Industry[END_REF] stresses the complexity of service systems, and [START_REF] Kingman-Brundage | Service logic": achieving service system integration[END_REF] provide the first model of service logic in service systems. These authors suggest three fundamental logics for resource integration in a service system: customer, technical, and employee logics. According to these authors, these three logics are linked through three interfaces: the encounter interface links the customer and employee logics, the technical interface links the customer and technical logics, and the support interface links the employee and technical logics.
Furthermore, Figure 6b shows that the factor "national government" brings together the themes "technology policy", "national system of innovation", and "institutional". In this factor, [START_REF] Metcalfe | Technology systems and technology policy in an evolutionary framework[END_REF] outlines an evolutionary theory of technology policy. This author argues that the success of the economic development of a country is closely linked to its ability to acquire, absorb, disseminate, and implement technology in a national system of innovation.
In this sense, a national system of innovation is a set of institutions that contribute to the production and dissemination of knowledge.

The factor "neo-Schumpeterian theory" encompasses the theme "Innovation in services" and is linked to the "consumer theory" factor. There have been several attempts to develop neo-Schumpeterian theories. For example, [START_REF] Gallouj | Towards a Neo-Schumpeterian Theory of Innovation in Services ?[END_REF] discusses the neo-Schumpeterian theory of [START_REF] Barras | Towards a theory of innovation in services[END_REF], that is, the reverse product cycle.
Gallouj claims that it is necessary to specify the area in which the model is valid or to extend Barras' theory, because the model is exclusively based on the industrial era tradition. Gallouj suggests extending Barras' model in at least two directions: 1) beyond IT and 2) beyond the service sector. On the other hand, Figure 7b shows that the factor "consumer theory" encompasses the theme "Lancasterian" and is linked to the neo-Schumpeterian theory and innovation in services factors. Figure 7a shows that [START_REF] Gallouj | Innovation in Services[END_REF] proposed a neo-Schumpeterian approach.

Two further factors visible in Figures 7a and 7b are the "innovation in services" factor, which embraces the themes "interorganisational" and "service sector/industries", and the "service innovation" factor, which encompasses the themes "web service innovation", "service system", "service innovation", and "consumer/customer/user". In this factor, Sundbo's (1997) investigation aimed at understanding how service companies innovate and organise innovation activities, whereas
Evangelista & Sirilli (1998) provided empirical evidence for the relevance and nature of innovation activities in the service sector. [START_REF] Lee | Smart products and service systems for e-business transformation[END_REF] discussed the needs of next-generation products and manufacturing systems.
Concerning product-service systems, Manzini & Vezzoli (2003, p. 851) defined a "product-service system" (PSS) as an innovation strategy that shifts from the design (and sale) of physical products to the design (and sale) of a system of products and services capable of meeting the specific requirements of customers. [START_REF] Tukker | Eight types of product-service system: eight ways to sustainability? Experiences from SusProNet[END_REF] argued that it is necessary to promote sustainable PSSs.
Another latent factor in P2 was the "service management" factor, which brings together the themes "technological innovation", "service management", and "value coproduction". In this factor, [START_REF] Van Der Aa | Realizing innovation in services[END_REF] shed some light on innovations in multi-unit forms, combinations of services, and cooperation with customers. According to these authors, a multi-unit organisation fosters new ways of organising, with a new balance between standardisation and personalisation. Furthermore, new combinations of services extend and redefine service companies' service portfolios. The creation of new service groups requires the integration and realisation of synergies in the service portfolio. Cooperation with customers redefines roles and relationships, such as "coproduction". Also, these authors suggest that the application of IT to services can increase their efficiency and quality. The work of Den Hertog also appears in this factor.

Figure 7b shows that the factor "value co-creation" embraces the theme "service-dominant logic". This factor marks the transition from coproduction to co-creation of value and gained importance from the work of [START_REF] Vargo | Evolving to a new dominant logic for marketing[END_REF]. These authors emphasised that we are the heirs of a dominant logic based on the exchange of "goods" or products (the industrial era tradition), based on material resources, intrinsic value, and transactions. This factor shows the emergence of a new focus on intangible resources, co-creation of value, and relationships. According to these authors, the offer of service for economic exchange goes beyond the supply of products, and service (singular) must be separated from services (plural): the singular refers to the process of using one's skills for the benefit of others, while the plural refers to units of output and is comparable to products.
Also within the P2 corpus, the factor "ICT/digital technology/IT" encompasses the themes "service encounter", "service provider", "service sector", "market", "customer", and "ICT/digital technology/IT". In this factor, [START_REF] Sundbo | Innovation as a loosely coupled system in services[END_REF] discussed the many studies on innovation in services.

Figure 7b shows that the factor "standard" brings together the themes "institutional", "standard", and "e-service/m-service". In this factor, Figure 7a underlines the research of [START_REF] Sundbo | Management of Innovation in Services[END_REF]; [START_REF] Sundbo | Innovation as a loosely coupled system in services[END_REF]; [START_REF] Gallouj | Towards a Neo-Schumpeterian Theory of Innovation in Services ?[END_REF]; [START_REF] Gallouj | Innovation in Services[END_REF]. [START_REF] Gallouj | Innovation in Services[END_REF] discusses the problems of product standardisation, and [START_REF] Sundbo | Innovation as a loosely coupled system in services[END_REF] points out that service industries are under pressure to reduce costs and tend to standardise. According to these authors, this standardisation means that the production of a service is no longer unique and that service companies use modular systems. Also, in this factor, [START_REF] Yoo | The role of standards in innovation and diffusion of broadband mobile services: The case of South Korea[END_REF] examined the role of standards in the innovation and diffusion of broadband mobile services in South Korea.

The factor "resource-based view/theory (RBV)" brings out the themes of "complex/complexity" and "health-care".
In this factor, [START_REF] Ray | Information Technology and the Performance of the Customer Service Process: A Resource-Based Analysis[END_REF] presented an empirical study that examined the degree to which IT influences the customer service process, through an RBV-based analysis.
Within the factor "public services" the theme "public service innovation" was tackled. In this factor, [START_REF] Walker | Evidence on the Management of Public Services Innovation[END_REF] analysed innovation as a central element of the UK government program to improve public services. This author has stressed that there is little evidence on how innovation is managed in public service organisations.
The factor "competition" brings together the themes "digital/ICT/IT service innovation", "business model", "e-service innovation/m-service innovation", and "eservice/m-service". For example, the search of Michalski ( 2003 The corpus of period P3 is composed of 727 articles from 255 sources synthesised in Table 3 & Spohrer (2006) and [START_REF] Maglio | Service systems, service scientists, SSME, and innovation[END_REF].
An essential aspect of the intellectual structure development observable in the P3 corpus is the adoption of a more systemic and holistic view in service research.
This systemic approach gained ground thanks to the emergence of interdisciplinary service science (Chesbrough & Spohrer, 2006;[START_REF] Maglio | Service systems, service scientists, SSME, and innovation[END_REF][START_REF] Spohrer | Steps toward a science of service systems[END_REF][START_REF] Spohrer | Service Science[END_REF][START_REF] Maglio | Fundamentals of service science[END_REF]). The goal of service science is to move forward in understanding the creation of value in complex and human-oriented service systems, whether or not they are based on IT. Furthermore, in P3, a service system can have supplementary characteristics beyond the intangible characteristics related to the context of the P1 corpus [START_REF] Barras | Towards a theory of innovation in services[END_REF][START_REF] Barras | Interactive innovation in financial and business services: The vanguard of the service revolution[END_REF][START_REF] Buzzacchi | Technological regimes and innovation in services: the case of the Italian banking industry[END_REF][START_REF] Kerkhof | Improved customer service using new information technology as an enabling tool[END_REF] and the P2 corpus [START_REF] Gallouj | Towards a Neo-Schumpeterian Theory of Innovation in Services ?[END_REF]. [START_REF] Kerkhof | Improved customer service using new information technology as an enabling tool[END_REF] and [START_REF] Miles | Services in the new industrial economy[END_REF] show examples of the characteristics of P1 technologies, such as electronic data interchange (EDI), automated teller machines (ATM), and fax, disseminated in the industrial era and service era traditions.

Smart technologies allow and require the horizontal and vertical integration of actors in a service ecosystem. Such integration is achieved through digital infrastructures and architectures in modular layers that can be reconfigured dynamically [START_REF] Tiwana | Platform Ecosystems: Aligning Architecture, Governance, and Strategy[END_REF][START_REF] Tiwana | Evolutionary Competition in Platform Ecosystems[END_REF]. The global corpus shows that although in P1 and P2 research focused on the notion of IT adoption in organisations [START_REF] Barras | Towards a theory of innovation in services[END_REF][START_REF] Miles | Services in the new industrial economy[END_REF][START_REF] Gallouj | Innovation in Services[END_REF][START_REF] Lyytinen | Research Commentary: The Next Wave of Nomadic Computing[END_REF], in the P3 period a considerable number of studies adopted a broader perspective. What can be clearly seen in the P3 corpus is an emphasis on institutional arrangements and the complexity surrounding value creation efforts through innovation, the integration of service innovation into daily life activities, and multi-actor perspectives. This period also shows an interest in the knowledge, competencies, and capabilities behind concrete results. SDL and service science topics have paid particular attention to the institutionalisation of innovation at different levels of analysis [START_REF] Vargo | Innovation through institutionalization: A service ecosystems perspective[END_REF].
Figure 8b shows the following 23 factors with the heaviest weight: "Smart City/smart city services", "service innovation", "complexity theory", "well-being/transformative service research", "institutional theory", "Digital Divide", "Self-service systems", "national health system", "IT-service innovation", "Public service innovation", "C-B approach", "Value co-creation", "Service Science (SSMED)", "Public Service System", "product-service systems", "boundary resources", "governance", "mobile ecosystems", "digital artefact", "modularity theory", "e-services", "service delivery", and "Internet of things (IoT)".
Due to space constraints, we will describe below only some factors.
As can be seen from Figure 8b, the factor "Service Innovation" brings together the themes "service innovation", "consumer/customer/user", "service provider", "market-oriented", "resource-based view (RBV)", "customer service innovation", "reasoned action theory", "dynamic capability", and "context-aware service". This factor is linked to two other factors: "Institutional theory" and "Complexity theory". Papers in this factor mainly concern service innovation and resource integration. Besides, the factor "Complexity theory" embraces the themes "Complexity theory" and "Dynamic capability".
This factor is linked to the factor "Service Innovation". Some authors [START_REF] Gummesson | Quality, service-dominant logic and many-to-many marketing[END_REF][START_REF] Gummesson | B2B is not an island![END_REF][START_REF] Gummesson | 2B or not 2B: That is the question[END_REF][START_REF] Gummesson | The emergence of the new service marketing: Nordic School perspectives[END_REF] analysed value co-creation within complex service systems.

From Figure 8b we can see that the "Smart City/Smart City Services" factor connects the themes "Smart City/smart city services", "Urban/City", "Citizen", and "e-Government". This factor is linked to the factors "Self-service systems", "Public service innovation", and "Institutional Theory". In this factor, some authors [START_REF] Arduini | Technology adoption and innovation in public services the case of e-government in Italy[END_REF][START_REF] Reggi | How advanced are Italian regions in terms of public e-services? the construction of a composite indicator to analyze patterns of innovation diffusion in the public sector[END_REF] studied technology adoption and innovation in Italian public e-services and e-government.

One interesting factor is Service Science (SSMED), embracing the "Service system" and "Complex service system" themes, and linked to the factors "value co-creation"
and "Product-Service System (PSS)". Service science was born around 2006 as an interdisciplinary approach to advance the understanding of different service systems [START_REF] Spohrer | Service Science, Management, Engineering, and Design (SSMED)[END_REF]. SSMED emerged independently from SDL, but SSMED and SDL are part of the same research community or invisible college [START_REF] Vogel | The Dynamic Capability View in Strategic Management: A Bibliometric Review[END_REF]). An invisible college can be defined as communications in dyads or groups amongst scholars sharing interests in a particular area [START_REF] Price | Networks of scientific papers[END_REF][START_REF] Vogel | The Dynamic Capability View in Strategic Management: A Bibliometric Review[END_REF]. For [START_REF] Maglio | Fundamentals of service science[END_REF] SDL is the theoretical basis is service science. On the other hand, other authors [START_REF] Pan | Achieving Customer Satisfaction through Product-Service Systems[END_REF][START_REF] Liu | Constructing a sustainable service business model: An SD logic-based integrated product service system (IPSS)[END_REF][START_REF] Smith | Servitization and operations management: a service dominant-logic approach[END_REF][START_REF] Beuren | Product-service systems: A literature review on integrated products and services[END_REF] dealing with PSS and servitisation. Their investigation is an extension of Baines' review, considering "servitisation" and "productisation" material component inseparable from a service system. • Appearance: is the emergence of a new college without a predecessor in the same field (p. 1031). In the case of this article, several service logics appeared in P1. Also, the service system construct.
• Transformation: is the gradual or sudden change of an existing college, which may result in the formation of a new college. To some extent, this evolutionary pattern applies to all colleges because they always change over time (p. 1031). Figure 9 shows that the SDL has been changing since its appearance in the P2 period until after the P3 period.
• Drift: is the process by which parts of a college become incorporated into another, preexisting college (p. 1032). Modularity theory, systemic innovation theory, and complex adaptive systems theory became incorporated into other preexisting colleges.
• Differentiation: is the process by which a broadly defined college splits up into several new colleges (p. 1032).
• Implosion: in some cases, even core colleges suddenly disappear (p. 1033). In this article, the reverse product cycle is an example of a college implosion.
• Revival: refers to the reappearance of a college that has temporarily disappeared (p. 1033). For example, after a loss of popularity, the ANT and the characteristics-based approach have reappeared in the P3 corpus.

Table 4 presents some service logics latent in the P1, P2, and P3 corpora.
Institutional pluralism and complexity in service systems
Considering that institutional logics are defined as "the socially constructed, historical patterns of material practices, assumptions, values, beliefs, and rules by which individuals produce and reproduce their material subsistence, organise time and space, and provide meaning to their social reality" (Thornton & Ocasio, 1999, p. 804), in the following paragraphs we present some service logics latent in the corpus. Before presenting the service logics latent in the literature, it is essential to note that, according to [START_REF] Cruz | A criticism of the use of ideal types in studies on institutional logics[END_REF], institutional logics should not be confused with ideal-types. For this author, an ideal type is a concept of sociology defined by Max Weber to help understand or theorise certain phenomena, without claiming that the characteristics of this type are always and entirely found in the phenomena observed in reality. Weber draws on the neo-Kantian theory of knowledge, regarding reality as an infinite reality. The concept of ideal-type would, therefore, be an instrument to unify parts of this reality, chosen contingently by the researcher, who fixes a particular interest on a subjective analysis of certain aspects. In the Weberian methodology, the dimensions of the studied phenomenon that the researcher retains through an ideal-type depend on the way the researcher is positioned, their vision of the world, and their culture. In this sense, it is essential to note that a pure ideal-type cannot be found in the real world. This is not a problem, because the principal value of the concept lies in its heuristic capacity. In addition, artefacts are at the centre of routines as theories and routines as practices (ostensive and performative aspects). Our article suggests that value is created through a (re-)framing process and destroyed through an overflowing process (Figure 10a).
Co-existence or blending of institutional logics can be observed at several levels:
• Constellation of logics: two or more logics co-exist over a long period of time in a specific industry, either blending or in an uneasy truce.
• Organizational responses: two or more institutional logics co-exist in an organization, with varying degrees of conflict or hybridization.
• Individual coping: organizational members straddle between two or more logics within their organization, amidst contradictory institutional demands.
Institutional logics and their descriptions:
Employee logic
Employee logic is the underlying reason that motivates the behaviour of employees. This logic is individualistic and gives rise to irregular and inconsistent service performance, especially in cases where work procedures are ambiguous, and employees are forced to invent their processes [START_REF] Kingman-Brundage | Service logic": achieving service system integration[END_REF].
Client / Customer's logic
To Kingman-Brundage et al. (1995, p. 23): "the client's logic is the underlying reason that drives the behaviour of the client according to their needs and wants, which are often unpredictable. Customers often have expectations regarding their service experiences (Parasuraman et al., 1991, cited by Kingman-Brundage et al., 1995). The client's logic signals the client as a consumer and as a co-producer of the service. It can be addressed by exploring the question: 'What is the client trying to do and why?'." The customer-dominant logic (CDL) of [START_REF] Heinonen | A Customer-Dominant Logic of Service[END_REF] studies the human dimension, which can contribute to new insights on the concepts of human-centred service innovation and human-centred service systems.
Service logic
Service logic describes how and why a unified service system works. It is a set of organising principles that govern the service experiences of customers and employees [START_REF] Kingman-Brundage | Service logic": achieving service system integration[END_REF]. Concerning the Nordic school of service logic, Saarijärvi et al. (2017, p. 17-18) point out that: "according to the American Marketing Association's definition, and in particular the stakeholders that are defined as being involved in marketing, service logic is more concerned with customers, clients and partners, whereas S-D logic stresses enlarging the perspective to networks, economies, and nations, that is, to society at large. This evident difference in emphasis helps understand and reflect the contradictions between the perspectives. The contradictions are due to different ontological approaches to what eventually constitutes value creation and the fact that S-D logic adopts and is better suited to a more macro level of analysis whereas service logic takes a more micro perspective. The perspectives view interaction in different ways due to S-D logic using the concept in describing interaction on a more macro level whereas service logic is more focused on the interaction taking place between the firm and the customer. According to these authors "provider service logic is a business logic based on service provision (Grönroos, 2011a)"... Service logic sees service both as the fundamental basis of business and as a logic of value creation (Grönroos, 2011a). In this respect, too, all firms are service businesses (Grönroos, 2009)..." (p. 9).
Public service logic (PSL)
For [START_REF] Osborne | The SERVICE Framework: A Public-service-dominant Approach to Sustainable Public Services[END_REF] and [START_REF] Osborne | From public service-dominant logic to public service logic: are public service organizations capable of coproduction and value co-creation[END_REF][START_REF] Osborne | Public Service Logic: Creating Value for Public Service Users, Citizens, and Society Through Public Service Delivery[END_REF], the PSL and its implications for public services are in their infancy. Osborne (2018) argued for a separation of the PSL (Public Service Logic) from the SDL for the following reasons:
• PSL requires considering co-production and co-creation of "public" value.
• SDL seeks to explore how to leverage value creation in private sector service companies for customer retention and profitability. However, for PSL, in public services, the repetition of service is likely to be seen as a sign of service failure rather than success.
Technical logic
The technical logic is the "engine" of the service's operation. According to [START_REF] Kingman-Brundage | Service logic": achieving service system integration[END_REF], when the technical logic is compatible with the concept of service, it generates results valued by customers. However, when separated from the concept of service, technical logic "leads its own life" and creates mutual dissatisfaction for customers and employees. This logic stems from hard and soft technologies, and from politics and business rules. Technical logic provides answers to a question repeatedly asked by the employee and the customer: what is my role and how do I perform it? This logic is based on the impact of technological evolution on the transformation of society [START_REF] Schumpeter | The Creative Response in Economic History[END_REF]. It can be based on goods or services such as artificial intelligence algorithms, remote-sensing sensors, or additive manufacturing, in order to create "product-service systems" (PSSs), "a marketable set of products and services capable of jointly fulfilling a user's need..." (Mont, 2002, p. 238).
Good Dominant Logic
"Marketing inherited a model of exchange from economics, which had a dominant logic based on the exchange of "goods," which usually are manufactured output. The dominant logic focused on tangible resources, embedded value, and transactions... In its most rudimentary form, the goods-centered view postulates the following: 1. The purpose of economic activity is to make and distribute things that can be sold. 2. To be sold, these things must be embedded with utility and value during the production and distribution processes and must offer to the consumer superior value in relation to competitors' offerings. 3. The firm should set all decision variables at a level that enables it to maximize the profit from the sale of output. 4. For both maximum production control and efficiency, the good should be standardized and produced away from the market. 5. The good can then be inventoried until it is demanded and then delivered to the consumer at a profit..." In Vargo & Lusch (2004, p. 1-5).
Service-Dominant logic (SDL)
The SDL appeared for the first time in 2004 [START_REF] Vargo | Evolving to a new dominant logic for marketing[END_REF], claiming that management sciences have long been dominated by a "Dominant Logic of Goods or Products". These researchers (p. 203) argue that service logic replaces the old product logic. The SDL has evolved since 2004: it has moved from the concept of co-production to the concept of co-creation by proposing two additional fundamental principles (Vargo et al., 2008). In the SDL, the term "services", in the plural, implicitly designates output units, while the term "service" refers to collaborative processes where skills/resources are used for the benefit of another entity [START_REF] Lusch | Service Innovation: A Service-Dominant-Logic perspective[END_REF]. In 2015, the SDL and its fundamental principles were transposed from the field of marketing to the field of information systems management, in the form of a conceptual framework [START_REF] Lusch | Service Innovation: A Service-Dominant-Logic perspective[END_REF].
The SDL has received several criticisms. We agree with the criticism of [START_REF] Hietanen | Against the implicit politics of service-dominant logic[END_REF]: these authors criticise the SDL for its tacit neo-liberalism. In this context, and as indicated by [START_REF] Hietanen | Against the implicit politics of service-dominant logic[END_REF], we consider the SDL as the logic of market service, focused on economic value. According to Saarijärvi et al. (2017), the SDL has recently emphasised studying markets instead of marketing.
Service logics and the organisational level of analysis
Considering that institutional logic is defined as socially constructed sets of material practices, assumptions, values, and beliefs that shape cognition and behaviour [START_REF] Thornton | The institutional logics perspective: a new approach to culture, structure, and process[END_REF], future studies may focus on issues related to the context of service provision. Also, more research is needed to understand sophisticated new features such as self-organising (Dresp-Langley, 2020), context-awareness [START_REF] Vecoven | Introducing neuromodulation in deep neural networks to learn adaptive be-haviours[END_REF], and artificial intelligence for the public sector [START_REF] Mikhaylov | Artificial intelligence for the public sector: opportunities and challenges of cross-sector collaboration[END_REF]. This leads to the following research question: how can institutional complexity apprehend the multidimensionality, systemic aspects, opportunities, and challenges of service logic cross-sector collaboration in service systems, in particular the evolution towards context-aware smart service systems?
Our results highlight topics with technological implications [START_REF] Maglio | Innovation and Big Data in Smart Service Systems[END_REF][START_REF] Carlborg | The evolution of service innovation research: a critical review and synthesis[END_REF]. This evolution has highlighted relationships between digital innovation and technological agility [START_REF] Sørensen | Academic agility in digital innovation research: The case of mobile ICT publications within information systems 2000-2014[END_REF]. Subjects deserving further attention include the service divide [START_REF] Srivastava | Bridging the Service Divide Through Digitally Enabled Service Innovations: Evidence from Indian Healthcare Service Providers[END_REF][START_REF] Ben Letaifa | Toward a service ecosystem perspective at the base of the pyramid[END_REF], well-being [START_REF] Anderson | Transformative service research: An agenda for the future[END_REF][START_REF] Sanchez-Barrios | Services for the underserved: unintended well-being[END_REF], services at the Base of the Pyramid (BoP) [START_REF] Reynoso | Learning from socially driven service innovation in emerging economies[END_REF][START_REF] Sanchez-Barrios | Services for the underserved: unintended well-being[END_REF], and public sector innovation and interaction [START_REF] Bloch | Public sector innovation-From theory to measurement[END_REF][START_REF] Osborne | The SERVICE Framework: A Public-service-dominant Approach to Sustainable Public Services[END_REF]. We stress the lack of studies on the ecological transformation of urban service systems (small, medium, and big cities) (Silva-Morales, 2017).
Direction 5: multilevel and intermediation in service logics

Thornton et al. (2012, p. 14) suggest that "researchers that combine multiple levels of analysis in their research are more likely to observe a more accurate picture". Casasnovas & Ventresca (2019, p. 149-150) point out that:
"To account for multilevel processes, researchers need to broaden their design ambition and also think about data sources in a more analytically aggressive way. For example, they will need to look..."

To advance the understanding of innovation in service systems, we emphasise the need to analyse several service logics. We recall that, although the emergence and evolution of SDL [START_REF] Vargo | Evolving to a new dominant logic for marketing[END_REF][START_REF] Gummesson | Quality, service-dominant logic and many-to-many marketing[END_REF] was important, SDL has overshadowed other service logics. Therefore, it is important to study the co-existence of different service logics, e.g., public service logic [START_REF] Osborne | The SERVICE Framework: A Public-service-dominant Approach to Sustainable Public Services[END_REF][START_REF] Osborne | From public service-dominant logic to public service logic: are public service organizations capable of coproduction and value co-creation[END_REF] and customer-dominant service logic [START_REF] Heinonen | A Customer-Dominant Logic of Service[END_REF]. The co-existence and refinement of several logics are necessary to contribute to the understanding of technological and non-technological innovations and of value creation or destruction within service systems. In this sense, institutional theory can help to combine several logics [START_REF] Boesen | Local food and tourism: an entrepreneurial network approach[END_REF].
Scopus database
57 articles ( TITLE-ABS-KEY ( "service science" ) OR TITLE-ABS-KEY ( "Service science, management, and engineering" ) AND TITLE-ABS-KEY ( "service system*" ) OR TITLE-ABS-KEY ( "science of service systems" ) AND TITLE-ABS-KEY ( "complex service system" ) OR TITLE-ABS-KEY ( "smart service system" ) TITLE-ABS-KEY ( "urban service system") ) AND DOCTYPE ( ar OR re ) AND SUBJAREA ( mult OR ceng OR CHEM OR comp OR eart OR ener OR engi OR envi OR mate OR math OR phys OR mult OR arts OR busi OR deci OR econ OR psyc OR soci ) AND PUBYEAR > 1986 AND PUBYEAR > 2015 AND ( LIMIT-TO ( LANGUAGE , "English" ) )
1,378 articles (TITLE-ABS-KEY ("service innovation" ) OR TITLE-ABS-KEY ( "innovation in services" ) OR TITLE-ABS-KEY ( "innovation in service systems" ) OR TITLE-ABS-KEY ( "digital innovation" ) OR TITLE-ABS-KEY ( "smart service system" ) AND TITLE-ABS-KEY ( service ) OR TITLE-ABS-KEY ("service system" ) ) AND DOCTYPE ( ar OR re ) AND SUBJAREA ( mult OR ceng OR CHEM OR comp OR eart OR ener OR engi OR envi OR mate OR math OR phys OR mult OR arts OR busi OR deci OR econ OR psyc OR soci ) AND PUBYEAR > 1986 AND PUBYEAR > 2015 AND ( LIMIT-TO ( LANGUAGE , "English" ) )
Web of Science database

319 articles TOPIC: ("service science") OR TOPIC: ("Service science, management, and engineering") AND TOPIC: ("service system") OR TOPIC: ("urban service system") OR TOPIC: ("science of service systems") OR TOPIC: ("complex service system") OR TOPIC: ("Smart Service System") Refined by: DOCUMENT TYPES: (ARTICLE OR EDITORIAL MATERIAL OR REVIEW) AND LANGUAGES: (ENGLISH) AND DOCUMENT TYPES: (ARTICLE OR REVIEW) Indexes=SCI-EXPANDED, SSCI, A&HCI, ESCI Timespan=1986-2015

685 articles TOPIC: ("service innovation") OR TOPIC: ("innovation in services") OR TOPIC: ("innovation in service systems") OR TOPIC: ("digital innovation") OR TOPIC: ("smart service system*") AND TOPIC: (service) AND TOPIC: ("digital innovation") AND TOPIC: ("service system") Refined by: DOCUMENT TYPES:

Appendix table: P2 factors/themes (factor loadings by theme).
Table: P2 factors/themes (1996-2005). Factor-analysis loadings (full matrix reported in the appendix).
Table .4: P3 factors/themes (2006-2015). Factor-analysis loadings and cumulative variance (full matrix reported in the appendix).
P2 Freeman's degree centrality measures (network-analysis output).
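For readers who wish to reproduce this kind of measure, the following is a minimal Python sketch computing Freeman's normalised degree centrality on a keyword co-occurrence network with NetworkX; the toy edge list is an assumption for illustration, not the P2 network analysed here.

# Minimal sketch: Freeman's (normalised) degree centrality on a keyword co-occurrence network.
# The toy edge list below is illustrative only, not the P2 network reported in the paper.
import networkx as nx

edges = [
    ("service innovation", "service system"),
    ("service innovation", "service-dominant logic"),
    ("service-dominant logic", "value co-creation"),
    ("service system", "value co-creation"),
    ("KIBS", "service innovation"),
]

G = nx.Graph()
G.add_edges_from(edges)

# degree_centrality returns degree / (n - 1), i.e. Freeman's normalised degree centrality.
centrality = nx.degree_centrality(G)
for term, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{term}: {score:.3f}")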
Figure 1: Quantitative-qualitative methodological process performed to illuminate the longitudinal intellectual/conceptual structure and institutional service logics.
Figure 3: Data structure. Individual service logics.
Figure 4: Data structure. Collective service logics.
Figure 5: Data structure. Societal service logics.
3.10. Step 10: Report & multilevel perspective of the service logics institutional pluralism framework
Considering the methodology as an iterative process, the authors enhanced the longitudinal analysis report; Section 5 presents and discusses the final transformation of the static data structures (Figures 3, 4, and 5) into a grounded-theory integrative framework for analysing service logics co-existence, institutional pluralism, and complexity.
The sociology of science reminds us that the stage of development of an intellectual structure can be determined through the consensus of the scientific community on the theoretical structures of the research themes and the methodological approaches used to explore them. Therefore, this section presents the evolution of the intellectual structure (Figures 6, 7, and 8) of service innovation, service logics, and service system research. This structure includes the links between themes, theories, and authors over three periods between 1986 and 2015: P1 (1986-1995), P2 (1996-2005), and P3 (2006-2015). Aligned with the interpretative epistemological paradigm and the becoming ontology, and based on a historical approach from the sociology of science to analyse intellectual structure, the authors present the results of the quantitative and qualitative corpus analysis.
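As a minimal illustration of the longitudinal split described above, the following Python sketch partitions a merged corpus into the three periods; it assumes a pandas DataFrame with a numeric publication-year column, which is an assumption about the data layout rather than the authors' exact implementation.

# Minimal sketch: split the merged corpus into the three longitudinal periods.
# The toy DataFrame and its "Year" column are illustrative assumptions.
import pandas as pd

periods = {
    "P1": (1986, 1995),
    "P2": (1996, 2005),
    "P3": (2006, 2015),
}

def assign_period(year):
    for label, (start, end) in periods.items():
        if start <= year <= end:
            return label
    return None  # records outside 1986-2015 are excluded

corpus = pd.DataFrame({"Title": ["a", "b", "c"], "Year": [1994, 2001, 2012]})  # toy data
corpus["Period"] = corpus["Year"].apply(assign_period)
print(corpus.groupby("Period").size())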
The P1 corpus appears in the following WoS categories: Management, Planning & Development, and Economics. Similarly to the Carlborg et al. (2014) analysis, our findings show that in this period service innovation research is present but marginal. Service system and service logic research, on the other hand, are almost non-existent, except for the work of Kingman-Brundage et al.
The widespread notion of innovation from the Schumpeterian perspective covers five fields: 1) the introduction of a new good or product characteristic (product innovation); 2) the introduction of a new production method (process innovation); 3) the opening of a new market (market innovation); 4) the introduction of a new supplier of raw materials (input innovation); and 5) the establishment of a new organisation (organisational innovation) (Schumpeter, 1934).
Much early work simply transposed models of innovation in manufacturing industries to the service industries. In this period, most researchers treated services as an offer entirely comparable to tangible products. Our research shows that in periods P2 and P3, researchers argued against such a transposition of product-oriented theories to the service context. The P1 dataset analysis shows that Kingman-Brundage et al. (1995) provide the first model of service logic in service systems. These authors suggest three fundamental logics for resource integration in a service system: customer, technical, and employee logics. A factor analysis of the P1 corpus with VantagePoint thesaurus-based topic modelling highlights four factors, which represent the intellectual structure of this period (Figures 6a and 6b); the complete P1 factor-analysis matrix is provided in the appendix.
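The thesaurus-based step can be illustrated with a minimal Python sketch; it is a simplified stand-in for the VantagePoint workflow, and the term groupings are purely illustrative assumptions, not the authors' thesaurus.

# Minimal sketch: consolidating raw NLP-extracted terms into latent thesaurus terms.
# The groupings below are illustrative assumptions, not the authors' actual thesaurus.
from collections import defaultdict

thesaurus = {
    "service-dominant logic": ["service-dominant logic", "S-D logic", "SDL"],
    "KIBS": ["knowledge-intensive business services", "knowledge intensive services", "KIBS"],
    "value co-creation": ["value co-creation", "co-creation of value", "value cocreation"],
}

# Invert the thesaurus: raw variant -> consolidated (latent) term.
variant_to_term = {v.lower(): t for t, variants in thesaurus.items() for v in variants}

def consolidate(raw_terms):
    """Map a document's raw extracted terms onto consolidated thesaurus terms."""
    counts = defaultdict(int)
    for raw in raw_terms:
        term = variant_to_term.get(raw.lower(), raw.lower())  # keep unknown terms as-is
        counts[term] += 1
    return dict(counts)

print(consolidate(["SDL", "value cocreation", "KIBS", "service system"]))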
Gallouj & Weinstein (1997) combined Lancaster's theory of 1966 and the approach of Saviotti and Metcalfe to propose their "characteristics-based approach" (C-BA). These authors considered that the characteristics of innovation processes play an important role in service innovation. Furthermore, their model added two components: 1) the competence of the supplier to interact with the user and produce the desired service output, and 2) the competence of the customer, which indicates the skills of the final user to understand and interact with the competences and technical characteristics of the supplier. Their work also illuminates various modes of innovation: radical innovation, improvement-based innovation, innovation involving the addition of new characteristics, ad hoc innovation, recombinant/architectural innovation, and innovation through formalisation. As will be seen later for period P3, De Vries (2006) and Savona & Steinmueller (2013) extended Gallouj & Weinstein's (1997) approach for better contextualisation to digital service research (cf. P3).
Den Hertog (2000) attempted to develop a comprehensive model to conceptualise service innovation. This model includes four dimensions of service innovation: 1) new service concept; 2) new customer interface; 3) technological options; and 4) new service delivery. According to this perspective, each innovation in the service context consists of a combination of these dimensions at different degrees. Den Hertog's model implies that not only new services are at stake, but also new organisational frameworks, new processes, and new technologies that improve the service offer. Den Hertog concluded that this makes it possible to map, analyse, and manage the diversity of innovations in more detail and in a structured manner through discussions with decision-makers and service providers. Another factor reported in P2 was "knowledge-intensive services (KIS)". This factor covers the themes "knowledge-intensive business services (KIBS)", "knowledge-intensive services (KIS)", and "value co-production". Within this factor, Miles et al. (2000) examine the challenges of innovation and KIBS. Besides, for Gallouj & Weinstein (1997), innovation in services may appear as ad hoc innovation in KIS, where service providers customise services for each customer and develop new service portfolios for a market. Similarly, Wood (2002) addresses several issues in the context of the development of KIS in urban areas; for this author, the identified issues relate to the competitive basis of cities and the degree to which they possess distinctive sources of innovation. Also, Den Hertog (2000) proposes an analysis of the role played by KIBS in innovation in organisations. According to Den Hertog, KIBS are external companies enabling an organisation to acquire and develop new knowledge for building a competitive advantage. This author points out that KIBS play an important role either as a facilitator or as a source of innovation.
Earlier studies of innovation in services were based on technological approaches and focused exclusively on the adoption of technological innovations, to the detriment of non-technological innovations. The authors in this factor argued that technological approaches could be interpreted in a variety of practical and theoretical ways, and they stressed that the boundary between products and services is becoming blurred. Furthermore, in this factor, Lyytinen & Rose (2003) discussed disruptive IT-based innovations in services and processes.
Other authors in this factor analyse the role of standards in promoting, enabling, and hindering innovation in mobile services. Their study shows how standards have shaped specific configurations of stakeholder networks that enabled the rapid and aggressive development and deployment of 2G mobile infrastructures and the transition to 3G services. These networks of actors span three distinct and critical areas: the regulatory regime, the innovation system, and the marketplace. The authors point out that standards play essential roles because they serve as mediators of the interests and motivations of different actors.
Another author in this factor mobilised the resource-based view to analyse e-services, stressing that in the hypercompetitive and fast-moving market of new IT, a large number of new business models are needed for the competitive advantage of technology companies and organisations offering e-services. The author points out that e-services offer new opportunities for companies, which must respond quickly to maintain their market position. In the field of e-services, Michalski discusses the possibilities of innovation at the level of business models, examining means that could lead global technology companies to a rapid and systematic generation of business models for e-services to increase their competitiveness; his paper suggested integrating within companies an R&D department focusing on business models and technological innovations. Walker et al. considered several managerial implications associated with the development and adoption of technological services, examining the impact of technology on service delivery using innovation diffusion theory. Also in this factor, Van Birgelen et al. (2005) investigated the effects of e-service quality on customer satisfaction, complementing traditional service quality frameworks; they proposed a model to analyse traditional services and e-services by combining traditional theoretical frameworks. In the "product-service innovation" factor, the model of Lievens & Moenaert (2000) examined specific service characteristics during innovation processes: intangibility, inseparability, heterogeneity, and perishability.
4.3. P3 intellectual structure (2006-2015 forward)
According to Mukhopadhyay et al. (1995, p. 138), these technologies are characterised by the vertical integration of information among trading partners along their value chain. Regarding the evolution of IT in P3, the context is marked by the development of smart technologies, the Internet of Things, and big data (Medina-Borja, 2015; Maglio & Lim, 2016), which require the activation of new features (Yoo, 2010) and new digital capabilities. Some examples of such characteristics are ubiquity, self-organisation, and sensitivity to the context of the daily activities of users. These features inherent to smart technologies go beyond the simple digitisation of services by traditional IT.
Researchers focused on the digital innovation and digital service innovation research areas. Some transpositions of service innovation, service logics, and service system research into the information systems management field are illustrated by the articles of Lusch & Nambisan (2015), Eaton et al. (2015), Barrett et al. (2015), Srivastava & Shainesh (2015), and Huang & Rust (2013). In this context, Huang & Rust (2013, p. 257) concluded that "IT transforms and renovates service into two seemingly polarized directions: making service more goods-like (more tangible, separable, homogeneous, and storable) or even more service-like (less tangible and separable, but more personalized and perishable)". According to these authors, the two directions provide two evolution paths for service and manufacturing companies, which may end up blurring the distinction between goods and services. Furthermore, Huang and Rust have invited researchers from various disciplines to contribute to service research to broaden and enrich our understanding of IT services (e.g., e-services, m-services), and to advance towards a better comprehension of service systems.
4.3.1. P3 (2006-2015) theoretical point of view
Concerning theories in the P3 corpus intellectual structure, the data reveal a marked number of articles based on the following theories: systemic innovation theory; complex adaptive systems theory; Miller's theory of living systems and systemic thinking; modular systems theory; modularity theory; collective action theory; institutional theory; neo-institutional theory; action theory; contingency theory; design theory; dynamic capability theory; structuration theory; practice theory; activity theory; agency theory; consumer theory; Lancaster's theory; the characteristics-based approach/theory; competence-based theory; knowledge-based theory; resource-based theory; conservation of resources theory; absorptive capacity theory; actor-network theory (ANT); behavioural reasoning theory; consumer culture theory; consumer resistance theory; disruptive innovation theory; queueing theory; reasoned action theory; service-oriented theory; social capital theory; Abrahamson's management fashion theory; innovation diffusion theory; and user innovation theory.
4.3.2. P3 (2006-2015) factor analysis
The application of NLP techniques to the P3 corpus allowed the extraction of 19,607 terms. Data-reduction techniques allowed these to be synthesised into 164 latent terms used for factor analysis. The factor analysis illuminated 23 factors representing the structure of the P3 period (Figures 8a and 8b). Despite the popularity of SDL, the other logics revealed by the P3 corpus did not have enough weight to appear after the factor analysis; they are therefore shown in the complete factor-analysis matrix in appendix .4.
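To illustrate this step, the following is a minimal Python sketch of a rotated factor analysis applied to a binary term-document matrix; the toy documents, the varimax rotation, and the number of factors used here are illustrative assumptions, whereas the reported P3 analysis worked with 164 latent terms and 23 factors.

# Minimal sketch: rotated factor analysis of a binary term-document matrix.
# Toy documents and the varimax rotation are assumptions; requires scikit-learn >= 1.0.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import FactorAnalysis

docs = [
    "service-dominant logic value co-creation service ecosystem",
    "digital innovation platform boundary resources",
    "public service logic value co-creation citizen",
    "KIBS knowledge-intensive services innovation",
]

vectorizer = CountVectorizer(binary=True)        # term presence/absence per document
X = vectorizer.fit_transform(docs).toarray()

n_factors = 2                                     # 23 in the P3 analysis; 2 for this toy corpus
fa = FactorAnalysis(n_components=n_factors, rotation="varimax", random_state=0)
fa.fit(X)

terms = vectorizer.get_feature_names_out()
for k, loadings in enumerate(fa.components_):
    top = sorted(zip(terms, loadings), key=lambda tl: -abs(tl[1]))[:3]
    print(f"Factor {k + 1}:", [(t, round(l, 2)) for t, l in top])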
For example, Tsou & Hsu (2011), Tsou & Chen (2012), and Tsou et al. (2014) analyse the effects of market and technology orientations on innovations in service delivery. According to these authors, IT has become indispensable in service management. In this context, the dynamic capability perspective also emphasises that a company's internal and external sources can be adapted, configured, and coordinated quickly and efficiently through service innovation processes geared to the new IT context. Tsou & Hsu (2011) emphasise the need to move towards the development of "e-service innovation" based on the dynamic capability of resource absorption: an internal driving force capable of combining internal and external resources to increase operating capacity through internal cooperation, coordination, and relational capabilities. In P3, Savona & Steinmueller (2013) extended the characteristics-based approach of Gallouj & Weinstein (1997) by considering the client's time in the service experience. Their contribution focuses on the functionalities associated with the use of ICTs in defining and delivering services based on technical and competence characteristics.
Network theory has also been used to analyse the complexity of a service system. Additionally, Chae (2012a, b) analysed KIBS by applying complexity theory; Chae (2012a) proposed an analytic framework that suggests an ambidextrous view of research (a combination of local and global research). Other authors (e.g., Barile and colleagues) deal with the complex service system and the viable service system. They argued that complex service systems are often ICT-based because technology facilitates re-configuration and smart behaviour in complex environments (Demirkan et al.). They pointed out that complex service systems are everywhere, e.g., health services, traffic management, smart-grid energy sources, and waste management. Moreover, Wu et al. point out that complexity theory provides a sound basis for advancing SDL. Furthermore, Akaka et al. (2013) argued that the multiplicity of institutions influences the complex nature of a service system at the micro, meso, and macro levels.
Several studies (Arduini & Zanfei, 2014, among others) examined the notion of the smart city from the perspective of digital innovation in public services. For these authors, the growing attention given to electronic public services is only partially reflected in the literature. As a result, their study developed a meta-analysis of the literature on the delivery, dissemination, adoption, and impact of public electronic services in five categories of services: e-government, e-education, e-health, info-mobility, and e-procurement. Their research revealed a significant heterogeneity of scientific production across the service categories. Their work highlights that ICTs enable the availability of service offers and the accessibility of high-quality, real-time, value-added services virtually anywhere, enabling unprecedented participation of citizens, businesses, and other institutions. These authors pointed out that more research analysing institutional changes and public policies is necessary. Misuraca et al. (2011) seek to contribute to the European academic and institutional debate on interoperability as a means of enabling governments to collaborate within member states and across frontiers. For these authors, debates are necessary to formulate an interdisciplinary framework for assessing the different dynamics of service systems emerging from ICT-based service innovations in European cities. Further, the "Internet of Things" (IoT) factor groups the themes "Big Data", "Smart tourism", "Smart Service System", and "Tourism"; this factor is linked to the IT service innovation factor. Within this factor, Perera et al. (2014) argued that the IoT aims to connect and use billions of sensors on the Internet for efficient service system management in smart cities. According to Xu (2012, p. 716), the IoT's smart capabilities and features enable innovation in services by expanding human- and non-human-centric data collection and providing smart services through integrated sensor networks.
Figure 9: Patterns of service logics college evolution. Source: adapted from Vogel (2012).
4.4. Intellectual Structure Evolution Dynamics
The P1, P2, and P3 factor maps presented in the previous sections (Figures 6, 7, and 8), together with the complete factor maps in the appendices (.2, .3, .4), have offered a broadened view of service innovation, service logics, and service system research over thirty years. Our examination suggests that, while various aspects of the structure remained stable over time, some notable changes reflect technological evolution and the complexity of service systems regarding sustainable, public, intrinsic, and economic value co-creation. Similarly to Vogel (2012), Figure 9 presents seven dynamic patterns in the evolution of the service logics colleges:
• Fission: occurs when a college splits into successor colleges with a higher degree of specialisation; it is thus a pattern of divergent evolution (p. 1032). Public Service-Dominant Logic separated from the Service-Dominant Logic of the market, becoming known as Public Service Logic and basing itself more on the creation of public value.
• Fusion: occurs when two or more previously independent colleges merge into a single college (p. 1032). In practice, SDL and SSMED are part of the same college; SDL has even been declared the pillar of SSMED.
• Implosion: is what happens when a college disappears without a successor. Like the appearance of new colleges, the disappearance of extant colleges is a common feature of evolving scientific fields. Only a few colleges survive longer than a decade, and 'mortality' is particularly high among peripheral colleges.
The service logics resulting from the analysis indicate how institutional logics shape the capture of value by a subset of actors (Figure 10 and Table 4). The fact that most of the corpus publications focus on one level of analysis (the market) does not mean that what happens at other levels is less important. For example, in P3 many of the articles are based on SDL; however, a few studies are based on other service logics, allowing the individual and collective levels of analysis to be considered. Results have shown that few studies were positioned from the perspective of analysing antagonistic and complementary service logics. Also, outside the corpus, few researchers (e.g., Silva-Morales, 2017) were found to include an analysis of antagonistic and complementary public (Osborne et al., 2015; Osborne, 2018) and private (Lusch & Nambisan, 2015; Vargo & Lusch, 2016) service logics. Several investigations also concluded that the notion of value creation is not clear in the literature. For example, Hietanen et al. (2017) suggest that value is an enduring chimaera in the managerial and social sciences, and that value becomes thoroughly subjective, a matter of the consumer's personal experiences. In this regard, institutional pluralism and complexity in service logics can help advance the analysis and understanding of value creation or destruction.
Recent studies indicate a misunderstanding, by some institutionalist researchers working on institutional logics, of what Weber recommends concerning the construction of ideal types. Such researchers ignore the question of axiological neutrality and the impossibility of finding ideal types empirically. In this regard, it has been suggested that the manner in which ideal types are used in institutional logics studies should be reconsidered. Below, we present some of the latent service logics in the analysed corpus.
5.1. Service logics and the individual level of analysis
Employee, client, and customer logics (Figure 10a) are linked with value co-production, value co-creation, value co-destruction, value-in-use, value-in-context, value-in-experience, value-in-exchange, intrinsic value, public value, and sustainable value. Value can only be created or destroyed by the user of the service; in some cases, employees, especially front-line employees, can also contribute to the destruction or creation of value. Figure 10b suggests that a research design can consider a) that individuals adhere to a single dominant logic, or b) that individuals straddle two or more logics within their organisation, amidst contradictory institutional demands. The role of artefacts and of ostensive and performative routines in value creation or destruction draws on D'Adderio et al. (2011) and is summarised in Figure 10.
(a) Service logics at the individual, organisational and field level. Source: Authors.
(b) Institutional logics research designs. Source: adapted from Casasnovas and colleagues.
Figure 10: Individual, organisational and field-level service logics (co-existence/domination), artefacts and routines (ostensive/performative), and value creation/destruction.
5.2. Service logics and the organisational level of analysis
According to Besharov & Smith (2014), the key dimensions of logic multiplicity in organisations, which depend on heterogeneity in how multiple logics manifest within organisations, are compatibility and centrality. The first dimension describes the degree of compatibility between the organisational instances of multiple logics: these authors define compatibility as the extent to which the instantiations of logics imply consistent and reinforcing organisational actions, and they point out that consistency with respect to the goals of organisational action is more critical for compatibility than consistency with respect to the means by which those goals are to be achieved. The second dimension describes the extent to which multiple logics manifest features that are central to organisational functioning.
5.3. Service logics and the field level of analysis
For Lusch & Nambisan (2015, p. 158), the service-dominant logic is based on a fundamental idea developed by the economic scholar Frederic Bastiat: all actors in an exchange deploy skills and competences when making an offering of their service to one another. Thus, value is the "comparative appreciation of reciprocal skills or services that are exchanged to obtain utility; value [means] 'value in use'". The SDL evolved from 2004 to 2008, passing from the notion of co-production to the notion of co-creation of value. When Vargo & Lusch (2004, p. 1) put forward that "Marketing inherited a model of exchange from economics, which had a dominant logic based on the exchange of 'goods,' which usually are manufactured output", we suggest that there may have been a misunderstanding of the notion of long socio-economic business cycles. Paraphrasing Kondratiev and Schumpeter, Silva-Morales (2017) highlights that every new socio-economic cycle is related to a new kind of technology (product or service) that transforms society by offering novelties: for example, new consumer products/services, production methods, standards, markets, types of industrial organisation, academic formations, job skills, and forms of work. Each long business cycle, or Kondratiev wave, has four phases: 1) depression, 2) recovery, 3) prosperity or expansion, and 4) recession. In other words, if Vargo & Lusch (2004) argued that service logic replaces the 'old' goods logic, this research agrees with the criticism of Hietanen et al. (2017) regarding its tacit neo-liberalism, and with Saarijärvi et al.'s (2017) conclusion that SDL has recently emphasised studying markets instead of marketing. In line with the Kondratiev waves, if good-dominant logic represents the old economic cycle based on products after the industrial revolution (1st, 2nd, 3rd, and 4th Kondratiev waves), the new service-dominant logic of the market represents the capitalist societal market logic (5th and 6th Kondratiev waves). Indeed, in the 1930s the Russian economist Nikolai Kondratiev proposed a heuristic on socio-technical structural change in terms of development cycles, pointing out that modern economies fluctuate in cycles of 40 to 60 years, called Kondratiev waves. Each Kondratiev wave is based on its own technological innovation logic, which generates a dominant technological paradigm (Dosi, 1982) penetrating existing economic and social systems.
In this context, pre-established organisations or actors are either displaced by new entrants or placed in competition/coopetition, sometimes managing to survive and prosper in the new socio-economic cycle. Although there is no consensus on the dating of the waves, since different authors rely on slightly different chronologies (Wilenius & Casti, 2015, p. 339), the Kondratiev waves show several disruptive technologies that have influenced the transformation of economic cycles since 1822. The industrial revolution marked the first cycle; the second was marked by railways and steel; the third by the appearance of chemicals and electrification; the fourth by petrochemicals, the automotive industry, and the start of mass production, which persists today; the fifth by the appearance of information and communication technologies; and the sixth by the appearance of intelligent or smart technologies.
Setting the Kondratiev waves aside, at the individual, organisational, and field levels, the number of professional institutions established within an institutional field and the relationships between them determine the degree of logic compatibility (Besharov & Smith, 2014) and their influence (centrality). When multiple professional groups are active within a field and compete for resources and power, members of each profession emphasise the legitimacy of their knowledge base versus that of other professionals in order to maintain control (Dunn & Jones, 2010). Besharov & Smith (2014) suggest that this situation reduces compatibility, since each professional group affirms its logic as unique against the logics of other professional groups. In contrast, when only one professional group is active in a field, when several professional groups are active but one is dominant, or when the claims of different professional groups do not overlap, fewer battles for control occur, resulting in greater logic compatibility. These authors also define centrality as the degree to which multiple logics are relevant to organisational functioning: centrality is higher when several logics are core to organisational functioning, and lower when a single logic guides core operations while the others are not directly linked to the functioning of the organisation. Likewise, these authors suggest four types of logic multiplicity in organisations according to high or low degrees of centrality and compatibility: 1) contested (extensive conflict); 2) aligned (minimal conflict); 3) estranged (moderate conflict); and 4) dominant (no conflict). This provides a basis for linking institutional approaches to multiplicity within organisations with other theoretical traditions that address issues of multiple goals, values, and identities within organisations. Their field-, organisation-, and individual-level framework lays the foundation for considering additional factors at each of these levels and the relationships between them.
Future research
In line with the research of Casasnovas and colleagues, who proposed various research designs on 1) co-existence and 2) domination at the field, organisational, or individual level of institutional logics (Figure 10b), future research on service logics and service system innovation may analyse institutional change at the field, organisational, or individual level, either 1) from a co-existence perspective or 2) from a domination perspective. Future studies may analyse how an organisational field changed from having one single dominant logic to having another. The second genre of research in Figure 10b concerns a constellation of logics, or institutional complexity; it encompasses processes at the field level where two or more logics have a continuous and robust presence. The third type of investigation in Figure 10b concerns how multiple logics can coexist peacefully or in permanent conflict, or can be combined into a new hybrid field-level logic. According to these authors, a "hybrid organisation" refers to organisations that integrate multiple logics, and the focus is on the strategies they carry out to respond to contradictory demands, such as oscillation, synthesis, response, or selective coupling. The fourth genre of research, called individual coping, sits at the intersection of co-existence and the individual level in Figure 10b; it includes research focused on how individuals within organisations deal with the conflicting demands of multiple institutional logics. These studies generally present different mechanisms by which individuals cope with this complexity, such as compartmentalisation, hybridisation, negotiation, or bridging.
Direction 1: Complexity and multi-dimensionality of service logics at the field level
This research direction has several technological implications: the Internet of Things (IoT) and mobile technologies (IoT-based service systems, PSS), data science, big data, open data, digitalisation, digital infrastructure, interoperability, platform ecosystems, mobile ecosystems, digital platforms, generativity/governance, boundary resources (APIs, standards), and the digital divide. Agility in the evolution of technologies has highlighted opportunities for innovation in service systems. For Demirkan et al. (2015) and Maglio & Lim (2016), due to the confluence of large amounts of data, mobile applications, the cloud, and the Internet of Things, service innovation has captivated the attention of many firms in recent years; according to these authors, this represents promising ways for organisations to provide new services quickly and efficiently. The combination of physical and digital components (Yoo et al., 2010) through platforms (Ghazawneh & Henfridsson, 2013) and digital infrastructures (Tilson et al., 2010) has increased the generative nature of digital technologies through service innovation (Henfridsson & Bygstad, 2013) within service systems. This combination makes it necessary to integrate and coordinate new actors with new resources, characteristics, and capabilities, such as competencies and knowledge in digital infrastructures (sensors, IoT, big data, smart technologies). Our results showed that the evolution from service systems to smart service systems requires analysing innovation in service systems based on real-time data from sensors, IoT artefacts, and smart technologies.
Figure 10 considers, at the field level, a sustainable logic concerning the environmental dimension. In this sense, in view of nature conservation and transformative service research (Anderson et al., 2013; Kuppelwieser & Finsterwalder, 2016), we propose the following research questions: How can the co-existence of service logics integrate long-term initiatives for the development of sustainable service systems at local, regional, national and global levels of analysis? How can a sustainable logic encourage citizens to contribute to the transformation of urban service systems into sustainable cities (Anderson et al., 2013; Silva-Morales, 2017)?

Direction 2: Complexity and multi-dimensionality of service logics at the organisational level

One of the main challenges of collaboration between private organisations, research centres and the government is the environmental issue. Public and private organisations are very different, which makes it challenging to work in inter-organisational teams. For example, the goals of public organisations are public services, and they are accountable to the public at large, whereas private organisations answer to their shareholders; for these reasons their interests may diverge in many ways. Such a difference of interests can cause clashes in the collaboration between public and private organisations (Silva-Morales, 2017). Also, Mikhaylov et al. (2018) suggest further research directions on artificial intelligence for the public sector. Public organisations work according to state orders, while private organisations work according to market demand, which separates the real interests of the two types of organisation on many points. Due to this problem, owners of private
This research direction motivates the following questions: Which aspects of the dialogic process that occurs during service delivery facilitate or undermine the well-being of service users (Anderson et al., 2013)? How can resources be orchestrated in service systems to create service innovations through service logics in the context of developing countries (Srivastava & Shainesh, 2015)? How can ostensive and performative routines contribute to the creation of value through artefacts (D'Adderio et al., 2011)?

Direction 4: Ecological service logics and dynamic service system transformation

Researchers in the field of public management and other disciplines have highlighted the lack of analytic frameworks to understand the logic of digital innovation in public service systems, their evolution, modes of configuration
at field-level data to understand what institutional logics are present in a certain setting, at organizational processes to observe whether the overlap of different logics results in challenges or opportunities (or both), and at inter-organizational relations to keep track of the role that different types of actors have in the field dynamics. This often means including different types of data, from policy documents and industry reports to interviews with a wide range of organizations, and from data on organizational performance to observation of field-events.
and to understand resource integration in service systems at individual and collective levels. This research direction motivates the following research question: To advance the understanding of different service systems, how can the combination or co-existence of various service logics support innovations in complex service systems? Also, it is necessary to understand the boundaries and relationships between co-production, value co-creation[START_REF] Prahalad | Co-creation experiences: The next practice in value creation[END_REF][START_REF] Leclercq | Ten years of value co-creation: An integrative review[END_REF], value co-destruction[START_REF] Echeverri | Co-creation and co-destruction: A practice-theory based study of interactive value formation[END_REF][START_REF] Smith | The value co-destruction process: a customer resource perspective[END_REF], value-in-use, value-in-context, value-in-experience and value-in-exchange within a service system and multilevel service logics. Moreover, it is essential to understand the specificity of public value and economic value. Thus, future research will address the following questions: How can multilevel service logics encourage service system transformations? According to public service logic, how do public service organisations integrate and coordinate the heterogeneity of resources and new characteristics of "smart service systems"? How does the balance between different logics evolve in pluralist fields? How do the different actors, public, private and citizens, negotiate different logics and, in so doing, contribute to maintaining the balance between the logics within the different fields and levels of analysis, for example at the societal, collective and individual levels?

Limitations

This research has methodological and conceptual limitations. From a methodological point of view, a known limitation in the area of machine learning and factorial analysis is the judgement of the number of factors; in this context, choosing a different number of factors may not reveal the same outputs. In fact, according to Evangelopoulos et al. (2012), term selection and dimensionality selection remain open issues in data-analytic algorithms, and the estimation of an appropriate number of dimensions, clusters, factors or predefined categories remains an open issue in data science (a minimal illustration of this selection step is sketched at the end of this subsection). Also, combining quantitative and qualitative phases with data, methodological, analytical and theoretical triangulation is a very long process, which makes researcher triangulation less inviting. In this context, the data are available in a repository. Future research could carry out complementary analyses: 1) use other longitudinal scientific mapping tools, for example CiteSpace or SciMAT; 2) use other bibliometric techniques (e.g., bibliographic coupling); 3) set additional 5-year periods to analyse changes in research fronts; 4) include other documents such as books or conference proceedings. While our research was based on triangulation of data, and efforts were made (such as dividing the corpus into several periods) to reduce some biases in scientific publication such as authority bias, our research was limited to peer-reviewed articles in English. From an artificial intelligence point of view, it is still a challenge to analyse texts in different languages because of issues such as multilingual polysemy and synonymy. Although more complex, future research may include multi-language research, book chapters, and conference communications.
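Since the choice of the number of factors/dimensions is flagged above as an open issue, a minimal sketch of one possible screening step is given below. It uses scikit-learn's TruncatedSVD (i.e., latent semantic analysis) on a tiny, invented term-document matrix; the documents, the variance threshold and the variable names are illustrative assumptions, not the procedure actually applied to the P1-P3 corpora.

```python
# Hedged sketch: screening the number of latent factors for a term-document
# matrix, as one way to address the dimensionality-selection issue above.
# The documents and the 65% variance threshold are purely illustrative.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

documents = [
    "service innovation in service systems",
    "service dominant logic and value co-creation",
    "digital platform ecosystems and boundary resources",
    "public service logic and citizen value",
]

tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(documents)

max_k = min(X.shape) - 1                      # upper bound on retained factors
svd = TruncatedSVD(n_components=max_k, random_state=0)
svd.fit(X)

cumulative = np.cumsum(svd.explained_variance_ratio_)
# Keep the smallest number of factors explaining at least 65% of the variance.
k = min(int(np.searchsorted(cumulative, 0.65)) + 1, max_k)
print("explained variance per factor:", np.round(svd.explained_variance_ratio_, 3))
print("suggested number of factors:", k)
```

In practice the chosen threshold (or a scree/elbow criterion) would itself be a judgement call, which is exactly the limitation discussed above.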
Conclusions

This research is in agreement with the vision of the literature review proposed by Boell & Cecez-Kecmanovic (2014): "Generally, the term 'literature review' can refer to a published product such as literature reviews presented as parts of research reports (e.g. in papers or theses) or a stand-alone literature review publication. Literature reviews examine and critically assess existing knowledge in a particular problem domain, forming a foundation for identifying weaknesses and poorly understood phenomena, or enabling problematization of assumptions and theoretical claims in the existing body of knowledge... Literature reviews typically provide: an overview, synthesis and a critical assessment of previous research; challenge or problematize existing approaches, theories and findings; and identify or construct novel research problems and promising research questions (p. 258). Maps and classifications help in analyzing connections and disconnections, explicit or hidden contradictions, and missing explanations, and thereby identify or construct white spots or gaps... A critical assessment of the body of literature thus demonstrates that literature is incomplete, that certain aspects/phenomena are overlooked, that research results are inconclusive or contradictory, and that knowledge related to the targeted problem is in some ways inadequate... Ultimately such a mapping and classification allows the researcher to critically assess the body of literature, reveal weaknesses and underresearched problems and/or to problematize dominant knowledge claims (p. 267)".

In this sense, combining network-analytic, quantitative and qualitative approaches allowed us to answer the following research questions: From a longitudinal analysis of digital innovation, service innovation and service system research, what service logics are latent in the academic literature (English journal articles)? How has this intellectual structure evolved (1986-2015)?

This research has methodological and theoretical implications. From a methodological point of view, it provides a longitudinal quantitative-qualitative methodology for literature review, as well as the basis for a new inclusive and pluralist paradigm founded on the co-existence of service logics. From a theoretical point of view, this article proposed an integrative framework for designing pluralistic research on service logics and innovation in service systems. The analysis of the P1, P2 and P3 corpora showed the need for a change in the focus of service research: from a dominant perspective to a pluralistic perspective. A multilevel pluralistic approach can provide opportunities for the empirical resolution of theoretical discussions of service logics, service systems, innovation, and value creation. This article calls for expanding institutional research on service logics, based on analytical tools and pluralistic theoretical lenses. From the point of view of the sociology of science, our results corroborate the ideas of Whitley (1984) on fragmented adhocracies. This author affirms that the political structures of fragmented adhocracies are pluralistic and shifting, with dominant alliances being formed by temporary and unstable groups controlling resources and charismatic leaders (p. 341), and has drawn attention to certain contextual features of management fields that are likely to prevent the establishment of a more unified and integrated intellectual field.
The findings show that the evolution of the intellectual structure suggests that service innovation, service logics and service system research is moving slowly from a fragmented adhocracy towards integrated interdisciplinary service research. Service research needs to be redirected to propose pluralist approaches regarding service logics. It is necessary to offer alternatives based on the co-existence of service logics, in parallel to approaches based on a service-dominant logic, the pillar of Service Science. Considering the limitation of mostly analysing a dominant-logic approach, a reorientation of service logics research is essential. By studying several service research communities over time, this article highlights the need for new paradigms or research programmes. In this sense, the philosophy-of-science perspectives of Popper, Kuhn, Lakatos or Feyerabend could be useful to restructure research on service logics and innovation in service systems. Research in service, service logics and service system innovation will then have come a long way: 1) from fragmented knowledge split into disciplinary silos; 2) to a dominant logic; 3) towards an interdisciplinary, pluralist and ecological way of thinking based on the co-existence of inclusive (individual and collective) service logics as an alternative to a dominant-logic research design.

Data repository

Raw and processed data are available in the following repository: https://github.com/milensys/Article_Data.

Bibliography

Abernathy, W. J., & Utterback, J. M. (1978). Patterns of Industrial Innovation. Technology Review, 80, 40-47. URL: http://teaching.up.edu/bus580/bps/Abernathy%20and%20Utterback%2C%201978.pdf.

Akaka, M. A., Vargo, S. L., & Lusch, R. F. (2013). The Complexity of Context: A Service Ecosystems Approach for International Marketing. Journal of International Marketing, 21, 1-20. URL: https://doi.org/10.1509/jim.13.0032. doi:10.1509/jim.13.0032.
Figure B.18: P1 Force Atlas 2 authors X themes network.
Table 1: Top ten English peer-reviewed journals of the P1 corpus (1986-1995).
sively on technological innovation. In this model, ICT enables service innovation when the service sectors adopt it. The data corpus used in this research shows that this model was criticised in the P2 period for being based only on the technological characteristics of innovation, thus underestimating non-technological innovations.
Table 2 presents an overview of the top ten journals composing the P2 corpus. The total P2 corpus consists of 57 peer-reviewed English journal articles representing the intellectual structure of service innovation, service logics and service system research. Part of the P2 corpus is available in the following WoS categories: Management; Business; Engineering, Industrial; Operations Research; Planning & Development; Engineering, Multidisciplinary; Information Science & Library Science; Political Science; and Public Administration. According to the P2 corpus, the five most cited articles are: [START_REF] Vargo | Evolving to a new dominant logic for marketing[END_REF], [START_REF] Gallouj | Innovation in Services[END_REF], [START_REF] Atuahene-Gima | Market orientation and innovation[END_REF], [START_REF] Sundbo | Management of Innovation in Services[END_REF] and Evangelista & [START_REF] Evangelista | Innovation in the Service Sector: Results from the Italian Statistical Survey[END_REF]. The P2 corpus shows that there are few papers about service logics and service systems, except for [START_REF] Vargo | Evolving to a new dominant logic for marketing[END_REF]'s article about a new dominant logic for marketing, Lee (2003)'s paper about smart product and service systems for e-business transformation, Tukker (2004)'s study regarding product-service system sustainability, and Mont (2002)'s research clarifying the concept of product-service systems. Factor analysis with MDS reduced these 41 constructs into 12 factors representing the latent intellectual structure of the P2 dataset (Figures 7a and 7b). The complete FA P2 matrix (Table .3) is available in the appendices. The twelve resulting factors are detailed in the following paragraphs.
Table 2: Top ten English peer-reviewed journals of the P2 corpus (1996-2005).
Table 3: Top thirty English peer-reviewed journals of the P3 corpus (2006-2015).
Table .1: Query research criteria in Scopus and Web of Science databases. Query: (ARTICLE OR EDITORIAL MATERIAL OR REVIEW) AND LANGUAGES: (ENGLISH) AND DOCUMENT TYPES: (ARTICLE OR REVIEW); Indexes = SCI-EXPANDED, SSCI, A&HCI, ESCI; Timespan = 1986-2015.
Variance explained by factor (two factor solutions appear interleaved in the source layout):
Twelve-factor solution — variance: 9.098, 7.162, 6.38, 5.808, 5.618, 5.164, 5.135, 4.972, 4.278, 4.07, 3.81, 3.46; cumulative variance: 9.098, 16.261, 22.642, 28.45, 34.069, 39.233, 44.368, 49.341, 53.62, 57.69, 61.5, 64.961.
Four-factor solution — variance: 24.406, 18.413, 12.738, 10.316; cumulative variance: 24.406, 42.819, 55.558, 65.875.
Factor loadings by factor/theme (Factor 1 to Factor 4):
Institutional 0,719 -0,641 -0,141 -0,205
National government 0,719 -0,641 -0,141 -0,205
National systems of innovation 0,719 -0,641 -0,141 -0,205
Technology policy 0,719 -0,641 -0,141 -0,205
Adoption and diffusion 0,148 0,497 0,364 0,017
Service innovation 0,133 0,641 -0,198 -0,424
Service sector/Service industries 0,091 0,294 0,581 0,055
Innovation in services 0,081 0,261 0,48 0,702
Absorptive capacity/ACAP 0,048 0,101 -0,204 0,592
Technological innovation 0,043 0,16 -0,052 0,093
Generation of new types of services 0,033 -0,003 0,908 -0,118
Schumpeterian 0,033 -0,003 0,908 -0,118
Information Technology 0 -0,024 0,006 0,934
Product-service innovation -0,008 0,141 -0,259 -0,118
Reverse product cycle theory -0,008 -0,09 0,717 0,481
Consumer/customer/user -0,149 0,106 0,667 -0,132
Service logic -0,333 -0,119 0,008 -0,334
Complex service systems -0,663 -0,502 -0,159 -0,113
Quality of service -0,663 -0,502 -0,159 -0,113
Service system -0,738 -0,461 -0,113 -0,332
Table .4: P3 Factors/themes (2006-2015)
Service science (SSMED) -0,04 0,083 -0,083 -0,061 0,015 -0,005 -0,056 -0,018 0,054 -0,084 -0,046 0,026 0,056 0,07 0,022 0,597 0,013 0,021 -0,074 -0,029 0,163 0,071 -0,119
Open innovation; Smart City/smart city services -0,041 -0,042 -0,01 -0,09 -0,13 -0,139 0,031 0,072 -0,023 0,118 0,106 0,043 0,213 -0,06 0,025 -0,04 -0,198 0,03 0,364 -0,059 -0,091 -0,069 -0,072 -0,063
Figure 6: P1 intellectual structure themes and theories over time (1986-1995) with latent service logics and service system research. (a) P1 Factors × Authors; (b) P1 Factors × Themes/Theories.
Figure 7: P2 intellectual structure themes and theories over time (1996-2005) with latent service logics and service system research. (a) P2 Factors × Authors; (b) P2 Factors × Themes/Theories.
Figure 8: P3 intellectual structure themes and theories over time (2006-2015) with latent service logics and service system research. (a) P3 Factors × Authors; (b) P3 Factors × Themes/Theories.
Figure A.12: P1 network. Degree centrality of themes (in red) by authors (in blue).
Figure A.13: P2 network. Degree centrality of themes (in red) by authors (in blue).
Figure A.14: P3 network. Degree centrality of themes (in red) by authors (in blue).
Figure B.16: Country of authors' affiliation P2.
Figure B.17: Country of authors' affiliation P3.
UCINET 6.620 output excerpts (Freeman's degree centrality measures):
P1 Freeman's degree centrality measures — Blau heterogeneity = 1.38%; normalized (IQV) = 0.45%.
Actor-by-centrality matrix saved as dataset P2_authors_X_authors-deg. Network centralization = 0.27%; Blau heterogeneity = 0.12%; normalized (IQV) = 0.05%.
Actor-by-centrality matrix saved as dataset P1_P2_P3_authors_X_authors-deg. Running time: 00:00:01.
Note: for valued data, the normalized centrality may be larger than 100; the centralization statistic is divided by the maximum value in the input dataset.
Output generated: 17 Nov 2016, 18:07:26, UCINET 6.620, Copyright (c) 1992-2016 Analytic Technologies |
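The appendix figures above come from UCINET. A minimal sketch of how comparable degree-centrality and centralization values could be reproduced with the networkx library is given below; the two-mode author-by-theme edge list is a hypothetical stand-in for the author-by-theme datasets named above, not the actual data.

```python
# Hedged sketch: degree centrality and Freeman-style centralization with
# networkx. The edge list is invented for illustration only.
import networkx as nx

edges = [  # (author, theme) pairs -- illustrative stand-in for the real datasets
    ("Gallouj, Faiz", "Service innovation"),
    ("Yoo, Youngjin", "Service innovation"),
    ("Vargo, Stephen", "Service-dominant logic"),
    ("Maglio, Paul", "Service system"),
    ("Maglio, Paul", "Service-dominant logic"),
]

G = nx.Graph()
G.add_edges_from(edges)

# Normalized degree centrality for every node (authors and themes alike).
centrality = nx.degree_centrality(G)

# Freeman's group degree centralization: sum of differences from the maximum
# normalized centrality, divided by the maximum possible sum (n - 2).
n = G.number_of_nodes()
c_max = max(centrality.values())
centralization = (sum(c_max - c for c in centrality.values()) / (n - 2)) if n > 2 else 0.0

for node, c in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{node:30s} {c:.3f}")
print(f"Network centralization: {centralization:.3%}")
```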
00410098 | en | [
"shs.socio",
"shs.stat",
"stat.me"
] | 2024/03/04 16:41:20 | 2010 | https://hal.science/hal-00410098/file/EJSS_2010.pdf | Nicolas Delorme
Julie Boiché
Michel Raspaud
Relative Age Effect in Elite Sports: Methodological Bias or Real Discrimination?
Keywords: Relative Age Effect, Soccer, Discrimination, Bias
Sport sciences researchers talk about a relative age effect when they observe a biased distribution of elite athletes' birthdates, with an over-representation of those born at the beginning of the competitive year and an under-representation of those born at the end. Using the whole sample of the French male licensed soccer players (n = 1,831,524), our study suggests that there could be an important bias in the statistical test of this effect. This bias could in turn lead to falsely conclude to a systemic discrimination in the recruitment of professional players. Our findings question the accuracy of past results concerning the existence of this effect at the elite level.
Relative Age Effect in Elite Sports: Methodological Bias or Real Discrimination?
During the last two decades, the Relative Age Effect (RAE) has been a widely studied and discussed phenomenon in the sport sciences literature [START_REF] Musch | Unequal competition as an impediment to personal development: A review of the Relative Age Effect in Sport[END_REF][START_REF] Cobley | Annual age-grouping and athlete development: A meta-analytical review of relative age effects in sport[END_REF]. Among elite athletes, this effect is illustrated by an over-representation of players born during the first two quarters (i.e., three-month periods) of a competitive year, and an under-representation of players born during the last two quarters.
This biased distribution is attributed to the age categories determined by sport organisations.
Those institutions traditionally gather young participants in categories covering two consecutive years of birth. Even if such a system is designed to balance the competition between players, it generates important differences in relative age: two children competing in the same category can be up to 23 months apart in age if they are not born in the same year, while children born in the same year can be up to 11 months apart.
Consequently, children born early in the competitive year are more easily identified as "talented" or "promising" than their counterparts born later in the year during the detection sessions organised by sport governing bodies [START_REF] Helsen | The relative age effect in youth soccer across Europe[END_REF]. Indeed, their initial advantage in relative age is accompanied by a more advanced physical (Delorme & Raspaud, in press) and cognitive (Bisanz, Morrisson & Dunn, 1995) development. Thanks mostly to their more developed physical attributes (e.g., in height, weight or strength), those children and adolescents benefit from a "biased" perception of their potential, which facilitates their recruitment into high-level structures. Once this first step is achieved, they can take advantage of an early exposure to elite practice with highly qualified coaches. This access to top-level competition is a key element for their future sport career [START_REF] Ward | Perceptual and cognitive skill development in soccer: The multidimensional nature of expert performance[END_REF][START_REF] Williams | Perceptual skill in soccer: Implications for talent identification and development[END_REF], considering the technical and strategic skills it may bring. 1
Other arguments concern the fact that sport should enable every child's blossoming and health [START_REF] Musch | Unequal competition as an impediment to personal development: A review of the Relative Age Effect in Sport[END_REF]. Indeed, the system seems detrimental for certain children's motivation, which may lead them to dropout and does not contribute to the physical activity habits they shall adopt during adulthood. On a more pragmatic plan, certain authors note that the RAE observed, as an artificial consequence of the youth competition structure, generates a loss in potentially talented players, which in the long run contributes to a decrease of level among professional and national teams (e.g., Pérez [START_REF] Pérez Jiménez | Relative age effect in Spanish association football: Its extent and implications for wasted potential[END_REF].
Because of those potential economical, psychological and health-related outcomes of RAE , the majority of authors agree about the necessity of carrying actions aiming at reducing this phenomenon or even make it disappear. With this regard, it was proposed to establish among young participants new categorisation systems, either based on biological (e.g., Baxter-Jones,1995) or chronological age (e.g., [START_REF] Boucher | The Novem System: A practical solution to age grouping[END_REF][START_REF] Hurley | A proposal to reduce the age discrimination in Canadian minor ice hockey[END_REF][START_REF] Hurley | Equitable birthdate categorization systems for organized minor sports competition[END_REF], so as to deal with the negative correlates of the differences in relative age. As sport organisations apparently ignore this phenomenon, some authors even called for a direct intervention of the government (e.g., [START_REF] Hurley | A proposal to reduce the age discrimination in Canadian minor ice hockey[END_REF].
In sum, the RAE is thus described as discriminatory because it puts at a disadvantage players born late in the competitive year, by reducing their chances of reaching the elite. The accumulation of studies reporting such an effect among high-level samples is likely to discourage anyone from doubting the existence of this systemic discrimination. The purpose of the present work is, however, to test the empirical reality of this discrimination. Indeed, it appears that an important methodological bias affects the studies reporting a RAE among elite athletes, which questions the validity of the conclusions presented. In the literature, the presence of a RAE is traditionally determined by examining whether or not there is a significant difference between the theoretical expected distribution of players by month (or quarter) and the observed distribution, which is done by performing a chi-square goodness-of-fit test or a Kolmogorov-Smirnov one-sample test. Depending on the study, four strategies can be found for calculating the expected distribution:
(a) An even distribution of birthdates by month or quarter is posited (e.g., [START_REF] Barnsley | Family planning : football style. The RAE in football[END_REF]). This choice is frequent when the research concerns an international sample.
(b) An even distribution is posited, controlling for the number of days in a month/quarter (e.g., [START_REF] Edgar | Season of birth distribution of elite tennis players[END_REF]). Once again, an international sample is often the justification for this choice.
(c) The birthdate statistics by month, gender and year for the corresponding national population are considered, using weighted mean scores (e.g., [START_REF] Helsen | The influence of relative age on success and dropout in male soccer players[END_REF]).
(d) The statistics of a European country are considered, using weighted mean scores, in a study concerning a European sample (e.g., [START_REF] Helsen | The relative age effect in youth soccer across Europe[END_REF]). This procedure is based on the work by [START_REF] Cowgill | Season of birth in man. Contemporary situation with special reference to Europe and Southern Hemisphere[END_REF] suggesting that the distribution by month/quarter is similar among European countries.
In the absence of specific methodological constraints — for instance, an international sample — and in order to obtain results as accurate as possible, it is recommended to use the third procedure to calculate the expected birthdate distribution. Whatever the strategy chosen, all four procedures may lead to a biased final interpretation. Indeed, whether the reference considered is an even distribution — procedures (a) and (b) — or national standards — procedures (c) and (d) — there is an implicit postulate according to which the birthdate distribution of the licensed players for a given activity is similar to that of the corresponding national population taken as reference. Yet, to our knowledge, this postulate has never been tested before, for any sport. It is noteworthy that, except for a handful of cases, elite athletes come from the national population of licensed athletes. As has been underlined, the differences in relative age are accompanied by significant disparities between players of the same age category, in terms of cognitive and physical development. It thus seems logical that sports where physical attributes represent advantages, such as ice-hockey or soccer, will be less attractive for young people born late in the competitive year, who are less physically mature.
As indicated above, the unequal distribution of birthdates among elite players would thus be partly due to a selection system that values physical development and discriminates against players born late in the competitive year. However, in order to conclude definitively that some kind of discrimination is present, it is necessary to show that the birthdate distribution of licensed players by month or quarter is identical to the distribution observed in the corresponding national population. If it is shown that there already exists an unequal distribution among the whole population of licensed players in a given activity, the selection system and the recruitment operated by the different professional channels should not be pointed out as responsible for the unequal distribution among the elite. Conversely, a "self-restriction" process inhibiting certain young people and preventing them from even beginning the activity, as well as a quicker dropout of players born at the end of the competitive year, could in this case account for this phenomenon at the highest level.
Based on the birthdates of all French male players affiliated to the French Soccer Federation (FSF) for the 2006-2007 season, the purpose of this study was twofold. First, we aimed to examine whether the distribution of the birthdates in this sample was identical to the one observed in the French population for the corresponding years of birth. Next, we tested whether using all the licensed players versus the national population as the reference to calculate the expected birthdate distribution has an impact on the conclusion drawn regarding the distribution observed among elite players (i.e., French players of the first division championship).
Material and Methods
Data collection
For the purpose of the present study, the birthdates of all male French players affiliated to the FSF (n = 1,831,524) during the 2006-2007 season were collected from the federation database. Foreign players were excluded from the sample, in order to make sure that all participants were subject to the same cut-off date for age categorisation, and to enable a relevant comparison with the birthdates' distribution observed among the French population. The birthdates of players from the French first division championship (n = 351)
were collected through the rosters of the Professional Soccer League (PSL). Once again, foreign players were excluded from the sample. Among males, the FSF distinguishes 7 age categories: "less than 7 years", "less than 9 years", "less than 11 years", "less than 13 years", "less than 15 years", "less than 18 years" and "adults".
Data Analysis
For each of the 7 FSF age categories, as well as for the sample of players from the PSL, the players' birthdates were classified into 4 quarters. Since the cut-off date used to form age categories has been modified by the FSF [START_REF] Jullien | Influence de la date de naissance sur la carrière professionnelle des joueurs de football français [Influence of birth date on the career of French professional soccer players[END_REF], the players born before 1982 were classified from Q1 (August-October) to Q4 (May-July) and the players born in 1982 and after were classified from Q1 (January-March) to Q4 (October-December).
Concerning the players affiliated to the FSF, for each age category, the expected distribution was calculated based on the national birth statistics by month and year for males, using weighted mean scores. Those data were obtained through the National Institute of Statistics and Economic Studies (INSEE). Regarding professional players, two chi-square goodness-of-fit tests were carried out with the two following procedures for the calculation of the expected birthdate distribution:
(a) The national birth statistics by month and year for males, using weighted mean scores.
(b) The statistics by month and year observed for the whole corresponding population of male players affiliated to the FSF, using weighted mean scores.
The tests were conducted with the Statistica 6.1 software (StatSoft Inc.), with a significance threshold fixed at .05.
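As a complement to this description, a minimal sketch of the goodness-of-fit procedure is given below; it uses scipy in place of the Statistica software mentioned above. The observed counts are those later reported for first-division players (Table 2), while the two sets of reference proportions are invented placeholders, not the actual FSF or INSEE statistics.

```python
# Hedged sketch of the chi-square goodness-of-fit procedure described above.
# The reference proportions below are illustrative placeholders only.
from scipy.stats import chisquare

observed = [108, 89, 81, 73]                 # first-division players per quarter Q1..Q4

# Reference (a): proportions derived from national birth statistics,
# weighted by year of birth (invented values for illustration).
national_props = [0.247, 0.252, 0.256, 0.245]
# Reference (b): proportions observed among all licensed players
# (again, invented values).
licensed_props = [0.262, 0.255, 0.248, 0.235]

total = sum(observed)
for label, props in [("national population", national_props),
                     ("licensed players", licensed_props)]:
    expected = [p * total for p in props]
    chi2, p_value = chisquare(observed, f_exp=expected)
    print(f"{label}: chi2 = {chi2:.2f}, p = {p_value:.3f}")
```

The key point illustrated here is that the same observed counts can yield a significant or non-significant result depending solely on which reference distribution is used to build the expected counts.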
Results
Discussion
Because the method traditionally used to determine the presence of a RAE among elite athletes does not rule out a bias linked to the choice of national standards in the calculation of the expected distribution for the statistical test, the first aim of this study was to compare the birthdate distribution of the whole population of male players affiliated to the FSF with that of the French population. This comparison allowed us to test the postulate according to which both are, as a matter of fact, similar.
This study reveals a systematic, significant RAE for all age categories distinguished by the FSF, that is to say, an over-representation of players born in Q1 and Q2 and an under-representation of players born in Q3 and Q4. Those results thus indicate that the "classical" methods used to assess and interpret the presence of a RAE may not always be relevant but instead might introduce bias in the conclusions drawn regarding this phenomenon among the elite. Indeed, the presence or absence of this effect is most of the time examined by looking at the players' birthdate distribution and comparing it to national standards. Such a strategy is based on the implicit premise that the population of licensed players for one activity is similar to the national distribution. However, the present results suggest that, at least in the case of French soccer, there already is a disparity in the players' birthdate distribution, from the "less than 7" to the "adults" category.
This has crucial implications and calls for a methodological change in the calculation of expected distributions in studies investigating the RAE. A study aiming to demonstrate this effect with certainty should use as theoretical distribution that of the population of licensed players, instead of the corresponding national statistics. Indeed, one could hastily conclude that an asymmetrical distribution of birthdates among elite players results from a RAE, whereas in reality it is only representative of the distribution observed in the population of licensed players. The over-representation of players born at the beginning of the competitive year, and the under-representation of those born at the end, may not be a consequence of a selection system valuing the physical development of young players, which may advantage children and adolescents born during the first months of the year, but could simply be the mimetic expression of the mass of licensed players.
In this perspective, it would be erroneous to conclude that there is discrimination toward players born at the end of the competitive year. The potential impact of this bias is far from negligible and might even lead to opposite conclusions about this phenomenon. In this vein, the results concerning the birthdate distribution of the 351 French soccer players of the first division championship during the 2006-2007 season vary according to the reference used to calculate the expected distribution for the chi-square goodness-of-fit test. When the French population is used, there is a biased distribution, but with the population of licensed soccer players a similar distribution is observed. In the first case, the researcher will be likely to conclude that there is a discriminatory effect due to the recruitment practices of the various professional bodies, whereas in the second case he/she should conclude that no such effect exists. Because the great majority of the players who reach the elite level come from the population of licensed players, and went through all the detection and selection sessions operated by federal bodies, we would recommend using the birthdate distribution of this population, and not national data, in order to calculate the expected distribution. This precaution would make it possible to avoid a bias likely to drastically distort the results obtained.
Conclusion
If the present results, in themselves, do not demonstrate the absence of a RAE in elite sport, they nevertheless question the accuracy of the tests performed in past studies investigating this phenomenon and, consequently, the conclusions drawn. In this regard, if a biased distribution already exists among the whole population of licensed players in one activity, it is normal that, by mimicry, such asymmetry also arises among the elite. Taking the national population as reference, one could be prone to hastily conclude that there is discrimination resulting from the way professional sport organisations recruit their athletes.
This study further reveals a significant RAE for all age categories distinguished by the FSF. The fact that a biased distribution was systematically observed for regular players calls for additional work examining what mechanisms lead to such an effect. We assume that it results from two major processes: first, a phenomenon of "self-restriction" that prevents children and adolescents born at the end of the competitive year from beginning to practise this sport; second, higher rates of dropout among those who begin to play but encounter a temporary physical inferiority compared with players born early in the year who belong to the same age category.
Indeed, if the asymmetrical distribution observed at the higher level is not the result of a biased selection, it looks like the one observed for all licensed players reflects a systemic discrimination against young people born late in the competitive year. Given the multiple benefits of moderate sport participation for social acceptance, psychological self-perceptions and health, such discrimination and its mechanisms deserve to be cautiously examined by future research conducted on the RAE.
Table 1 presents the birthdate distribution by quarter for each age category identified for male players by the FSF during the 2006-2007 season.

**** Please insert Table 1 near here ****

A chi-square goodness-of-fit test taking the distribution of licensed players as the observed distribution and national values as the expected distribution reveals statistical differences for all age categories: less than 7 (χ2 = 566.75, d.f. = 3, P<.0001), less than 9 (χ2 = 237.90, d.f. = 3, P<.0001), less than 11 (χ2 = 269.97, d.f. = 3, P<.0001), less than 13 (χ2 = 346.07, d.f. = 3, P<.0001), less than 15 (χ2 = 619.08, d.f. = 3, P<.0001), less than 18 (χ2 = 752.99, d.f. = 3, P<.0001) and adults (χ2 = 1721.87, d.f. = 3, P<.0001). The results reflect a classical RAE with an over-representation of players born in Q1 and Q2, and an under-representation of players born in Q3 and Q4.

Tables 2 and 3 present the analyses of the birthdate distribution by quarter for professional players, using as theoretical distribution either the corresponding national population or the whole population of licensed players. If the national population is used as reference to calculate the expected distribution, the results indicate a significant RAE among professional players from the PSL (χ2 = 8.31, d.f. = 3, P<.05). Conversely, if the population of licensed players is used to calculate the expected distribution, one cannot conclude that there is a biased distribution among first division players (χ2 = 4.69, d.f. = 3, P<.20).

**** Please insert Table 2 near here ****

**** Please insert Table 3 near here ****
Table 1. Season of birth of French male soccer players (2006-2007). Note: ∆ is the difference between the observed distribution and the theoretical expected distribution.
Category Q1 Q2 Q3 Q4 Total χ 2 P
Adults 182,945 180,229 175,775 176,111 715,060 1721.87 <.0001
(∆) (+7,257) (+8,388) (-1,922) (-13,723)
Under 18 43,920 43,771 41,799 37,430 166,920 752.99 <.0001
(∆) (+3,712) (+956) (-719) (-3,949)
Under 15 38,476 37,155 36,361 32,257 144,249 619.08 <.0001
(∆) (+3,358) (+582) (-857) (-3,083)
Under 13 42,432 43,535 42,586 39,185 167,738 346.07 <.0001
(∆) (+2,518) (+744) (-637) (-2,625)
Under 11 53,187 54,469 54,861 50,967 213,484 269.97 <.0001
(∆) (+2,337) (+754) (-241) (-2,850)
Under 9 55,847 57,220 57,378 52,840 223,285 237.90 <.0001
(∆) (+2,256) (+660) (-180) (-2,736)
Under 7 50,965 51,749 52,298 45,776 200,788 566.75 <.0001
(∆) (+2,742) (+1,340) (+247) (-4,329)
Table 2. Season of birth of first division players (2006-2007).
Q1 Q2 Q3 Q4 Total χ 2 P
League 1 108 89 81 73 351 8.31 <.05
(∆) (+23) (-4) (-8) (-11)
Note: ∆ is the difference between the observed distribution and the theoretical expected distribution.
Table 3. Season of birth of first division players (2006-2007).
Q1 Q2 Q3 Q4 Total χ 2 P
League 1 108 89 81 73 351 4.69 <.20
(∆) (+16) (-1) (-6) (-9)
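The two statistics reported in Tables 2 and 3 can be approximately re-derived from the published values. The sketch below reconstructs the expected counts from the rounded Δ values; because the deltas are rounded, the resulting statistics only approximate the reported χ2 = 8.31 and χ2 = 4.69.

```python
# Hedged sketch: re-deriving the chi-square statistics of Tables 2 and 3
# from the published observed counts and rounded deltas (observed - expected).
observed = [108, 89, 81, 73]
deltas_table2 = [+23, -4, -8, -11]   # Table 2: national-population reference
deltas_table3 = [+16, -1, -6, -9]    # Table 3: licensed-player reference

for name, deltas in [("Table 2", deltas_table2), ("Table 3", deltas_table3)]:
    expected = [o - d for o, d in zip(observed, deltas)]
    chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    print(f"{name}: expected ≈ {expected}, chi2 ≈ {chi2:.2f}")
```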
Footnotes
1 For an exhaustive presentation on the mechanisms, factors and moderators of the RAE, see the review of the literature of [START_REF] Musch | Unequal competition as an impediment to personal development: A review of the Relative Age Effect in Sport[END_REF]. |
04101037 | en | [
"info",
"math"
] | 2024/03/04 16:41:20 | 2023 | https://hal.science/hal-04101037/file/A_Channel_Selection_Model_based_on_Trust_Metrics_for_Wireless_Communications.pdf | Graduate Student Member, IEEE Claudio Marche
email: [email protected]
Senior Member, IEEE Michele Nitti
email: [email protected].
A Channel Selection Model based on Trust Metrics for Wireless Communications
Keywords: Countermeasure In Wireless Networks, Communications Channel Selection, Trustworthiness Management, Jamming Attacks
Dynamic allocation of frequency resources to nodes in a wireless communication network is a well-known method adopted to mitigate potential interference, both unintentional and malicious. Various selection approaches have been adopted in the literature to limit the impact of interference and keep a high quality of wireless links. In this paper, we propose a different channel selection method, based on trust policies. The trust management approach proposed in this work relies on the node's own experience and on trust recommendations provided by its neighbourhood. By means of simulation results in the Network Simulator NS-3, we demonstrate the effectiveness of the proposed trust method while the system is under jamming attacks, with respect to a baseline approach. We also consider and evaluate the resilience of our approach with respect to malicious nodes that provide false information regarding the quality of the channel in order to induce bad channel selections. Results show that the system is resilient to malicious nodes, maintaining around 10% more throughput than an approach based only on the node's own experience, in the presence of 40% malicious nodes, for both individual and collusive attacks.
I. INTRODUCTION
The ubiquitous nature of wireless devices, pushed by the advent of 5G and new technologies such as Artificial Intelligence (AI), is contributing to making wireless services a daily-life presence.
Wireless network capacity has been drastically boosted with the advent of new services and the constant evolution of wireless standards towards 802.11ax for Wi-Fi and the 5th generation (5G) of cellular systems [START_REF] Miao | Fair and dynamic data sharing framework in cloudassisted internet of everything[END_REF], [START_REF] Amuru | On jamming against wireless networks[END_REF]. High-speed services based on wireless technologies are expected to become still more ubiquitous and to support a massive deployment of wireless communicating objects, enabling the Internet of Everything (IoE) paradigm. The IoE concept encompasses things, processes, people and data [START_REF] Langley | The internet of everything: Smart things and their impact on business models[END_REF]. The inherent openness of wireless technology, together with its increasing use, also makes security threats grow. In particular, wireless communication networks are very sensitive to interference: a mitigation approach adopted to face this interference effect is channel hopping, a method that dynamically allocates frequencies to the nodes of a wireless communication system. Different approaches have been proposed in the literature with the main objective of mitigating interference, both unintentional and malicious.
In the latter case, we talk about Denial of Service (DoS) attacks, which can be realized as a jamming/adversarial attack against a victim receiver. The effectiveness of this kind of attack has increased dramatically with newly developed techniques, such as reactive jamming, where the jammer decides where to focus for maximum impact by performing cognitive radio sensing [START_REF] Chang | Fast ip hopping randomization to secure hop-by-hop access in sdn[END_REF].
In these terms, we propose a channel selection model able to assist wireless nodes in choosing the best channel to transmit on, which is suitable for wireless devices and does not require modifications to the standard. The proposed model, developed for wireless technologies based on an association phase, can be implemented in several wireless technologies, spanning from IEEE 802.15.4 to Z-Wave, just to cite a few. To this end, we draw inspiration from the concept of trustworthiness and take advantage of well-known trust management techniques. In this scenario, the communication between nodes involves two different roles: the first acts as the trustor, which has to trust the other one, the trustee, which provides the required data. However, misbehaving devices can perform different types of attacks and can disrupt communications for their own gain. Trustworthiness management techniques have to solve the essential issue of detecting which channels are affected by malicious behaviours and so lead the nodes to successful collaboration. Our paper works in this direction, aiming to estimate the best wireless channel and avoid jamming interference, and thus provides the following contributions:
• First, we propose a trust management model, based on experience and recommendations, able to assist wireless nodes in channel selection and that does not require any modification of the standard. Thanks to the model, the nodes should select the most reliable channel so as to prevent jamming attacks or other interference (a simplified, illustrative sketch of such a trust-based selection is given at the end of this introduction).
• Second, we analyze different behaviours of jamming attacks and propose a new dynamic one, which is then used to test the resiliency of our model and of common wireless approaches.
• Third, we conduct extensive evaluations by comparing the proposed channel selection algorithm with two models, i.e., the classical approach described in the 802.11 standards and another one that considers only the past experiences of the nodes. The evaluation results show the importance of experience and recommendations in preventing jamming attacks and, moreover, the influence of the time window under dynamic jamming.
The rest of the article is organized as follows: Section II presents a brief survey on channel selection, the possible types of jamming attacks and the importance of trust mechanisms in wireless networks. In Section III, we describe the scenario and introduce the notation used. Section IV illustrates the proposed trust management model, while Section V provides details of simulations and results. We conclude the paper with a brief discussion and some final remarks in Section VI.
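Before moving to the related work, the deliberately simplified sketch below illustrates how a node's own experience and neighbour recommendations could be fused into a per-channel trust score. The exponential-smoothing update, the 0.7/0.3 weighting and the neutral prior are illustrative assumptions, not the exact formulation of the model proposed in Section IV.

```python
# Hedged sketch of a per-channel trust score combining a node's own
# experience with neighbour recommendations. Parameter values are
# illustrative choices only, not the paper's actual model.
from statistics import mean

class ChannelTrust:
    def __init__(self, channels, alpha=0.3, w_exp=0.7, w_rec=0.3):
        self.alpha = alpha                                  # smoothing factor for own experience
        self.w_exp, self.w_rec = w_exp, w_rec               # weights for experience vs. recommendations
        self.experience = {c: 0.5 for c in channels}        # neutral prior
        self.recommendations = {c: [] for c in channels}    # neighbour reports in [0, 1]

    def update_experience(self, channel, success):
        # success: 1.0 if the transmission on this channel succeeded, else 0.0
        old = self.experience[channel]
        self.experience[channel] = (1 - self.alpha) * old + self.alpha * success

    def add_recommendation(self, channel, value):
        self.recommendations[channel].append(value)

    def trust(self, channel):
        rec = mean(self.recommendations[channel]) if self.recommendations[channel] else 0.5
        return self.w_exp * self.experience[channel] + self.w_rec * rec

    def best_channel(self):
        return max(self.experience, key=self.trust)

# Example: channel 3 is being jammed, so its direct experience degrades,
# while a neighbour recommends channel 2 as reliable.
node = ChannelTrust(channels=[1, 2, 3])
for _ in range(10):
    node.update_experience(3, success=0.0)
    node.update_experience(1, success=1.0)
node.add_recommendation(2, 0.8)
print("selected channel:", node.best_channel())
```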
II. RELATED WORKS
In this Section, we focus on the most representative works related to three different key aspects of our approach. The first aspect revises the most representative jamming attacks described in the literature, while the second one evaluates the channel selection mechanisms used to mitigate the interference impact, which could be effective also against jamming. Moreover, we finalize this Section by reporting the most recent works on trust algorithms adopted in wireless networks.
A. Jammer Attacks
The massive use in our daily life of wireless services makes security threats an important concern to be considered, above all in terms of availability of wireless communications, data integrity, etc [START_REF] Yang | Mac technology of ieee 802.11 ax: Progress and tutorial[END_REF]. The interference from other networks produced by simultaneous transmissions, i.e., inter-network interference, significantly reduces the network throughput and affects all the ongoing transmissions [START_REF] Liu | Throughput analysis of ieee 802.11 wlans with internetwork interference[END_REF]. Radio jamming is certainly one of the major threats to which wireless networks are particularly prone. With the advancement of the software-defined radio approaches, it has become quite easy to launch a jamming attack [START_REF] Pirayesh | Jamming attacks and anti-jamming strategies in wireless networks: A comprehensive survey[END_REF]. Despite the increasing evolution of wireless communication technologies, most of them are vulnerable to jamming attacks, due to the lack of adequate countermeasures.
There are several types of jamming attacks, some of them were initially conceived for Wi-Fi technology but have been then proven to be effective also for other types of wireless networks. Among them, we can count constant jamming attacks [START_REF] Pelechrinis | Denial of service attacks in wireless networks: The case of jammers[END_REF], where the jammer constantly broadcasts a signal over time. Even though this type of attack is really effective, by reaching a 100% of packet error rate, its main weakness is the energetic inefficiency. Another type of well-known attack in literature is reactive jamming, relying on the knowledge of the channel from the attacker, that sends an interference based on the detection of a legitimate transmitted packet [START_REF] Cai | Joint reactive jammer detection and localization in an enterprise wifi network[END_REF]. This type of attack is more energy-efficient than constant, but it requires a tight timing constraint, on the order of 4µs for OFDM, in order to make the switching between listening and transmitting. Another weakness aspect is related to the length of the detected packet. This type of attack is ineffective for short packet sizes.
Other types of attacks are random and periodic jamming attacks. The former ones are considered memoryless attacks and consist of sending signals at random times, and then the offender switches to sleep mode [START_REF] Bayraktaroglu | On the performance of ieee 802.11 under jamming[END_REF]. In the periodic version, the attacker sends signals at precise and predefined times. They are certainly more energy efficient than constant attacks, but less effective. Other types of attacks have been expressly conceived for Wi-Fi networks, and in particular for the physical and MAC layers. One of them is represented by a timing synchronization attack. Several attacks have been proposed, able to thwart the synchronization signal time, with the main aim of disrupting the start-of-packet procedure. In particular, the authors of [START_REF] Pan | Jamming attacks against ofdm timing synchronization and signal acquisition[END_REF] have proposed preamble spoofing attacks, by injecting the same preamble as the legitimate user, in order to make the receiver incapable of decoding the legitimate data. Generally, this type of attack is based on a very good knowledge of the network timing. Another type of jamming attack is represented by the frequency synchronization jamming attacks, where an offset of the carrier frequency may cause a deviation from the orthogonality and introduces a phase deviation, with an important degradation of the SNR and the demodulation performance. Channel estimation jamming attacks are another type of jamming based on generating malfunctioning channel estimation and channel equalization. If the accuracy of the channel estimation is impacted as shown in [START_REF] Clancy | Efficient ofdm denial: Pilot jamming and pilot nulling[END_REF], the degradation of the network can be very high. Anyway, the results in [START_REF] Clancy | Efficient ofdm denial: Pilot jamming and pilot nulling[END_REF] have been proved via simulation, but nulling attacks in real-world scenarios seem complicated to be realized, due to the mismatches between the attacker and the legitimate device, both in terms of timing and phase.
The proposed approach is tested against a complex jamming strategy, namely reactive, according to which an attacker disturbs only communications that have already started and so targeting packets that are already on the air. Two different behaviours are implemented: a static behaviour, where the jammer can attack only a specific channel, and a new proposed dynamic one, thanks to which the jammer can change the target by jumping into different channels.
B. Channel Selection Models
This subsection provides an overview of the background on channel selection in wireless technologies. In recent years, the community has strongly focused on the issue of interference and jamming attacks in wireless networks, and several works have been proposed. Below, we provide a brief survey of some of the most appreciated approaches in the literature, without claiming to be exhaustive.
In these terms, two well-known channel selection models based on machine learning techniques are illustrated in [START_REF] Wang | Channel selective activity recognition with wifi: A deep learning approach exploring wideband information[END_REF] and [START_REF] Kurasawa | A high-speed channel assignment algorithm for dense ieee 802.11 systems via coherent ising machine[END_REF]. In the first work, the authors propose an advanced deep-learning mechanism to select available wireless channels with good quality and avoid interference from external communications. The Wi-Fi channel is selected based on the signal strength and the channel quality in terms of Channel State Information (CSI); the model proposes discarding the most crowded wireless environments. In the second work, the authors illustrate a channel assignment approach using a neural network, namely Coherent Ising Machines (CIM), operating at the quantum limit. The proposed centralized controller selects the best channel by evaluating all the information periodically sent by the Access Points (APs); the optimization function is formulated in order to maximize the throughput and minimize the interference between APs.
Other two approaches based on machine learning techniques are illustrated in [START_REF] Dakdouk | Reinforcement learning techniques for optimized channel hopping in ieee 802.15. 4-tsch networks[END_REF] and [START_REF] Hamdi | Lora-rl: Deep reinforcement learning for resource management in hybrid energy lora wireless networks[END_REF]. In the first work, the authors propose a combined mechanism that integrates specific machine learning algorithms and Time Slotted Channel Hopping (TSCH) in order to select high-performance channels in a ZigBee scenario. In specific, the authors evaluate 9 different Multi-Armed Bandit (MAB) algorithms and illustrate how their combination can improve the packet delivery ratio. In the second work, the authors depict a channel and spreading factor assignment to minimize the grid energy cost in a green LoRa network, powered by both a renewable energy source and the conventional grid. Based on machine learning approaches, the proposed model is then tested under different scenarios.
Moreover, an approach based on advanced machine learning algorithms is proposed in [START_REF] Davaslioglu | Deepwifi: Cognitive wifi with deep learning[END_REF]. The authors illustrate a protocol based on a deep learning technique that mitigates interference through the analysis of the spectrum. The channel is sensed, and its spectrum is then analyzed and classified by a deep neural network that is responsible for detecting unusual behaviours, such as jamming attacks. Two further machine-learning-based approaches are presented in [START_REF] Li | A lightweight decentralized reinforcement learning based channel selection approach for high-density lorawan[END_REF] and [START_REF] Jeunen | A machine learning approach for ieee 802[END_REF]. The first work depicts a decentralized learning-based channel selection approach for IoT systems. The approach allows IoT devices to select appropriate channels based on Acknowledgement (ACK) information exchanged among devices, with low computational complexity. The second work illustrates an approach for performing channel allocation based on graph analysis and regression techniques to minimize the overlap among APs. The interference is reduced through the combination of passive measurements on the medium, such as the Received Signal Strength Indicator (RSSI), and the analysis of the behaviour of the neighbours and the community.
Moreover, channel selection models based on different techniques are depicted in the literature; among them, two works developed for a Bluetooth scenario are presented in [START_REF] Pang | Bluetooth low energy interference awareness scheme and improved channel selection algorithm for connection robustness[END_REF] and [START_REF]A probability-based channel selection algorithm for bluetooth low energy: A preliminary analysis[END_REF]. The first work illustrates an adaptive frequency hopping technique based on linear programming, to prevent interference while keeping the communication process going. The authors propose an interference scheme based on the packet status of a BLE connection and an algorithm that helps to choose a channel based on probability. In the second work, the authors investigate various interference levels and depict an improved channel selection algorithm combining different channel maps gathered from the environment; the model is then tested analysing the relationship between transmission failure probability and packet loss rate. Another recent approach is presented in [START_REF] Shraideh | Joint channel and spreading factor selection algorithm for lorawan based networks[END_REF]. The work illustrates a model that supports assigning the best channel and selecting the spreading factor to achieve the rate demand of end devices in LoraWANbased networks. The algorithm, simulated using Matlab, proposes to improve throughput, reduce power consumption and guarantee link reliability.
The last group of articles mainly focuses on the analysis of collaboration between devices. Among them, in [START_REF] Zakrzewska | Dynamic channel bandwidth use through efficient channel assignment in ieee 802.11 ac networks[END_REF], the authors propose a channel selection and allocation method for wireless networks in two typical scenarios, i.e. enterprise and residential. A bonding matrix is created to represent the channel usage for a considered Access Point (AP) and its neighbours. Then, a specific bandwidth is allocated for the transmission and the channel with the lowest utilization is selected. Another approach where collaboration is analyzed is presented in [START_REF] Gramacho | A game theoretical approach to model the channel selection dynamics in non-coordinated ieee 802.11 networks[END_REF]. The authors map the process of interference minimization onto a competitive game in the sense of Game Theory, where the APs represent the players and the channels the possible strategies. The competition in the wireless network, i.e. the game, is tested with two different node behaviours: the first demands lower collaboration, while the other assumes collaboration among all the nodes in order to reach a maximum global benefit.
However, to the best of our knowledge, even though such advanced techniques achieve acceptable results, they exhibit several gaps. For example, many of the presented works are not suitable for devices with restricted, low computation capabilities, so they often require central entities or controllers on which complex algorithms are implemented. Furthermore, standard modifications represent another problem: several models propose modifying the physical- or MAC-layer frame formats, which are already well established and accepted by the community. Two additional gaps are also apparent: first, many of these works need an additional radio unit, configured in monitor mode, as a support for the master unit that gives network access to the nodes; second, many works do not test their approaches under interference and attacks, so the authors cannot estimate the resiliency in adverse scenarios. To sum up, Table I summarizes the most representative contributions described above.
The approach proposed in this work aims to select the channels based on their reliability, obtained by combining a node's own experience with the recommendations of its neighbours. The information needed to compute the channel's trust is integrated into the standard, so the approach does not require additional messages to be exchanged. Moreover, no central controllers are required, and each device can independently estimate the trustworthiness of the channels and of its neighbours without an additional radio unit.
C. Trust Mechanisms in Wireless Networks
Most of the contributions on trust approaches applied in wireless networks are integrated into the routing mechanisms. Very few papers focus on spectrum allocation or channel selection based on the trust concept. One of the first contributions in this direction is [START_REF] Pei | A trust value-based spectrum allocation algorithm in cwsns[END_REF], where the authors propose a trust algorithm that combines the trust value and the method of spectrum allocation. During the spectrum allocation, the reputation value is fixed and cannot be changed. In [START_REF] Changiz | Trust management in wireless mobile networks with cooperative communications[END_REF], authors combine relay selection with channel conditions information to obtain a modified trust model, that will be applied along with the source, the relay and the destination. In [START_REF] Nitti | Using an iot platform for trustworthy d2d communications in a real indoor environment[END_REF], authors take advantage of the trust concept in order to improve device-to-device (D2D) communications by gathering
both Quality of Service (QoS) and spectrum sensing data and weighting the received information using a social algorithm. Another approach is illustrated in [START_REF] Sun | Reputation-based spectrum sensing strategy selection in cognitive radio ad hoc networks[END_REF], where the authors propose a reputation-based scheme for cooperative spectrum sensing. The approach is based on proper knowledge of the spectrum and also relies on neighbourhood information. They also consider Spectrum Sensing Data Falsification (SSDF) attacks, based on false sensing information whose main objective is to deteriorate the network's performance. The method proposed in that work is close to the approach developed here, but an important difference is that we target a generic wireless network rather than a cognitive radio context with a distinction between primary and secondary users. In general, studies have proved the validity of the trust concept in wireless networks; however, it is also necessary to investigate the attacks introduced with trust management itself, i.e. attacks on recommendations, in which the reputation of good nodes is ruined when numerous malicious objects act alone or collude to intentionally disseminate bad recommendations [START_REF] Khan | Trust management in social internet of things: Architectures, recent advancements, and future challenges[END_REF].
III. SCENARIO
This paper proposes a trust management model able to assist wireless nodes, both static and mobile, in choosing the most trustworthy channel to transmit on. The requirements of the proposed approach are a distributed deployment of the nodes and the adoption of the Frequency Hopping Spread Spectrum technique. For these reasons, technologies such as IEEE 802.15.4, ad-hoc IEEE 802.11 and Z-Wave are candidates for the trust-based framework. The innovative part lies in involving all objects in the risk assessment, allowing the transmitter to select the best channel to communicate on and thus avoid any possible jammer in the network.
In our modelling, the set of wireless nodes is represented by N = {n 1 , ..., n i , ...n I } with cardinality I, where n i is the generic node. We can then describe the subjective topology of the network by making use of the set of distances of all the nodes in the network from node n i as
D_i = {d_{ij} : j ≠ i}.
The neighbours of the generic node n i are represented in our model by N i = {n j ∈ N : d ij < R i } that is the set of nodes that are within the transmission range R i of node n i .
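As an illustration of this notation, the short Python sketch below (hypothetical helper names, assuming two-dimensional Euclidean node positions) derives the distances D_i and the neighbour set N_i of a node from its transmission range.

```python
import math

def distances(positions, i):
    """D_i = {d_ij : j != i}, given a dict mapping node ids to (x, y) positions."""
    xi, yi = positions[i]
    return {j: math.hypot(xi - x, yi - y)
            for j, (x, y) in positions.items() if j != i}

def neighbours(positions, i, tx_range):
    """N_i = {n_j : d_ij < R_i}, the nodes within the transmission range R_i of node n_i."""
    return {j for j, d in distances(positions, i).items() if d < tx_range}

# Example: three nodes placed in a 40x40 m area.
positions = {1: (0.0, 0.0), 2: (10.0, 5.0), 3: (35.0, 30.0)}
print(neighbours(positions, 1, tx_range=20.0))   # -> {2}
```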
In the evaluated scenario, we consider the wireless spectrum as the resource to be monitored, so let C = {c_1, ..., c_x, ..., c_X} be the set of X possible channels. The goal of our paper is, for each node, to obtain a complete and trustworthy vision of the spectrum usage thanks to the neighbours' recommendations, in order to avoid malicious nodes that could affect the transmission, i.e. jammers. Nevertheless, the transmitter continuously monitors the transmission in terms of Packet Delivery Ratio (PDR), so that if its quality drops below a certain threshold due to interference from other nodes, the communication is immediately suspended. Figure 1 shows in detail the wireless network association procedure for the nodes and the contribution of the proposed trust management model.
The whole process starts whenever an application installed on a physical node, say node n_i, needs to transmit data to another node n_j. At first, node n_i sends probe requests to discover wireless nodes within its proximity to send data to, and if a response is received, the procedure moves to the authorization phase. After the discovery and authorization phases, node n_i has to decide on which channel to transmit its data: the proposed system makes use of the neighbours of n_i, i.e. the nodes in N_i, to identify the most reliable channel. The selection algorithm takes into consideration the sensed power and the experience of the neighbouring nodes and evaluates their recommendations, represented by n_z in Figure 1. Recommendations are integrated into the beacon frames, which are continually exchanged by wireless nodes. More details about beacon frames and recommendations are given in the next Section. As soon as the best channel is selected, the association phase starts and node n_i communicates the chosen channel to the receiver. The last phase is the communication itself, in which the nodes transfer data and the channel is continuously monitored to guarantee the quality of the transmission. When the transmission is over, the trustworthiness values are updated. Node n_i computes the trustworthiness of its N_i neighbours on the basis of its own experience and of their channel recommendations; in particular, node n_i evaluates the communication over the chosen channel through e^w_{i,x} for transaction w, so that E_{i,x} is the set of all the evaluations made by node n_i on channel c_x. Moreover, node n_i assigns a feedback f^w_{ij} to all the neighbours that provided information about channel c_x, so that F_{ij} is the set of all feedback assigned by node n_i to node n_j. Both e^w_{i,x} and f^w_{ij} are associated with a timestamp t_w, so that it is possible to know when they were generated and eventually discard them if they are outdated.
Finally, node n i updates the neighbours' trustworthiness values based on the assigned feedback: we refer to this trustworthiness with T ij , i.e. the trustworthiness of node n j seen by node n i . The details on how T ij is computed are explained in Section IV.
IV. TRUST MANAGEMENT MODEL
According to the presented scenario, we propose a decentralized model in which each node locally calculates and stores the information regarding its own channel experiences and the feedback needed to compute the trustworthiness level of its neighbours, so as to have its own opinion about the channels' status. This is intended to avoid a single point of failure and the tampering of trustworthiness values, and to easily identify malicious attacks that change their behaviour based on the requester, such as the Discriminatory Attack. Whenever a node n_i has data to transmit, it first needs to establish a connection with the recipient node on a given channel. In order to select the most reliable transmitting channel, node n_i senses the received power P_i on each channel c_x of interest, namely P_{i,x}, and also considers its neighbours' evaluations of their past experience, integrated into the probe requests, on all the channels, in order to evaluate the risk R_{i,x} associated with the transmission on each channel.
Node n i is then able to weight the received data and compute the resulting power for channel c x as follows:
P_x = P_{i,x} + R_{i,x} (1)
where the computed risk is used as an adjustment to the perceived power, to take into account the possibility of jammer nodes operating in that channel. Node n i will consider the channel as free for transmission if the combined received power is lower than a threshold. The risk assessment is computed taking into account both node n i 's experience and the experience of its neighbours:
R_{i,x} = U_{i,x} + U_{N_i,x} (2)
where U_{i,x} expresses the average experience of node n_i while U_{N_i,x} accounts for the experiences of all its neighbours when using channel c_x over a limited time window. This is useful to take into account that channel conditions can vary over time, so any outdated evaluations can be discarded. Let E*_{z,x} = {e^w_{z,x} ∈ E_{z,x} : (t_act − t_w) < TH} be the set of all the evaluations received within the last TH seconds on channel c_x for the generic node n_z. We can express its average experience as follows:
U_{z,x} = ( ∑_{w=1}^{|E*_{z,x}|} e^w_{z,x} ) / |E*_{z,x}| (3)
where w indexes from the latest transaction (w = 1) to the oldest one (w = |E*_{z,x}|) within the considered time limit, as shown in Table II. Obviously, the number of transactions is rarely the same on each channel, so the resulting table will not be a matrix.
Node n_i will then receive and store the experiences U_{z,x} from all its neighbours, related to the different channels in C, as shown in Table III, and has to aggregate them in order to derive the risk associated with each channel. To this end, node n_i weights the received recommendations based on the trust level of its neighbours, as follows:
U_{N_i,x} = ( ∑_{k=1}^{|N_i|} T_{ik} U_{k,x} ) / ( ∑_{k=1}^{|N_i|} T_{ik} ) (4)
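A minimal Python sketch of the channel assessment of Equations (1)-(4) is given below. It is illustrative only: the node's own evaluations are assumed to be stored as (value, timestamp) pairs, neighbour experiences and trust values as plain dictionaries, and the zero fallback used when no recent information is available is an assumption, not part of the model.

```python
def average_experience(evaluations, now, th):
    """U_{z,x}: average of the evaluations received within the last TH seconds (Eq. 3)."""
    recent = [e for (e, t) in evaluations if now - t < th]
    return sum(recent) / len(recent) if recent else 0.0

def neighbour_experience(neigh_exp, trust):
    """U_{N_i,x}: trust-weighted aggregation of the neighbours' experiences (Eq. 4)."""
    den = sum(trust[k] for k in neigh_exp)
    if den == 0:
        return 0.0
    return sum(trust[k] * u for k, u in neigh_exp.items()) / den

def resulting_power(sensed_power, own_evals, neigh_exp, trust, now, th):
    """P_x = P_{i,x} + R_{i,x}, with R_{i,x} = U_{i,x} + U_{N_i,x} (Eqs. 1 and 2)."""
    risk = average_experience(own_evals, now, th) + neighbour_experience(neigh_exp, trust)
    return sensed_power + risk

# The transmitter finally selects the channel with the minimum resulting power P_x.
```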
The experiences U_{k,x} can be integrated into the body of the beacon frames that are transmitted periodically by the wireless standard, for example in the optional fields defined in IEEE 802.11 [START_REF]IEEE Standard for Information Technology-Telecommunications and Information Exchange between Systems -Local and Metropolitan Area Networks-Specific Requirements -Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications[END_REF]. Specifically, the number of octets needed to
send the recommendations for the channels strictly depends on their accuracy. Finally, node n_i selects the channel with the minimum resulting power P_x to communicate its data. During the transmission, node n_i verifies the quality of the transmission by computing the relative PDR. If the transmission is degraded, i.e. the computed PDR falls below 60%, node n_i immediately suspends the transmission. In this case, node n_i checks for other available channels and, if any is found, changes the transmission channel so that the communication between the two nodes can continue on the new channel.
When the transmission is over, node n i evaluates the used channel based on the PDR value. Evaluation is represented by e w i,x , which refers to each transaction w and it is expressed in a continuous range (e w i,x ∈ [0, 1]): n i rates 1 if it is fully satisfied by the transaction, i.e. if the PDR is 100%, and 0 otherwise, i.e. if it has to switch channel due to PDR less than 60%. However, in a realistic scenario, the PDR is hardly 100%, so in order to evaluate the communication, it is possible to implement a listening phase so that the transmitting node can obtain a reference PDR of the environment and then it can re-scale the feedback taking into account the reference PDR as the maximum value.
Intermediate values of the evaluation e w i,x are computed considering the line through these two points, i.e. maximum and minimum allowed PDR, as follows:
e_{i,x} = 2.5 · PDR − 1.5 (5)
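The mapping of Equation (5) can be sketched as follows (illustrative Python; the clipping to [0, 1] reflects the evaluation range defined above):

```python
def channel_evaluation(pdr):
    """e_{i,x} = 2.5 * PDR - 1.5, clipped to [0, 1]; PDR is expressed in [0, 1].

    PDR = 1.0 (fully satisfied)          -> 1.0
    PDR = 0.6 (channel must be switched) -> 0.0
    """
    return min(1.0, max(0.0, 2.5 * pdr - 1.5))
```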
After the evaluation of the channel, n_i computes the feedback f^w_{iz} to be assigned to the neighbours that contributed to the computation of the resulting power P_x by providing their average experience U_{z,x} on the channel, so as to reward or penalize them for their advice. According to Equation 6, if a node gave a positive evaluation of the channel, it receives the same score as the channel, namely a positive feedback if the communication was satisfactory, e_{i,x} ≥ 0.5, and a negative one otherwise, e_{i,x} < 0.5; conversely, if the generic neighbour n_z gave a negative evaluation, it receives negative feedback if the communication was satisfactory and positive feedback otherwise. Note that the feedback generated by node n_i is stored locally and used for future trust evaluations.
f^w_{iz} = { e_{i,x} if U_{z,x} ≥ 0.5 ; 1 − e_{i,x} if U_{z,x} < 0.5 } (6)
According to the proposed model, let F*_{iz} = {f^w_{iz} ∈ F_{iz} : (t_act − t_w) < TH} be the set of all the feedback assigned within the last TH seconds. For the generic node n_z, the transmitting node can compute the trust value of another node as follows:
T_{iz} = ( ∑_{w=1}^{|F*_{iz}|} f^w_{iz} ) / |F*_{iz}| (7)
where w indexes from the latest transaction (w = 1) to the oldest one (w = |F*_{iz}|) within the considered time limit.
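The feedback rule of Equation (6) and the trust update of Equation (7) can be sketched in the same style (illustrative code; feedback entries are assumed to be stored as (value, timestamp) pairs, and the neutral default returned when no recent feedback exists is an assumption, not specified in the text):

```python
def feedback(channel_eval, neighbour_experience):
    """f_{iz}: reward neighbours whose advice matched the outcome (Eq. 6)."""
    if neighbour_experience >= 0.5:
        return channel_eval          # positive advice -> same score as the channel
    return 1.0 - channel_eval        # negative advice -> inverted score

def trust(feedback_history, now, th):
    """T_{iz}: average of the feedback assigned within the last TH seconds (Eq. 7)."""
    recent = [f for (f, t) in feedback_history if now - t < th]
    return sum(recent) / len(recent) if recent else 0.5  # assumed neutral prior
```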
V. EXPERIMENTAL EVALUATION
In this Section, we will test the proposed trust algorithm in a network with one or more jammers and show how it is able to prevent disturbance and help nodes select the best wireless channel.
A. Simulation Setup
A simulation setup using the NS-3 network simulator has been developed to generate a peer-to-peer network of objects in a (40x40)m area. Each node randomly communicates with others, and each interaction consists of 50 packets with a dimension of 1.5 KB each, for a total of 75 KB of data. Information is exchanged according to the Wi-Fi 802.11g protocol in the 2.4 GHz microwave band, which makes use of 13 channels with a bandwidth equal to 22 MHz. We are considering an ad hoc scenario, where only peer-to-peer communications are allowed, i.e. there is no presence of an Access Point. The physical layer implements the AARF Rate control algorithm [START_REF] Lacage | Ieee 802.11 rate adaptation: a practical approach[END_REF] in order to provide multi-rate capabilities, so each device is able to adapt its transmission rate dynamically. To test the validity of our approach, we analyze 1568 communications that correspond to 56 communications per node; all the following results consider a process with this value of total communications. Table IV summarizes the used configuration for the parameters in the NS-3 simulator.
Each node can play the role of either a requester or a provider, and the information travels from the provider to the requester on the selected channel. In these terms, a communication involves two different nodes: the node that sends the information, i.e. the provider, and the one that uses the data, i.e. the requester. If the quality of the transmission drops, e.g. due to a jammer attack or high interference, a new channel is selected according to the implemented algorithm, and the interaction resumes from the last received packet. In order to test the performance of the algorithm, we make use of different jammers. All the jammers implement a reactive strategy, targeting only packets that are already on the air and thus disturbing only communications that have already started. Moreover, two different behaviours are implemented: in the first one, called static behaviour, a jammer attacks only a specific channel, while in the second one, namely dynamic behaviour, the jammer changes the attacked channel by jumping across different channels. In this work, we focus on a random selection of the channels to be attacked.
As described in Section IV, each node is able to evaluate the trust level of its neighbours based on the received recommendations. Two main types of recommending nodes are implemented in the network: a benevolent one, where a node n_z provides only truthful recommendations based on its experience e_{z,x}, and a malicious one, where a node tries to disrupt the network by sending false recommendations, i.e. (1 − e_{z,x}). The trust value of each recommending node lies in the interval [0, 1]: if the trust reaches 0, the neighbour is classified as malicious; otherwise, it is considered benevolent, with trust equal to 1. The neighbours' reputation, i.e. their trust value, is used to weigh their recommendations and helps the channel selection algorithm of the trust model.
We evaluate the performance by analyzing three metrics: a) the packet delivery ratio (PDR), b) the number of channel failures and c) the percentage throughput (THR). The PDR is expressed as the ratio of the total number of packets delivered to the total number of packets sent from the source node to the destination, and it is used to check the quality of each interaction. The number of channel failures refers to the number of times a device is forced to change the Wi-Fi channel due to a jammer attack or high interference. Finally, the THR represents how much information can be delivered in a given amount of time and is usually expressed in bits per second. We want to clarify that, in our simulations, the THR is influenced only by the time necessary for the communication between the two nodes, and the amount of information does not affect the score because every dropped packet is retransmitted. Therefore, all the required data reach their destination at the end of each simulation, notwithstanding the Wi-Fi channel selection method used. In each experiment, we express the THR as a percentage of its highest value.
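For reference, the first and third metrics can be computed as in the small sketch below (hypothetical variable names; the throughput normalization follows the percentage convention described above):

```python
def packet_delivery_ratio(delivered, sent):
    """PDR: delivered packets over sent packets for one interaction."""
    return delivered / sent if sent else 0.0

def throughput_percentage(communication_times):
    """THR in percent of its highest value; only the communication time matters,
    since every dropped packet is retransmitted and all data eventually arrive."""
    rates = [1.0 / t for t in communication_times]   # proportional to bits per second
    best = max(rates)
    return [100.0 * r / best for r in rates]

print(packet_delivery_ratio(45, 50))               # 0.9
print(throughput_percentage([10.0, 12.5, 20.0]))   # [100.0, 80.0, 50.0]
```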
B. Trust Model Functioning
This section illustrates the functioning of the proposed channel selection algorithm and shows how communications can be disrupted by jamming attacks; we introduce it to show the rationale of our work. In the following, the scenario comprises a limited number of 8 nodes that communicate with each other in a Wi-Fi area that is busy with other external communications. Only 3 free channels are available to the devices, and the channels can be affected by one or more reactive static jammers, or by noise and high interference (e.g. other communications, such as Bluetooth). Each node produces data at a rate of 17 Kbps and does not consider experiences older than TH = 700s. Table V summarizes the specific scenario parameters used to motivate and explain the functioning of the system. The proposed trust model is compared with two other approaches: a random approach, where the channel to communicate on is selected randomly, and an approach where the channel selection is based only on the direct experiences of each node. Figure 2 shows a comparison from the perspective of a single node. The first graph illustrates the initial channel powers measured by the node: only channels 5, 10 and 13 are available and free, i.e. they have a power below -93 dBm, at time 0 s. Moreover, a reactive static jammer is employed in this simulation, affecting channel 5, while channels 10 and 13 are affected by other non-Wi-Fi communications starting from 90 s and 170 s, respectively. All the approaches are able to discard the channels with low performance, i.e. with a PDR lower than 0.6, and select another channel. Each approach then selects a new Wi-Fi channel: the random model picks a free channel at random, while the other two approaches rely on the node's direct experience and on the combination of experience and recommendations from neighbours, respectively. The graphs illustrate how the random approach is the worst in terms of performance, due to the random selection of channel 5, which is affected by the jammer. The trust approach based only on experience can learn from past interactions and selects channel 5 only once. The last graph shows how the trust model based on direct experience and recommendations discards channel 5 from the beginning and selects a channel other than 10 and 13. Thanks to the recommendations from its neighbours, the node takes advantage of other nodes' past experiences and is able to select another channel, i.e. channel 12, which offers better performance even though it is involved in another Wi-Fi communication.
The next set of simulations examines the models' behaviours when increasing the number of jammers. Figure 3 illustrates the cumulative THR for different experiments with 1, 2 and 3 reactive static jammers that attack channels 5, 10 and 13, respectively. The graphs show how the random approach significantly degrades its performance as the number of jammers grows; this is due to the frequent selection of the channels affected by the jammers. The two trust approaches, which consider the experience and the combination of experience and recommendations, show the best performance. In the first one, each node chooses the attacked channels only once and, thanks to the experience mechanism, other channels are selected for the next interactions. Concerning the trust approach that also considers recommendations, a node that selects an attacked channel informs its neighbours, so that the same information is shared among all the nodes; this approach is able to reach 100% of transferred data in less time than the other two approaches.
C. Model Performance
In this Section, we evaluate the performance of the proposed trust approach in a complete scenario. We make use of a network of 28 nodes that communicate with each other in a free Wi-Fi area without external communications; all 13 channels can be used, and each node selects the best channel based on the employed approach. The performance is analyzed with different data rates and various numbers of jammers. Moreover, different jammer strategies are adopted, i.e. static and dynamic ones, and various temporal limits for computing experience and recommendations are examined. Finally, in the last set of simulations, several percentages of malicious recommending nodes are considered in order to evaluate the impact of attacks on recommendations on the proposed trust approach. Table VI summarizes the configuration of the simulation parameters in the general scenario and the different values that can be assigned to each one. The focus of the first set of simulations is to test how the three different approaches perform while increasing the number of jammers. Each jammer presents a reactive static behaviour, so it attacks a specific channel only if there are packets in the air. We analyze the behaviours for different values of the temporal limit used to compute experience and recommendations. Figure 4 illustrates how the random approach shows the worst behaviour, due to the high number of times it selects the channel affected by a jammer. On the other hand, thanks to the analysis of past experience, the Trust - Experience approach performs better; moreover, the higher the temporal limit, the higher the performance against a static jammer that does not change the attacked channel. These types of attacks are better managed by the Trust - Experience and Recommendations approach, in which each node communicates the attacked channel to its neighbours through recommendations; this dissemination of information allows the fast detection of the compromised channels, and therefore the selection of the best channels.
The next set of simulations examines the impact of the data rate on the three analyzed approaches. To this end, we use a temporal limit of 700s for the two trust-based approaches, which does not strongly influence this experiment. Figure 5 illustrates how the increase of the data rate affects the throughput of all the approaches. We consider the throughput in percentage, where each subplot corresponds to a specific data-rate scenario. In general, the data rate has a direct impact on the throughput and, in the absence of jammers, the greater the data rate, the greater the throughput, thanks to the shorter time required to send the information. On the other hand, every time a jammer damages a communication, the two affected nodes, i.e. the requester and the provider, have to change the channel and go through a new association phase accordingly. These phases directly impact the throughput, and the time needed for the communication increases. However, the approaches based on experience and recommendations outperform the classic random approach and, with up to 5 affected channels, the proposed approach is able to keep the throughput above 80%. The focus of the next set of simulations is to test how the proposed model behaves with dynamic jammers. We suppose that every 600s a jammer changes its target to another, randomly selected channel. Figure 6 illustrates how the average number of channel failures per node increases with the number of dynamic jammers for the three approaches and for different values of the temporal limit. The results show that the trust approach based on experience and recommendations overcomes such attacks and is able to adapt quickly to the changes in the jammers' behaviour. The best performance is obtained with a temporal limit equal to 700s, which is the closest to the hop frequency of the jammers and recognizes the dynamic behaviour faster than the other two values.
We now analyze the results when varying the temporal limit needed to evaluate the experience and the recommendations for the two trust approaches. In order to analyze the relationship between the jammer hop frequency and the temporal limit, we use a higher data rate, i.e. 40 Kbit/s, since the throughput is more influenced by the temporal window. Figure 7 illustrates the percentage of throughput for different values of the jammer hop frequency at increasing values of the temporal limit. The graphs show that the temporal limit directly impacts only the trust approaches, while the random approach keeps a constant behaviour. As already demonstrated by the previous simulations, the best performance in terms of throughput is obtained for values of the temporal limit close to the hop frequency of the jammers, while a temporal limit equal to 0 corresponds to a random approach, in which the nodes cannot take advantage of past information. Low values of the temporal limit exhibit the worst performance because each node has to reset its memory and select the channels from scratch every time. On the other hand, values of the limit much greater than the hop frequency can degrade the throughput in the same way. For this reason, a preliminary study of the attackers could improve the performance of the trust approaches, which nevertheless show the best results compared with the classical Wi-Fi approach.
Finally, the last set of simulations is aimed at understanding how the proposed approach reacts when neighbouring nodes implement the two primary attacks on recommendations. In the first one, namely the single attack, a malicious node provides false recommendations to decrease the chance of good channels being selected for Wi-Fi communications. This is the simplest attack on recommendations, in which each node acts maliciously without considering the behaviour of its neighbours and provides false recommendations regardless of the destination node. The second attack, namely the collusive attack, represents the worst behaviour. In this attack, a group of nodes works together to increase the reputation of a bad channel, i.e. one attacked by jammers, and so increase its chances of being selected as the communication channel; this is the worst attack on recommendations, in which malicious nodes collaborate to maintain their reputation. Figure 8 illustrates the impact of such attacks on recommendations for the three different approaches in a scenario with 4 static jammers. The graph shows that the attacks affect only the trust approach based on recommendations, while the other two approaches keep a constant behaviour. We can see how the percentage of throughput decreases as the percentage of malicious nodes increases, reaching values lower than those of the approach based on experience only, even if still better than the classical random approach. This is due to the time necessary to detect the attacks: when a node detects attacks on recommendations, it discards the malicious nodes after 1 or 2 interactions, or even more for the collusive attack. This detection time has a direct impact on the performance and thus provokes a reduction of throughput. Hence, for a percentage of collusive attackers greater than 70%, the recommendation mechanism fails and substantially reduces the percentage of throughput. However, even when the percentage of malicious nodes is high, the proposed approach performs well in comparison with the classic random approach. Finally, we want to point out that security mechanisms could prevent nodes from becoming malicious, i.e. jammers or bad recommenders, but if this happens, the information sent by the node is false yet legitimate, as it is the response to a query from the requester and cannot be discarded. For this reason, trustworthiness management models are required to identify such nodes and should work together with security mechanisms to protect the network.
VI. CONCLUSIONS
In this article, we have proposed a channel selection method for wireless communications based on trust policies. The illustrated approach, developed for objects with low computational capabilities, operates as a support for several wireless standards and does not require any additional radio unit. Specifically, the proposed model is developed for wireless technologies based on a distributed deployment of nodes, in which the association phase plays an important role and whose main requirement is the adoption of a channel hopping mechanism. Applicable to a generic wireless network, each node is able to select the most trustworthy channel to transmit on, thanks to the neighbours' recommendations and its own experience.
The proposed approach has been tested against different types of security threats, specifically interference from other networks or from simultaneous transmissions, and moreover against the major attack to which wireless networks are particularly prone, i.e. the jamming attack. All the jammers implement a complex reactive strategy, disturbing only communications that have already started, and two different behaviours are considered: a static behaviour, in which a jammer attacks only a specific channel, and a dynamic one, where the jammer changes the attacked channel by jumping across different channels.
Furthermore, we have compared our solution with two other approaches, i.e. the classical approach described in the 802.11 standards and another one that considers only the past experiences of the nodes. The experimental evaluation has shown that our approach outperforms the other two when considering a network with different types of interference and jamming attacks. Further extensions worth studying include the modification of the approach so that it can be implemented in an AP scenario, in which all the wireless devices communicate with each other through an access point.
Fig. 1. Wireless association process and trust management model flowchart.
Fig. 2. Initial channel power and PDR analysis for a communication adopting the three analysed algorithms.
Fig. 3. Cumulative THR increasing the number of jammers for the three approaches.
Fig. 4. Channel failures per node at increasing numbers of static jammers for the three approaches, with different values of the temporal limit.
Fig. 5. Impact of the data rate at increasing numbers of static jammers for the three approaches.
Fig. 6. Channel failures per node at increasing numbers of dynamic jammers, with a hop frequency equal to 600s and considering different values of the temporal limit for the trust approaches.
Fig. 7. Percentage of throughput at the variation of the temporal limit, for different values of the jammer hop frequency.
Fig. 8. Throughput percentage at increasing values of the percentage of malicious nodes.
TABLE I. ANALYSIS OF THE EXISTING CHANNEL SELECTION MODELS.
Ref | Approach | Scenario | No standard modification | No central controllers | No additional radio unit | Jamming attacks
[13] | Deep Learning | Wi-Fi | ✓ | ✓ | - | -
[14] | Neural Network | Wi-Fi | - | - | ✓ | -
[15] | Probability Theory | ZigBee | ✓ | ✓ | ✓ | -
[16] | Reinforcement Learning | LoRaWAN | ✓ | - | ✓ | -
[17] | Deep Learning | Wi-Fi | ✓ | ✓ | - | ✓
[18] | Reinforcement Learning | LoRaWAN | - | ✓ | ✓ | -
[19] | Regression Analysis | Wi-Fi | ✓ | ✓ | - | -
[20] | Linear Algorithm | Bluetooth | - | ✓ | ✓ | -
[21] | Channel Map | Bluetooth | - | ✓ | ✓ | -
[22] | Mathematical Optimization | LoRaWAN | - | - | ✓ | -
[23] | Linear Algorithm | Wi-Fi | - | - | ✓ | -
[24] | Game Theory | Wi-Fi | ✓ | - | ✓ | -
Our solution | Trustworthiness Management | Multiple wireless technologies | ✓ | ✓ | ✓ | ✓
TABLE II. EXPERIENCES OF NODE n_z.
Transaction \ Channel | c_1 | ... | c_x | ... | c_X
w = 1 | e^1_{z,1} | ... | e^1_{z,x} | ... | e^1_{z,X}
... | ... | ... | ... | ... | ...
w = |E*_{z,x}| | e^{|E*_{z,x}|}_{z,1} | ... | e^{|E*_{z,x}|}_{z,x} | ... | e^{|E*_{z,x}|}_{z,X}
TABLE III. AVERAGE EXPERIENCE OF NODE n_i'S NEIGHBOURS.
Channel \ Neighbour | n_1 | ... | n_k | ... | n_|Ni|
c_1 | U_{1,1} | ... | U_{k,1} | ... | U_{|Ni|,1}
... | ... | ... | ... | ... | ...
c_x | U_{1,x} | ... | U_{k,x} | ... | U_{|Ni|,x}
... | ... | ... | ... | ... | ...
c_X | U_{1,X} | ... | U_{k,X} | ... | U_{|Ni|,X}
TABLE IV. NS-3 SETUP PARAMETERS.
Parameter | Value
Area of simulation | (40x40) m
Number of packets | 50
Packet dimension | 1.5 kB
Protocol | IEEE 802.11g
Frequency | 2.4 GHz
Number of channels | 13
Bandwidth | 22 MHz
Number of communications | 56 per node
04101075 | en | [ "info" ] | 2024/03/04 16:41:20 | 2020 | https://hal.science/hal-04101075/file/SMAetAl_SOCA_2020.pdf | Mohammed Ismail Smahi
email: [email protected]
Fethallah Hadjila
email: [email protected]
Chouki Tibermacine
email: [email protected]
Abdelkrim Benamar
email: [email protected]
A deep learning approach for collaborative prediction of Web Service QoS
Keywords: Web Service, QoS Prediction, Deep Autoencoder, Self-Organizing Map
Web services are the cornerstone of many crucial domains, such as cloud computing and the Internet of Things. In this context, QoS prediction for Web services is a highly important and challenging issue. In fact, it allows for building value-added processes as compositions and workflows of services. Current QoS prediction approaches, such as collaborative filtering methods, mainly suffer from the problems of data sparsity and cold-start obstacles. In addition, previous studies have not explored in depth the impact of geographical characteristics of services/users and QoS ratings on the prediction problem. To address these difficulties, we propose a deep learning-based approach for QoS prediction. The main idea consists of combining a matrix factorization model based on a deep auto-encoder (DAE) and a clustering technique based on the geographical characteristics to improve the prediction effectiveness. The overall method proceeds as follows: First, we cluster the QoS data using a self-organizing map that incorporates the knowledge of geographical neighborhoods; by doing so, we allow for the reduction of the data sparsity while preserving the topology of input data. Besides that, the clustering step effectively handles the cold start problem. Second, for each cluster, we train a DAE that minimizes the squared loss between the ground truth QoS and the predicted one. Third, the missing QoS values are predicted using the trained autoencoders.
Introduction
Quality of service (QoS) is an important factor for performing Web service selection and recommendation over the internet [START_REF] Huang | An optimal qos-based web service selection scheme[END_REF][START_REF] Yu | Efficient algorithms for web services selection with end-to-end qos constraints[END_REF][START_REF] Zheng | Collaborative web service qos prediction via neighborhood integrated matrix factorization[END_REF]. In this context, QoS prediction constitutes a crucial step in building a recommendation system. In practice, QoS values of services constantly change due to environmental constraints (e.g., IT infrastructure and network load), and this fact makes QoS prediction a challenging task [START_REF] Chen | Web service recommendation via exploiting location and qos information[END_REF]. In addition, we observe that end users may have invoked only a small number of web services; consequently, the user-service QoS data are likely to be sparse [START_REF] Huang | Applying associative retrieval techniques to alleviate the sparsity problem in collaborative filtering[END_REF], which may have a significant impact on the accuracy of QoS prediction.
To address the QoS prediction issue, many existing works have leveraged Collaborative Filtering (CF) [START_REF] Adomavicius | Toward the next generation of recommender systems: A survey of the state-of-the-art and possible extensions[END_REF] to infer the missing data. These approaches are based on the exploration of the historical QoS data of Web services recorded from previous interactions. They typically utilize a user-service QoS matrix to model all historical QoS records. The current CF methods can be divided into two classes: memorybased [START_REF] Chen | Collaborative filtering for orkut communities: discovery of user latent behavior[END_REF][START_REF] Deshpande | Item-based top-n recommendation algorithms[END_REF][START_REF] Zheng | Qos-aware web service recommendation by collaborative filtering[END_REF] and model-based [START_REF] Zheng | Collaborative web service qos prediction via neighborhood integrated matrix factorization[END_REF][START_REF] Koren | Matrix factorization techniques for recommender systems[END_REF]. Memory-based approaches consist of two stages: the first one computes the similarity between services (or users) through the use of QoS data, while the second stage computes the missing QoS using average weighting of the known values of similar services (or users). Model-based approaches learn a set of latent factors from the QoS matrix and make predictions. Matrix Factorization (MF) techniques are the most representative approaches of this class. Despite the advantages and the ease of use of CF methods, most of them do not handle contextual attributes (e.g., geographical information of users/services) to enhance the accuracy of prediction; additionally, they suffer from the cold-start problem [START_REF] Wei | Collaborative filtering and deep learning based recommendation system for cold start items[END_REF]. The cold-start problem rises when we want to predict the QoS for new services that have no past QoS records or only few QoS data. The sparsity of the QoS matrix also limits the effectiveness of CF methods and results in a poor accuracy performance. To address the above mentioned issues, we propose an extended CF-based approach that uses both Matrix Factorization (MF) and clustering to handle data sparsity and QoS fluctuations.
This paper is an extension and enhancement of a work originally presented in the European Conference on Service-Oriented and Cloud Computing [START_REF] Smahi | An encoder-decoder architecture for the prediction of web service qos[END_REF]. The proposed approach extends our previous work through the use of both a DAE and clustering. The contributions of this paper are summarized as follows:
(A) To consolidate the related works, we added new approaches that involve deep learning methods, matrix factorization techniques, and contextual recommendation. (B) To face the QoS sparsity issue, we perform a clustering of the original QoS matrix and yield a set of submatrices that share a subset of contextual characteristics (such as geographic information or service providers).
After that, each DAE is trained on a single sub-matrix. The clustering step is realized using a Self-Organizing Map (SOM) [START_REF] Kohonen | Self-organization and associative memory[END_REF]. This choice is motivated by the ability of SOM to preserve the topological properties of QoS data (see the motivation Section for more details). The learned neurons of the map are grouped using K-means to create more representative clusters. We notice that the use of SOM allows us to address the cold-start problem (i.e., estimation of QoS for new services or new users).
More precisely, we initialize the new service (or new user) with the QoS data of the closest cluster-heads (that share the same contextual attributes or the nearest ones). (C) To improve the optimization results, the different hyperparameters of DAEs are fine-tuned through the use of k-fold cross-validation [START_REF] Refaeilzadeh | Cross-validation[END_REF]. (D) To reduce the model sensitivity to over-fitting which can harm the final performances, we add a random noise to the input data during the training phase (i.e., we use a denoising autoencoder [START_REF] Vincent | Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion[END_REF] which is a variant of the standard autoencoder). (E) To evaluate the effectiveness and the accuracy of the proposed DAE, we perform several experiments by adopting different densities and different hyper-parameters.
The remainder of the paper is structured as follows. We first introduce in Section 2 some background material on both autoencoders and Self Organizing Maps. In Section 3, we present the proposed method for Web service QoS prediction. After that, we show in Section 4 the experiment design, the results and the analysis. In Section 5, we present the related works. Finally in Section 6, we summarize the contribution of the paper and put forward our future work.
Background
We introduce the basic background material on both autoencoders and Self Organizing Maps.
Autoencoders
The autoencoder [START_REF] Rumelhart | Learning internal representations by error propagation[END_REF][START_REF] Hinton | Reducing the dimensionality of data with neural networks[END_REF] is an unsupervised neural network. It learns to reproduce the original data from the input layer to the output layer through one or several hidden layers. This unsupervised neural network learns a representation of the inputs that yields the least deformation. Typically, the mechanism consists of encoding the data from the input layer down to a central hidden layer, called the latent factors layer, through one or more hidden layers. After that, the decoding of the central layer data is performed by a set of decoding layers (having the same sizes as the encoding layers). As can be seen in Figure 1, the autoencoder's architecture consists of three parts: the encoder, the code (or latent factors) and the decoder. The latent factors part is a single layer of an artificial neural network (ANN) whose size, denoted by K, should be smaller than that of the input layer. This hidden layer must be compact and meaningful. The encoder part is a fully connected feed-forward non-recurrent ANN. It can be viewed as an encoding function F_e that takes a vector x ∈ R^D as input data and maps it to a hidden representation z ∈ R^K (z = F_e(x)). Finally, the decoder part is the reverse mapping of the encoder operation. It is a fully connected feed-forward non-recurrent ANN. It is considered as a decoding function F_d that takes the resulting latent representation z and maps it back to a reconstructed vector x̂ ∈ R^D (x̂ = F_d(z)). Thus, the output will be:
x̂ = F_d(F_e(x)) (1)
If we assume a basic version of the autoencoder (which uses one hidden layer), the encoding and decoding functions will be defined as:
x̂ = σ′(W′z + b′) with z = σ(Wx + b) (2)
such that: σ and σ′ are transfer functions which can be linear or non-linear 1 . W and W′ are two weight matrices having the dimensions (K, D) and (D, K) respectively. b and b′ are the offset vectors with K and D dimensions respectively.
In terms of learning, the autoencoder is trained to minimize the dissimilarity function (or the reconstruction error)
argmin_{W, W′, b, b′} ∑_{i=1}^{n} δ(x_i, F_d(F_e(x_i))) (3)
where δ is a dissimilarity function, such as the square loss or the cross entropy loss, x i is an example that belongs to the learning dataset and n is the size of the dataset.
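For concreteness, a single-hidden-layer autoencoder of this form can be written in a few lines of NumPy (an illustrative sketch with arbitrary sizes, not the implementation used in the experiments):

```python
import numpy as np

rng = np.random.default_rng(0)
D, K = 20, 5                                          # input size and latent size (K < D)
W,  b  = rng.normal(0, 0.1, (K, D)), np.zeros(K)      # encoder parameters
Wp, bp = rng.normal(0, 0.1, (D, K)), np.zeros(D)      # decoder parameters (W', b')

sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

def encode(x):            # z = sigma(W x + b)
    return sigmoid(W @ x + b)

def decode(z):            # x_hat = sigma'(W' z + b')
    return sigmoid(Wp @ z + bp)

def square_loss(x):       # delta(x, F_d(F_e(x)))
    x_hat = decode(encode(x))
    return float(np.sum((x - x_hat) ** 2))

x = rng.random(D)
print(square_loss(x))
```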
Denoising Autoencoders
The denoising autoencoder [START_REF] Vincent | Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion[END_REF][START_REF] Goodfellow | Deep Learning[END_REF] is a variant version of the basic autoencoder presented above. This kind of autoencoder aims to tackle the noise problem. The goal is to reconstruct the original input data using a corrupted version of it. Note that several corruption types can be applied on the input data, such as: (a) Gaussian noise: where an isotropic Gaussian noise is added to a subset of the input x. (b) Masking noise: A fraction v of the original input x is randomly forced to zero. (c) Salt-and-pepper noise: A fraction v of x is randomly set to their minimum or maximum possible value.
In terms of learning, the denoising autoencoder is still trained to minimize the same reconstruction error between the cleaned input and its reconstruction:
argmin_{W, W′, b, b′} ∑_{i=1}^{n} δ(x_i, F_d(F_e(x̃_i))) (4)
where x̃ is a copy of x that has been corrupted by one of the kinds of noise cited above. In this case, the denoising training forces the encoding function F_e and the decoding function F_d to implicitly learn the structure of the original input data [START_REF] Alain | What regularized auto-encoders learn from the data-generating distribution[END_REF].
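The masking-noise corruption used by the denoising variant can be sketched as follows (illustrative NumPy code, where v is the fraction of components forced to zero):

```python
import numpy as np

def masking_noise(x, v, rng=None):
    """Return a copy of x with a fraction v of its components randomly set to zero."""
    rng = rng or np.random.default_rng()
    x_tilde = x.copy()
    n_masked = int(round(v * x.size))
    idx = rng.choice(x.size, size=n_masked, replace=False)
    x_tilde[idx] = 0.0
    return x_tilde

x = np.arange(6, dtype=float)
print(masking_noise(x, v=0.5, rng=np.random.default_rng(0)))
# Training would then minimize delta(x, F_d(F_e(x_tilde))) as in Equation (4).
```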
Self-Organizing Map
Self-Organizing Map (SOM), or Kohonen feature map [START_REF] Kohonen | Self-organization and associative memory[END_REF], is a kind of unsupervised neural network based on a competitive learning paradigm. This method is usually used to perform tasks such as clustering, outlier detection and dimensionality reduction [START_REF] Kiang | Self-organizing map network as an interactive clustering tool-an application to group technology[END_REF]. The objective of this network is to reduce a high-dimensional input data space into a low-dimensional map of neurons (usually two dimensions) through the use of self-organizing neural networks while preserving their topological structure.
Self-Organizing Map principles
In terms of architecture (see Figure 2), the network is composed of two layers: the input layer and the map layer; the layers are fully connected. The map layer comprises a set of neurons M = {n_1, . . . , n_k} arranged in a two-dimensional map (the map is usually represented graphically as a rectangle with k neurons, k = x_dim × y_dim). Each neuron has an associated weight vector W known as the code-book. The code-book describes the typical data profile at a given training step t, where T is the number of training time step instances (SOM training stops when the weight vectors are stabilized or the maximum number of iterations is reached):
W(t) = {w_1(t), . . . , w_k(t)}, t ∈ {1, . . . , T}.
The competitive learning process attempts to construct a nonlinear topology that ensures the mapping of the input vectors X = {x(t) | t ∈ {1, . . . , T}} onto the set of neurons M of the map. In terms of learning, the SOM algorithm consists of two steps, a competitive step and an update step. In the first one, each data vector x(t), for a particular training instance t, is mapped to its best matching neuron on the map: bm(x(t)) = n_i ∈ M. The winning neuron is determined as:
n_i(x(t)) = argmin_{j ∈ {1,...,k}} ‖x(t) − w_j(t)‖ (5)
where each neuron n i codes the subset of input data space, whose elements are closer to n i .
In the update step, the weight vectors of both the best matching neuron n_i and all its neighbourhood neurons Γ(n_i) are adjusted toward the presented input vector x(t) using the following equation:
w_j(t + 1) = w_j(t) + η(t) · h_{i,j}(t) · (x(t) − w_j(t))
where j ∈ Γ(n_i), 0 < η < 1 is the learning rate, and h_{i,j} is the neighborhood function. It is important to mention that the neighborhood of the winning neuron n_i is progressively reduced until it reaches a set of size one. A frequently used neighborhood function is the Gaussian:
h_{i,j}(t) = exp( − d(i, j)² / (2σ²(t)) ) (6)
where d(i, j) is the distance between the neuron n_i and the neuron n_j on the map, and the radius σ(t) = σ_0 · exp(−t / T_max) decreases after each iteration to restrict the area of the neighborhood.
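The competitive and update steps can be summarized by the following Python sketch (illustrative only; it assumes a rectangular map whose neuron coordinates are stored in `grid` and uses the Gaussian neighborhood of Equation (6)):

```python
import numpy as np

def som_step(x_t, W, grid, eta, sigma):
    """One SOM iteration: find the best matching unit, then pull its neighbourhood
    towards the presented input vector x(t)."""
    bmu = np.argmin(np.linalg.norm(W - x_t, axis=1))        # competitive step (Eq. 5)
    d2 = np.sum((grid - grid[bmu]) ** 2, axis=1)            # squared map distances
    h = np.exp(-d2 / (2.0 * sigma ** 2))                    # neighbourhood (Eq. 6)
    return W + eta * h[:, None] * (x_t - W)                 # update step

# Example: a 3x3 map of neurons for 4-dimensional inputs.
rng = np.random.default_rng(0)
x_dim = y_dim = 3
grid = np.array([[i, j] for i in range(x_dim) for j in range(y_dim)], dtype=float)
W = rng.random((x_dim * y_dim, 4))
W = som_step(rng.random(4), W, grid, eta=0.5, sigma=1.0)
```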
Batch learning
A batch algorithm is a useful variant of the SOM learning process presented above, which is widely used in parallel implementations [START_REF] Wittek | somoclu: An efficient parallel library for self-organizing maps[END_REF]. If the whole training dataset is available beforehand, SOM has a batch formulation for updating all the weights:
w_j(T) = ( ∑_{t′=1}^{T} h_{i,j}(t′) x(t′) ) / ( ∑_{t′=1}^{T} h_{i,j}(t′) ) (7)
It means that the best matching neurons for all input vectors are selected and hereafter their weights and the weights of their respective neighbors are updated (according to Equation 6).
Initialization Methods
The results and the performance of the SOM method are strongly influenced by the initialization of the neurons' weights. To this end, two types of initialization can be used. The first one consists of a random initialization of the code-book: random weight vectors are selected from the data points. Technically, this method is easy to implement, but its main weaknesses are its high running time and the non-deterministic nature of its results. To tackle this problem, a linear initialization technique can be used, in which the initial values of the code-book are selected from the subspace spanned by the Principal Component Analysis (PCA) algorithm [START_REF] Jolliffe | Principal component analysis[END_REF]. With PCA initialization, the SOM method converges more rapidly towards a better clustering and ensures a deterministic clustering result [START_REF] Akinduko | Som: Stochastic initialization versus principal components[END_REF].
Proposed Method
First, we introduce the architecture model of the proposed approach. Then, we motivate the choices made in the design of this architecture. Finally, we present in depth how QoS prediction is performed with this model.
Architecture model
Figure 3 illustrates the architecture model of the proposed approach (Fig. 3 Overview of the prediction system architecture). First, we assume that there are m users and n Web services in our system. QoS values are collected from users' invocations of Web services; we use various sources to get this data (such as social networks, a monitoring system, or direct feedback). The collected values constitute an m × n rating matrix M = {r_{u,s}}_{m×n}. We assume that the QoS matrix contains missing values because it is impossible for all users to invoke all services. In order to predict the missing values (first challenge), we propose an approach that is composed of the following three stages:
1. First, we cluster the initial QoS rating matrix according to the geographical characteristics. More specifically, we perform a clustering based on the SOM algorithm. As mentioned previously, this method allows us to create clusters whose members are close in terms of contextual attributes. This step aims to produce a set of clusters that share the same contextual characteristics. Additionally, it decreases the sparsity of the QoS matrix (i.e., the QoS matrix of each cluster is less sparse than the initial QoS matrix) when the top-k clusters are considered. 2. Second, we learn the latent factors of each cluster by leveraging a deep denoising autoencoder.
During our experiments, we used a deep autoencoder with three hidden layers (this choice is explained in the experimental study). We assume that all the clusters are trained with the same number of hidden layers. 3. Third, the learned deep autoencoder predicts missing QoS values and stores them in the initial dataset.
The second challenge that interests us is the cold-start problem. Based on the results of the Data Clustering and Autoencoder Building phases, we describe the arrangements made to ensure the best solution to this problem (the detailed mechanism is given later in this section).
Motivations behind architecture design choices
Autoencoder
Our proposition uses a DAE to predict the QoS values of services. This choice is mainly motivated by the high accuracy achieved by matrix factorization methods, such as autoencoders [START_REF] Li | Deep collaborative filtering via marginalized denoising auto-encoder[END_REF]. According to [START_REF] Koren | Matrix factorization techniques for recommender systems[END_REF][START_REF] Xu | Context-aware qos prediction for web service recommendation and selection[END_REF], matrix factorization techniques are more accurate (in terms of prediction performance) than the remaining CF methods. In addition, the flexibility of the DAE (e.g., the number of layers and the activation functions) is leveraged to ensure the best performance. Indeed, it is possible to vary the activation function of each layer (e.g., ReLU, softplus, leaky ReLU, ELU) in order to maximize the prediction accuracy.
SOM
To boost the accuracy of our prediction model, we have to address the sparsity issue of the QoS matrix. A higher sparsity results in a lower quality of prediction. To handle this situation, we cluster the QoS data into groups using the SOM method.
We have chosen this method for different reasons, including the following:
-The SOM method has the ability to preserve the density and the topological structure (the form) of the original data; -SOM-based clustering allows us to address the cold-start problem. More specifically, when we handle a new service (or user) without QoS values, we first compare its contextual attributes with the contexts of the cluster-heads; then we initialize the QoS of the new service (or user) with the QoS values of the closest cluster-head; -The SOM network ensures a more faithful representation (in terms of contextual attributes such as country, provider and autonomous system) of the services (or users), that is, services (or users) with similar contextual attributes are more likely to belong to the same cluster; -The SOM network preserves the neighborhood relationships (i.e., two adjacent services/users are represented by two adjacent cluster-heads, and two neighboring cluster-heads share part of their geographical characteristics while their other properties differ).
According to [START_REF] Chen | Service recommendation based on separated time-aware collaborative poisson factorization[END_REF][START_REF] Chen | Exploiting web service geographical neighborhood for collaborative qos prediction[END_REF][START_REF] Tang | Collaborative web service quality prediction via exploiting matrix factorization and network map[END_REF], QoS values are directly influenced by the geographical characteristics of users and services: the more the services share the same geographical characteristics, the more similar their QoS values are.
In practice, the geographical characteristics of Web services can be categorized into three different groups: the Provider group (P), the AS 3 group (A) and the Country group (C). Each group is a subset of the next (i.e., P(i) ⊆ A(i) ⊆ C(i), where i ∈ {service, user}). This implies that the more restricted the group is, the more likely the QoS values are to be similar.
As we will see in the experimental evaluation Section, an empirical analysis is conducted to demonstrate the influence of geographical characteristics on the clustering results of SOM.
Collaborative QoS prediction
The first algorithm (Algorithm 1) clusters the input data (services S or users U) according to the batch version of the SOM principle. First (line 1), we initialize the weight vectors of all neurons on the map (code-book initialization). To this end, we use a personalized initialization mechanism that ensures a deterministic behavior and preserves the neighborhood relationships (more details are given in Section 4.3.3). After that (lines 3 to 5), we determine for each input its winning neuron (BMU) on the map. In the next steps (lines 6 to 12), we adjust the weight vectors of all BMUs and their neighborhoods according to Equation 7. The process is repeated as many times as necessary (epoch times, as specified in line 2). At the end, we create the final clusters by post-processing the code-book with a k-means algorithm [START_REF] Hartigan | Algorithm as 136: A k-means clustering algorithm[END_REF].
(Algorithm 1, final steps)
13  max = 8
14  ⟨C_1, … , C_max⟩ = KmeansCluster(W, max)        // post-process the code-book with the k-means algorithm
15  return ⟨C_1, … , C_max⟩                          // list of service clusters
In the second algorithm (Algorithm 2), we learn a model that achieves the highest QoS prediction performance. The goal is to infer the best learning model for our deep autoencoder. This algorithm builds a deep autoencoder for each cluster C_i. In line 3, we split the data of the current cluster C_i into k-fold folds. Each fold represents a given percentage (density%) of the available cluster data; this parameter is the ratio between the training dataset and the entire dataset. For example, if density = 20%, the training set represents 20% of all cluster data and the validation set represents the remaining 80% of the QoS data. Then, in line 7, we learn a deep autoencoder on each fold with respect to all training sets (∀S ∈ T). The goal of this step is to minimize the squared error between the model prediction and the desired values. It is important to mention that the training is controlled by the noise parameter: if this parameter is null (i.e., no noise), we use the deep autoencoder architecture; otherwise, the selected architecture is a denoising autoencoder. In line 8, we compute the error of the deep autoencoder on the validation dataset. In line 11, the k-fold results are averaged to produce a single validation error (mve). Finally, we return the best training model as well as the mean validation error.
Algorithm 2: Autoencoder k-fold cross-validation
Inputs: C = {C_1, … , C_max}, density, noise, k-fold
1   for i : 1 → max do
2       T = C_i                                             // training set
3       ⟨folder_1, … , folder_{k-fold}⟩ = Partition(C_i, density, k-fold)
4       for k : 1 → k-fold do
5           T = T − folder_k
6           V = folder_k                                    // validation set
7           Model_k = argmin_{∀S_m ∈ T} (1/|T|) Σ_{m=1}^{|T|} |DAE(S_m, noise, |C_i|) − S_m|^2
8           err_k = (1/|V|) Σ_{m=1}^{|V|} |Model_k(S_m) − S_m|^2
9       end
10      Model_k = argmin_{∀k ∈ k-fold}(err_k)               // keep the best model in terms of err_k
11      mve = (1/k-fold) Σ_{k=1}^{k-fold} err_k
12      Θ[i] = ⟨Model_k, mve⟩
13  end
14  return Θ
In the last algorithm (Algorithm 3), we address the cold-start problem. To solve it, and taking into consideration that QoS values are highly influenced by geographical characteristics (as discussed at the end of Section 3.2.2), we compute the initial QoS values V(s) = {qos_1, … , qos_m} of each new service according to its geographical characteristics. Technically, we check whether the provider of the new service (P(s)) belongs to the initial set of providers of our dataset (line 1). If so, in line 2, we compute the representativeness rate of this provider for each service cluster (α_{c,s}, ∀c ∈ C_S) and then average the cluster-heads (H(c), ∀c ∈ C_S) accordingly. If not, we check whether the AS number of the new service (A(s)) is in our initial set of AS numbers (line 3). If this is the case, we compute the representativeness rate of this AS number for each service cluster and average the initial QoS values V(s) with respect to all cluster-heads. Otherwise, we compute the representativeness rate of the service country (C(s)) and average its initial QoS values. In the case where the geographical characteristics of the new service are not available, we search for the provider of the nearest service (line 8), where D denotes a geodesic distance function, and compute its QoS values as mentioned in line 9. This vector V(s) is then provided as an input to all the trained DAEs. Finally, we average the prediction results to obtain the initial QoS values of the new service (line 11). Note that the same algorithm is used for a new user (V(u) = {qos_1, … , qos_n}), with the appropriate changes.
Algorithm 3: cold-start Algorithm
Inputs: s (new service); C = {C_1, … , C_max} (list of service clusters); H(C) = {H(C_1), … , H(C_max)} (list of cluster-heads); Θ (best training models)
Data: P, A, C (lists of providers, ASNs and countries from the dataset)
1   if P(s) ∈ P then
2       V(s) = (1 / Σ_{c∈C} α_{c,s}) Σ_{c∈C} α_{c,s} H(c),  with α_{c,s} = (1/|c|) Σ_{j=1}^{|c|} 1[P(s) = P(s_j)]
3   else if A(s) ∈ A then
4       V(s) = (1 / Σ_{c∈C} α_{c,s}) Σ_{c∈C} α_{c,s} H(c),  with α_{c,s} = (1/|c|) Σ_{j=1}^{|c|} 1[A(s) = A(s_j)]
5   else if C(s) ∈ C then
6       V(s) = (1 / Σ_{c∈C} α_{c,s}) Σ_{c∈C} α_{c,s} H(c),  with α_{c,s} = (1/|c|) Σ_{j=1}^{|c|} 1[C(s) = C(s_j)]
7   else
8       P(s) = P(argmin_{∀s_i ∈ S} D(s, s_i))               // compute the provider of the nearest service
9       V(s) = (1 / Σ_{c∈C} α_{c,s}) Σ_{c∈C} α_{c,s} H(c),  with α_{c,s} = (1/|c|) Σ_{j=1}^{|c|} 1[P(s) = P(s_j)]
10  end
11  V(s) = (1/|Θ|) Σ_{Model ∈ Θ} Model(V(s))                 // average prediction of the initial QoS values
12  return V(s)                                              // initial vector of the service
(where 1[·] equals 1 if the condition holds and 0 otherwise)
Figure 4 shows an example that illustrates the principles of the proposed approach. Subplot (A) shows an l-by-k grid that covers the different clusters of the SOM map; each line of the map represents a country and involves k neurons, and each neuron corresponds to a subset of services. Subplot (B) shows the initialization process of the neurons' weights for a given country; the neurons are assigned in a self-organized mode. Subplot (C) shows the training phase of the different DAEs related to the learned clusters. Subplot (D) details the cold-start scenario.
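As an illustration of the first branch of Algorithm 3 (lines 1–2), the following sketch computes the provider representativeness rates and the weighted average of the cluster-heads; the data structures and example values are hypothetical.

import numpy as np

def cold_start_vector(new_provider, clusters, cluster_heads):
    # clusters: list of provider-id lists; cluster_heads: list of QoS vectors H(c)
    alphas = np.array([sum(p == new_provider for p in members) / len(members)
                       for members in clusters])
    if alphas.sum() == 0:
        raise ValueError("unseen provider: fall back to AS, country or nearest service")
    heads = np.asarray(cluster_heads, dtype=float)
    return (alphas[:, None] * heads).sum(axis=0) / alphas.sum()

clusters = [["P1", "P1", "P2"], ["P2", "P3"]]                 # hypothetical data
cluster_heads = [np.array([0.2, 0.5]), np.array([0.8, 0.1])]
v0 = cold_start_vector("P1", clusters, cluster_heads)          # then fed to the trained DAEs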
Experimental Evaluation
In this section, we conduct experiments to evaluate the performance of the proposed approach and its different variants (deep autoencoder, clustered deep autoencoder and Top-k clustered deep autoencoder).
(Fig. 4 content) (A) Grid of l × k neurons, where each line represents a country. (B) Initialization of a country line: the pair of services with the maximal geodesic distance (e.g., D(S2, S4) = 25 km) initializes the first and k-th neurons, and any other service S5 is placed at index i = D(S2, S5) × k / (D(S2, S5) + D(S4, S5)). (C) For each cluster, an encoder–decoder model is built and trained by optimizing the squared error loss. (D) Cold-start process: the cluster-heads are computed as the average of their member services, the representativeness rate of the provider of the new service is computed for each cluster, the initial vector of the new service is the rate-weighted average of the cluster-heads, and the returned value is the average of the predictions of all trained models. NB: in this example, the provider of the new service is assumed to belong to the initial set of providers; otherwise, the other steps of Algorithm 3 are followed.
Fig. 4 Illustrative example describing the principles of the proposed approach
The objective is to answer the following questions: (1) How sensitive are our proposed models to the hyper-parameters? (2) What is the impact of the geographical characteristics on the SOM clustering? (3) How does our approach compare to the state-of-the-art methods under different scenarios?
Experimental setup
The experiments were conducted on the MESO@LR platform of the University of Montpellier, France. To this end, we used 4 nodes (14 cores) with 128 GB of RAM. All the learning programs were implemented in Python with TensorFlow. Note that, for the training of the SOM algorithm, we turned to a high-performance implementation called Somoclu 4 , a massively parallel tool for training a batch formulation of self-organizing maps on large datasets [START_REF] Wittek | somoclu: An efficient parallel library for self-organizing maps[END_REF].
Data collection
To evaluate the proposed approach, we conducted experiments on a large-scale real-world Web service QoS repository named WS-Dream, released by [START_REF] Zheng | Investigating qos of real-world web services[END_REF]. This repository contains two datasets. Note that the dataset contains about 26% of missing values, which is a quite high data sparsity rate. This problem, specific to this dataset, may pose the risk of producing an invalid clustering (a cluster of services with only null values). In order to reduce this risk, we considered the following two rules for a given service invocation QoS matrix M_{c,t} = {QoS_{u,s}}_{m×n} (for a selected criterion c and a given time slot t):
1. For each QoS_{s,u}^{c,t} value in a given invocation matrix M_{c,t}, if this value is invalid, then we replace it by the average of the valid QoS_{s,u}^{c,t'} values over all the previous time slots: QoS_{s,u}^{c,t} = (1/(t−1)) Σ_{t'=1}^{t−1} QoS_{s,u}^{c,t'}.
2. For each QoS_{s,u}^{c,t} value in a given invocation matrix M_{c,t}, if this value is invalid and all its previous values (regarding time slots) are also invalid, then QoS_{s,u}^{c,t} = 0.
Other changes were made on the first dataset in order to reduce the missing values in the services' geolocation attributes (AS number, IP address, latitude and longitude). To do this, we used the geolocation information derived from the GeoIP2 5 and IP2Location 6 databases.
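A possible sketch of these two rules on a stack of time-slotted QoS matrices (invalid entries marked as NaN) is given below; it treats the average of rule 1 as taken over the valid previous slots, which is a simplifying assumption of this sketch.

import numpy as np

def fill_invalid(qos_over_time):
    raw = np.array(qos_over_time, dtype=float)     # NaN marks an invalid value
    filled = raw.copy()
    for t in range(raw.shape[0]):
        invalid = np.isnan(raw[t])
        if t == 0:
            filled[t][invalid] = 0.0               # rule 2: no history available
        else:
            valid = ~np.isnan(raw[:t])
            counts = valid.sum(axis=0)
            sums = np.where(valid, raw[:t], 0.0).sum(axis=0)
            hist = np.where(counts > 0, sums / np.maximum(counts, 1), 0.0)  # rules 1 and 2
            filled[t][invalid] = hist[invalid]
    return filled

print(fill_invalid([[[1.0, np.nan]], [[np.nan, np.nan]], [[2.0, 0.5]]]))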
Table 2 shows the different improvements operated on the initial dataset.
Evaluation metrics
To evaluate the performance of the method, we use two well-known metrics frequently used in collaborative filtering: the mean absolute error, MAE_V = (1/|V|) Σ_{(u,s)∈V} |r_{u,s} − r̂_{u,s}|, and the root mean squared error, RMSE_V = ( (1/|V|) Σ_{(u,s)∈V} (r_{u,s} − r̂_{u,s})^2 )^{1/2}, where V represents the validation dataset, r_{u,s} is the actual QoS score for service s given by user u, and r̂_{u,s} is the predicted one.
Since we used k-fold cross-validation, we assess the prediction on the average of MAE and RMSE too. The average MAE is defined as follows:
MAE_AVG = (1 / Σ_{k-fold} |V_{k-fold}|) Σ_{k-fold} (|V_{k-fold}| × MAE_{V_{k-fold}})    (8)
The average RMSE (which is also denoted as mve in step 11 of Algorithm 2) is defined as:
RMSE_AVG = (1 / Σ_{k-fold} |V_{k-fold}|) Σ_{k-fold} (|V_{k-fold}| × RMSE_{V_{k-fold}})    (9)
where V k-f old represents a validation set in both previous equations.
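For clarity, the weighted averaging of Equations 8 and 9 can be sketched as follows (fold contents are assumed to be given as aligned truth/prediction arrays).

import numpy as np

def averaged_metrics(folds):
    # folds: list of (truth, prediction) array pairs, one per validation fold
    total, mae_sum, rmse_sum = 0, 0.0, 0.0
    for truth, pred in folds:
        n = truth.size
        mae_sum += n * np.mean(np.abs(truth - pred))
        rmse_sum += n * np.sqrt(np.mean((truth - pred) ** 2))
        total += n
    return mae_sum / total, rmse_sum / total       # Eq. 8 and Eq. 9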
Effects of training parameters
Autoencoder models involve many hyper-parameters, and setting a proper parameter initialization is a great challenge that requires expertise and extensive trial and error. In our work, we focused on two important hyper-parameters: the size of each layer of the autoencoder and the choice of the activation function of these layers. In the following, we present a series of comparative experiments conducted in order to obtain the optimal parameters of our model.
Impact of autoencoder layer sizes
The size of the common layer (latent factor layer) is a hyper-parameter that we must set before training the autoencoder. As for any feed-forward network, it has been shown that deep autoencoders yield much better compression than corresponding shallow or linear autoencoders [START_REF] Hinton | Reducing the dimensionality of data with neural networks[END_REF]. Thus, additional layers can learn complex representations by approximating any mapping from input to code arbitrarily well. For this reason, and aside from the latent factor layer, we used a three-layer network for both the encoding and decoding networks. This choice is based on the power of three-layer neural networks experimentally demonstrated in [START_REF] Hecht-Nielsen | Theory of the backpropagation neural network[END_REF]. The final architecture of our autoencoder is composed of four fully connected layers (three layers + latent factor layer) for both the encoding and decoding networks.
In order to determine the ideal number of neurons for each layer, many rules-of-thumb have been taken into account [START_REF] Panchal | Behaviour analysis of multilayer perceptrons with multiple hidden neurons and hidden layers[END_REF].
The code size should be: (1) between the size of the input layer and the size of the output layer; (2) two-thirds the size of the input layer, plus the size of the output layer;
(3) less than twice the size of the input layer. We considered these three rules as a starting point for determining an upper bound on this parameter. Figure 5(A) shows the results of four different configurations of the autoencoder layer sizes. As we can see, the configuration that performs best for our 4-layer model is 1024, 512, 256 and 128. Note that this configuration depends on the dimension of the input of the autoencoder model (4500 input services). For the clustered autoencoder, this configuration is rescaled by a rule of three according to the input size (which represents the cluster size).
Impact of activation functions
The second important hyper-parameter used in the training of our autoencoder is the activation function. To examine the influence of this parameter, we empirically evaluated some of the most popular choices in deep learning: the sigmoid as a standard activation function, and exponential linear units (ELU) [START_REF] Clevert | Fast and accurate deep network learning by exponential linear units (elus)[END_REF], the standard rectified linear unit (ReLU) [START_REF] Sun | Deeply learned face representations are sparse, selective, and robust[END_REF] and the leaky rectified linear unit (Leaky ReLU) [START_REF] Xu | Empirical evaluation of rectified activations in convolutional network[END_REF] as non-standard activation functions. The effect of the different activation functions on the RMSE training metric is shown in Figure 5(B). In order to accelerate the convergence speed, we opted for a non-saturating activation function: Leaky ReLU.
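A minimal Keras sketch of the retained configuration (1024-512-256-128 encoder, mirrored decoder, Leaky ReLU) is given below; the optimizer, loss and API usage are assumptions of this sketch, not the authors' exact code.

import tensorflow as tf

def build_dae(n_services, sizes=(1024, 512, 256, 128), alpha=0.2):
    inputs = tf.keras.Input(shape=(n_services,))
    x = inputs
    for units in sizes:                            # encoder: 1024 -> 512 -> 256 -> 128
        x = tf.keras.layers.Dense(units)(x)
        x = tf.keras.layers.LeakyReLU(alpha)(x)
    for units in reversed(sizes[:-1]):             # decoder mirrors the encoder
        x = tf.keras.layers.Dense(units)(x)
        x = tf.keras.layers.LeakyReLU(alpha)(x)
    outputs = tf.keras.layers.Dense(n_services)(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse")    # squared reconstruction error
    return model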
Results and discussion
In deep learning, it has been experimentally shown that classical autoencoder models can degenerate into identity networks and fail to learn the latent relationships within the data [START_REF] Glorot | Understanding the difficulty of training deep feedforward neural networks[END_REF]. To tackle this issue, we used a variant of the deep autoencoder that corrupts the inputs and trains the model to denoise the final outputs.
To enhance the robustness of the model, we adopted a masking-noise technique that sets a random fraction of the input to zero. We explore four noise rates: 0% (i.e., no noise), 20%, 50% and 80%, applied to these models with the whole dataset as input. Note that the same training process is applied in the experiments to each clustered deep autoencoder after the SOM clustering phase of all input data.
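The masking-noise corruption itself can be sketched as follows, reusing the model sketch above; the batch size and epoch count are illustrative.

import numpy as np

def train_denoising(qos, noise_rate=0.2, epochs=100):
    model = build_dae(qos.shape[1])                # model sketch given above
    mask = np.random.default_rng(0).random(qos.shape) < noise_rate
    corrupted = np.where(mask, 0.0, qos)           # masking noise: zero out a fraction of the inputs
    # a noise_rate of 0.0 recovers the plain (non-denoising) deep autoencoder
    model.fit(corrupted, qos, epochs=epochs, batch_size=64, verbose=0)
    return model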
Deep autoencoder
To extract the latent factors from the dataset, we used a 4-layer encoder network with the following layer sizes: 1024-512-256-128. The same symmetric configuration is used for the decoder network to reconstruct the input data.
To ensure more robust prediction results, we conducted our experiments with multiple folds according to the k-fold cross-validation principle. The experiments are performed using five partitions (folds) with different density values. Each partition represents a given percentage (density%) of the available dataset; the density represents the ratio between the training and the validation datasets. We explore three density ratios: 20%, 50% and 80%. Finally, the validation results are averaged over the five folds through the application of Equations 8 and 9. In Figure 6(A), we illustrate the average RMSE/MAE after five runs for three different data densities. For each density value, we applied four noise variations on the input data. Note that the QoS prediction accuracy increases continuously when the matrix density increases, and degrades when the noise rate increases.
In Figure 6(B), we can see that the training converged for all configurations after fewer than 100 epochs. We can also observe stability (no increase after convergence) and no overshoot (no increase before convergence).
Clustered deep autoencoder
In order to provide a better prediction accuracy, we used a clustered autoencoder. This autoencoder variant is designed after the partitioning of all services into eight clusters that are homogeneous with regard to their geographical characteristics. As mentioned above, we used the same architecture as our deep autoencoder (4-layer configuration: the same layer sizes, with Leaky ReLU activation functions).
The main difference is that the size of the four layers composing each clustered autoencoder depends on the size of its input (the number of services in the cluster). The layer sizes are rescaled by a rule of three from the initial configuration (1024-512-256-128) according to the cluster size (input size). We thus performed eight trainings, one for each cluster. We applied the same process to each cluster under various input data densities (20%, 50% and 80%). For each density, we applied a set of noise variations (0%, 20%, 50% and 80%); thereafter, we averaged the validation results over the clusters. As shown in Figure 7(A), the average RMSE/MAE for the different densities increases continuously when the rate of corrupted input data increases (except when the density is 20%). In Figure 7(B), we notice that the training converges a little more slowly than for the deep autoencoder model.
SOM Initialization
This subsection demonstrates the performance of SOM as a method for geographical clustering, compared to alternative methods such as k-means with Euclidean distance, hierarchical clustering, etc. As already noted, the final results of the clustered autoencoder (see Figure 7(A)) are the average over the training of eight different autoencoders, each built according to its input vector size (number of services). The histogram depicted in Figure 8(A) shows the detailed results of training the autoencoder of each cluster separately: it reports the RMSE and MAE of the eight autoencoders, with the average RMSE and MAE plotted as reference lines, and the x-axis corresponding to the input size of each autoencoder. As we can see, the best results are obtained for the clusters with the highest numbers of services (1803 and 1749 services); their errors are far below the average. The third- and fourth-ranked autoencoders correspond to the clusters with the smallest numbers of services (60 and 73, respectively). For the remaining clusters, despite the fact that they contain a considerable number of services, the errors are significantly higher than the average.
The major significance of these results lies in the manner in which we initialize the code-book. We opted for a Kohonen feature map network of 80 × 80 neurons to cluster the 4500 services. Each line of this map contains 80 neurons and represents one country of our dataset (in the WS-Dream dataset we detected 81 different countries, two of which were merged). The size of each neuron vector is equal to the number of users available in our dataset (i.e., 142 users). Each line of the map, for a selected country i, semantically represents the maximal geodesic distance (according to the longitude/latitude values) between all pairs of services of this country i (i.e., argmax D(s_1, s_2), ∀s_1, s_2 ∈ Country_i), where D denotes a geodesic distance function. The first neuron is initialized with the vector of s_1 divided by the number of services composing the selected country, and the same is done for the last neuron with the vector of s_2. The remaining services are distributed over the remaining neurons according to their geodesic distances. Note that, if several services share the same geographic coordinates, the corresponding neuron receives the average vector of those services. Finally, the services without geographical information (i.e., latitude/longitude) are randomly dispatched over the neurons of their corresponding country line.
To support our initialization choices (Figure 8(A)), other experiments were performed with a random initialization of the code-book in the SOM algorithm. As we can see in Figure 8(B), the average RMSE and MAE results are worse than those obtained with the SOM clustering based on the geographical-characteristics initialization.
Other interesting observations on our proposed initialization method are the following: -Firstly, in terms of RMSE and MAE, we observe that about 80% of all services are grouped in the top-3 clusters with the geographical-characteristics initialization (Figure 8(A)), against only 69% with a random initialization (Figure 8(B)). -Secondly, we observe a directly proportional relationship between the training results and the data sparsity of each cluster: the denser the input data, the better the results. Table 3 summarizes the sparsity ratio (column two) of each cluster (column one), knowing that the sparsity of the whole dataset is about 23%. -The third important observation concerns the percentage of missing geolocation information. We observed that most of the services without geographical characteristics are grouped in the last clusters. The third column of Table 3 shows the missing rate according to the cluster sizes. -Finally, our proposed initialization mechanism ensures a deterministic clustering result.
Based on this analysis, and in order to preserve the topological properties of the dataset and reduce the effect of QoS sparsity, we take advantage of the SOM algorithm to create the appropriate clusters with our proposed initialization mechanism.
Table 4 summarizes the results in terms of RMSE, MAE and TIME for the three architectures: a deep autoencoder, a clustered autoencoder and a clustered* autoencoder restricted to the top-3 clusters. As already mentioned, the results compare the three approaches that we have implemented, so they are grouped accordingly. The Deep AE group concerns the results on test data after training the deep autoencoder without any clustering. The Clustered AE group concerns the average results over the eight clusters of the deep autoencoder. The third group (Clustered* AE) presents the average testing results on the top-3 best clusters (about 80% of all services are grouped in the top-3 clusters). All the experiments are conducted with three different data density values, and by adding a random noise to the input data for each of them. Note that the technique of adding noise, as specified in [START_REF] Neelakantan | Adding gradient noise improves learning for very deep networks[END_REF], not only helps us avoid over-fitting but can also result in a lower training loss (especially when the density is about 20%). The results clearly demonstrate the interest of the preliminary clustering phase: the method with SOM clustering behaves better than the first method, where the prediction of QoS values is performed on the whole dataset. Consequently, using the deep autoencoder on the top-3 clusters outperforms the other methods. We also note that, as the data density increases, the prediction errors become smaller.
A detailed version of experiments operated on the whole dataset according to k-fold cross-validation principle can be found at the following links:
-Deep Autoencoder: http://bit.ly/2JrIGSs -Clustered deep autoencoder: http://bit.ly/2XKRMxO
We additionally observe that not only does the prediction error decrease with the clustered AE, but the training time also improves significantly from the deep autoencoder method to the clustered AE method.
Performance Comparison
With the aim of evaluating the advantages of our method and its variants, we compare them with the following state-of-the-art baseline QoS prediction methods:
- UPCC [START_REF] Zhao | User-based collaborative-filtering recommendation algorithms on hadoop[END_REF]: user-based CF using PCC. This method is a user-based model using the Pearson Correlation Coefficient for the recommendation and prediction of Web services.
- IPCC [START_REF] Sarwar | Itembased collaborative filtering recommendation algorithms[END_REF]: item-based CF using PCC. This method uses similar services for the QoS prediction, using the Pearson Correlation Coefficient.
- ARIMA [START_REF] Godse | Automating qos based service selection[END_REF]: this method is often considered as the baseline method. It is a statistical method adapted to Web service QoS prediction.
- WSRec [START_REF] Zheng | Wsrec: A collaborative filtering based web service recommender system[END_REF]: a hybrid CF algorithm that combines a user-based prediction model [START_REF] Zhao | User-based collaborative-filtering recommendation algorithms on hadoop[END_REF] with an item-based prediction model [START_REF] Sarwar | Itembased collaborative filtering recommendation algorithms[END_REF].
- Lasso [START_REF] Wang | A spatial-temporal qos prediction approach for time-aware web service recommendation[END_REF]: this approach optimizes the recommendation problem by adapting the lasso penalty function.
- Country-clustered Autoencoder [START_REF] Smahi | An encoder-decoder architecture for the prediction of web service qos[END_REF]: this is our first approach to predict the QoS of Web services based on historical data. We used a simple (not deep) autoencoder architecture (with only one hidden layer) to predict the QoS values on sets of clusters based on the country ID. All the experiments of this previous approach have been rerun in line with the current execution parameters (number of clusters 7 and number of folds).
We note that our work is not compared with QoS prediction methods that use a Top-k-based model (since they do not consider the entire dataset).
Table 5 presents the MAE and RMSE results of different prediction methods on response-time criterion when the training set densities take two different values: 80% and 50%. From these results, we notice the following observations:
1. The prediction accuracy of Lasso is better than that of the Country AE, WSRec and ARIMA methods; 2. The Deep AE method is slightly better than the Lasso method for the two density values. However, when the rate of corrupted data increases, the performance of our Deep AE decreases compared to the Lasso method; 3. Compared with the Lasso method, the Clustered AE (with no noise) obtains up to 9.6% and 11% improvements in prediction accuracy when the data density is 80% and 50%, respectively. Furthermore, the performances are almost identical when the input noise is about 50%. 4. We clearly remark that the top-3 clustered deep autoencoders outperform the other methods in all cases (density and noise variations), even when the noise rate is around 80%. 5. Globally, when the noise is lower than or equal to 20%, the results of the three variants of our proposed method outperform the other prediction approaches in all cases in terms of the RMSE metric (except for the Deep AE when the density is about 80% and the noise is equal to 20%), and in most cases in terms of the MAE metric.

Cold-start situation

To handle the cold-start problem, we adopt the strategy described in Algorithm 3. More specifically, we estimate the initial QoS values of a new service (or new user) as follows. First, we use the geographical characteristics of the service to determine the most representative cluster (in terms of country, autonomous system, or provider); this step is performed by using the first part of Algorithm 3 (from line 1 to line 9). Then, we use the new QoS values (given by the first step) as an input of all trained DAEs and take the average of the returned results (line 11). Note that, in the learning phase, we use about 60% of the entire dataset as examples to fit the parameters of the SOM classifier. In the testing phase, we use about 40% of the examples to assess the performance of our fully-specified classifier.
As depicted in Table 6, the RMSE, MAE, and testing TIME show the positive impact of the contextual characteristics on the resolution of the cold-start problem. Indeed, we do not only check the most representative cluster in terms of geographical characteristics (step 1), but we also use the already-trained DAEs (obtained with Algorithm 2) to average the returned results. We show that the application of the DAEs (step 2) significantly improves the results.
Threats to validity
Following the recommendations in [START_REF] Wohlin | Experimentation in software engineering[END_REF], several elements about the validity of the experiments should be discussed. We consider the threats to the internal and external validity of our study:
- Internal validity threats are intrinsically tied to experimental realism. To select the different parameters of our models, we tried to be as objective as possible. However, despite the fact that we conducted several pre-trainings to select the best configuration of those parameters, different values might lead to different results. For the deep autoencoder model, since we split the dataset into training and testing sets with varying proportions and use k-fold cross-validation, this threat is alleviated. For the clustered deep autoencoder model, and in addition to the previous remarks, internal validity threats are mitigated since the number of neurons is proportional to the size of the clusters.
- External validity threats concern the ability to generalize our findings and conclusions to other contexts. In this study, we performed our experiments on a large dataset (WS-Dream), which enables us to train our prediction method correctly. To the best of our knowledge, this is the only large Web service QoS dataset publicly available. Although we used only one dataset, we are quite confident that the impact of using another dataset is reduced by the use of k-fold cross-validation, which operates on multiple data densities. In addition, the use of denoising autoencoders, which corrupt the data by randomly forcing some of the input values to zero, decreases the external validity risk.
Related Works
In [START_REF] Zhang | Deep learning based recommender system: A survey and new perspectives[END_REF][START_REF] Singhal | Use of deep learning in modern recommendation system: A summary of recent works[END_REF][START_REF] Jalili | Evaluating collaborative filtering recommender algorithms: A survey[END_REF], the authors draw up detailed surveys and new perspectives on deep-learning-based recommendation systems. The first two articles cover all categories of recommendation systems, whereas the third one focuses only on collaborative filtering recommendation algorithms.
In this paper, we focused on the Collaborative Filtering methods using a deep learning mechanism and/or exploiting the context information (and more particularly geographical information) for clustering Web services. For this reason, we present recent related works according to these two different aspects:
Clustering-based Methods in Collaborative Filtering
In the literature, several clustering-based works have been carried out [START_REF] Tang | Collaborative web service quality prediction via exploiting matrix factorization and network map[END_REF][START_REF] Chen | Your neighbors alleviate coldstart: On geographical neighborhood influence to collaborative web service qos prediction[END_REF][START_REF] Chen | Exploiting web service geographical neighborhood for collaborative qos prediction[END_REF][START_REF] Chen | Your neighbors are misunderstood: On modeling accurate similarity driven by data range to collaborative web service qos prediction[END_REF][START_REF] Ma | Variation-aware cloud service selection via collaborative qos prediction[END_REF][START_REF] Liu | A personalized clustering-based and reliable trust-aware qos prediction approach for cloud service recommendation in cloud manufacturing[END_REF]. Those studies are trying to discover a set of clusters based on the neighborhood characteristics of data.
In [START_REF] Ma | Variation-aware cloud service selection via collaborative qos prediction[END_REF], the authors focus on Cloud Services Selection according to users' non-functional requirements. Their method is based on time series QoS data. In order to identify the user clusters, they use the double Mahalanobis distances to improve the similarity measurement of QoS cloud models during multiple periods. They assume two challenges: Challenge in exactly identifying the neighboring users for a current user and Challenge in selecting the appropriate cloud service with optimal QoS meeting user's period preferences.
In the work presented in [START_REF] Liu | A personalized clustering-based and reliable trust-aware qos prediction approach for cloud service recommendation in cloud manufacturing[END_REF], the authors assume that most existing CF approaches ignore the influence of task similarity among different users on QoS prediction results and vice versa (i.e. the influence of similar users on different tasks). To address this problem, the authors proposed a novel clustering-based and trust-aware method for personalized and reliable QoS values prediction. For that, they combine two contributions to make a more personalized QoS prediction. For the first one, they develop a clustering-based algorithm to identify a set of similar users from the point of view of task similarity. The tasks similarities are computed by incorporating both explicit textual information and rating information as well as implicit context information. For the second one, they made assumptions that the QoS values may be contributed by unreliable users. For that, they design a trust-aware CF approach by merging local and global trust values, to reconstruct trust network of the clustered users. A series of experiments on two real-world datasets were conducted to evaluate the proposed approach.
The work presented in [START_REF] Chen | Exploiting web service geographical neighborhood for collaborative qos prediction[END_REF] aims to improve QoS prediction accuracy. To this end, the authors considered the factor of geographical neighborhood to perform both matrix factorization and service clustering. The authors built a set of service clusters based on geo-neighbor similarity. The clusters are identified using both a bottom-up hierarchical neighborhood and contextual attributes (like the country and the service provider). Accordingly, the authors leveraged a neighborhood-based term to decompose the QoS matrix and predict the missing values.
Deep Learning Methods in Collaborative Filtering
Recently, deep learning-based models are rapidly developed in the era of recommendation. A wide range of Collaborative Filtering approaches have been proposed like in [START_REF] Kanagawa | Cross-domain recommendation via deep domain adaptation[END_REF][START_REF] Xiong | Deep hybrid collaborative filtering for web service recommendation[END_REF][START_REF] Bai | Dltsr: A deep learning framework for recommendations of long-tail web services[END_REF][START_REF] Wang | Online reliability prediction via motifs-based dynamic bayesian networks for service-oriented systems[END_REF]. We focus next on two works that we consider as very related to our work.
In [START_REF] Xiong | Deep hybrid collaborative filtering for web service recommendation[END_REF], the authors proposed a hybrid deep learning approach by combining Matrix Factorization with Content filtering to improve Web service recommendation. To predict the probability of a particular service to be invoked by a particular web application or mashup (user in our case), they used two feed-forward neural networks for both Collaborative filtering and Content filtering components. After that, these two components are combined by concatenating their last hidden layers to compose a third multi-layer perceptron to learn the interaction between them as well as their functionalities (it is well known that the interaction treated with Collaborative filtering and Content filtering is about multiple factors such as invocation history and functionalities). In order to evaluate their method, the authors conducted several experiments using a real-world Web service dataset. However, despite the fact that for those experiments the training data is randomly selected under different percentages of training data (20% to 80% with a step of size 10%), they kept only 20% in test data for each experiment round. The second remark is about the lack of information on their choices of the number of layers and the size of each of them and how those choices can affect the final results. A final remark is about the learning time, which is relatively high due to the fact that they try to learn three different neural networks.
In [START_REF] Bai | Dltsr: A deep learning framework for recommendations of long-tail web services[END_REF], the authors used an autoencoder architecture to learn low-dimensional representations and perform feature extraction. To this end, they build a deep learning framework to recommend long-tail Web services. To tackle the problem of the low quality of the content descriptions, they used a stacked denoising autoencoder to perform feature extraction for long-tail Web services with severe sparsity in both their content description and their historical usage data. The proposed approach is tested on a real-world dataset and compared with several state-of-the-art baselines. In summary, this framework can be considered as a content-based recommendation system. However, from the viewpoint of the matrix factorization and collaborative filtering paradigm, the interaction between the applications (mashups) and the services is not addressed.
Positioning of the approach
In contrast to the aforementioned works, our contribution leverages both contextual attributes and QoS data to perform clustering. By doing so, we limit the chances of having sparse or small-sized clusters. More specifically, the initialization of the cluster-heads (or weights of neurons) is performed using contextual attributes; in addition, this initialization is accomplished in a self-organized manner (i.e., the neighboring neurons of the map are also neighbors in the physical network). Besides that, the training of the SOM model exploits the QoS data (instead of contextual attributes) to update the weights of the cluster-heads.
To the best of our knowledge, our contribution is the first work that designs a deep autoencoder architecture (and its denoising autoencoder variant) for Web service QoS prediction.
Conclusion
QoS is a key factor for building successful service-oriented applications. Most existing QoS prediction works do not handle data sparsity, the cold-start problem, or the contextual information of services, and are thus likely to perform worse in real scenarios.
In this paper, we tackled the aforementioned issues by adopting a DAE architecture. To optimize prediction capabilities, the hyper-parameters of the DAE are tuned through cross-validation. We also introduced a SOM-based clustering step as a pre-processing to the DAE learning. This step aims to reduce QoS matrix sparsity and improve prediction accuracy. We conducted a set of experiments to evaluate the performance of this prediction method. We explored many variants of this method and compared their performance with the state-of-the-art methods. These experiments showed that our method outperforms the existing methods in terms of accuracy.
As perspectives to this work, we would like to investigate first the use of alternative architectures of deep learning models like stacked autoencoders or deep belief networks. These architectures may ensure more accurate prediction results than the autoencoders used in this work. Second, we plan to use deep learning in predicting Web service Quality of Experience (QoE). We intend to use some powerful text mining tools to analyse the users' feedbacks in order to estimate QoE scores for Web services. We argue that the combination of QoS and QoE will contribute to improve Web service recommendation systems.
Fig. 1 Deep Autoencoder architecture.
Fig. 2 Kohonen feature map network. (The connections between the input and output layers indicate the relationships between the input and output vectors.)
Algorithm 1: Data Clustering According to the Batch SOM Algorithm
Inputs: S = {s_1, … , s_n}, set of n service vectors
Data: W = {w_1, … , w_k}, set of service weight vectors
1  Initialize all w ∈ W
2  for epoch : 1 → N_epochs do
3      foreach s ∈ S do
4          compute its best matching neuron
Fig. 5 Training RMSE per minibatch on the second dataset, with learning rate = 0.05, epoch = 20, for an optimal parameter selection. (A) Influence of layer sizes, (B) influence of activation functions.
Fig. 6 Results with a training/testing set of (density%)/(100 − density%) for the deep autoencoder prediction (under various noises and densities). (A) The average RMSE and MAE results, and (B) the training performance graph.
Fig. 8 Detailed results of the clustered autoencoder with density = 80% and noise = 0%. Clustering results: (A) based on geographical-characteristics initialization, (B) with random initialization.
(Fig. 4 residue, panels (C) and (D)) QoS prediction performance: for each cluster, an encoder–decoder model is built and trained by optimizing the squared error loss. Cold-start process: the best model of each cluster is trained, and the initial QoS values of the new service are computed.
Table 1 Information details of the WS-Dream repository (RT: response time, TP: throughput)
Table 1 summarizes some characteristics of the WS-Dream repository data. The first dataset contains 1,974,675 invocations from 339 users on 5,825 Web services. The second one contains about 30 million Web service invocation records collected from the invocation of 4,500 services by 142 users at 64 time intervals; each time interval t takes 15 minutes. The invocations, in both datasets, consider the response time and throughput criteria (c ∈ {rt, th}).
4 https://github.com/peterwittek/somoclu
Table 2 The results of different improvements on the dataset
Attributes            % of missing values (before)    % of missing values (after)
QoS values            26%                              23%
AS number             21%                              15%
IP Address            24%                              13%
Latitude/Longitude    22%                              15%
(Fig. 8 data) Per-cluster RMSE and MAE values of the eight autoencoders: panel (A) uses the geographical-characteristics initialization (cluster sizes 1803, 1749, 60, 73, 196, 356, 99, 164) and panel (B) a random initialization (cluster sizes 1557, 1382, 144, 420, 50, 552, 91, 304).
Table 3 Information details of the ws-dream repository
Cluster size    % of sparsity    % missing geoloc. info.
1803            22,34%           13,81%
1749            23,10%           9,43%
60              19,30%           20,00%
73              24,44%           13,70%
196             28,06%           32,14%
356             26,63%           33,99%
99              22,53%           15,15%
164             26,69%           24,39%
Table 4 Accuracy comparison of methods on the RT criterion (training time is specified in minutes)
Method          Noise    Density=20%                 Density=50%                 Density=80%
                         RMSE    MAE     TIME        RMSE    MAE     TIME        RMSE    MAE     TIME
Deep AE         0%       3,318   1,819   121,2       2,761   1,407   144,5       2,562   1,270   194,2
                20%      3,229   1,717   97,98       2,819   1,501   128,9       2,616   1,367   167,6
                50%      3,554   1,972   92,56       3,026   1,681   115,8       2,807   1,511   143,4
                80%      4,090   2,337   94,92       3,575   2,057   108,4       3,310   1,894   131,1
Clustered AE    0%       2,852   1,571   53,74       2,558   1,383   69,29       2,325   1,188   85,34
                20%      2,765   1,514   54,16       2,581   1,398   70,20       2,379   1,236   86,26
                50%      2,977   1,647   54,34       2,743   1,501   70,30       2,577   1,380   86,12
                80%      3,173   1,724   53,97       2,981   1,612   69,68       2,862   1,527   85,40
Clustered* AE   0%       1,908   0,840   42,20       1,762   0,771   54,70       1,700   0,748   66,90
                20%      1,997   0,907   42,25       1,806   0,802   55,10       1,731   0,771   67,90
                50%      2,087   0,936   42,32       1,917   0,858   55,20       1,829   0,822   68,00
                80%      2,224   0,956   42,31       2,128   0,945   55,10       2,071   0,930   67,50
Table 5 Performance comparisons of prediction methods on RT with two data density values
Method                        Density=50%           Density=80%
                              RMSE     MAE          RMSE     MAE
UPCC                          3,034    1,470        3,032    1,467
IPCC                          2,951    1,396        2,925    1,372
ARIMA                         3,401    1,236        2,986    1,028
WSRec                         2,945    1,380        2,925    1,372
Lasso                         2,872    1,021        2,572    0,893
Country AE                    3,825    1,892        2,803    1,250
Deep AE       noise = 0%      2,761    1,407        2,562    1,270
              noise = 20%     2,819    1,501        2,616    1,367
              noise = 50%     3,026    1,681        2,807    1,511
              noise = 80%     3,575    2,057        3,310    1,894
Cluster AE    noise = 0%      2,558    1,383        2,325    1,188
              noise = 20%     2,581    1,398        2,379    1,236
              noise = 50%     2,743    1,501        2,577    1,380
              noise = 80%     2,981    1,612        2,862    1,527
Cluster* AE   noise = 0%      1,762    0,771        1,700    0,748
              noise = 20%     1,806    0,802        1,731    0,771
              noise = 50%     1,917    0,858        1,829    0,822
              noise = 80%     2,128    0,945        2,071    0,930
Table 6 Cold-start problem (time is specified in seconds)
Cold-Start Steps    RMSE     MAE      TIME
Step 1              3.877    1.951    1.201
Step 2              3.010    1.420    2.194
The autoencoder is called linear if the transfer functions are linear, otherwise it is non-linear.
An AS is either a single network or a group of networks that is controlled by a common network administrator (or group of administrators) on behalf of a single administrative entity (university, a business enterprise, etc.)
Maxmind GeoIP2 Geolocational Databases. Retrieved on May 2019 from http://dev.maxmind.com/geoip/geoip2/geolite2/
IP2Location LITE Databases. Retrieved on May 2019 from http://lite.ip2location.com
For this work, the clustering is performed on the country ID. In order to have only eight clusters, we grouped some of the countries into the same cluster.
Acknowledgment
This work has been performed with the support of the High Performance Computing Platform MESO@LR, financed by the Occitanie / Pyrénées-Méditerranée Region, Montpellier Mediterranean Metropole and the University of Montpellier, France. |
03923857 | en | [
"math.math-na",
"phys.phys.phys-comp-ph"
] | 2024/03/04 16:41:20 | 2023 | https://hal.science/hal-03923857/file/c3.pdf | Aymen Laadhari
email: [email protected]
Ahmad Deeb
Numerical Approach Based on the Composition of One-Step Time-Integration Schemes For Highly Deformable Interfaces
In this work, we propose a numerical approach for simulations of large deformations of interfaces in a level set framework. To obtain a fast and viable numerical solution in both time and space, temporal discretization is based on the composition of one-step methods exhibiting higher orders and stability, especially in the case of stiff problems with strongly oscillatory solutions. Numerical results are provided in the case of ordinary and partial differential equations to show the main features and demonstrate the performance of the method. Convergence properties and efficiency in terms of computational cost are also investigated.
INTRODUCTION
We are interested in numerical solutions of strongly coupled systems of PDEs involving highly nonlinear membrane forces, see e.g. [START_REF] Laadhari | An operator splitting strategy for fluid-structure interaction problems with thin elastic structures in an incompressible Newtonian flow[END_REF][START_REF] Laadhari | Exact Newton method with third-order convergence to model the dynamics of bubbles in incompressible flow[END_REF]. For such problems with high nonlinearities, fine meshes and spatial discretizations by highorder finite elements are required [START_REF] Doyeux | Simulation of two-fluid flows using a finite element/level set method. Application to bubbles and vesicle dynamics[END_REF]. Therefore, numerical time integrations with higher orders are also necessary to allow viable numerical approximations. In this preliminary work, we present the formalism in the case of the level set problem, where we seek to produce solutions with a higher-order time approximations.
Different temporal integration techniques with high order approximations exist to study dynamical systems. Popular numerical methods are those associated with discrete flow, where solutions are sought over discrete times using a numerical flow composition [START_REF] Ernst Hairer | Solving Ordinary Differential Equations I: Nonstiff Problems[END_REF]. Fixed or adapted time steps were used. In the existing literature, several studies on continuous flows have investigated the Borel-Padé-Laplace integrator for stiff and non-stiff problems. See, for example, [START_REF] Ernst Hairer | Solving Ordinary Differential Equations II: Stiff and Differential-Algebraic Problems[END_REF][START_REF] Deeb | Comparison between Borel-Padé summation and factorial series, as time integration methods[END_REF]. Indeed, an integrator calculates an approximation over a continuous time interval and has the ability to increase the order of approximation as much as necessary by changing a single parameter in the integrator [START_REF] Deeb | Performance of Borel-Padé-Laplace integrator for the solution of stiff and non-stiff problems[END_REF].
To allow computational savings and improve precision while maintaining numerical accuracy, adaptive timestepping strategies are commonly employed where the level of adaptability may depend on the rate of change between consecutive solutions [START_REF] Guo | A novel adaptive crank-nicolson-type scheme for the time fractional allen-cahn model[END_REF] or on some error estimates [START_REF] Ernst Hairer | Solving Ordinary Differential Equations I: Nonstiff Problems[END_REF][START_REF] Ernst Hairer | Solving Ordinary Differential Equations II: Stiff and Differential-Algebraic Problems[END_REF]. However, the computational efficiency depends on the order of the scheme and how the error estimate is computed. Hence the need for high-order discrete flow to improve numerical efficiency, especially for stiff problems. For a given numerical approximation, the convergence depends on the error in time and in space. Although the adaptive meshing techniques allow a better precision of the numerical solution, the global error estimate will be limited by the order of the numerical scheme in time. Thus, it is important to increase the order of approximation in time to allow high-order of precision.
One can distinguish one-step and multi-step methods based on the number of previous numerical solutions used in the temporal discretization of a differential equation. For a detailed discussion of the classical theory of multi-step methods as well as the properties of order, convergence, stability and symmetry, we refer the reader to [START_REF] Ernst Hairer | Solving Ordinary Differential Equations II: Stiff and Differential-Algebraic Problems[END_REF][START_REF] Butcher | Numerical Methods for Ordinary Differential Equations[END_REF][START_REF] Hairer | Geometric Numerical Integration: Structure-Preserving Algorithms for Ordinary Differential Equations[END_REF]. Among the most used schemes for stiff problems with oscillatory terms, we can mention the implicit linear multi-step methods based on the backward difference formulas (BDF), whose stability is limited to lower orders [START_REF] Iserles | A First Course in the Numerical Analysis of Differential Equations[END_REF][START_REF] Curtiss | Integration of stiff equations[END_REF][START_REF] Butcher | Coefficients for the study of Runge-Kutta integration processes[END_REF][START_REF] Butcher | A history of Runge-Kutta methods[END_REF]. Indeed, only the first-order (implicit Euler) and second-order schemes are A-stable. The BDF schemes of order 3 to 6 exhibit a weaker stability property, called A(α)-stability, and formulas of order greater than 6 are unstable. A generalization of BDF using a second derivative allows one to obtain implicit stable schemes up to order 10, see e.g. [START_REF] Ernst Hairer | Solving Ordinary Differential Equations II: Stiff and Differential-Algebraic Problems[END_REF]. Runge-Kutta methods, referred to as RK, are one-step schemes [START_REF] Runge | Ueber die numerische Auflösung von differentialgleichungen[END_REF]. To cope with the lack of A-stability of explicit RK methods, which are not suitable for solving stiff equations, implicit and stable RK methods have also been developed. We refer the reader to [START_REF] Ernst Hairer | Solving Ordinary Differential Equations II: Stiff and Differential-Algebraic Problems[END_REF][START_REF] Butcher | Numerical Methods for Ordinary Differential Equations[END_REF] for a note on A-stability and the Gauss, Radau IIA and Lobatto IIIA, IIIB and IIIC schemes. Composition methods have emerged in the literature, with several variants, as a powerful numerical tool based on the composition of lower-order basic time-integration methods with the aim of increasing the order of approximation.
The aim of this work is to provide a higher-order approximation of the level set solution. Various higher-order finite element approximations are used and allow us to increase the spatial precision. Since the temporal error dominates when low-order schemes are used, composition methods are applied to simple, low-order discrete flows to increase the order of approximation in time, while keeping a reasonable computational cost. Numerical examples are presented to validate and show the main characteristics of the method in the cases of the Lotka-Volterra differential system and the level set advection problem. This paper is organized as follows. Section 1 presents the preliminary concepts and mathematical setting. In Section 2, we present the composition methods for the implicit first-order accurate backward Euler scheme and the second-order accurate Heun method. In Section 3, we provide an assessment of the composition technique through numerical examples.
MATHEMATICAL FORMULATION
Level set problem
Let T > 0 and d = 2 be respectively the simulation period and the spatial dimension. For any time t ∈ (0, T ), the interface is denoted by Γ(t). Its deformations are described implicitly as the iso-value zero of a level set function ϕ. The interface is assumed smooth enough and is enclosed in a larger domain Λ so that
Γ(t) = { x ∈ Λ : ϕ(t, x) = 0 }.
Given a velocity field u, the evolution of the level set function satisfies the advection equation:
∂ t ϕ + u • ∇ϕ = 0, in (0, T ) × Λ. (1)
Appropriate initial and boundary conditions are considered: ϕ = ϕ b on (0, T ) × Σ -and ϕ(0) = ϕ 0 in Λ, where
Σ -= {x ∈ ∂ Λ : u • ν(x) < 0}
is the inflow boundary and ν denotes the unit normal exterior vector to Λ. For an initial internal domain Ω 0 , ϕ is initialized as a signed distance function to Γ(t = 0) so that |∇ϕ 0 | = 1:
ϕ_0(x) = inf{ |y - x| ; y ∈ ∂Ω_0 } if x ∉ Ω_0, and ϕ_0(x) = -inf{ |y - x| ; y ∈ ∂Ω_0 } otherwise.
However, the signed distance property is not preserved when solving (1), thus leading to numerical instabilities if the gradient of the level set function becomes either very small or very large, in particular in the vicinity of the interface or on ∂Λ [START_REF] Osher | Level Set methods: An overview and some recent results[END_REF]. To restore the signed distance property, an auxiliary redistancing problem is commonly solved while maintaining the zero-level set position, to avoid the loss of mass characterizing Eulerian methods. Higher-order methods and the use of fine meshes in the vicinity of the interface can help enforce the local mass conservation, see for example [START_REF] Doyeux | Simulation of two-fluid flows using a finite element/level set method. Application to bubbles and vesicle dynamics[END_REF][START_REF] Laadhari | Implicit finite element methodology for the numerical modeling of incompressible two-fluid flows with moving hyperelastic interface[END_REF][START_REF] Laadhari | Fully implicit finite element method for the modeling of free surface flows with surface tension effect[END_REF].
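As a concrete illustration of the initialization, the minimal sketch below builds ϕ_0 as a signed distance function on a uniform grid; the circular interface, its radius and the grid resolution are assumptions borrowed from the test case of the last section, not prescriptions of the method.

    import numpy as np

    # Uniform grid on Lambda = [0, 1]^2 (resolution chosen arbitrarily for the sketch)
    n = 101
    x = np.linspace(0.0, 1.0, n)
    X, Y = np.meshgrid(x, x, indexing="ij")

    # Signed distance to a circular interface of radius R centered at (xc, yc):
    # negative inside Omega_0, positive outside, so that |grad phi_0| = 1.
    xc, yc, R = 0.7, 0.7, 0.15
    phi0 = np.sqrt((X - xc) ** 2 + (Y - yc) ** 2) - R

    # The zero iso-value of phi0 is the initial interface Gamma(0).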
Time integration by composition method
Let y(t) be the solution of the following initial value problem:
y′(t) = f(t, y(t)) for all t ∈ R, with the initial condition y(0) = y_0. (2)
For t > 0, the exact flow of (2) is defined by the map Y_t as follows:
Y_t : R → R, y_0 ↦ Y_t(y_0) = y(t). (3)
For any given numerical scheme of order p, we can associate a numerical flow, denoted Φ^{[p]}_{∆t}, such that a sequence of numerical values y_n is constructed on a discrete set of points t_n = n∆t as the approximations of the solution at time t_n: y_{n+1} = Φ^{[p]}_{∆t}(y_n). We say that the numerical flow is of order p if the following equality holds: Y_{∆t}(y_0) - Φ^{[p]}_{∆t}(y_0) = O(∆t^{p+1}). The superscript [p] in Φ^{[p]}_{∆t} refers to the order p of the numerical flow. As stated in [START_REF] Hairer | Geometric Numerical Integration: Structure-Preserving Algorithms for Ordinary Differential Equations[END_REF] (Theorem 4.1, page 43), we can increase the order of the approximation using an s-fold composition of the same numerical flow Φ^{[p]}_{∆t}. The resulting discrete flow Φ^{[p+1]}_{∆t} = Φ^{[p]}_{a_1 ∆t} ∘ Φ^{[p]}_{a_2 ∆t} ∘ ... ∘ Φ^{[p]}_{a_s ∆t} is at least of order p + 1 if the following conditions hold:
a_1 + a_2 + ... + a_s = 1 and a_1^{p+1} + a_2^{p+1} + ... + a_s^{p+1} = 0. (4)
There are no real solutions of system (4) when p is odd. However, if p is even, the smallest composition that increases the order by 1 is obtained with s = 3. This composition is called the triple jump, see [START_REF] Hairer | Geometric Numerical Integration: Structure-Preserving Algorithms for Ordinary Differential Equations[END_REF] (page 44, section II.4). One can continue to increase the order of the resulting discrete flow by further composing discrete flows, see [START_REF] Muruq | Order conditions for numerical integrators obtained by composing simpler integrators[END_REF]. However, a drawback of compositions with real coefficients is that some coefficients are negative, which results in a negative time step. Castella et al. consider solutions of the system with complex coefficients in the case of parabolic equations [START_REF] Castella | Splitting methods with complex times for parabolic equations[END_REF]. In the following, we will use the composition technique in the case s = 2.
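For illustration, the short sketch below evaluates the classical real triple-jump coefficients for an even order p and checks conditions (4) numerically; the closed-form expression and the function name are assumptions of this sketch, taken from the standard composition literature, since the present work relies on the complex two-fold composition instead.

    import numpy as np

    def triple_jump_coefficients(p):
        # Classical real s = 3 solution of conditions (4) for an even order p
        r = 2.0 ** (1.0 / (p + 1))
        a1 = a3 = 1.0 / (2.0 - r)
        a2 = -r / (2.0 - r)          # negative coefficient: one substep runs backward in time
        return np.array([a1, a2, a3])

    a = triple_jump_coefficients(p=2)
    print("sum a_i        =", a.sum())          # first condition of (4): equals 1
    print("sum a_i^(p+1)  =", (a ** 3).sum())   # second condition of (4): equals 0 for p = 2

The negative substep a2 is precisely the feature that motivates the complex-coefficient alternative used below.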
Let us consider any discrete flow Φ^{[p]}_{∆t} of order p and introduce the following composition:
Φ^{[p+1]}_{∆t} = Φ^{[p]}_{a^{[p]}_2 ∆t} ∘ Φ^{[p]}_{a^{[p]}_1 ∆t}.
We recall a useful theorem which states that (see e.g. [START_REF] Casas | Compositions of pseudo-symmetric integrators with complex coefficients for the numerical integration of differential equations[END_REF]):
Theorem 1. Let us consider a differential system of type (2) and any numerical flow Φ^{[p]}_{∆t} of order p. If
a^{[p]}_1 = 1/2 + (i/2) sin( (2l+1)π/(p+1) ) / ( 1 + cos( (2l+1)π/(p+1) ) ) and a^{[p]}_2 = conj(a^{[p]}_1), (5)
with -p/2 ≤ l ≤ p/2 - 1 if p is even and -(p+1)/2 ≤ l ≤ (p-1)/2 if p is odd, then the approximation defined by the real part of the output, denoted by Re(·) and obtained by the composition of flows, is an approximation of the solution of order p + 1, i.e.:
Y_{∆t}(y_0) - Re( Φ^{[p]}_{a^{[p]}_2 ∆t} ∘ Φ^{[p]}_{a^{[p]}_1 ∆t} (y_0) ) = O(∆t^{p+2}). (6)
First, we consider the numerical flow Φ^{BE1}_{∆t} of the backward Euler scheme, which is of order one. We define the two-fold composition as follows:
Φ^{BE2}_{∆t} := Φ^{BE1}_{a^{[1]}_2 ∆t} ∘ Φ^{BE1}_{a^{[1]}_1 ∆t}. (7)
Here, the coefficients a^{[1]}_i are given by formula (5) for l = 0: a^{[1]}_1 = 1/2 + i/2 and a^{[1]}_2 = conj(a^{[1]}_1).
Hence, the approximation given by the real part of the output obtained by the composition (7) is an approximation of the solution of order 2, i.e.:
Y_{∆t}(y_0) - Re( Φ^{BE2}_{∆t}(y_0) ) = O(∆t^3).
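As a sanity check of this order increase, the following sketch applies the backward Euler flow with the conjugate pair of complex substeps of Eq. (5) to a scalar linear test problem and compares it with the plain scheme; the test equation, its coefficient and the final time are illustrative assumptions.

    import numpy as np

    lam, T, y0 = -2.0, 1.0, 1.0      # scalar test problem y'(t) = lam * y(t) (illustrative choice)
    a1 = 0.5 + 0.5j                  # coefficient of Eq. (5) for p = 1, l = 0
    a2 = np.conj(a1)                 # conjugate substep

    def be_flow(y, dt):
        # backward Euler flow for y' = lam*y: y_{n+1} = y_n / (1 - dt*lam)
        return y / (1.0 - dt * lam)

    def integrate(dt, composed):
        y = complex(y0)
        for _ in range(int(round(T / dt))):
            if composed:
                y = be_flow(be_flow(y, a1 * dt), a2 * dt)   # Phi^{BE2}_{dt}
            else:
                y = be_flow(y, dt)                          # Phi^{BE1}_{dt}
        return y.real                                       # real part of the output

    exact = y0 * np.exp(lam * T)
    for dt in [0.1, 0.05, 0.025, 0.0125]:
        print(f"dt={dt:7.4f}  BE1 error={abs(integrate(dt, False) - exact):.3e}"
              f"  BE2 error={abs(integrate(dt, True) - exact):.3e}")

Halving the time step should roughly halve the BE1 error and divide the composed BE2 error by four, in line with the stated orders.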
Thus, the double composition of the discrete scheme BE1 of order 1 gives a scheme of order 2. Second, we consider a numerical flow of order 2 associated to the Heun's method Φ HM1 ∆t (second-order RK):
Y ∆t (y 0 ) -Φ HM1 ∆t (y 0 ) = O ∆t 3 ,
thus the composition method:
Φ HM2 ∆t := Φ HM1 a [2] 2 ∆t • Φ HM1 a [2] 1 ∆t
, with 6a
[2] 1 = 6a [2]
2 = 3 + i √ 3 leads to a numerical flow of order 3 and we have:
Y ∆t (y 0 ) -Re Φ HM2 ∆t (y 0 ) = O ∆t 4 .
We can continue increasing the order of the approximation by constructing new numerical flows using the above composition technique. This was thoroughly analysed in [START_REF] Casas | Compositions of pseudo-symmetric integrators with complex coefficients for the numerical integration of differential equations[END_REF] and presented with a comprehensive study. In addition, it is demonstrated there that the numerical flow
(1/2) Re( Φ^{[p]}_{a^{[p]}_1 ∆t} ∘ Φ^{[p]}_{a^{[p]}_2 ∆t} + Φ^{[p]}_{a^{[p]}_2 ∆t} ∘ Φ^{[p]}_{a^{[p]}_1 ∆t} )
is of order p + 2. When composing two discrete flows with permuted coefficients, the imaginary parts are conjugated and are therefore canceled by the summation, so that only a real part remains. Remark that the A-stability of the resulting composition scheme is not studied here; we refer the interested reader to [START_REF] Fedoseev | Preference and stability regions for semi-implicit composition schemes[END_REF]. Roughly speaking, the stability function S_r(z) of the resulting composition scheme is the product of two functions, S_r(z) = S(a_2 z) S(a_1 z), where S(z) represents the stability function of the initial scheme. Therefore, the region of stability, defined by {z ∈ C : |S_r(z)| ≤ 1}, can be represented as the intersection
{z ∈ C : |S(a_2 z)| ≤ 1} ∩ {z ∈ C : |S(a_1 z)| ≤ 1}.
Note that each set in this intersection is a wider region than that of the original scheme, with a slight translation in the complex plane.
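A quick numerical check of this stability argument for the composed backward Euler scheme is sketched below; the stability function S(z) = 1/(1 - z) of backward Euler is the only model ingredient, and the sampling box in the left half-plane is an arbitrary illustrative choice.

    import numpy as np

    a1 = 0.5 + 0.5j                      # complex substeps of the BE composition (Eq. (5), p = 1)
    a2 = np.conj(a1)

    def S(z):
        # stability function of the backward Euler scheme
        return 1.0 / (1.0 - z)

    def S_r(z):
        # stability function of the composed scheme: product of the two substep factors
        return S(a1 * z) * S(a2 * z)

    # Sample a box in the left half-plane and check |S_r(z)| <= 1 (A-stability indicator).
    re = np.linspace(-50.0, 0.0, 400)
    im = np.linspace(-50.0, 50.0, 400)
    Z = re[None, :] + 1j * im[:, None]
    print("max |S_r| for Re(z) <= 0:", np.abs(S_r(Z)).max())   # expected to stay <= 1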
NUMERICAL APPROXIMATION
Space discretization by finite elements. Let T_h be a partition of Λ consisting of geometrically conforming open simplicial elements K (triangles for d = 2), such that Λ = ∪_{K∈T_h} K. Lagrange finite element polynomials are considered for the space discretization of the level set function on a mesh of size h = max_{K∈T_h} diam(K). Let us denote by ϕ_h an approximation of ϕ, while X_h = { ψ_h ∈ C^0(Λ) : ψ_h|_K ∈ P_κ, ∀K ∈ T_h }, with κ ≥ 1, represents the finite dimensional space of admissible level set functions. The Galerkin scheme associated with (1) consists in finding ϕ_h ∈ X_h such that ϕ_h(t, x) = ∑_i y_i(t) ψ_h,i(x), for a certain finite element basis.
A semi-discrete equation is then obtained by multiplying by a test function and integrating over Λ. We employ the Streamline Upwind Petrov-Galerkin method, referred to as SUPG [START_REF] Loch | The level set method for capturing interfaces with applications in two-phase flow problems[END_REF], where the SUPG test function is defined as v_h,i = ψ_h,i + τ_K u·∇ψ_h,i in order to add a diffusion in the streamline direction. We set a streamline diffusion parameter proportional to the local mesh size, so that
τ_K = C h_K / max{ |u|_{0,∞,K}, tol/h_K },
where C represents a scaling constant and the term tol/h_K avoids division by zero. The semi-discrete system arising from the finite element discretization reads:
M(t) y′(t) + K(t) y(t) = 0, with y = (y_1, ..., y_i, ...)^T. (8)
Here, M(t) is the stabilized mass matrix, while K(t) represents the transport matrix, both time-dependent; y is the vector of unknown degrees of freedom of the level set field. In the sequel, we omit the subscript h referring to the spatial approximation to alleviate the notations.
Time advancing scheme. Let us divide [0, T] into N subintervals [t_n, t_{n+1}] of constant time step ∆t, with n = 0, ..., N - 1. For n > 0, the unknown ϕ^n_h approximates ϕ at t_n. Considering the differential system (8) and for given degrees of freedom, y_n represents the vector field which approximates ϕ at time t_n. To increase the accuracy in time, we focus on composition methods for two numerical methods, the backward Euler and Heun methods, chosen for their simplicity and accuracy and designated respectively by the acronyms BE and HM. On the one hand, we approximate the solution at time t_n using the backward Euler scheme as follows:
Φ^{BE1}_{∆t}(y_{n-1}) := y_n = [ M(t_n) + ∆t K(t_n) ]^{-1} M(t_{n-1}) y_{n-1}.
On the other hand, we use the one-step Heun's method as stated by the following numerical flow:
Φ^{HM1}_{∆t}(y_{n-1}) := y_n = { I - (∆t/2) ( [M(t_n)]^{-1} K(t_{n-1}) + [M(t_n)]^{-1} K(t_n) [ I - ∆t [M(t_n)]^{-1} K(t_{n-1}) ] ) } y_{n-1}.
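To make the two discrete flows and their complex composition concrete, here is a minimal matrix sketch for a system of the form (8); the constant toy matrices M and K and the helper names are assumptions for illustration only (in the actual solver they are the time-dependent finite element matrices).

    import numpy as np

    # Toy constant matrices standing in for the stabilized mass matrix M(t)
    # and the transport matrix K(t) of system (8).
    M = np.array([[2.0, 0.3], [0.3, 1.5]])
    K = np.array([[0.0, 1.0], [-1.0, 0.0]])
    I = np.eye(2)

    def phi_be(y, dt):
        # Backward Euler flow for M y' + K y = 0: (M + dt*K) y_new = M y_old
        return np.linalg.solve(M + dt * K, M @ y)

    def phi_heun(y, dt):
        # One-step Heun flow (explicit predictor-corrector) with A = M^{-1} K
        A = np.linalg.solve(M, K)
        y_pred = y - dt * (A @ y)
        return y - 0.5 * dt * (A @ y + A @ y_pred)

    def compose(phi, y, dt, a1):
        # Two-fold composition with complex substeps a1*dt and conj(a1)*dt;
        # the real part of the output is the higher-order approximation.
        z = phi(np.asarray(y, dtype=complex), a1 * dt)
        return phi(z, np.conj(a1) * dt).real

    y0 = np.array([1.0, 0.0])
    dt = 0.1
    y_be2 = compose(phi_be, y0, dt, 0.5 + 0.5j)                         # order 2
    y_hm2 = compose(phi_heun, y0, dt, (3.0 + 1j * np.sqrt(3.0)) / 6.0)  # order 3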
NUMERICAL EXAMPLES
Example 1: Validation in the case of Lotka-Volterra equations
We apply the above numerical flow compositions to the Lotka-Volterra system of first-order nonlinear differential equations describing the evolution over time of two species: predators and prey. Dynamically speaking, if we assume that predators and prey are not connected, then the number u of prey increases exponentially, with a factor α, and the number v of predators decreases exponentially, with a factor δ. The Lotka-Volterra system describes the predator-prey interactions, when the two species are connected, as follows:
u′(t) = α u(t) - β u(t)v(t), v′(t) = -δ v(t) + γ u(t)v(t), with initial conditions u(0) = u_0, v(0) = v_0.
Here, β represents the rate of prey mortality due to predators and γ is the growth rate of predators while eating prey. We present in Fig. 2 the evolution of the numerical solution of the Cauchy problem (u(0) = 2, v(0) = 1) in the interval [0, 55] with α = δ = 2/3, β = 4/3 and γ = 1. This problem has an invariant first integral F, which is identified using the initial conditions, i.e. F(u(t), v(t)) = F(u_0, v_0) for any time t > 0. The invariant is given by:
F(u(t), v(t)) = β v(t) + γ u(t) - α log(v(t)) - δ log(u(t)).
To study the convergence properties of the numerical strategy, we calculate the error between the approximate invariant and the reference invariant at the initial time, F_{0,ref}(u_0, v_0). The error quantifies the deviation over time of the quantity F(u(t), v(t)) and is given by:
e^{∆t_l} = ( ∑_{n=1}^{N} | F_{0,ref}(u_0, v_0) - F(u(t_n), v(t_n)) |^2 ) / ( ∑_{n=1}^{N} | F_{0,ref}(u_0, v_0) |^2 ). (9)
Here ∆t_l is the time step at a given time discretization level l, and the parameter ROC represents the rate of convergence, which relates the computational effort needed to reach a given precision. To investigate the accuracy of the different composition methods, we run a series of simulations using the backward Euler and Heun methods and their recursive compositions for several time steps. Table 1 and Figure 1 report the errors (9) over a complete period for different compositions of numerical flows. The estimated convergence orders are reported in the ROC columns, showing that we retrieve the expected theoretical convergence orders of the basic schemes. Moreover, the increase in order obtained by the double composition of flows is clearly visible. Thereafter, we study the computational efficiency by evaluating the CPU times for different time steps and composition methods. A correlation can thus be established between the error sought by the user and the CPU time that the numerical flow needs to carry out the simulation, for the two discrete schemes BE and HM and their compositions. Results in Fig. 3 show that the resulting composition scheme allows the user to produce numerical results with a lower computational time than the original scheme, while keeping the same precision. This has been observed for different composition levels, for a given original basic scheme (Heun or backward Euler method). Note that we do not compare in this work such compositions with other schemes having the same convergence orders.
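The error measure (9) can be reproduced with a few lines of code; the sketch below applies Heun's method and its complex two-fold composition to the Lotka-Volterra system with the parameters quoted above and accumulates the relative invariant error. The function names and the choice of the explicit Heun flow, which avoids a complex nonlinear solve, are illustrative assumptions.

    import numpy as np

    alpha, beta, delta, gamma = 2.0 / 3.0, 4.0 / 3.0, 2.0 / 3.0, 1.0
    u0, v0, T = 2.0, 1.0, 55.0

    def f(w):
        # Lotka-Volterra right-hand side
        u, v = w
        return np.array([alpha * u - beta * u * v, -delta * v + gamma * u * v])

    def invariant(w):
        # first integral F(u, v) = beta*v + gamma*u - alpha*log(v) - delta*log(u)
        u, v = w
        return beta * v + gamma * u - alpha * np.log(v) - delta * np.log(u)

    def heun_step(w, dt):
        # explicit second-order Heun flow
        w_pred = w + dt * f(w)
        return w + 0.5 * dt * (f(w) + f(w_pred))

    def heun_composed(w, dt):
        # two-fold composition with the complex coefficients of Eq. (5) for p = 2
        a1 = (3.0 + 1j * np.sqrt(3.0)) / 6.0
        z = heun_step(np.asarray(w, dtype=complex), a1 * dt)
        return heun_step(z, np.conj(a1) * dt).real

    def invariant_error(step, dt):
        # relative error on the first integral, in the spirit of Eq. (9)
        w = np.array([u0, v0])
        f_ref = invariant(w)
        num = den = 0.0
        for _ in range(int(round(T / dt))):
            w = step(w, dt)
            num += (f_ref - invariant(w)) ** 2
            den += f_ref ** 2
        return num / den

    for dt in [0.1, 0.05, 0.025]:
        print(dt, invariant_error(heun_step, dt), invariant_error(heun_composed, dt))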
Example 2: Level set problem by composition methods
In this example, the aforementioned composition of one-step time integration schemes is used for simulations of largely deformable interfaces in a level set framework. The numerical computations are performed using the open-source software FEniCSx [START_REF] Alnaes | The FEniCS project version 1.5[END_REF]. We first consider a reversible vortex test case and evaluate the properties of convergence with respect to the temporal discretization. The computational domain is Λ = [0, 1]^2. An initially circular interface of radius R = 0.15 is centered at (0.7, 0.7) and is periodically stretched into thin filaments by a vortex flow field. The interface unrolls and reaches its maximum deformations at t = T/2, before resuming its circular shape at time T = 4. The advection velocity is given by:
u(t, x) = ( -2 sin^2(πx) sin(πy) cos(πy) cos(πt/T), 2 sin^2(πy) sin(πx) cos(πx) cos(πt/T) )^T, with x = (x, y)^T ∈ Λ.
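For reference, a direct transcription of this time-reversed vortex field is sketched below; the function name and the array-based evaluation are assumptions of the sketch, not part of the solver.

    import numpy as np

    T = 4.0  # deformation period: the cos(pi*t/T) factor reverses the flow at t = T/2

    def vortex_velocity(t, x, y):
        # Time-reversed single-vortex field used to stretch the circular interface
        fac = np.cos(np.pi * t / T)
        u = -2.0 * np.sin(np.pi * x) ** 2 * np.sin(np.pi * y) * np.cos(np.pi * y) * fac
        v = 2.0 * np.sin(np.pi * y) ** 2 * np.sin(np.pi * x) * np.cos(np.pi * x) * fac
        return u, v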
Some snapshots showing the interface deformations are reported in Fig. 4. We investigate the order of accuracy with respect to the time discretization using a relatively simple composition method. Let π_h be the Lagrange interpolation operator. We compute the errors in the L^2(Λ) norm for different time step sizes with respect to an exact reference solution π_h ϕ at time t = T, after one deformation period. The convergence of the composition technique is assessed using the implicit backward Euler scheme, as well as two- and four-time backward Euler compositions, respectively called Φ^{BE2} and Φ^{BE4}. Fig. 4 (right) reports the convergence of the computed errors versus the time discretization, showing in particular an increase in the order of convergence by composition of discrete flows. Finally, we assess the robustness of our level set solver in the case of the standard Zalesak rotating disk test, using a one-time composition of the backward Euler scheme. We consider the same setting presented in [START_REF] Ta | An implicit high order discontinuous Galerkin level set method for two-phase flow problems[END_REF], where the slotted disk recovers its initial position after a period T = 4. We choose ∆t = 10^{-3} and h = 10^{-2}. A few snapshots of the rotating disk are shown in Fig. 5, showing good preservation of the sharp angles.
CONCLUSION
We have presented an application of numerical flow composition methods to the simulation of the level set problem, in order to increase the accuracy of the solution and to capture the dynamics of stiff problems. A validation is presented in the case of the Lotka-Volterra differential equations. By using simple and lower-order numerical schemes, we have shown numerically that composition techniques make it possible to increase the accuracy of the approximation while presenting a lower computational cost. This is part of an ongoing work on modeling the dynamics of highly deformable biomembranes [START_REF] Laadhari | Fully implicit methodology for the dynamics of biomembranes and capillary interfaces by combining the Level Set and Newton methods[END_REF][START_REF] Gizzi | A three-dimensional continuum model of active contraction in single cardiomyocytes[END_REF][START_REF] Kolahdouz | A numerical model for the trans-membrane voltage of vesicles[END_REF], where temporal integration schemes of high order are required. It is also planned to apply the method to fluid-structure interaction problems with large structural deformations [START_REF] Laadhari | Eulerian finite element method for the numerical modeling of fluid dynamics of natural and pathological aortic valves[END_REF].
FIGURE 1: Error versus the time step size for the composition of the backward Euler scheme (left) and Heun's scheme (right).
FIGURE 2: Numerical solution of the Lotka-Volterra equations for the first five periods, obtained with the BE4 scheme.
FIGURE 3: CPU time versus error for different compositions of the backward Euler (left) and Heun (right) schemes.
FIGURE 4: (Left) Snapshots showing interface deformations at t ∈ {0, 0.6, 1, 2, 2.6, 3.4, 3.8, 4}, for h = 10^{-2}. (Right) Convergence history for various time steps and time composition schemes.
FIGURE 5: Zalesak's rotating disk. Snapshots showing interface deformations at t ∈ {0, 0.6, 1, 2, 2.6, 3.4, 3.8, 4}.
TABLE 1: Time convergence history for different time integration composition methods.

∆t | e^[BE1] | ROC | e^[BE2] | ROC | e^[BE3] | ROC | e^[HM1] | ROC | e^[HM2] | ROC | e^[HM4] | ROC
4.17E-1 | 4.125 | -- | 2.148E-1 | -- | 3.899E-3 | -- | 2.051E-1 | -- | 6.650E-3 | -- | 2.888E-4 | --
2.04E-1 | 1.022 | 1.955 | 5.231E-2 | 1.979 | 3.761E-4 | 3.277 | 4.476E-2 | 2.132 | 6.021E-4 | 3.365 | 1.809E-5 | 3.881
1.01E-1 | 4.156E-1 | 1.279 | 1.270E-2 | 2.013 | 4.002E-5 | 3.186 | 1.046E-2 | 2.067 | 6.338E-5 | 3.201 | 1.129E-6 | 3.945
5.03E-2 | 1.899E-1 | 1.122 | 3.113E-3 | 2.013 | 4.595E-6 | 3.100 | 2.527E-3 | 2.034 | 7.253E-6 | 3.105 | 7.056E-8 | 3.971
2.51E-2 | 9.101E-2 | 1.057 | 7.698E-4 | 2.008 | 5.503E-7 | 3.051 | 6.211E-4 | 2.017 | 8.667E-7 | 3.054 | 4.412E-9 | 3.985
1.25E-2 | 4.457E-2 | 1.028 | 1.914E-4 | 2.005 | 6.733E-8 | 3.025 | 1.540E-4 | 2.009 | 1.059E-7 | 3.027 | 2.758E-10 | 3.992
6.25E-3 | 2.206E-2 | 1.014 | 4.770E-5 | 2.002 | 8.328E-9 | 3.013 | 3.833E-5 | 2.004 | 1.309E-8 | 3.014 | 1.725E-11 | 3.996
3.13E-3 | 1.097E-2 | 1.007 | 1.191E-5 | 2.001 | 1.035E-9 | 3.006 | 9.561E-6 | 2.002 | 1.627E-9 | 3.007 | 1.084E-12 | 3.999
1.56E-3 | 5.473E-3 | 1.003 | 2.975E-6 | 2.001 | 1.291E-10 | 3.003 | 2.388E-6 | 2.001 | 2.027E-10 | 3.003 | 6.774E-14 | 3.999
7.81E-4 | 2.733E-3 | 1.002 | 7.434E-7 | 2.000 | 1.612E-11 | 3.001 | 5.966E-7 | 2.001 | 2.531E-11 | 3.002 | 1.385E-14 | --
3.91E-4 | 1.36E-3 | 1.001 | 1.858E-7 | 2.000 | 2.013E-12 | 3.002 | 1.491E-7 | 2.000 | 3.156E-12 | 3.003 | 2.510E-14 | --
1.95E-4 | 6.827E-4 | 1.000 | 4.645E-8 | 2.000 | 2.231E-13 | 3.173 | 3.727E-8 | 2.000 | 3.710E-13 | 3.088 | 5.621E-14 | --
9.73E-5 | 3.400E-4 | 1.000 | 1.152E-8 | 2.000 | 6.210E-14 | 1.835 | 9.245E-09 | 2.000 | 4.395E-14 | 3.060 | 8.845E-15 | --
Order of convergence | 1 | -- | 2 | -- | 3 | -- | 2 | -- | 3 | -- | 4 | --
ACKNOWLEDGMENTS
The authors acknowledge financial support from KUST through the grant FSU-2021-027. |
Firstly, I will briefly present the Ostyak ceremonies associated with the Bear through an examination of pre-revolutionary sources and then discuss the Bear Games as they have been organized since l994 in the Kazym region. My analysis is based on contemporary Indigenous discourse expressed in literature1 and amateur videos as well as on my fieldwork and interviews with northern and eastern Khanty informants (2004,2005,2006,2007), such as Timofey Moldanov and Zoya Loz'jamova (Kazym Khanty), Raysa Nettina (Ob Khanty), Eremey Ajpin (Agan Khanty), Agrafena Sopočina (Pim Khanty). If one emphasises the uniqueness of their individual worldview -a phenomenon that is highly enhanced by the complex history of Russia -above and beyond the scepticism of some observers and the uncertainties of the Khanty themselves, each of these informants, in his own way, aspires, like the Bear Games which replay the Creation, to recreate a new communal impulse, a future in the face of the dangers that encircle the Siberian taigas, where the bear is no longer as frightening as the oil and gas worker (Samson Normand de Chambourg, 2003a : 43), where "already, the big Russian man with the iron head and belly and the big Russian woman in a red caftan, like Ulyb and Salla escaped from fiction, devour everything in their path and make shadows in the sky" (SNdC, 2003b : 73.).
The Bear's Transfiguration
The Bear Games have been described since the nineteenth century, especially by Hungarian and Finnish ethnographers and linguists, visiting their "Siberian relatives". But how did the taiga dwellers turn the game hunt into Bear Games, the bear into a celestial son?
In his sovereign will to integrate the minorities of Russia, Peter the Great shaped geographical (1701) as well as spiritual space (1700, 1706) by a wide range of decrees: henceforth "there was only one God in heaven and one Tsar on earth." This concentration of political and ecclesiastical powers engendered an empire where these two aspects complemented and opposed each other on the field, depending on the period. In the Siberian taigas, Christianity laboured to establish itself in the face of the supposed "superstitions" of illiterate societies. Russian power was still imperfectly established; the clergy had often gone astray as the Metropolite of Tobolsk already lamented in 1622, and the Ob-ugrian population remained quick to defend a shamanism that is synonymous with local survival. The word "shamanize" -recently introduced into Russian through the horror-filled writings of the old believer and archpriest Avvakum exiled in Siberia (1672-1675) -symbolises a dark and infernal force illustrated in the Brief Chronicle of Siberia (1655-1700), by the "sorcerer" that Ermak met in Chandyr' in about 1580: "pierced by [vogul] sabres and knives", drunken and smeared with blood shed in such a manner, his body bears not a single wound at the conclusion of the séance of shamanic prediction (Karjalainen, 1996: 217).
Despite official Christianization of the Ob-Ugrians by the Metropolite of Tobolsk and Siberia, Filofey Leščinskiy2 , even the Russian administration did not take needless risks when it came to oath practices commonly used in Imperial Russia. Many ancient sources document that the oaths sworn to the Russian sovereign or to Russian justice were not sworn upon the New Testament Gospels or upon the cross as any Christian subject of the Empire would have done, but upon the bear. Thus, according to Jean-Luc Lambert's hypothesis, which overturns the traditional Finno-Ugric notion, in order to become established the oath practices imposed by the colonizing Russian power seem to have relied upon the pre-existing bear hunting rituals of the animistic Ob fishers and hunters3 . The formidable king of the forest was thus regarded as a transcendental and immanent dispenser of justice: indeed, were there not some neophytes who passed to the Metropolite Filofey the story of a perjurer buried in 1713 and then dug up by a bear in 1718 in order to be punished in the place where he had sinned 4 ?. Paradoxically, for their part the Ostyaks assimilated the Russians into their holistic vision of the world by apologizing to the killed animal through "devil games 5 ", "grand honours, (…) numerous scowls of feigned lamentations 6 ", "hymns, (…) much courtesy 7 ", "every possible tribute 8 ", as well as dances. In reality, the Ostyaks were not the ones "who forged the iron that pierced it", "the feather that hasted the path of the arrow was of a foreign bird & they did nothing more than let it go 9 ". The Russians and their arms are the guilty ones.
A potential figure of idealization, the bear gradually stops being the original prey as in other parts of Siberia, in order to crystallize mirror games between two opposed societies: "the simplest, the purest humanity", unaware of those "crude perversions, so common, even among educated nations", and the monasteries, where according to the Metropolite of Tobolsk (1622), a life prevails such that "one would think only pagans and people who are totally unaware of God live there 10 ". The bear reunites the two opposing worlds in a new ritual, but the colonizer's strategy stops where the Bear Games would soon start. The response of the colonized who appropriated the oath of allegiance, was to set it gradually into a mise en abyme, set it to songs and celebrations and enhance it with their own world perception. By neglecting to attribute any evolution to Indigenous societies, except their fatal degeneration in the face of the bitter fruits of colonisation (alcohol, smallpox and syphilis), many a traveller and scholar of the Empire wrongly saw the Ostyaks as ossified in time; the fierce peoples disarmed by the Pax Rusica would no longer be but children subjected to a possessed, ungrateful and cruel mother-like Nature and to an all-mighty and improbable Empire. Yet, far from the "poor and dark land 11 ", the "cold climate, injurious to the already frail health of the missionary father Makariy 12 ", the attributive adjectives given by the Kazym Khanty to a land not their own are inspired by its fecundity: "sacred roundness in the shape of a circle" (kusylak kerlum najan muhal) like the woman's belly, " silky and downy"(suhaŋ naj aŋki, putaŋ/pumtaŋ naj aŋki) like the game that ensures their existence, "earth with sable knees", "earth shared by the seven deities, earth shared by the six deities" (ļaptyevə tərumə g ortum mŭvievə, hətyevə g ortum mŭvievǎn), and so on. Did not the bear ask his Father to let him descend to earth, when from the sky -into which he had poked holes with his father's golden stick -she saw "the happy earth covered with yellow cloth, the happy earth covered with green cloth"? Dreaming about earth that you take into your hands does it not mean, among the Kazym Khanty, that Mother-Earth (Muv aŋki) is coming to help you 13 ? In addition to the wealth of the firm land, the Khanty also celebrate the foster-rivers whose names they bear and that they populate in the first spirits' immemorial wake: Kasum imi ("The Kazym woman"), As tyj iki ("The man of the Ob upper waters"), Vejt iki ("The man from Vjtehovo"), Lev kutup iki (« The man of the Middle of the Sos'va ») and so on. Khanty land is sacred because its primal inhabitants are, according to oral tradition, the celestial Tōrum' children, because they had to fight for a long time against the terrible taiga spirits, so that men could come to daylight in the Middle World. Life flows in fertile rivers which nourish men and lend their name to each Ostyak group and to the spirits.... From this "grandiose still life" which plunged the missionaries into a sense of unease, solitude, and their own insignificance, the Ostyak created a living organism, a sacred land."
In similar fashion, the religious system continued to change. Faced with the landscape of the taiga, people had to find their own place within it. At the outset, the need to get food meant that they developed a complex system of Indigenous concepts which detected and progressively defined a network of complementary relationships between all the inhabitants of the forest. When all is said and done, the existence of all depends upon this network created with the other forms of life with which human beings come in to contact on a daily basis and call "spirits" because they are modelled upon human relations; the end of a species would limit the symbolical alliances and the possibilities of exchange. In fact, through the rituals (purifications, offerings, apologies, averted accusation, and so on) that establish a relationship with the other species, humans negotiate the game like the animal spirit negotiates the sacrifice of one of their own. In this way, in the Siberian north, the hunter does not kill, he obtains prey that gives itself up to him; the bear does not kill, but tears apart the flesh of a traitor; the river does not kill, but drowns the one who offended it. It is a matter of ethics and partnership. Shamanism is a strategy, a strategy for good luck in hunting, a strategy for survival. Like the celestial horseman As tyj iki, one of whose arms remains hidden during his measured dance at the Bear Games, for fear that the living earth will deviate from its orbit, the hunter-fishers must comply with certain ethics if they want their universe to be perpetuated.
From the nineteenth century onward, in combination, this "efficiency" of ancestral shamanism, the transcendence of the Bear established by the imperial Russian power and the Good Word spread and sowed by the missionaries in the taigas, has contributed to the development of the Siberian hunting ritual into an Ob-ugrian Bear Dance or Bear Games (pūpi jāk, voj jāk14 ). The animal skin on which the Ostyaks had to swear "is nothing but a mask" that, as Castrén observes in his Travel Notes (1845)(1846)(1847)(1848)(1849), hides the double nature of the bear: human and divine. The oral literature collected by the Hungarian, Antal Reguly, and his successors (Bernát Munkácsi, József Pápay, etc.) sheds light on the Bear's transfiguration: how he became the beloved son of the celestial god Tōrum and gave birth to the first Bear Games by coming down to earth along an iron chain, where his heavenly father intended him to act as his "clawed arm", by having mercy on the just and chastising the perjurer 15 and by being put to death by men. In the same way as the fatal crown of hunters encircles the den and the recumbent figure of brown gold is laid low by the arrow, the side arm or shotgun responds to a vital undertaking. Both bears and hunters promise compensation within the ritual where the future existence of each one is at stake, that is, the return of the guest's soul to his heavenly father and good fortune in hunting for "the numerous men of the village". And once this night-dance of the Bear is accomplished 16 , the hunter who defeated the animal will be denounced: the symbolic father of the "holy beast" and his household will go into mourning for 4 to 5 weeks in the Kazym region. No one in the house will venture to walk bare-footed any more, to joke or to curse, to name things other than in bear language17 . A good death perpetuates life.
The Finn K.F. Karjalainen studied numerous authors (Witsen, Ahlqvist, Gondatti, Infant'ev) and undertook fieldwork among the Ostyaks (1898)(1899)(1900)(1900)(1901)(1902); he even attended a "bear ceremony" near Surgut. Although the descriptions of the nineteenth century differ according to the groups studied, the organization of the ritual seems to be almost the same: the obtaining of a bear by the hunter(s); the return to the village of the hunters who in litany-like fashion enumerate for the bear the various placesrecognized personally or described to them -that they passed through; the four or five days (depending on the sex of the animal) of ritual at the "onset of the evening" and into the depths of the night in the seasonal settlements "with the little girls, small girls, the little boys, small boys, for the games with the little girls,/ for the games with the little boys,/ in the house with a roaring fire, with a fire, in the house made ready by the Ostyak 18 " ; the mourning (pèsy).
As far as the structure of the Games is concerned, it is shared for the most part by a number of Ostyak rivers: seated in a place of honour in the house of the hunter who "brought it down" (from the forest), the bear is combed out and adorned as a male or a female according to its sex; its eyes are closed by silver or copper coins; each newcomer is cleansed with water before bowing down before the "holy beast", kissing it on the head or on the muzzle (as in the above mentioned taking of the oath) and bringing it an offering or a gift. The host or a musician is seated next to the heavenly host and the spectators on the bedstead, while the floor serves as a stage for the songs, plays and dances that will follow one another daily in codified order, with each game being accounted for by notching a piece of wood. Finally, during the fourth or fifth night, the most sacred according to a number of sources, the great spirits appear before the bear, invited by greeting songs "in order to be the witnesses to the good care lavished" on the furry icon; then a procession of threatening animals arrives (including the gadfly, the crane, the eagle-owl, the fox) which will set free the soul of the heavenly son; and a divination, destined to make clear who will be the next fortunate hunter, ends the ritual.
Thus the bear who « hears and sees everything » is honoured by the hunters in order to cast aside its anger and chase away "the arrow, made by the Russians, the boar-spear, made by the Russians" that ravished its soul and to achieve its ascension towards its heavenly father after leaving behind some final sung/chanted precepts. Each night, the "bear songs" (pūpi arəχh, voj ār) performed a capella by three (five, nine) men dressed in ordinary clothes, open the festival, evoking the heavenly and earthly life of Num-Tōrəm's son, the interdicts given by the father and broken by the son, the killing of the bear by the hunter, the actions of the animal, the pre-colonial golden age, and so on.; these episodes all form a Genesis. The ritual being thus founded, the dishonourable death, and the mission of formidable justice being affirmed by these (sung) chanted myths, "the beast of the forest" that lived among men can be acknowledged and welcomed y the Ostyak people.
Before the Bear, masculine and feminine dances are performed that leave their mark on the very name of the Games: pūpi jāk. Dressed in their finest clothes, all the guests have to dance for the Bear; men, first, then the women. Despite their collective and simultaneous performances, each man and each woman dances on his or her own, repeating a choreography composed of leaps to the rhythm of the music, of head and hand gestures, of twirls. Besides the Bear Dance's illustration of the actions of the "sacred beast," there are dances consisting of abrupt hand movements for the men and of tempered ones for the veiled women, whose precise significance already seem to have been lost by the time of Karjalainen's fieldwork at the end of the nineteenth and at the very beginning of the twentieth centuries, but remain as reminders of "a means of attack and defence19 " according to Karjalainen, of "eroticism, like the French cancan », according to Ahlqvist20 .
The little plays, sung or not, follow one other, describing daily life with its hunting scenes, with its (unpleasant) dealings with the Russian merchants and officials, its triviality -comic or tragic -and its harshness and poetry. Lev Tolstoy, who had read "an account about the theatre of the wild Vogul people" and a tragic hunting play, wrote some time later: "As far as I'm concerned, from this description alone I felt that this scene was an authentic work of art 21 ". Sometimes with hyperbole, derision, double meanings, and open ribaldry, male "actors" wearing masks made of birch bark and holding a stick (when playing males) or long coloured scarves (when playing females) portray their lives within the space of a performance and bring their existence into play through the successful proceedings of the ritual. They draw from a repertoire that has been passed down to them, but they are free to embroider and to improvise on the theme and the action of the chosen Ostyak "snapshot"; this oral patrimony is not set in stone since it continuously grows richer with each adventure or misadventure. A dozen plays may be staged each evening (Karjalainen mentions sixty plays and songs performed from 5 pm to 2 am). Some examples of Ob-ugrian plays are documented by the Finnish linguist and his compatriot August Ahlqvist: three Ostyak hunters who decided to go to town and sell a fox fur in exchange for a shotgun finally spend their money on alcohol and, when they regain consciousness, blame one another for having forgotten to buy the weapon; a mother revisits in her song the suitors (an Ostyak, a Samoyed, a Tatar) of her daughter who begins dancing at the mention of the latter, thus indicating the one she wants to marry; a woman who is "working" a sable skin, relates through song how this animal lives, how it raises its little ones, teaching them how to find their food and avoid the hunter's trap; for lack of ever being able to see the face of the maid he is courting, a young man prays to Numi-Tōrəm to make the wind blow and raise the maiden's scarf, but once his prayer is fulfilled, the boy runs back home and scorns himself, not forgetting to thank the Deity for having saved him from such an ill-favoured girl; a mother, accompanied by her daughter, goes to the forest to collect berries and soon loses sight of her and finds hertoo late -raped by a forest spirit; a drunken man reeling through the lunar taiga, mistakes his shadow for someone else, and considers his echo an answer, loses his temper and by using his stick, makes the "opponent" step backwards into a dark corner and vanish, leaving the drunkard to dance for joy on his way back home; after whistling and unbridled dancing, a boisterous couple of menkv (forest spirits) falls onto the matrimonial bed, and so on. The last night, after each greeting song of the "sorcerer", the spirits make their entrance (by rank of importance in the local group) into the house. The actors, dressed this time in rich fabric and furs, follow one another in order to bow before the heavenly son, perform a hieratic sacred dance, recieve a little vodka from the hunter-host, and greet the participants before leaving the room. In the Vogul taigas, even Saint Nicholas, flanked by a cross and a candle, performs a Russian dance as a token of respect to the Bear Games22 . In conclusion, a living and threatening bestiary terrifies the "powerless" audience and chases out the bear's soul which is then ready to return to the paternal "unfathomable seventh heaven".
At last, the divination may occur and the hunter-host may murmur a question to the bear. Reciting in its ear the names of the hunters, he lifts its sacred head until it grows lighter: the Bear thus reveals which hunter will receive good luck and will host the Bear Games the next time. According to the Ugrian groups the cooked meat of the celestial son is consumed during or after the Games with some parts being put aside until the bear skin is carried off and taken out through a window. And sometimes, before the mortal remains of the bear, an old woman about to die pretends to be simple and children lie, according to their parents' instructions, about improbable crows accused of having hollowed out the bear body with their beaks, before flying away 23 . The host-hunter may choose to give the bear skin as an offering to a spirit or keep it and sell it so that he can cover the cost of the Games (the vodka, the wine and the beer have to be conveyed from Russian villages often several hundreds versts24 away). Linked to Russian eschatology, other hunters used to keep the bear skin forty days after the ritual and celebrate the deceased again on the ninth, sixteenth and thirty-sixth days25 . This is how men mourn for the Bear. Thus, a substantial ritual expressed through sung myths, living sketches, dances relating the sacred history of the Bear and the voyage of the spirits (near or far) to "the wooden house, that reverberates the shouts of the river gulls of the surging pond, of the Ob' gulls" provides a response to the supremacy of the Russian State and to the "exact sciences" of the Book and the orthodox missionaries who celebrate its sacred history. This is a living tradition, as Jean-Luc Lambert noted, brought up to date with the flowing of the rivers for Indigenous communities who "historicize" the colonization in their own way -it is a trompe l'oeil Northern Christ whose kenos is a promise of salvation for Christians and whose good death is a token of good fortune for the Ostyaks' hunt.
By highlighting the way in which the religion of the hunt, Christianity and oral tradition all mingle to reinforce the social cohesion and resilience of holistic societies forced to reconsider the world in the violent light of the chaos of colonisation, the Bear dance turns away the kiss of death and embraces the living universe.
Atheism, Folklorisation and Memory
The Soviet period was not only a disaster for the taiga communities, it also presented new challenges. The authoritarian nature of the political system certainly helped to save a part of the Indigenous heritage that might have naturally disappeared in an open political system. Along with the folklorisation of the living culture, the enforced secrecy imposed on the performance of the ritual partly preserved its symbolic power.
The bear was banished. The campaigns of Christianisation were followed by the atheist propaganda of the new Soviet power. The shamans, who may still have been members of the local Soviets at the end of the 1920s, were soon declared "enemies of the people"26: the authorities intended to break up the Indigenous societies by depriving them of their economic (the kulak) and spiritual intelligentsia.
(…) Twelve years ago, I was disqualified from voting for observing a religious custom in my family, yet without ever going from one yurt to another in order to shamanize, as shamans usually do. I have never been a shaman; I made a living out of my own work: since I was a child, I was employed by merchants, then I had my own holding, which provided my own livelihood as well as that of my people, that is to say some sixteen persons. I, Semën Pakin, aged 67 years, am not able to work any longer (I enclose a medical certificate); I now depend on my son David, but he and his wife Leljan have been deprived of their rights too, because their livelihood depends on my activity and they look after me. Is this treatment fair? After depriving me of my rights as a shaman, the local authorities now treat me as a kulak exploiter, which I have never been. My assets (and those of sixteen people) are: a house, four horses, three cows and one animal. I am even deprived of the right to complain: the authorities wanted to confiscate the certificate signed by fellow citizens on 14 July 1934, without checking their signatures, and then demanded 43 rubles, which I do not have, for the authentication. They have forbidden me, Semën Pakin, to go to the district capital of Samarovo, to the hospital, to be treated 27.
For the Russian authorities, the shaman had long symbolized a potentially subversive counter-power, the obscure force of holistic societies: the howl that was heard as a (spirit) figure was suppressed at the command of Vassiliy Levin, the priest of the Ostyak yurts of Zamosovskiy, whose "hands since then have been shaking so much that he could hardly write28", as he himself reported to Delisle of the French Academy in 1740; or the echo of the "wild howl of the shaman [which] hurt the ear so much that, against my will, I began to shake", according to the account of Father Irinarkh, describing his visit to an Ostyak shaman on the Poluy river, "a hoary and almost blind old man, [who] began to jump as high as an acrobat29". During the Empire, the Russian civilizer and the colonized shamanist learned to distrust one another. While Soviet propaganda and films depicted shamans in very simple terms, identifying their characteristics in real life is far more difficult. This is because shamanism has had to wear many masks; in Ugra, the Party was looking for great shamans, as in other Siberian societies, but the Ugrian experience of the world had over the centuries become distributed in a constellation of ritual specialities. Thus, in these Ugrian communities, all the descriptive terms not only serve to create confusion about who is or is not a shaman, but also shed light on the fields of knowledge of each ritual specialist taken in a specific context.
Against a backdrop of atheistic propaganda, the Indigenous "wars" in response to sovietisation naturally relied on shamanism (respected ritual specialists, ritual assemblies and divinations). They were quickly described as "counter-revolutionary movements" and their "players" were rudely disqualified as "shamans with a drum", "shamans with an axe" or "shamans of the dark hut". Because the holistic society was discredited by the positivist ideology of the new historical "Soviet" community, by the rapid propaganda of planned progress and atheism and by the often brutal local agents of sovietisation, it soon responded to what it felt to be a declaration of war with its own war, which turned into an apparent defeat (mainly in the Kazym, then in the Num-to area). This clampdown on the Ostyaks and Samoyeds of the region, in winter 1933-1934, provoked a series of arrests of "counter-revolutionary elements", twenty-seven out of fifty-two of whom were dubbed "shamans", followed by a parody of justice and ruthless repression that raged until 1935: punitive commando groups armed with larch truncheons appeared in the taigas for the manhunt 30.
Thus, the shaman was accused by the Soviet authorities of conducting "an obstinate propaganda among the Indigenous against the schooling and boarding of the children, arguing that educated children would be sent to big towns, enlisted in the army and denied the possibility of traditional activities31". For example, Evdokiya Semënovna Nikishkina, a "shaman with a drum and of the dark hut" of the Vanzevat yurts, was denounced by an activist of the literacy campaign (likbez) because of her objections to the opening of a boarding school and her statement about a "Soviet power that does nothing good, arrests the rich ones and the shamans, then puts them into jail and makes them starve32". From then on the shamans formed a social class to be eradicated. Thus, Evdokiya Nikishkina of the Indigenous soviet of Polnovat, born in 1877, was arrested on 22 October 1937, tried on 5 December 1937, shot on 11 January 1938 and not rehabilitated until 27 July 1989 33. But beyond the physical elimination of the shamans, shot or deported to camps, shamanism remained out of reach of the Soviet power, because it is a worldview which inhabits the whole society and survives in the people, as in the past "in the Irtysh region, in spite of the absence of sorcerers, the vanished powerful Tatar influence and later the overwhelming present Russian influence34".
In many cases, socialist emulation led to arbitrary actions. From 1937 to 1939, across the whole country and through repressive acts as swift as they were definitive, the State hunted down the "internal enemies" who kept haunting it, such as Konstantin Grishkin and Ksenofont Seburov, arrested in February 1937 for organizing Bear Games. Indeed, in the eyes of the authorities, a ritual where about a hundred people were gathered amounted to a counter-revolutionary assembly:
We lived peacefully until my father and his brothers, Ivan and Efim, were arrested again for having gathered people for the Bear Dance; indeed it was forbidden to organize that kind of thing at that time. Our families were left without hunters 35.
Most of the men rounded up (raflés) by the "Reds" were sentenced by a troika on 5 December 1937 (without even a trial being conducted) and shot on 11 January 1938 in Ostyako-Vogulsk, today Hanty-Mansiysk. Human quotas had to be met too.
After the Great Patriotic War, which continued to weaken the North, the victory led to an ever-increasing inflow of alien men subject to the Party line (partijnost') and to their quest for a better life in the North: "I let a Russian man marry my daughter. My son-in-law liked drinking; he drank away my herd of reindeer. He had kisy [high fur boots], he drank his kisy as well. And after that, he drank away my kisy. And now I am here, on your holiday, barefooted. Now listen: be careful choosing your future son-in-law. Don't let such men marry your daughters36!" Far away in the taiga, with no kolkhozes (State farms) nor five-year plans, the Games, or fragments of them as often as not, were practiced as a form of dissidence, until the evil might be overcome, as predicted by the old Khanty Stepan Nikolaevič Aypin, prisoner no. 135, shot in Ostyako-Vogulsk on 21 January 1938.
Meanwhile, the Bear cult persisted. It sometimes expressed itself differently, as in the story, told by the inhabitants of the village of Tugyany of an incident that occurred at the beginning of the 1950s. A little girl from a kindergarten vanished in the forest while walking with a group of children. One week later, a villager, Anastasiya Grishkina, and her son, while rowing on the river, saw the little girl bathing and brought her back to her parents', despite her desperate calls to her mother. Three days later, in spite of the presence of her mother, little Nina was still calling for her mother. At nightfall, a female bear came grunting and scratching at the door. Nina finally explained that "the bear was just like her mother; she was caring for her, feeding her with berries and letting her bathe". Anastasija Grishkina died shortly afterwards, and her son soon thereafter. They had been taken instead of the little girl. That was the explanation of the elders and it was confirmed by a shamanic séance in a "dark hut 37 ".
When the Soviet Union was about to collapse, the Bear festival, which had survived in spite of its prohibition and in spite of the threat of its participants being dismissed from work or summoned before a court, was rightfully reinstated. After seventy stolen years, the Kazym men again called their deities to come "to the top of the small trees, to the middle of the tall trees". And the spirits that had vanished on the thresholds of schools, abruptly shrinking the perception of the Khanty world, reappeared in the land of "the Great Ob strode along by twenty reindeers", in "the playing house" "filled up with the cries of the river" and purified by smoke: in Yuil'sk, on 5-11 January 1991, at Pëtr Ivanovic Sengepov's settlement, in Polnovat in March 1993, in Kazym in March 1996, in Syunyugan in December 1998, in Kazym in December 2002, and so on. Today, despite the irremediable losses 38 and the changes of meaning accompanying the new urban and country lifestyles of numerous Khanty, but also thanks to the activist disposition of a younger intelligentsia, the great spirits have returned to the Kazym. Just in time for the generation that was born in the late 1920s and the early 1930s to pass on this abused heritage: "S. TARLIN: My father is the Kazym Goddess's elder brother... - Look, you've got the wrong tune; you started the song of the Kunovat Goddess... S. TARLIN: - It is the song of the Middle Sosva Goddess I want to sing! I am going to visit my aunt and bring her a sable skin as a gift. I was sitting home on the bed, covered with a mat made of sedge, I got tired of sewing and went outside... 39" Today, from thirty to forty spirits, depending on the rivers, come down to humankind. They give back to men, one after another, the luck they deserve: "an even path under the young girls' feet, an even path under the young boys' feet" (Vèyt iki, 1987); "the dance that wards off war, the dance that wards off diseases" (Khin iki's song, 1989); "the lofty dance that rides the waters' backs" (Kheymas' song, 1989); "the dance that shadows war, the dance that shadows misfortune" (Em vož iki's song, 1993); "the dance that fights discord, the dance that keeps war away" (song of the Yuilsk protective spirit, 1993); "the dance that breeds fish, that breeds game" (song of the Goddess of Kazym, 1993); "the great dance that turns aside war, the great dance that turns aside diseases" (Lev kutup iki, 1993); "the dance that protects girls, the dance that protects boys" (As tyj iki's song, 1991); "the eternal dance of the young women's long life, the eternal dance of the young men's long life" (Kaltašč's song, 1993 40).
37 Slepenkova, 2002: 66.
38 Among the Mansi, whose Bear festival was more widely described in the scientific literature of the nineteenth century, the islets of living traditional culture are becoming rarer: "The hunters gather into brigades of three or four men. They take their sleds. One of them goes to the bear, the others join him close to the lair. They kill the bear, and on their way back, their gunshots alert the villagers that the hunting has been successful. How would it happen exactly? I don't know, I don't remember, I never saw that. I just heard about it. But I saw a Bear Dance. Men wore different masks of birch bark. The feast lasted three or four days, and if the bear was a male, five days. They say that shamans live hereabouts. I've been told that they sacrifice reindeer. But I've never been there, saw nothing of that kind" (Péter Erderley, Bekódolt Szibéria/La Sibérie décodée, Duna Televízió, 2004). A Bear festival was organised in Pomapaul' (rajon of Ivdel) in April 2005 by the three Pakin brothers, the youngest of whom, Vladimir (born in 1962), a good sangvyltap player, "hosted" a bear.
39 Medvezh'ie igrishcha na Obi, Studia O.K., 2005, 28 min. This film by Olga Kornenko is essentially based upon the montage of Timofej and Tat'jana Moldanov and of Stas Kovalenko's amateur videos.
The Goddess of Kazym was revived by collective memory and collected songs. Men call her "the father's grandmother" (shchashchi). Tōrum's daughter is touchy-tempered and tattoo-handed; her long braids are snakes and her clasps are living lizards. Beyond numerous tales and legends regarding "The Wife of Num-to" (Lor yuran nenye), regarding "the one who is a lucky hunter" and who "wears a coat of mail that looks like the scales of a little fish", there is one of her songs, sung during the Games, that tells, for instance, of the journey of "the wife from the upstream" (vuyt imi) who regulated the world of the Kazym. She left from the shore of the northern sea, where she had vented her anger on her Nenets husband, grabbing "a saber like the Ob clear waters" and cutting "the slightest blade of grass that showed, the slightest shrub that grew", as well as her husband's legs. She flew as a gold-winged duck to her celestial father, who forgave her and agreed that she should become the protective spirit of "the world of the men to be, the world with the face of dolls to be". As she proceeds with her herd of a thousand reindeer, she animates and consecrates the space through her children, to whom she allocates the territories, and "her seven white-dressed servants", whom she allocates to the river mouths. Then she "sanctifies" the space with a stone nass (fishing net) to keep away "the betula-barked small crafts" sent by the spirit of diseases and "the small crafts of war".
The lake that is named Tōrum, the name of the Bright one:
In its centre, I, the Grand Goddess, will "enthrone" majestically!
The goddess who expands her daughters' life expectancy,
The goddess who expands her sons' life expectancy,
As the goddess, I will "enthrone" majestically!
The dance that breeds fish,
The dance that breeds game,
The dance that brings luck to reindeers,
The dance of the girls' longevity,
The dance of the boys' longevity,
The dance that keeps battlefields and war away,
I'm going.
In her wake followed Kheymas, "the man who leads the fish to the spawning grounds", "the vert [male spirit], grand fish supplier", from the mouth of the Ob river, where "the big and shredded wood shavings" that fill up his house adorned with fish scales turn into many fish in summer and much game in winter (reindeer, squirrels, sables). Dressed for the Games in a most beautiful water garment (green or blue), Kheymas repeats a ritual move that recalls wood working, "he shakes his braids filled with lake fish", and, before going away, splashes each participant with water or sprays them with snow so that they will catch fish in profusion.
Vèyt iki, Tōrum's third son, "the warden of threatening clouds and winds", the grand vert leader of the elements, the weather master, left his residence in the Berezovo area along the banks of the Ob, right in the middle of the meadows flooded by the spring thaw, right in the middle of a betula copse, in a golden house with a hidden door. He wears a white hat, has winged arms, and is followed by two servants. He protects men from hostile elements, as well as the youngest male children. Em vozh iki, the warrior rider with eyes "as big as the moon", who watches the frontier between the dead and the living, is asked by a singer to appear and to annihilate any mischievous spirit. The mask-faced spirit is dressed in black, like the coat of his dark mount. He chases away the assaults of the evil spirits and redirects the diseases toward the Ural, where he gets rid of them by stoning their bones. He holds the bear as a sacred icon - one of the deity's names is consequently "the vert whose traits resemble those of a toothed or a clawed beast".
Lev kutup iki is "the man from the middle stream of the Sosva", whose dark waters abound with fish, the Guardian and protector of the foothills of the Sacred Stone (the Ural), the vert who brings luck to the reindeer herds, the one who is "older than gold" and who lives in a house as "colourful as the golden undulating waters, a house resembling a wave, resembling waters blown away". He is preceded by a singer who announces the imminent nightly sound of hooves - those of the five colourful reindeer of Lev kutup iki's team. He comes majestically on a "happy nest covered by a black sheet" to pay tribute to the bear and to show his face to the humans as he dances his sacred dance, his face "encircled by a ray and by numerous braids".
As tyj iki, "the man from the Ob river's upstream", "the vert sympathizing with the young girls, the vert sympathizing with the young boys", is a celestial rider whose mount has "the colour of a spring squirrel, the colour of an autumn squirrel" and whose light sabre is as undulating as a wave. He rides around the world, easing in his wake the fears and the pains of humankind, fighting for good, defeating evil, sometimes succeeding in changing the thoughts of humans; he often appears dressed in nothing but white, a fox-fur hat on his head, a piece of fox fur in his right hand - his left hand empty, for if the celestial rider were to use both his hands, the earth would stray from its orbit, provoking the end of the world. Welcomed by the two ort (chosen men who oversee the proper conduct of the Games), the divinity sketches a fragile and majestic dance, in which well-placed moves and a slow rhythm seem to constitute the very revolution of the earth.
Finally, "the grand silver-haired nay [female spirit or goddess] who allocated the place of each spirit in the Middle World and who decides each newborn's life expectancy on earth, ends the grand spirit procession. With the jerky rythm of an ironspeared arrow, Kaltashch (a man dressed like the goddess) hammers as recommandations, "her goodbye words, numerous and bright" addressed to the parents who want to see their children grow up "sound and strong", before women cover her with numerous scarves as an offering and she executes "the eternal dance of the young women's long life, the young men's long life eternal dance".
In the Russian Federation, sometimes the ritual can no longer limit itself to the sir or to the community of a precise territory. Connoisseurs of the repertoire have to be sought out throughout the whole autonomous district so that the children of the Kazym ethnographic settlement Numsang ëx ("The thinking people") are able to re-enact what has been saved by the elders. Already, Danil Nikolaevich Tarlin has taken with him to his grave one of the sacred songs of Kaltashch, and Nikolay Lozyamov, now a worn-out recumbent figure, has stopped dancing in the night. For Juriy Vella, the Forest Nenets reindeer herder and poet, the Games are by now nothing but a funeral evening, the pale image of a broken communal momentum:
In the banqueting hall of the University of Debrecen, our Kazym Khanty are performing the Bear Games. The American and Japanese delegations are sitting there and listening carefully.
Tartu, Tallinn and Helsinki, scientists and students from Budapest and Debrecen, are eager and pleased to do the same. Even our people from Syktyvkar and Yoshkar-Ola are thrilled by what is being performed on stage. Moscow herself, conceited and bossy, sits there, compelled to listen as the occasion requires. But one could have cried when our Khanty and Mansi fellows rudely broke in on the performance with perfectly misplaced advice, when none should be given! Why? Because the West knows that THIS belongs to the past, that what is being played here on the stage is bound to die, that IT vanishes at every second with the last initiated men... But we, on the other hand, still kid ourselves 41 .
Semën Tarlin, however, as the repository of his great-grandfather's long song, hurries to the ethnographic settlements to pass on fragments of the Bear Games before his ageing memory fails him.
Games as Self-Expression
How should one decipher those Bear ritual games? Far from being fixed, Ugrian societies have never stopped constructing themselves through a number of interactions, especially religious ones, to which the Bear Games testify. Representing a real challenge to colonisation, they re-enact the orally transmitted experiences of the taiga communities, from the times of the Empire to the Federation Era, becoming an identity marker of hope.
Traditionally, the zither is also used for rituals of divination, as recalled by Karjalainen 42 ; indeed, the drum that was still in use among the most northern Ostyaks, who had resisted the campaigns of Christianization in the XVIIIth century, seems to have vanished from that time onwards in other areas, such as the Irtysh, Vakh, Vasyugan, and so on, in favour of the zither (nars-yukh), which was less compromising in the eyes of the Russian authorities. Thus, « the ordinary musical instrument, as soon as it is in the hands of the sorcerer, becomes so sacred that one can neither put it on the ground nor carry it openly from one house to another 43 ». The musician, his head turned toward the sacred corner of the house, with the zither placed on his right knee, calls the deities with a melody appropriate to each one. Meanwhile, other "actors" of the ritual, in specific dress, headgear and ornamented gloves, feature as the divinities and enact their journey in the Bear Games. So the musician (singer 44 ) and the dancer (singer) seem to divide between them the actions of the shaman, who usually concentrates on invoking the spirits/gods and tracing their journey. This particular "redeployment" of the shaman's art is not surprising, considering the nature of the instrument and the strategic tricks that microsocieties were forced to use at that time in order to escape from the Russifying ogre; without the drum, the shamanic journey came to an end, and the spirits and deities themselves now come to visit a shaman who is fixed in place and whose practice is masked by his songs and his musical instrument.
The figure of the bear, the zither and the offerings progressively had to echo the (spirit) figures, the drumbeats and the sacrifices, which had become too visible and suspect in the eyes of the colonizer. Through these games, which function as a great collective ritual aiming to bring good luck to the hunters and herders of northwest Siberia, the Khanty managed, in spite of centuries of uneven contact with the Russian world, to remain "Others", by responding ritually to the Russian colonizer.
Today the Khanty have invested the urban space. Some of them are forced to leave their ancestral territory and resettle in places with an unobstructed view of "the big Russian man with the iron head and belly" (the derrick) and the "big Russian woman in a red gown" (the flare); others are seduced by the city lights. The asfal'tnye hanty, as Timofey Moldanov defines himself 45 , are gradually introducing their sense of the sacred into the city. Since the creation of a park-museum in 1987, rituals and many cultural events have been organized to educate others about the Indigenous cultures of the eponymous peoples of the okrug. Thus, the ritual Games sacralised the space at the edge of Khanty-Mansiysk and of the forest, three days in a row in July 2008, at the V th International Festival of Finno-Ugric Peoples. Because the bear refused to be celebrated in the capital (through divination the bear is now asked which games she wants), a smaller "old man with teeth, old man with claws" was invited from Kishyk and celebrated by the people. The emergence of this urban ritual, based on the initiative of the Khanty intelligentsia and funded by a grant from the Department of Indigenous Affairs of the district, remains to be carefully studied, in such a way as to avoid an intellectual appropriation of Games that are traditionally communal ones. The Russian authorities, after having forbidden the Bear Games, are now, after over fifty years of folklorisation, trying to reify them for local tourism, presenting them in the official leaflets as if they were an ancient celebration of a "small Switzerland".
Nevertheless, through this occupation of public space, which the Khanty and Mansi make sacred, they remind the state of their presence and of the difficulty of reconciling natural and human resources in the district. Two worldviews confront one another around the Bear. The Federation runs Siberia as a land of plenty, prisoner of a short-term economic logic and of alien workers little concerned about the North. In contrast, the Khanty take care not to hurt the earth, not to take from nature more than they need (what a number of authors of the XIXth century qualified as inconsequential) and to leave behind them as little as possible, so as to prevent evil spirits from doing them any wrong. This holistic vision of the world gives a new legitimacy to the games when it is confronted with industrial exploitation, regarded as "vain action lacking any spirituality" and a "negation of the Indigenous relation with Creation established across centuries". As during the Empire, reinvesting in these games is a matter of survival for the Indigenous peoples. For instance, the traditional code of behaviour in the forest or on the rivers, which was transmitted through songs (like an echo of the commandments addressed by Num-Tōrum to his child before he/she came down to earth), is now related to Khanty identity and ecological claims. Similarly, the song of the Master of the Spirits of the Forest reminds humans of their alterity and calls out to the deaf and blind civilization beyond the circle of the Khanty, gathered for the Bear Dance.
When you are in the taiga, you walk among us, but your eyes don't see us. If we did not want to move aside from your way, we would bump into one another permanently 46 .
As the Soviet power tried to break the Weltanschauung of the communities, it turned them into one "people", whereas they used to define themselves in terms of their rivers. Today, the Khanty use this status to be heard, but also to construct an image of themselves, their own image. Contemporary games participate in the process of self-construction in the post-Soviet space and enable the Khanty people to reaffirm roots for the future.
An ethno-historical perspective shows that the Bear Games are not remnants of an archaic totemism, but the resilient expression of microsocieties confronted with the shock of colonization. The original hunting ritual has returned in a subtle war against the Russian-making machine and the political and religious weapons used by the colonizing power. At the heart of the ritual, as early as the XVIIIth century, Indigenous peoples accused the colonizer of being guilty of killing the "guarantor of the oath". They thereby implicitly compelled the new power to succumb to the animal revenge of the celestial child. So long as ritual games perpetuate life and the balance of Creation, the Khanty people will inhabit the Land.
FIG. 1 - Bear Games (A.A. Ernyhov's settlement, Beloyarsk district, 2004)
FIG. 2 - Ostyak birch-bark tent (Poluj, early XXth century; © Salekhard Ethnographic Museum Complex)
See Moldanov, 1999; Materialy V jugorskih čtenij « Medved' v kul'ture obsko-ugorskih narodov »;
Kravčenko, 2004.
See SNdC, 2010.
Cf. Lambert, 2007-2008: 19-43.
Patkanov, 2003: 26.
Šemanovskij (Irinarh), 1905: 25-26
Moldanova, 2001: 292.
Lambert, 2003Lambert, -2004 : 391-396. : 391-396.
Patkanov, 2003: 209.
The very name of the two rituals, one sporadic (voj jāk), linked to the taking of a bear in the hunt, the other periodic (lènh jāk), traditionally celebrated every seven years at Vezhakora and at Tegi alternately, reflects the sacred character of the dance in the eyes of the bear, who "will remember each one of them", from the one "in her thin garment of fir bark" to the one "in a caftan of cloth, downy like the fur of the squirrel, in boots adorned with glass beads, like the elbows of the squirrel" (Kravčenko, 2004: 12).
The bear, who sees and hears everything, would take offence at any disrespect shown by a word or a gesture; in this spirit, according to the terminology of the bear language (more than 500 terms, 132 of which designate the animal itself), the hunters do not skin it but "remove its fur", "undress the ancient one"; the whole body of "the man of the forest", from the heart ("the sacred place") to the fat ("the knife's path"), including its thighs ("the city"), also has a ritual name. Likewise, this secret language is used in the house to designate animals and objects during the time of mourning: the bench is "the one with legs", salt "the delicious thing", the cup "the tree that draws", water "the thing to drink", "to eat" is said "to gather", etc. (Karjalainen, 1996: 173-174).
Ibid., 1996: 165-166.
Ahlqvist, 1999: 119.
Karjalainen, 1996: 164.
Patkanov, 2003: 211.
An old Russian measure equal to 1,067 metres.
Patkanov, 2003: 213.
See SNdC, 2007-2008: 115-195.
GAHMAO [Gosudartvennyj Arhiv Hanty-Mansijskogo Avtonomnogo Okruga], fonds 111, inventaire 3, dossier 9, feuillets 85-86.
Šemanovskij (Irinarh), 1909: 403-410.
See Eremej Ajpin, Bož'ja Mater' v krovavyh snegah, Ekaterinburg, Pakrus. The book devoted to these events has been translated into French: La Mère de Dieu dans des neiges de sang, translated from Russian by A.-V. Charrin and A. Coldefy, Paris, Paulsen.
TOCDNI, fond 107, inventaire 1, dossier 111, feuillets 25-26.
Naša obščaja gor'kaja pravda, 2004:171-172.
Kniga rasstreljannyh, 1999: 122.
Karjalainen, 1996: 184.
GAHMAO.422.17.1.1-4. Memories of Anna Grigor'evna Yumina (born in 1922) of Tugyany, rajon of Berezovo, collected in 2002 by T.S. Seburova and archived as a manuscript at the State Archives of the Khanty-Mansi autonomous okrug.
This song was performed by Nikolaj Loz'jamov of Kišik as an element of the fourth part of the Games (or Lŭļŋǎltup), which has a humorous character and stages, through the art of singing, the relationships bear-men, men-men, men-nature.
Other deities leave behind "the dance that gives abundance of fish, the dance that gives abundance of game" (Kazym Goddess' song, 1993), "the dance that conjures up the strokes of war, the dance that conjures up the strokes of disease" (Hin iki's song, 1989), "the dance that guards from discord, the dance that protects from war" (Yuil'sk protective spirit, 1993), etc.
Karjalainen, 1996: 204, 206, 230.
Ibid., 1996: 207.
The vernacular terms designating the instrument and the musician are respectively naras-juh ("the tree that sings") and narasti ho ("the man who plays an instrument"). |
01009106 | en | [
"spi.meca",
"spi.mat"
] | 2024/03/04 16:41:20 | 2001 | https://hal.science/hal-01009106/file/Moresi2001.pdf | Louis Moresi
Frederic Dufour
Hans Mühlhaus
Viscoelastic formulation for modeling of plate tectonics
The Earth's tectonic plates are strong, viscoelastic shells which make up the outermost part of a thermally convecting, predominantly viscous layer. In order to build a more realistic simulation of the planet's evolution, the complete viscoelastic convection system must be included. A particle-in-cell finite element method is demonstrated which can simulate very large deformation viscoelasticity. This is applied to a plate-deformation problem. Numerical accuracy is demonstrated relative to analytic benchmarks, and the characteristics of the method are discussed.
INTRODUCTION
Underneath the lithospheric plates of the Earth lies the mantle (Figure 1). Approximately 3000 km deep, it is composed of solid rock that is warm enough to deform like a viscous fluid, albeit at incredibly slow speeds of a few centimetres per year. The plates move because the mantle is forever stirring as heat generated by natural radioactive decay struggles to escape via thermal convection. The plates which form the ocean floors are part of this circulation and are sucked down when they become old, cold and dense.
The continental crust is formed by lower density rock which remains buoyant despite being cold. In the lithosphere the rocks are significantly cooler and behave as a viscoelastic, brittle solid. In regions of high stress, brittle failure gives rise to earthquakes. This picture of the Earth's interior is widely accepted by geophysicists. It clearly indicates that the fundamental process is thermal convection; plate tectonics is the manner in which the system organizes.
Therefore, a consistent model of plate behaviour must contain a description of the convection system of which the plate is a part.
There are some fundamental problems which need to be addressed before the routine application of engineering principles to the lithosphere. The principal issue is that plate tectonics is itself only a kinematic description of the observations: a fully consistent dynamic description of the motion of the plates is still lacking. The starting point for such a description is the equilibrium equation for the deforming continuum,
∇·σ = f (1)
where σ is the stress tensor and f a force term. As we are interested only in very slow deformations of highly viscous materials (infinite Prandtl number), we have neglected all inertial terms in (1). It is convenient to split the stress into a deviatoric part, τ, and an isotropic pressure, p,
σ = τ − pI (2)
where I is the identity tensor.
Viscoelasticity
There are a number of different viscoelastic models; we will use the Maxwell model, which has been used in previous studies of lithospheric deformation where viscous and elastic effects are important, such as postglacial rebound (e.g. Peltier, "The impulse response of a Maxwell Earth"). This model assumes that the strain rate tensor, D, defined as:
D_ij = (1/2)(∂V_i/∂x_j + ∂V_j/∂x_i) (3)
is the sum of an elastic strain rate tensor D^e and a viscous strain rate tensor D^v,

D = D^e + D^v (4)

The velocity vector, V, is the fundamental unknown of our problem and all these entities are expressed in the fixed reference frame x_i. Now we decompose each strain rate tensor into an isotropic and a deviatoric part,

D^e = (1/3)tr(D^e)I + D^e′  and  D^v = (1/3)tr(D^v)I + D^v′ (5)

where D′ denotes the deviatoric part of D and tr(D) represents the trace of the tensor. Individually we express each deformation tensor as a function of the deviatoric stress tensor τ and the pressure p, which finally gives a tensorial equation for the deviatoric part,

τ̂/(2μ) + τ/(2η) = D^e′ + D^v′ = D′ (6)

where τ̂ is the Jaumann corotational stress rate for an element of the continuum, μ is the shear modulus and η is the shear viscosity. The isotropic part gives a scalar equation for the pressure,

ṗ/K + p/ζ = −tr(D) (7)

where K is the bulk modulus and ζ is the bulk viscosity (the corotational rate of p reduces to ṗ, since p is a scalar). The Jaumann stress rate is
τ̂ = τ̇ + τW − Wτ (8)
where W is the material spin tensor,
W_ij = (1/2)(∂V_i/∂x_j − ∂V_j/∂x_i) (9)
The W terms account for material spin during advection, which reorients the elastically stored stress tensor. We note that the form of equation (7) is unsuited to conventional fluids, as the material has no long-term resistance to compression. This behaviour is, however, relevant to the simulation of the coupled porous flow / matrix deformation problem. Here it is common to ascribe an apparent bulk viscosity to the matrix material in order to model compaction effects (e.g. McKenzie, "The generation and compaction of partially molten rock"), particularly for large-scale geological systems where the details of the pore network cannot be measured directly.
Numerical implementation
As we are interested in solutions where very large deformations may occur - including thermally driven fluid convection - we would like to work with a fluid-like system of equations. Hence we obtain a stress / strain-rate relation from (6) by expressing the Jaumann stress-rate in a difference form:

τ̂ ≈ (τ^{t+Δt} − τ^t)/Δt + τ^t W^t − W^t τ^t (10)

where the superscripts t, t+Δt indicate values at the current and future timestep respectively. (6) and (7) then become, respectively,

τ^{t+Δt} = [2ηΔt/(Δt+α)] D′^{t+Δt} + [α/(Δt+α)] τ^t + [αΔt/(Δt+α)] (W^t τ^t − τ^t W^t) (11)

p^{t+Δt} = −[ζΔt/(Δt+β)] D_kk^{t+Δt} + [β/(Δt+β)] p^t (12)

where α = η/μ is the shear relaxation time and β = ζ/K is the bulk relaxation time. We can simplify the above equations by defining an effective viscosity η_eff and an effective compressibility ζ_eff:
η_eff = ηΔt/(Δt + α)  and  ζ_eff = ζΔt/(Δt + β) (13)
Then the deviatoric stress is given by
τ^{t+Δt} = η_eff ( 2D′^{t+Δt} + τ^t/(μΔt) + (W^t τ^t − τ^t W^t)/μ ) (14)
and the pressure by
p^{t+Δt} = −ζ_eff ( D_kk^{t+Δt} − p^t/(KΔt) ) (15)
To model an incompressible material, K and ζ are made very large, such that D_kk ≈ 0.
Our system of equations is thus composed of a quasi-Newtonian viscous part with modified material parameters and a right-hand-side term depending on values from the previous timestep. This approach minimizes the modification to the viscous flow code. Instead of using physical parameters for viscosity and bulk modulus, we use effective material properties (13) to take into account elasticity. Then, during computations for the force term, we add elastic internal stresses from the previous timestep or from initial conditions.
F^el = ∇·( η_eff [ τ^t/(μΔt) + (W^t τ^t − τ^t W^t)/μ ] ) (16)
We solve (15) and (14) and obtain a solution for V^{t+Δt}. From this solution we compute the new stress state due to the velocity field and the previously stored stresses.
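To make the update step concrete, the following minimal Python sketch applies the effective-material relations (13)-(15) as reconstructed above; the function and variable names are illustrative assumptions, not the ELLIPSIS implementation, and 3x3 numpy arrays are assumed for the tensors.

import numpy as np

def viscoelastic_update(tau_old, p_old, D_new, W_old, dt, eta, mu, zeta, K):
    """One timestep of the effective-viscosity Maxwell update (eqs 13-15).

    tau_old : deviatoric stress at time t (3x3 array)
    p_old   : pressure at time t
    D_new   : strain-rate tensor at t+dt (3x3 array)
    W_old   : material spin tensor at time t (3x3 array)
    eta, mu : shear viscosity and shear modulus
    zeta, K : bulk viscosity and bulk modulus
    """
    alpha = eta / mu                       # shear relaxation time
    beta = zeta / K                        # bulk relaxation time
    eta_eff = eta * dt / (dt + alpha)      # eq (13)
    zeta_eff = zeta * dt / (dt + beta)

    # deviatoric part of the new strain rate
    D_dev = D_new - np.trace(D_new) / 3.0 * np.eye(3)

    # rotation of the stored stress by the material spin
    spin_term = W_old @ tau_old - tau_old @ W_old

    # eq (14): new deviatoric stress, including the stress-history terms
    tau_new = eta_eff * (2.0 * D_dev + tau_old / (mu * dt) + spin_term / mu)

    # eq (15): new pressure
    p_new = -zeta_eff * (np.trace(D_new) - p_old / (K * dt))

    return tau_new, p_new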
Stability
The approach outlined above is unconditionally stable only if the timestep is larger than the relaxation time for the material, i.e.
η/μ < Δt (17)
in the case of the shear moduli. Alternatively, this means a Deborah number, De < 1, indicating that the method is appropriate to the viscous, rather than the elastic, limit.
One difficulty is that the timestep is not necessarily chosen to match the physical problem, but by the Courant condition for the chosen mesh. This means that a convergence demonstration for arbitrarily small elements may not be possible for the general case. We are currently addressing this issue.
In practice, however, for our area of research, viscous flow drives the plate motions, and the lithospheric plates are embedded in a highly viscous material. This may produce a situation where a system-wide relaxation time is more important than the relaxation times of individual materials, since loading and unloading of the elastic materials happens almost exclusively through a low-viscosity medium. Under these circumstances, the relaxation time of an individual layer (such as the lithosphere) may be much larger than the Courant timestep, but stresses are either balanced, or relaxed by driving a flow in one of the viscous materials.
COMPUTATIONAL METHOD
3.1 Choice of Numerical Scheme
In fluid dynamics, where strains are generally very large, but not important in the constitutive relationship of the material, it is common to transform the equations to an Eulerian mesh and deal with convective terms explicitly. Problems arise whenever advection becomes strongly dominant over diffusion, since an erroneous numerical diffusion dominates. In our case, the advection of material boundaries and the stress tensor are particularly susceptible to this numerical diffusion problem. Mesh-based Lagrangian formulations alleviate this difficulty, but at the expense of remeshing and the eventual development of a less-than-optimal mesh configuration. This increases complexity and can hinder highly efficient solution methods such as multigrid iteration. The Natural Element Method eliminates remeshing difficulties but is associated with considerable complexity of implementation, particularly in 3D.
A number of alternatives are available which dispense with a mesh entirely: smooth particle hydrodynamics and discrete element methods are common examples from the fluid and solid mechanics fields respectively. These methods are extremely good at simulating the detailed behaviour of highly deforming materials with complicated geometries (e.g. free surfaces, fracture development), and highly dynamic systems. They are, in general, formulated to calculate explicitly the interactions between individual particles, which ultimately means that a great many timesteps would be required to study creeping flow, where the timescales associated with inertial effects are very many orders of magnitude smaller than typical flow times.
We have therefore developed a hybrid approach - a particle-in-cell finite element method which uses a standard Eulerian finite element mesh (for fast, implicit solution) and a Lagrangian particle framework for carrying details of interfaces, the stress history, etc.
The Particle in Cell Approach
Our particle-in-cell finite element method is based closely on the standard finite element method, and is a direct development of the material point method of Sulsky et al. ("Application of a particle-in-cell method to solid mechanics"). The standard mesh is used to discretize the domain into elements, and the shape functions interpolate node points in the mesh in the usual fashion. The problem is formulated in a weak form to give an integral equation, and the shape function expansion produces a discrete (matrix) equation. Equation (1) in weak form, using the notation of (2), becomes

∫_Ω N_(i,j) τ_ij dΩ − ∫_Ω N_,i p dΩ = ∫_Ω N_i f_i dΩ (18)

where the trial functions, N, are the shape functions defined by the mesh, and we have assumed no non-zero traction boundary conditions are present. For the discretized problem, these integrals occur over subdomains (elements) and are calculated by summation over a finite number of sample points within each element. For example, in order to integrate a quantity, φ, over the element domain Ω^e we replace the continuous integral by a summation

∫_{Ω^e} φ dΩ ≈ Σ_p w_p φ(x_p) (19)

In standard finite elements, the positions of the sample points, x_p, and the weighting, w_p, are optimized in advance. In our scheme, the x_p's correspond precisely to the Lagrangian points embedded in the fluid, and w_p must be recalculated at the end of a timestep for the new configuration of particles. Constraints on the values of w_p come from the need to integrate polynomials of a minimum degree related to the degree of the shape function interpolation, and the order of the underlying differential equation (e.g. Hughes, 1987). These Lagrangian points carry the history variables, which are therefore directly available for the element integrals without the need to interpolate from nodal points to fixed integration points. In our case, the distribution of particles is usually not ideal, and a unique solution for w_p cannot be found, or we may find we have negative weights which are not suitable for integrating physical history variables. We therefore store an initial set of w_p's based on a measure of local volume and adjust the weights slightly to improve the integration scheme. Moresi et al. (2000) give a full discussion of the implementation of the particle-in-cell finite element scheme used here, including full details of the integration scheme and its assumptions. They also discuss the specific modifications to the material point method required to handle a convecting fluid.
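As an illustration of this particle-based quadrature, the short Python sketch below accumulates an element-level integral of the form (19), weighting a history variable stored on the particles by the shape functions evaluated at the particle positions; the names are hypothetical and are not the actual ELLIPSIS routines.

import numpy as np

def element_integral(field_on_particles, particle_coords, weights, shape_fn):
    """Approximate the element integral of N(x)*f(x) over Omega_e by a sum
    over the Lagrangian particles currently inside the element (cf. eq 19).

    field_on_particles : f evaluated at each particle (a history variable)
    particle_coords    : local coordinates x_p of the particles
    weights            : per-particle quadrature weights w_p
    shape_fn           : returns the nodal shape-function values N(x_p)
    """
    n_nodes = shape_fn(particle_coords[0]).size
    integral = np.zeros(n_nodes)
    for f_p, x_p, w_p in zip(field_on_particles, particle_coords, weights):
        integral += w_p * f_p * shape_fn(x_p)   # one contribution per particle
    return integral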
BENCHMARKS
We have benchmarked our numerical scheme against analytic solutions in order to characterize its strengths and weaknesses, and to quantify the likely level of accuracy we can achieve with a given mesh/particle density. We first benchmark the purely viscous flow case to provide a baseline for comparison with viscoelastic cases.
Analytical solution
We study the spreading of a rectangular sample of material under a constant downward velocity V applied on top (see Fig. 2).
The specified boundary conditions give an analytic relation for the velocity field (25). We use this relationship to eliminate the pressure derivative from (21) and (7) and express the unknown D_xx as a function of p and D_zz; consequently we obtain a relation between the horizontal velocity V_x of a point and its coordinate x. The Eulerian mesh does not carry any information from timestep to timestep other than the boundary conditions. Therefore, when convenient, the mesh may be modified, replaced, and refined as necessary.
For this problem, compression is applied by a moving boundary condition which causes the mesh to compact in one direction. For simplicity, the mesh is simply scaled to the new aspect ratio without altering the number of elements. In a more complicated situation, however, it would be possible to regrid completely without loss of accuracy. The only detail which needs to be observed is that the updating of the boundary node locations follows the same formulation as that of the particles (here, a second-order Runge-Kutta integration procedure) to prevent the boundary conditions from drifting with respect to the stored information on the particles.
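A sketch of the second-order Runge-Kutta (midpoint) update referred to here, applied in the same way to particle and boundary-node positions; the velocity interpolator is assumed to exist and is only a placeholder.

import numpy as np

def advect_positions(positions, velocity_at, dt):
    """Midpoint (second-order Runge-Kutta) advection of material points.

    positions   : iterable of particle or boundary-node coordinate arrays
    velocity_at : function returning the interpolated velocity at a point
    dt          : timestep (bounded by the Courant condition)
    """
    new_positions = []
    for x in positions:
        x = np.asarray(x, dtype=float)
        v1 = velocity_at(x)                 # velocity at the start point
        x_mid = x + 0.5 * dt * v1           # half-step predictor
        v2 = velocity_at(x_mid)             # velocity at the midpoint
        new_positions.append(x + dt * v2)   # full step with midpoint velocity
    return new_positions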
In the x-direction we have a free surface. In order to investigate the properties of a particle representation of such interfaces, it is important not to simply use a mesh-based boundary condition. Instead we use a mesh (Fig. 2) larger than the specimen and fill the gap with a background material having η = 10^3 MPa·s and ζ = 10^5 MPa·s.
The square mesh is composed of 4096 elements.
Results
We have tested the code, ELLIPSIS, with three different types of materials: viscous and compressible (Fig. 3c and d), viscous and incompressible (Fig. 3a and b), and viscoelastic and compressible (Fig. 4). The numerical parameters for the viscous part of (15) and (14) are:
η = 10^6 MPa·s and ζ = 4×10^6 MPa·s (27)
Due to the mesh size and the prescribed velocity on top, the Courant condition requires Δt < 5×10^-3 s. For our problem we take Δt = 10^-3 s.
As an indicator of accuracy we compare the analytical and numerical x-velocity at the point I (Fig. 2). The steps in the numerical solutions are related to the motion of the material interface relative to the element edges. When the interface between the specimen and the background material crosses into a new element there is an immediate discontinuous contribution to the element equations from the sample material. We have verified that the jump tends to zero as we increase the element and particle densities.
In the viscoelastic case, in Fig. 4, we can see that, for different relaxation times, the error against the analytical solution remains below 3%. Some steps are still present, but in the viscoelastic case the velocity is considerably more noisy. What is most clear is that the computations with larger relaxation time have greater fluctuation in accuracy. The problem becomes more acute in this case when the Courant time (decreasing due to the compression of the background mesh) becomes comparable to the relaxation time. This can result in a loss of stability which is entirely an artefact of the discretization. The most promising solution to this issue is to compute the time-derivative of the stress tensor over a physically relevant interval, rather than that imposed by the mesh. This is a particular focus of our current research.

The force term, f, in (1) is a gravitational body force due to density changes. We assume that these arise, for any given material, through temperature effects:
v • r -V"p = gpo ( I -aT)z ( 28
)
where g is the acceleration due to gravity, ρ0 is the material density at a reference temperature, α is the coefficient of thermal expansivity, and T is temperature.
ẑ is a unit vector in the vertical direction. We have also assumed that the variation in density only needs to be considered in the driving term (the Boussinesq approximation).
The equation of motion is then
∇·( 2η_eff D′^{t+Δt} ) − ∇p^{t+Δt} = gρ0 (1 − αT) ẑ − ∇·( η_eff [ τ^t/(μΔt) + (W^t τ^t − τ^t W^t)/μ ] ) (29)
The velocity field and pressure at t + Δt can be solved for a given temperature distribution and the stress history from the previous step. Motion is driven by the heat escaping from the interior. The energy equation governs the evolution of the temperature in response to diffusion of heat through the fluid. For a given element of fluid,

DT/Dt = κ ∇²T (30)

where κ is the thermal diffusivity of the material.
So far, all equations have been written in a purely Lagrangian framework. The time derivative of temperature and the Jaumann stress rate refer to a frame of reference which is carried by the fluid. In choosing a solution method, it is necessary to choose whether to honour the Lagrangian formulation, or to work with a fixed reference frame and introduce additional terms to compensate for the advection of temperature and stress by the fluid.
Brittle failure
As we discussed above, plate models need to include a description of the brittle nature of the coldest part of the lithosphere. Geologists use this term quite loosely to distinguish fault-dominated deformation, which may result in seismic activity, from ductile creep, which occurs at higher temperature and pressure. In all recent studies of mantle convection where the brittle lithospheric rheology has been taken into account, the brittle behaviour has been parameterized using a non-linear effective viscosity which is introduced whenever the stress would otherwise exceed the yield value τ_yield. This approach ignores details of individual faults, and treats only the influence of fault systems on the large-scale convective flow.
To determine the effective viscosity we extend (6)
by introducing a von Mises plastic flow rule:
τ̂/(2μ) + τ/(2η) + λτ/(2|τ|) = D_e′ + D_v′ + D_p′ = D′ (31)
where λ is a parameter to be determined such that the stress remains on the yield surface, and |τ| = (τ_ij τ_ij / 2)^{1/2}. We again express the Jaumann stress rate in difference form (10) to give

τ^{t+Δt} [ 1/(2μΔt) + 1/(2η) + λ/(2|τ^{t+Δt}|) ] = D′^{t+Δt} + τ^t/(2μΔt) + (W^t τ^t − τ^t W^t)/(2μ) (32)

No modification to the isotropic part of the problem is required when the von Mises yield criterion is used. At yield we use the fact that |τ| = τ_yield to write

τ^{t+Δt} = η′ ( 2D′^{t+Δt} + τ^t/(μΔt) + (W^t τ^t − τ^t W^t)/μ ) (33)

using an effective viscosity, η′, given by

η′ = η μ Δt τ_yield / ( η τ_yield + μ Δt τ_yield + λ η μ Δt ) (34)

We determine λ by equating the value of |τ^{t+Δt}| with the yield stress in (33). Alternatively, in this particular case, we can obtain η′ directly as η′ = τ_yield / |D_eff|, where

D_eff = 2D′^{t+Δt} + τ^t/(μΔt) + (W^t τ^t − τ^t W^t)/μ (35)

The value of λ or η′ is iterated to allow stress to redistribute from particles which become unloaded. The iteration is repeated until the velocity solution is unchanged to within the error tolerance required for the solution as a whole.
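The logic of this stress-limiting iteration can be sketched as follows in Python; the solver calls are placeholders, and the viscosity rescaling relies on the equivalence, under the reconstructed equation (35), between η_eff·τ_yield/|τ_trial| and τ_yield/|D_eff|.

import numpy as np

def second_invariant(tau):
    # |tau| = (tau_ij tau_ij / 2)^(1/2)
    return np.sqrt(0.5 * np.tensordot(tau, tau))

def yield_limited_viscosity(eta_eff, tau_trial, tau_yield):
    """Cap the effective viscosity so that the stress stays on the yield
    surface: if the trial stress exceeds tau_yield, rescale eta so that the
    recomputed stress magnitude equals tau_yield."""
    s = second_invariant(tau_trial)
    if s <= tau_yield:
        return eta_eff                    # particle below yield: unchanged
    return eta_eff * tau_yield / s        # rescaled viscosity at yield

def solve_with_yield(solve_velocity, update_stress, particles, tol=1e-4, max_it=50):
    """Iterate velocity solves until yielding particles stop redistributing
    stress and the velocity field no longer changes within tolerance."""
    v_old = None
    for _ in range(max_it):
        v = solve_velocity(particles)     # Stokes-like solve with current viscosities
        update_stress(particles, v)       # recompute trial stresses, cap viscosities
        if v_old is not None and np.max(np.abs(v - v_old)) < tol:
            break
        v_old = v
    return v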
The value of the yield stress is, in principle, a function of strain, yield history, and temperature, and can be distinct for different materials.
PLATE MODELING
As a simple example, we demonstrate the compression of a viscoelastic-brittle layer which lies on top of a slightly less dense viscous fluid layer (Figure 5). This system is an analogue of the cool oceanic lithosphere which rests upon the warm asthenosphere (though we do not solve the temperature equation in this case). The viscoelastic layer is initially split to provide an initiation point for a model subduction zone. The vertical boundaries are free-slip, and the right-hand edge is given a horizontal velocity to shorten the system. There is a layer of highly compressible material of very low viscosity above the elastic layer which accommodates the volume change associated with shortening of the mesh, and mimics a free-surface boundary condition on the upper surface of the elastic layer. As compression proceeds, the viscoelastic layer flexes and the viscous layer flows to accommodate the deformation. As stresses build up in the model lithosphere, a second failure point develops, allowing one half of the material to fold up under the other half. Further compression forces the two halves of the lithosphere layer to slide past each other along a zone of material failure. After this point, the presence of the bottom boundary begins to interfere with the evolution of the system. This particular simulation demonstrates the capability of the algorithm in the simulation of subduction zone geometry in the style of Melosh ("Dynamic support of the outer rise") or Gurnis et al. (1996). The fact that the algorithm is implemented within a fluid-dynamics framework suggests that a viscoelastic analysis of convection with strong temperature dependence of viscosity and a yield stress is now possible.
DISCUSSION
The algorithm described above is designed to introduce elastic effects into convection simulations where temperature-dependent viscosity and yielding dominate the mechanical behaviour. The viscosity of the mantle and the mantle lithosphere is very strongly dependent on temperature (several orders of magnitude variation over 1000°C), whereas the shear modulus is not strongly affected (there is only a modest change in seismic wavespeed due to temperature). Therefore, elastic effects become unimportant outside the cold thermal boundary layer where viscosity is extremely large.
The influence of elastic stresses is likely to be felt at the subduction zones where the lithosphere is bent into the interior of the Earth. In these regions stresses are typically close to the yield stress - a fact which allows the plates to move in the first place. The addition of elasticity is likely to complicate the simple picture presented by Tackley ("Self-consistent generation of tectonic plates in three-dimensional mantle convection") and Moresi & Solomatov (1998) for viscous materials with a yield stress.
Our methodology is limited to a coarse continuum description of the subduction zone system at a resolution of a few km. This may be able to give us valuable information into the nature of plate tectonics, the thermal conditions in and around subducting lithosphere, and the stress state of the system. However, the resolution is too coarse to say anything about the detailed mechanics of the failure of lithospheric fault zones and the conditions for major failure to occur. For this we require a coupling of the large-scale code with an engineering-scale code (e.g. DEM or small-deformation Lagrangian FEM), using the large scale to provide boundary conditions for the small scale. The issue of scale-bridging is important in many areas of numerical simulation. Essentially the same difficulties arise in material science, where the atomic scale is best treated by molecular dynamics codes but the large scale must be treated as a continuum (e.g. Bernholc, 1999).
Figure 1: A simplified cross section of the Earth with major layerings shown to scale except for the upper boundary layer which is exaggerated in thickness by a factor of roughly two.
Figure 2: Geometry and boundary conditions for the analytic solution.
Figure 3: Viscous deformation results: a. viscous incompressible, analytical; b. viscous incompressible, numerical; c. viscous compressible, analytical; d. viscous compressible, numerical.
Figure 4: Viscoelastic material: a. numerical solution; b. μ = 10^4 MPa; c. μ = 10^5 MPa.
Figure 5: Example: compression of a viscoelastic plate with yield stress overlying a low-viscosity fluid of equal density.
ACKNOWLEDGEMENTS
The authors gratefully acknowledge the suggestions of an anonymous reviewer which have helped to clarify this paper.
04101304 | en | [
"shs.phil"
] | 2024/03/04 16:41:20 | 2000 | https://hal.science/hal-04101304/file/The%20Malebranche-Arnauld%20Controversy.pdf | The Malebranche-Arnauld controversy 1 PRÉSENTATION From 1683 to 1694 1 , a long and furious controversy opposed Malebranche to
When the debate begins, Malebranche is still a "young" philosopher (The Search After Truth was published in 1674) identified by the public as one of the most talentuous representant of the new cartesian generation. Antoine Arnauld (1612-1694) is on the other hand an "old" thinker. It is probably better not to say an "old philosopher", as Arnauld was at this time especially known for his theological writings, which can be divided into two periods : from 1640 to 1668, Arnauld was one of the main figures in the debate about efficious grace, after the publication of Jansenius' Augustinus, and he then appeared as the leader of the "jansenist" group ; after the "Peace of Church" (1668) Arnauld mainly devotes to controversy against the Protestants, in collaboration with Pierre Nicole. In philosophy, Arnauld wrote very little : the Fourth objections to Descartes Meditations in 1641, the Grammar (1660) and the Logic (1662) (both said "de Port-Royal") the first in collaboration with Claude Lancelot, the second with Pierre Nicole. Because of these, he was regarded as a cartesian. Then, this ideological similarity between Arnauld and Malebranche defines and delimites the area of their discussion: both are catholics, priests, and refer to Descartes and Augustine.
The origin of the debate is a text published by Malebranche in 1680, without having waited for Arnauld's opinion, that he had nevertheless solicitated in 1679 : the Treatise on nature and grâce. It is unquestionable that this Treatise is the cause of Arnauld's attack : the fact is attested by about ten letters in Arnauld's correspondence between 1680 et 1683 2 . But this is posing a difficult problem for the comprehension of the development of this controversy : in 1683, the first book that Arnauld publishes against Malebranche is the famous On true and false ideas 3 . It is a text dealing with theory of knowledge, in which Arnauld fights, in the name of Descartes, against the malebranchist thesis that the tradition retains under the spectacular denomination of "vision in God". Now, in the Treatise on Nature and Grace, nothing concerns theory of knowledge. Therefore, this is a serious difficulty : why does Arnauld, who notoriously disagreed with the
1 It is the year of Arnauld's death, but his friends continued to publish some (posthumous) anti-malebranchist texts. Malebranche answered until 1704. Then, a distinctive feature of this controversy is that it goes on after he death of one of its protagonist. All references to Arnauld refer to the Oeuvres d'Antoine Arnauld (designated by OAA ; this edition is sometimes said "édition de Lausanne"), 43 volumes, Paris and Lausanne, Sigismond d'Arnay, 1775-1783. The texts against Malebranche are in the volumes 38, 39 et 40 of this edition. 2 E.g., see Letter to Neercassel of the 13-01-1681, OAA, 2, p.95 ; Letter to the marquis de Roucy of the 26-05-1681, OAA, 2, p.101 (also given in the Défense contre la réponse aux vraies et fausses idées, OAA,38, ; Letter to the marquis de Roucy du 04-01-1681, OAA, 2, p.116 (also given in the Défense contre la réponse aux vraies et fausses idées, OAA,38,p.434).. 3 This text is in OAA,38, There are recent english translations of this texte : 1) On True and False Ideas, traduction by S. Gaukroger, Manchester, Manchester U.P., 1990 (with an Introduction, The Background to the Problem of Perceptual Cognition, and an Appendice, The Controversy with Arnauld). 2) On True and False Ideas, traduction by E-J. Kremer, New-York, E. Mellen, 1990 (with an Introduction ; a traduction of the correspondence between Descartes et Arnauld ; gives in addition the differences between the text of the first edition of On true and false ideas (1683) and the text of the "édition de Lausanne"). I will use Kremer's translation, designated by "Kremer".
Treatise, decide to begin the debate with a book which has apparently nothing to do with the Treatise at stake ? At first glance, the question seems to be purely historical or factual. I will try to establish that it is in fact determining for the comprehension of the very sense of this controversy ; and that it is problematic in relation with the interpretative tradition inaugurated by Thomas Reid and his Essay on the Intellectual Power of man, who focused on the only question of "ideas" and neglected the rest of the debate.
The following chronology gives a global view of the debate and details the different texts exchanged by the protagonists THE MALEBRANCHE ARNAULD CONTROVERSY: CHRONOLOGY -ATTEMPS OF THEMATICAL DIVISION 4 PRÉLIMINARIES 167?-1679 Malebranche et Arnauld are "friends" 5 1679 Meeting Arnauld-Malebranche chez un common friend, the marquis of Roucy (mai 1679). Disagrement about the theses of the Treatise on Nature an Grace (hereafter designated by TNG). Malebranche writes the TNG ; Arnauld leaves France, because of his opposing to Louis XIV in the "Regale" affair. . Malebranche sends a copy of theTNG to Arnauld (end of 1679 ?), in order to ask for his opinion. 1680 Without waiting for Arnauld's opinon, Malebranche decides to publish the TNG (OC, 5) 1681 Many unofficial critics against theTNG (Arnauld, Bossuet, Fénelon, Fontenelle, Nicole, Madame de Sévigné). Arnauld annonces that he is going to refute the TNG. 1682 Pierre Nicole tries an unsuccessfull mediation between Malebranche and Arnauld. 1 Ideas an the intelligible extension 1683 Arnauld : Des vraies et des fausses idées contre ce qu'enseigne l'auteur de la Recherche de la Vérité (OAA,38,hereafter designated byVFI). Malebranche : TNG, third edition. 1684 Malebranche : Réponse de l'auteur de la Recherche de la Vérité au livre de Monsieur Arnauld Des vraies et des fausses idées (OC,6, hereafter designated by RVFI) Arnauld : Défense de Monsieur Arnauld Docteur de Sorbonne contre la Réponse au livre des vraies et des fausses idées (OAA,38, ; hereafter designated by DRVFI )) Malebranche : TNG, fourth edition ; Traité de morale, first edition. In the Nouvelles de la République des lettres, Pierre Bayle begins his reviews of debates (he is generally favourable to Malebranche) 4 The thematical division in four moments oversimplifies. It indicates at the very most the beginning of the discussion about a topic theme : during the devlopment of the controversy, discussions about new themes S'AJOUTENT À CELLES QUI SE PROLONGENT SUR LES SUJETS DÉJÀ ABORDÉS. Except indicated exception, the dates are these of the publication of the texts. 5 Arnauld's and Malebranche's biographers are unanimous about this before 1679 friendship (see N. de Larrière,Vie de Messire Antoine Arnauld ,p.250 and Y.M.André, Vie du révérend Père Malebranche,p.78). Sur les influences portroyalistes dans la pensée du "jeune" Malebranche, see A. Robinet "Aux sources jansénistes de la première Oeuvre de Malebranche"
Malebranche: Trois lettres de l'auteur de la Recherche de la vérité touchant la défense de Monsieur Arnauld (OC, 6). Leibniz: Méditations sur la connaissance, la vérité et les idées.
2 The order of nature - Providence
Arnauld: Dissertation de Monsieur Arnauld sur la manière dont Dieu a fait des miracles par le ministère des anges… (OAA, 38). Malebranche: Réponse à une Dissertation de monsieur Arnauld... (OC, …). [About grace, see mainly the masterly studies of J. Laporte: La doctrine de Port-Royal, Les vérités de la grâce, and La doctrine de Port-Royal, La morale (d'après Arnauld).] Arnauld: Neuf lettres de Monsieur Arnauld Docteur de Sorbonne au révérend Père Malebranche (OAA, 39).
Réflexions philosophiques et théologiques sur le nouveau système de la nature et de la Grâce, Livre I touchant l'ordre de la nature (OAA, 39).
First, this chronology suggests that the Malebranche-Arnauld controversy was a significant intellectual event. A number of great contemporary minds (I indicate only the most famous, and the list is far from exhaustive) took positions and were involved, to various degrees, in the dispute. By studying and classifying their reactions - something obviously unrealizable in this essay - one would probably bring to light the philosophical camps and tensions of the 1680s. Secondly, and in opposition to the interpretative tradition that remembered only the question of ideas, it must be noticed that one of the main characteristics of this controversy is the profusion of the themes in dispute: after 1685, that is to say after the dispute had grown somewhat more ample and acrimonious, it is not unusual to find, in fewer than twenty pages, reflections about the theory of knowledge, the freedom of God, causality, miracles, the functions of pleasure in moral life, plus some mockeries of the adversary and a lot of erudite commentaries on the Fathers of the Church. Then, again, it is out of the question to give an exhaustive presentation of this discussion as a whole in the restricted scope of this essay.
Therefore, I will not study the exchanges about grace (#3 in my chronology) or about the pleasure of the senses (#4): everything that deals with grace concerns theologians more than philosophers and quickly reaches a very deep degree of complexity, after more than forty years of discussion of the theory of efficacious grace developed by Jansenius in the Augustinus; as for the discussion about the pleasure of the senses, it has rather the status of an epiphenomenon in the course of the debate between Arnauld and Malebranche, insofar as its principal instigator was a "third man", Pierre Bayle, who stood up for Malebranche in his Nouvelles de la République des lettres of August 1685, after Arnauld had devoted a few pages to criticizing the Oratorian's view on that subject 6 . Furthermore, both of these themes have already been well studied 7 . I will therefore rather present the main part of the first two moments of my division of the controversy (ideas and nature), and then try to show the link which unifies these moments and gives a logic to the progress of these debates.
IDEAS
From 1683 to 1685, the polemic between Malebranche and Arnauld begins with a famous episode to which Arnauld's eponymous work gave its name: the quarrel over "true and false ideas". One will note from the outset the contrast between the apparent sharpness of the disagreement between Malebranche and Arnauld about ideas and the complexity of the very voluminous critical literature to which this part of the quarrel has given, and still gives, rise: one could almost write a history of the debates about this debate, which, in a way that seems strange at first sight, has interested Anglo-Saxon philosophers and commentators more than their continental colleagues 8 .
To simplify, one can say that the discussion between Arnauld and Malebranche takes shape from divergent interpretations of a Cartesian text: the beginning of the Third Meditation, where Descartes had distinguished two points of view for describing our ideas: "...In so far as the ideas are considered simply as modes of thought, there is no recognizable inequality among them: they all appear to come from within me in the same fashion. But in so far as different ideas are considered as images which represent different things, it is clear that they differ widely. Undoubtedly, the ideas which represent substances to me amount to something more and, so to speak, contain within themselves more objective reality, i.e. participate by representation in a higher degree of being or perfection, than the ideas which merely represent modes or accidents" 9
If, then, one considers what Descartes calls the "formal reality" of our ideas (our ideas insofar as they are mental events, modifications of our mind), these ideas are all alike: they can be described in an ontologically correct way as modifications occurring in me, a thinking thing. But one can also adopt another mode of description and thereby introduce a distinction among ideas. The principle of this distinction is what Descartes calls the "objective reality" of ideas, their representative content. From this second point of view, ideas are of course distinct; and one can, for example, rank them, since some represent more ("contain, so to speak, more objective reality within themselves") than others. This text is ambiguous, since depending on the descriptive point of view adopted, one can say that ideas are all alike (formal reality) or all different (objective reality). This Cartesian text is discussed over several dozen pages by Malebranche and Arnauld, and Arnauld is then led to identify the idea with the perception, explaining that these two terms designate the same mental event ("thinking of something") but carry different connotations: we speak of perception when we stress the "modification of the mind" aspect; we speak of idea when we stress the representative relation to the object of which the idea is the idea:
"I have said that I take the perception and the idea to be teh same thing. Nevertheless, it must be noted taht this thing, although only one, has two relations : on to the soul which its modifies, the other to the thing perceived insofar as it is objectively in the soul ; and that the word perception indicates more directly the first relation, and the word idea the second. So the perception of a square indicates more directly my soul as perceving a square and the idea of a square indicates more directly the square insofar as it is objectively in my mind" (On true and false ideas, 5, OAA, 38 p.198, Kremer p.20, Arnauld stesses).
Ainsi, pour expliquer la manière dont un acte de connaissance s'opère, on n'a pas besoin d'intercaler un troisième terme entre l'idée entendue comme perception de l'objet et l'objet dont l'idée est l'idée : "we can know material things, as well as God and our soul, not only mediately but also immediately, i.e. that we can know them without there being any intermediary between our perception and the object" (On true and false ideas, 6, OAA, 38, p.210, Kremer, p.31, Arnauld stresses). Face à cette identification stricte opérée par Arnauld entre idée et perception, entre la modification de l'esprit et la structure représentative de l'idée, Malebranche commente la Troisième méditation en radicalisant la distinction opérée par Descartes. Il dissocie ainsi nettement perception d'une part, et idée représentative de l'autre 11 : "les perceptions ne [sont] point représentatives des objets et les idées qui les représentent [sont] bien différentes des modifications de notre âme" (Réponse à la troisième lettre de Monsieur Arnauld, OC, 9, p.905). Pourquoi cela ? Certes, on peut concéder à Arnauld que lorsque je pense à quelque chose, il y a en moi, substance pensante, un événement mental qu'on peut appeler une "modification" ou une "perception" :
10 Parmi les principaux commentaires de ce passage de la Troisième méditation de Descartes (ou du texte correspondant des Réponses aux secondes objections) par Arnauld, voir VFI 6, OAA, 38, p.205-207, Kremer p.26-27; DRVFI, OAA, 38, p.386-389 ; Neuf lettres…OA, 39, p.138-139. 11 Le principal commentaire de ce passage de la Troisième méditation de Descartes par Malebranche se trouve en Trois lettres, OC, 6, p.214-218 (voir la belle analyse de V. Delbos, Etude de la philosophie de Malebranche,. Pour une comparaison des commentaires croisés de ce texte cartésien par Arnauld et Malebranche, see J. Ganault, " Les contraintes métaphysiques de la polémique d'Arnauld et de Malebranche sur les idées " and R. Wahl, " The Arnauld Malebranche Controversy and Descartes' Ideas " "Lorsque je vois ce centaure, je remarque en moi deux choses. La première c'est que je le vois, la seconde, c'est que je sens bien que je le vois (...) Je sens que je vois ce centaure, que c'est moi qui le vois, que la perception que j'en ai m'appartient, et que c'est une modification de ma substance" (Réponse aux vraies et fausses idées, OC, 6, p.60).
Mais si j'analyse le contenu de cette perception, c'est-à-dire l'idée, je constate que cette idée est ontologiquement irréductible à son être-pensé : l'idée est perçue, mais n'est pas en elle-même une perception. La perception est en effet une modification de mon esprit, alors que l'idée perçue ne se laisse pas, en raison de ses propriétés (par exemple, dans le cas particulièrement parlant d'une figure géométrique : immutabilité, nécessité, éternité, universalité), dériver de ou rattacher à ce que je suis : une substance pensante qui ne possède pas ces caractéristiques. L'idée est donc un être dont la consistance et les caractéristiques intrinsèques sont telles qu'il ne peut pas être dépendant de ma pensée. Il faut la situer en un lieu qui permette de rendre compte de cette consistance et de ces propriétés : Dieu, ou plus précisément l'entendement divin. C'est donc en Dieu que nous voyons nos idées 12 .
Sans rentrer ici dans le détail des interprétations, on peut donc résumer ainsi à grands traits l'opposition de Malebranche et Arnauld sur cette question des "idées" : Malebranche défend une position de type representationalist, c'est-à-dire estime que the "direct and immediate objects of perception are never independently-existing physical entities, but non-physical, and (on some versions of this theory) mind-dependent, representative entities ; physical objects are perceived indirectly, by means of these immediately-perceived entities" 13 , la spécificité philosophique de Malebranche étant de placer in God these entities immediately perceived by my mind when I "have an idea". Arnauld semble plus proche d'une théorie de la connaissance du type d'un direct realism (that is to say "the view that the direct and immediate objects of normal veridical perception are external physical entities existing independently of any perceiving mind (...) For the direct realist (...) the perceiver is (in veridical perception) in direct, non-mediated (non-inferential) perceptual contact with the objects of the physical world") 14 , peut-être à tendance empiriste 15
12 Je résume ici très rapidement la célèbre démonstration donnée par Malebranche in Search, Elucidation # 10, texte fondateur -bien plus que l'argumentation seulement résiduelle of Search, III, 2, i to vi-de la vision en Dieu. Voir aussi, très similaire et synthétique, Dialogues, OC, 12, p.45 : "Les idées ont plus de réalité que je ne pensais, et leur réalité est immuable, nécessaire, éternelle, commune à toutes les intelligences, et nullement des modifications de leur être propre qui, étant fini, ne peut recevoir actuellement des modifications infinies (...) Si nos idées sont éternelles, immuables, nécessaires, vous voyez bien qu'elles ne peuvent se trouver que dans une nature immuable [God] (...) dans cette substance intelligible qui renferme les idées de toutes les vérités que nous découvrons". 13 S. Nadler, Arnauld and the Cartesian Philosophy of Ideas, p.12. 14 S. Nadler, Arnauld and the Cartesian Philosophy of Ideas, p.12 . Pour introduire l'étude du second grand moment de la polémique, moins célèbre que le précédent mais quantitativement plus important puisqu'il a occupé près des deux tiers des textes, on peut repartir de la question posée plus haut : pourquoi Arnauld, qui en veut de manière notoire au Traité de la nature et de la grâce où le thème des "idées" n'est jamais abordé, inaugure-t-il ses débats avec Malebranche en s'en prenant pendant deux ans et mille-cinq-cents pages à la théorie de la connaissance de l'oratorien ? Il faut pour le comprendre revenir à ce que Malebranche a dit dans le Traité (je m'en tiendrai ici au premier discours, c'est-à-dire à ce qui concerne la "nature").
Que trouve-t-on dans ce Traité ? En premier lieu, une théodicée extrêmement originale (à ma connaissance unique chez les catholiques au XVIIème siècle) 16 . Malebranche explique que notre monde n'est pas le meilleur possible, et que nous y constatons à bon droit l'existence réelle, positive, de maux et de désordres. : Ces affirmations constituent une innovation considérable en matière de théodicée, puisqu'elles rompent nettement avec les thèses de Leibniz aussi bien qu'avec celles d'Augustin et Thomas d'Aquin sur l'ordre du monde et la perfection de la création. Arnauld attire donc , avec raison, l'attention sur ce point et se scandalise des affirmations malebranchistes en la matière : "Il est un peu étonnant qu'on ne se soit pas aperçu combien ce langage devait blesser des oreilles chrétiennes" (Réfl.I,ch.6,OAA,39,p.225).
Mais cette rupture de Malebranche avec les théodicées classiques n'est pas autonome, et ne s'est pas opérée à la légère : l'Oratorien s'est donné une théorie de l'action divine qui lui permet
d'expliquer pourquoi il y a des maux et désordres dans l'univers 17 . Dieu, explique ainsi Malebranche, veut créer le meilleur, le plus parfait. Mais selon l'oratorien la perfection intrinsèque du monde, de la création, n'est pas la seule variable glorifiante considérée par Dieu lorsqu'il crée. Il prend aussi en compte la perfection de ce que Malebranche appelle ses "voies", c'est-à-dire les manières d'agir qu'il emploie pour créer. Autrement dit, ce que Dieu est amené à faire varier, pour l'optimiser, dans sa recherche d'une perfection maximale, ce n'est pas seulement le monde créé en lui-même, c'est le couple monde+voies :
"Dieu veut que son ouvrage l'honore (...). Mais prenez garde : Dieu ne veut pas que ses voies le déshonorent (...). Dieu veut que sa conduite, aussi bien que son ouvrage, porte le caractère de ses attributs. Non content que l'Univers l'honore par son excellence et sa beauté, il veut que ses voies le glorifient" (Dialogues, OC,12,.
Les voies créatrices sont donc promues au titre d'élément expressif de la perfection divine. Loin d'être de simples moyens indifféremment utilisés en vue d'un résultat (la création) dont seule compterait la valeur, elles doivent être intégrées par Dieu dans sa recherche d'une perfection maximale.
Quelles seront, pour Dieu, les voies de création les plus parfaites ? Ce seront (je résume) les plus simples, celles qui "le glorifient par leur simplicité, leur fécondité, leur universalité, leur uniformité, par tous les caractères qui expriment des qualités qu'il se glorifie de posséder" (Dialogues, OC, 12,., p.214 ; see also TNG, OC, 5, p.28), puisque d'une part, "Dieu doit agir d'une manière qui porte le caractère des attributs divins" (TNG, OC, 5, p.32), notamment la simplicité, et que d'autre part la simplicité des moyens d'action atteste également la sagesse de celui qui agit :
"Un excellent ouvrier doit proportionner son action à son ouvrage : il ne fait pas par des voies fort composées ce qu'il peut exécuter par de plus simples (...) Il faut conclure de là, que Dieu, découvrant dans les trésors infinis de sa Sagesse une infinité de mondes possibles (...) s'est déterminé à créer celui qui aurait pu se produire et se conserver par les lois les plus simples, ou qui devait être le plus parfait, par rapport à la simplicité des voies nécessaires à sa production, ou à sa conservation" (TNG, I, 13, OC, 5, p.28).
Mais ce ne sont pas les voies les plus simples absolument que Dieu va choisir : il ne faut pas oublier que c'est la perfection totale du couple ouvrage/voies qui doit être maximale. Et c'est ici que, selon Malebranche, se découvre l'explication des maux et des désordres rencontrés dans la création. Car les perfections respectives de l'ouvrage et des voies employées pour le réaliser varient de manière inverse. En effet, pour réaliser un ouvrage absolument parfait (sans maux ni désordres), Dieu devrait en vouloir-créer chacun des moindres détails en particulier, et donc multiplier les volontés et voies créatrices particulières. Ce n'est donc pas ce qu'il a fait, pour ne pas sacrifier la simplicité de ses voies : "Si un monde plus parfait que le nôtre ne pouvait être créé et conservé que par des voies réciproquement moins parfaites (...) je ne crains point de le dire : Dieu est trop sage, il aime trop sa gloire (...) pour pouvoir le préférer à l'Univers qu'il a créé" (Dialogues IX 10, OC, 17 Ce point ayant déjà été bien étudié, j'e rappelle seulement les principes de la théodicée malebranchiste qui constituent les axes essentiels de la polémique avec Arnauld. Pour plus de détails, see F. Alquié, Le Cartésianisme de Malebranche,[307][308][309][310][311][312][313][314][315][316][317][318][319][320][321][322][323][324] 12, p.214-215). A l'inverse, des voies parfaites, c'est-à-dire absolument simples, auraient une telle généralité et empêcheraient donc tellement l'organisation détaillée du monde, que l'ouvrage qu'elles aboutiraient à créer serait immanquablement catastrophique. Ce n'est donc pas non plus ce que Dieu a fait, pour ne pas produire un ouvrage qui le déshonore. C'est dans l'entre-deux de ces deux extrêmes qu'il faut situer l'explication de ce qu'est notre monde : il n'est ni le plus parfait possible, ni créé par les voies les plus parfaites possibles. Il est le meilleur composé (compromis ?) possible, c'est-à-dire le monde qui correspond au couple ouvragevoies qui, compte tenu des variations corrélatives et inverses qui affectent chacun de ses deux constituants, offre la perfection maximum. Le monde créé n'est donc pas "le plus parfait qui puisse être absolument" : si Dieu n'avait pas pris en compte les voies, et donc créé par des voies plus complexes, le monde créé eût été, en soi, bien meilleur. D'où l'affirmation de l'existence de désordres, certes limités mais bien réels, dans l'immanenc.
Tout cela converge vers une conclusion : si Dieu agit comme il agit, c'est parce qu'il doit agir conformément à ce que sa Sagesse lui dicte comme étant digne de lui -simplicité et généralité des volontés-plutôt qu'en voulant ce qui serait la perfection du monde créé. On a donc ici une conception spécifique de la liberté d'un Dieu qui "ne peut pas vouloir faire ce que sa sagesse lui défend" (Réponse à une Dissertation…, IX 13, OC 7 p.533), c'est-à-dire agir par des volontés particulières et des voies complexes. Négativement, on peut donc dire que Dieu se contraint, ce qui autorise Malebranche à le présenter comme empêché (see Rép.Réfl., I 1 9, OC 8, p.676 : "La sagesse de Dieu l'empêche de composer ses voies"), soumis au devoir (see RVFI, 4 12, OC 6 p.40 : "Dieu ne doit point troubler l'ordre et la simplicité de ses voies (...) Il ne doit point agir (...) par des volontés particulières") et à l'obligation (see ibid : "Lorsque l'ordre qui est sa [Dieu] loi inviolable ne l'oblige point à en user autrement" ; see also TNG, Elucidation #3 26, OC 5 p.189. Plus positivement, on ne doit donc pas concevoir la liberté divine comme une absolue capacité de choix, mais comme l'autonomie d'un Dieu qui "se donne à lui-même des lois" et "n'obéit véritablement qu'à ses propres lois" (Réponse à une Dissertation…XII 8, OC 7 p.562). De là enfin la fameuse phrase qui va scandaliser mais qui est seulement une manière lapidaire de rappeler que la Sagesse divine limite l'exercice de la toute-puissance : "Sa sagesse le [Dieu] rend pour ainsi dire impuissant" (TNG I 38, OC 5, p.47) 18 .
LES REPONSES D'ARNAULD
On a là autant d'originalités de la théodicée malebranchiste et autant de points qui vont être attaqués par Arnauld dans les Neuf lettres (…) au révérend Père Malebranche, puis, surtout, dans les Réflexions philosophiques et théologiques sur le nouveau système de la nature et de la grâce, qui constituent à notre sens le plus grand ouvrage anti-malebranchiste d'Arnauld.
Théodicée
18 Le "pour ainsi dire" n'a été rajouté qu'en 1712. Le texte qu'Arnauld a lu était donc : "Sa sagesse le rend impuissant".
Arnauld va d'abord, mais de manière très rapide comme si cela allait de soi, contester les résultats de la théodicée malebranchiste, en rappelant les arguments classiques d'Augustin et de Thomas d'Aquin pour affirmer la bonté et l'ordre de la création :
"Toute substance est nécessairement bonne, comme dit souvent saint Augustin, et il n'y a que les manichéens qui puissent croire qu'il y en ait de mauvaises. Celles qu'on appelle défectueuses ne le sont qu'en comparaison de celles qui sont plus parfaites. Nihil vituperatur, dit saint Augustin, nisi in comparatione melioris (…) Un animal monstrueux est, si l'on veut, une dissonance dans l'harmonie de l'Univers. Mais il ne laisse pas de contribuer à cette harmonie" (Réfl. I 2,OAA 39,. "Il est certain au moins que saint Augustin eût eu de la peine à souffrir qu'on eût parlé si crûment des irrégularités et des désordres qu'on prétend se rencontrer dans les ouvrages de
Miracle, providence, volontés
Mais les attaques d'Arnauld vont surtout se focaliser sur le thème des volontés générales, de deux manières : Arnauld va en premier lieu tenter de montrer comment ce thème rend difficile, voire impossible, la pensée de certaines notions classiques et centrales en théologie chrétienne. En second lieu, il va montrer comment Malebranche redéfinit de manière inadmissible la notion de Dieu.
Les volontés générales rendent difficilement pensables certains thèmes classiques. C'est tout d'abord le cas des miracles. Lorsqu'on affirme, comme Malebranche, que les lois physiques qui régissent l'univers ont été choisies et mises en oeuvre par Dieu comme les meilleures, parce que les plus générales possibles, il devient très difficile de penser la possibilité de l'entorse à ces lois que semble constituer le miracle 20 . La Providence est la seconde notion mise en péril : si Dieu se contente d'organiser généralement le cours des choses et ne peut (doit) pas décider du détail, toute planification et à plus forte raison toute intervention divine au niveau du particulier paraissent improbables, pour ne pas dire impossibles : 19 Réfl., I 6, OAA 39, p.225, avec la référence "Confessions, livre VII, ch.14-15-16 Les principales réponses de Malebranche sont : l'ensemble de la Réponse à la Dissertation... ; Rép. Réfl., II, ch.1, p.695-704 ;II, ch.3, p.716-718 et III, 773-776 ;Prévention, p.1114Prévention, p. -1116. . [si Dieu n'agit que par des volontés générales] : comment donc puis-je conclure de là que si j'ai bien de la foi en Dieu, il ne manquera pas de me vêtir, de me nourrir, et de me conserver pendant tout le temps qu'il a ordonné que je demeure sur la terre ? (...) A-t-on besoin d'avoir beaucoup de confiance en Dieu pour se laisser emporter à la suite des lois naturelles ?" (Réfl.,I,ch.17,OAA,39,p.335) De ce point de vue, l'attaque d'Arnauld semble avant tout motivée par des préoccupations d'ordre religieux : il a redouté que la philosophie de Malebranche n'empêche de penser le Dieu personnel qui se dit dans la Bible, celui à qui on peut dire Pater noster sans pour autant mésuser de la notion commune de paternité, qui inclut un souci personnalisé, particulier pour l'enfant. Sur le terrain spéculatif, Arnauld cherche donc à rétablir la possibilité de volontés particulières en Dieu. Si Malebranche s'est trompé, c'est qu'il a identifié le volontaire et le légal dans son explication de la manière dont Dieu agit : "Il prend pour la même chose agir par des volontés générales et agir selon des lois générales". Or, précise Arnauld, "les lois sont l'ordre selon lequel les choses se font ; et les volontés (surtout en Dieu) sont ce par quoi les choses se font" (Réfl.,I,ch.1,p.175). La distinction ici établie est difficile à cerner, mais deux exemples de la Dissertation (...) sur les fréquents miracles de l'ancienne loi où ces questions avaient déjà été abordées permettent de l'éclairer:
"Pour m'être prescrit une loi générale de prier Dieu tous les matins, cela n'empêche pas que je ne le fasse chaque fois par une volonté particulière".
"Dieu s'est fait une loi générale de créer une âme et de la joindre à un corps humain aussitôt que ce corps humain serait formé dans le sein d'une femme (...) S'ensuit-il (...) que la naissance de chacun de nous, et la création de notre âme, n'ait pas été l'effet d'une volonté particulière de Dieu ?"21 .
Il s'agit donc d'enfoncer un coin pour disjoindre une des données fondamentales de la théologie malebranchiste : l'identité posée entre la généralité des voies d'action divines (les lois qui régissent le créé), et la généralité des volontés qui organisent ce créé. Si, comme le tente ici Arnauld, on parvient à différencier généralité du légal et généralité de la volonté, il redevient possible de penser qu'à l'intérieur du cadre général-légal qui régit de toute évidence la création des volontés particulières de Dieu interviennent, qui touchent au détail des êtres créés : a parte Dei, la définition de la forme du volontaire ne s'épuise donc pas dans sa caractérisation par la généralité du légal qui l'exprime. Et sans pour autant nier que le monde soit régi par des lois, on peut alors affirmer que les êtres et événements qui le constituent ont donc été voulus "par une destination particulière de Dieu", "par une volonté, positive, directe et particulière" (Réfl.,I,ch.2,p.204).
Malebranche est sensible à ces attaques et tente d'y répondre, aussi bien dans les textes écrits contre Arnauld que dans les modifications qu'il apporte à ses livres lorsqu'ils sont rééditées (e.g. the Fourth Elucidation of the TNG, about miracles) , ou dans ses nouveaux ouvrages. Ainsi, en 1688, il consacre quatre des ses Dialogues on Metaphysics and Religion (#10, 11, 12 and 13) à récapituler et préciser sa pensée sur la notion de Providence, en insistant notamment sur le fait que Dieu a prévu tout ce qui allait arriver dans le monde qu'il a choisi de créée "Sa [Dieu] prescience n'a point de bornes, et sa prescience est la règle de sa Providence" (Dialogues, XII, 10, OC, XX, En second lieu face à la conception malebranchiste de la liberté divine comme autonomie, Arnauld veut maintenir une conception qu'on pourrait dire absolue et volontariste de la liberté de Dieu, "On ne craint point de donner des bornes à la liberté de Dieu, et de l'asservir aux imaginations d'une nouvelle Métaphysique [...] On ne craint point d'assurer que Dieu forme librement son dessein ; mais que le dessein étant formé il choisit NECESSAIREMENT LES VOIES GENERALES qui sont les plus dignes de sa sagesse [citation de TNG, II, §50]. Et ainsi Dieu n'aura point de liberté dans le choix des voies nécessaires pour l'exécution de ses desseins [...] Mais sur quoi peut être fondée une doctrine si injurieuse à la liberté de Dieu ?" (Réfl.,II 26,, la typographie est d'Arnauld. "Il est bien étrange qu'on se donne si facilement la liberté de donner des bornes arbitraires à la liberté de Dieu" (Réfl.,II 27,OAA 39,p.603).
Enfin la source de l'erreur malebranchiste en ces matières est sans doute d'avoir posé une distinction trop forte entre attributs divins, menant à concevoir "une espèce de combat entre la raison et la puissance de Dieu" (Réfl.,II 19,OAA 39 p.544). Contre cette distinction malebranchiste, Arnauld s'efforce à de nombreuses reprises de réunifier les attributs divins ("Peuton avoir des pensées plus indignes de Dieu que de s'imaginer un tel désaccord entre sa sagesse et sa volonté, comme si sa volonté et sa sagesse n'étaient pas la même chose ?", Réfl., III 10, OAA 39, p.748) et forge des expressions qui paraissent destinées à exprimer leur intime identité et leur interpénétration fonctionnelle. Il parle ainsi, en Dieu, de "volonté raisonnable" (Réfl.,II 2,OAA 39,p.431), puis affirme que "Dieu ne voulant rien que sagement, il ne veut rien que sa sagesse ne veuille" (Réfl.,III 10,OAA 39,p.748 ; see also II,ch.24,p.578 : "tout ce que veut Dieu [est] essentiellement sage dès-là qu'il le veut"). On entend dans ces textes d'Arnauld comme un écho du souci exprimé par Descartes lorsqu'il refusait qu'on distinguât en Dieu entendement et volonté, ou plutôt entendre et vouloir, "ne quidem ratione", "pas même par une distinction de raison"29 . Malebranche, pour sa part, semble plus proche d'une position analogue à celle de Duns Scot en reconnaissant entre sagesse et volonté en Dieu quelque chose comme une "distinction formelle", qui permet de distinguer en Dieu des attributs distincts antérieurement aux différenciations faites par notre pensée30 . 5 LE LIEN : UNIVOCITE Avant de conclure, il faut répondre à la question qui nous guide depuis le début : quel rapport y a-t-il entre les deux principaux moment de la querelle, que nous avons présentés, entre la dispute autour des idées et ces longues discussions sur le Traité de la nature et de la grâce ?
On aperçoit la réponse en s'attachant à ce qu'il y a de plus étonnant, pour nous aussi bien que pour un classique, dans le premier discours du Traité : cette prétention malebranchiste à décrire les actes divins et porter de manière assurée des jugements, souvent péjoratifs, sur leurs résultats.
Un augustinien, ou un cartésien (Arnauld) diraient qu'il y a là une transgression imprudente des limites du savoir humain possible : Dieu est "incompréhensible", ses desseins nous demeurent "cachés" et il est en conséquence "téméraire", comme on disait au XVIIe siècle, de juger et d'évaluer ses actions.
Mais, du point de vue de Malebranche, cette accusation est injustifiée : puisque nous voyons, en Dieu et comme Dieu, nos idées, nous sommes fondés à évaluer ce que Dieu veut et fait. Pour le dire autrement, puisque nous connaissons, au moins partiellement, comme Dieu connaît, et que la connaissance de Dieu est ce qui règle sa volonté et ses actions, il n'y a donc pas de témérité à juger, et dans certains cas à critiquer, la conduite créatrice de Dieu et ses résultats :
"Si je n'étais persuadé que tous les hommes ne sont raisonnables que parce qu'ils sont éclairés de la Sagesse Eternelle, je serais sans doute bien téméraire de parler des desseins de Dieu, et de vouloir découvrir quelques-unes de ses voies dans la production de son Ouvrage. Mais comme il est certain que le Verbe Eternel est la Raison universelle des esprits, et que par la lumière qu'il répand en nous sans cesse nous pouvons tous avoir quelque commerce avec Dieu ; on ne doit point trouver à redire que je consulte cette Raison, laquelle quoique consubstantielle à Dieu même, ne laisse pas de répondre à tous ceux qui savent l'interroger par une attention sérieuse" (TNG, I 7,OC 5, "C'est en Dieu et dans une nature immuable que nous voyons la beauté, la vérité, la justice, puisque nous ne craignons point de critiquer son ouvrage, d'y remarquer des défauts [...] Il faut bien que l'Ordre immuable, que nous voyons en partie, soit la Loi de Dieu même, écrite dans sa substance en caractères éternels et divins, puisque nous ne craignons point de juger de sa conduite par la connaissance que nous avons de cette Loi" (Dialogues IX 13, OC 12, p.221).
L'opération de connaissance appelée vision en Dieu rend donc légitimes les jugements sur la valeur de la création et la qualité des voies d'action divines. Ce qui pouvait apparaître comme un imprudent prométhéisme théologique trouve donc sa justification méthodologique et sa légitimité théorique dans l'analyse malebranchiste des opérations fondatrices et des conditions de possibilité de la connaissance humaine. C'est pourquoi la structure d'exposition des Méditations chrétiennes -un dialogue entre le philosophe et le Verbe-est très signifiante. C'est parce qu'il y a dialogue, c'est-à-dire co-extensivité entre la raison du philosophe et la Raison divine, que les considérations évaluatrices sur l'action et les oeuvres divines proposées dans le Traité de la nature et de la grâce et les Méditations sept à dix sont envisageables, et valables quand elles ont été correctement produites.
Résumons, sans chercher la nuance : une des conditions de possibilité de la théodicée de Malebranche et, plus largement, de l'ensemble des thèses formulées dans le Traité de la nature et de la grâce, est ce qu'un moderne appellerait l'univocité de la connaissance entre l'homme et Dieu telle qu'elle est impliquée par l'opération désignée comme "vision en Dieu". On ne peut donc pas comprendre la théodicée malebranchiste, aussi bien dans ses contenus que dans ses conditions de possibilité, sans la rattacher à la théorie de la connaissance qui en autorise le développement. En conséquence, une réfutation efficace et complète de cette théodicée ne peut pas faire l'économie d'une critique de cette théorie de la connaissance et du thème de l'union de la raison humaine avec la Raison divine. On comprend alors pourquoi Arnauld, opposé aux conclusions du Traité de la nature et de la grâce, estime nécessaire d'écrire d'abord un texte consacré à la question des "idées" pour mener à bien son projet réfutatif.
On saisit ainsi l'unité de la querelle et la logique de son déroulement, et on peut relire les débats sur les idées à partir de cette question de l'univocité de la connaissance entre l'homme et Dieu : quand Arnauld rabat l'idée sur la perception, il cherche à casser et à interdire toute possibilité de complicité intellectuelle ou d'homogénéité entre raison humaine et Raison divine. Si les idées (ne) sont (que) des modifications de notre esprit, il n'y a pas de co-idéation obligée entre Dieu et moi dans un acte de connaissance. L'univocité supposée par Malebranche est donc à tout le moins contestable. Et les affirmations du Traité de la nature et de la grâce redeviennent "présomptueuses" et "téméraires" : à partir d'idées qui sont miennes, et seulement miennes, de quel droit juger de ce que Dieu connaît et de ce que sa sagesse exige ? Et donc de quel droit affirmer que "la sagesse de Dieu le rend impuissant" ? Tout le travail d'Arnauld dans les Réflexions philosophiques et théologiques consiste alors à tirer les conséquences théologiques des conclusions gnoséologiques mises en place dans les Vraies et fausses idées.
CONCLUSION
On conclura sur trois points pour indiquer des directions de prolongation de cette étude qui aura, j'espère, au moins suggéré le grand intérêt de ces débats longs et complexes i) Il est difficile de donner une grille d'interprétation globale of this controversy. Ou plutôt, on peut envisager, sans qu'elles soient d'ailleurs exclusives, différentes hypothèses de lecture : un Dieu caché "janséniste" et "classic" contre un Dieu plus "moliniste", voire "bourgeois"31 , qui annonce le déisme des "modernes" du 18th century (see hereafter #2) ; une nouvelle version de la gigantomachie entre amis des formes (Malebranche, as an "idealist") et amis de la matière (Arnauld, as a "realist") que Platon présentait dans le Sophiste (245) ; la constitution d'interprétations divergentes de thèmes qui demeuraient ambigus chez Descartes, etc. On suggérera seulement ici que, comme souvent in the 18th century, c'est sans doute dans la distinction et les rapports entre attributs divins qu'il faut chercher le dernier mot du désacord entre les deux auteurs32 : chez Malebranche, l'établissement d'une distinction forte entre entendement et volonté de Dieu permet d'envisager la préexistence dans l'entendement des possibles offerts à titre de "choisissables" à la volonté créatrice. Elle implique en conséquence une détermination de la volonté par l'entendement, qui se traduit par une limitation des capacités effectives de la puissance, et une restriction des modalités de déploiement de la liberté divine. Elle constitue enfin la condition de possibilité de la vision en Dieu : l'entendement de Dieu étant comme séparé du reste de son être à la manière d'une région autonome, l'homme peut y accéder (voir en Dieu) sans pour autant atteindre la totalité de l'être divin (voir Dieu). Chez Arnauld, l'absence de distinction (autre que de raison) entre entendement et volonté de Dieu interdit de penser une priorité logique et déterminante du premier sur la seconde. N'étant ainsi déterminé par rien d'autre que sa propre essence une et nécessaire, Dieu peut être pensé comme absolument tout-puissant et libre. Enfin, cette indistinction entre attributs divins commande, ou permet, le refus de la vision en Dieu malebranchiste : étant entendu qu'il ne saurait être question, pour un classique, de savoir ce que sont les volontés divines, l'inaccessibilité de cette volonté reflue pour ainsi dire sur l'entendement qui n'est pas séparé d'elle et coupe toute possibilité de partage cognitif entre l'homme et Dieu 33On a donc incontestablement ici un beau débat, de grand intérêt philosophique. L'important est que cet intérêt ne se réduit pas, comme l'histoire des commentaires consacrés à la querelle pourrait le laisser croire, à la seule question des idées. Ce n'est pas que cet épisode fondateur soit sans importance ou superflu. Mais les débats techniques sur la nature des idées comme modalités représentatives ne prennent leur sens qu'une fois insérés dans un dispositif de questionnement plus large : c'est l'ensemble des questions impliquées dans le thème malebranchiste de la vision en Dieu, dont l'univocité de la connaissance entre l'homme et Dieu et ses nombreuses répercussions métaphysiques et théologiques, qui justifie cette querelle autour des vraies et des fausses idées.
ii) Les questions abordées dans la polémique entre Malebranche et Arnauld ne sont pas toutes originales : en 1684, il y a longtemps déjà que les cartésiens ou leurs adversaires débattaient des thèmes dont nous avons parlé (statut des idées, miracles, providence) ou d'autres dont discutèrent Arnauld et Malebranche mais qui n'ont pas pu être abordés dans le cadre restreint of this paper (validité de l'évidence comme critère de la vérité, théorie de la causalité, rapports entre philosophie et révélation). En revanche, cette polémique eut une fonction de révélateur. Elle contribua à préciser des questions qui devinrent autant de problèmes pour les penseurs du 18th century 34 . S'il fallait ne retenir que deux concepts que ce débat contribua à mettre en avant sur la scène philosophique, ce serait la notion de représentation dans le domaine de la théorie de la connaissance 35 , sur laquelle Kant s'interrogera encore, en citant Malebranche, dans sa fameuse Letter to Marcus Herz du 21 february 1772 ; et le thème des volontés générales, sur laquelle réfléchiront Montesquieu aussi bien que Hume et qui, au terme d'un mouvement de sécularisation, deviendra la pièce centrale du Contrat social de Rousseau 36 .
iii) At last, reading this debate is also of great interest for the study of the thought of Malebranche himself. Arnauld était un adversaire de valeur, et ses attaques font apparaître avec une impressionnante précision les originalités et (ou) les difficultés du malebranchisme. De nombreux passages des oeuvres de maturité de Malebranche (from Dialogues on metaphysics and religion to the Réflexions sur la prémotion physique) peuvent ainsi se lire comme des tentatives de réponses aux problèmes soulevés par Arnauld. Il est donc indispensable de connaître les objections faites par Arnauld pour pouvoir comprendre et apprécier le malebranchisme d'après 1685.
Malebranche l'admettait volontiers : "Monsieur Arnauld" fut un redoutable adversaire. Le feu impitoyable des attaques arnaldiennes porta ainsi, plusieurs années durant, le malebranchisme au point d'incandescence : toute la question est de savoir si cette épreuve le renforça, ou l'anéantit.
"
Le monde présent est un ouvrage négligé" (Méd. Chrét. VII 12, OC 10,, p.73, quoted by Arnauld in Réfl, I, 6). "Dieu pouvait sans doute faire un monde plus parfait que celui que nous habitons. Il pouvait, par exemple, faire en sorte que la pluie, qui sert à rendre la terre féconde, tombât plus régulièrement sur les terres labourées que dans la mer, où elle n'est pas si nécessaire" (TNG, I §14, OC 5, p.29, quoted by Arnauld in Réfl. I, 1 and 2) . "Les ombres sont nécessaires dans un tableau et les dissonances dans la musique. Donc il faut que les femmes avortent et fassent une infinité de monstres. Quelle conséquence ! répondrai-je hardiment aux philosophes [...] Tous ces funestes effets que Dieu permet dans l'Univers n'y sont point nécessaires. Et s'il y a du noir avec du blanc, des dissonances avec des consonances, ce n'est pas que cela donne plus de force au tableau, et plus de douceur à l'Harmonie. Je veux dire que dans le fond, cela ne rend point l'ouvrage de Dieu plus parfait. Cela le défigure au contraire, et le rend désagréable à tous ceux qui aiment l'ordre […] Je ne crains point de le redire : l'Univers n'est point le plus parfait qui se puisse absolument [...]. C'est un défaut visible qu'un enfant vienne au monde avec des membres superflus, et qui l'empêchent de vivre. Je l'ai dit et je le soutiens" (Trois lettres du Père Malebranche à un de ses amis dans lesquelles il répond aux Réflexions philosophiques et théologiques de M.Arnauld, III, OC 8,.
Dieu. Il eût cru que les Manichéens eussent tiré un grand avantage de ces sortes d'aveux : et c'est ce qui lui a toujours fait nier constamment, contre ces hérétiques, qu'il puisse y avoir d'autre mal dans la nature que celui qui tire son origine du libre arbitre des créatures intelligentes ; c'est-à-dire le péché et la concupiscence (ibid., I, ch.6, p.225). "Ce n'est pas avoir le jugement sain que de trouver quelque chose à redire à vos [Dieu] ouvrages [...] Mais maintenant je suis persuadé que tout ce que vous avez fait est bon ; et je n'ai garde de dire : ne serait-il point à souhaiter que telles et telles choses ne fussent pas ? Car quand il n'y en aurait point d'autres, je pourrais en désirer de plus parfaites ; mais je serais obligé de vous louer d'avoir fait celles-là, encore qu'elles fussent seules. Il n'y a point d'autre mal que la perversité de la volonté, qui vous quitte, mon Dieu, pour ce qui est moins que vous" 19 .
... designated by Refl. I). Beginning of the controversy between Bayle and Arnauld about the pleasure of senses.
3 Grace
1686 — Malebranche : Trois lettres du Père Malebranche à un de ses amis dans lesquelles il répond aux Réflexions philosophiques et théologiques de M. Arnauld... (OC, 8, p.619-787 ; hereafter designated by Rép. Refl.). Arnauld : Réflexions philosophiques et théologiques sur le nouveau système de la nature et de la Grâce, Livre II touchant l'ordre de la Grâce (OAA, 39, p.415-643 ; hereafter designated by Refl. II). Beginning of the Leibniz-Arnauld correspondence. Fontenelle : Doutes sur le système physique des causes occasionnelles.
1687 — Malebranche : Quatre lettres du Père Malebranche touchant celles de M. Arnauld (réponse aux neuf lettres de 1685 ; OC, 7, p.341-467) ; Deux lettres du père Malebranche touchant le deuxième et le troisième volume des Réflexions philosophiques et théologiques (OC, 8, p.789-894). Arnauld : Réflexions philosophiques et théologiques sur le nouveau système de la nature et de la Grâce, Livre III touchant Jésus-Christ comme cause occasionnelle (OAA, 39, p.645-848 ; hereafter designated by Refl. III) ; Dissertation sur le prétendu bonheur du plaisir des sens (OAA, 40, p.10-68) ; end of the Bayle-Arnauld controversy. Fénelon : Réfutation du système du Père Malebranche (not published, uncertain dating). Bossuet : Lettre au marquis d'Allemans du 21-05-1687 (sometimes called Lettre à un disciple de Malebranche).
1688 — Malebranche : Entretiens sur la métaphysique et sur la religion (OC, 12-13).
1689 — The TNG, the Trois lettres en réponse aux Réflexions... and the Trois lettres de l'auteur de la Recherche de la vérité... are put on the Index.
4 Pleasure – Ideas
1693 — Beginning of the controversy between Malebranche and Sylvain Régis ; Régis writes some articles about the Bayle-Arnauld controversy. Locke : Examination of P. Malebranche's opinion of our "seeing all things in God" (not published).
1694 — Arnauld : Première lettre de Monsieur Arnauld, Docteur de Sorbonne, contre le R. P. Malebranche, prêtre de l'Oratoire ; Seconde lettre de Monsieur Arnauld, Docteur de Sorbonne, contre le R. P. Malebranche, prêtre de l'Oratoire (OAA, 40, p.69-81). Malebranche : Première lettre du Père Malebranche, prêtre de l'Oratoire, à M. Arnauld, Docteur de Sorbonne ; Seconde lettre du Père Malebranche, prêtre de l'Oratoire, à M. Arnauld, Docteur de Sorbonne. Arnauld : Troisième lettre de Monsieur Arnauld, Docteur de Sorbonne, contre le R. P. Malebranche, prêtre de l'Oratoire ; Quatrième lettre de Monsieur Arnauld, Docteur de Sorbonne, contre le R. P. Malebranche, prêtre de l'Oratoire (published in 1698 ; OAA, 40, p.81-110). 08 August 1694 : death of Arnauld.
1699 — Malebranche : Réponse du P. Malebranche à la troisième lettre de Monsieur Arnauld, Docteur de Sorbonne, touchant les idées et les plaisirs (published in 1704 ; OC, 9, p.897-989).
1704 — Malebranche : Recueil de toutes les réponses à Monsieur Arnauld, first edition, with some previously unpublished texts, such as Contre la prévention, which contains an Abrégé du Traité de la nature et de la grâce (OC, 9, p.1043-1132).
1709 — Malebranche : Recueil de toutes les réponses du P. Malebranche à Monsieur Arnauld (new edition of all the texts written by Malebranche during the controversy).
Leur désaccord repose sur cette ambiguïté. Arnauld insiste d'emblée sur l'égalité ontologique entre les idées 10 : certes, elles représentent, mais elles sont toutes, et ne sont que cela, des modifications de notre esprit : "those diverse thoughts must be no more than different modifications of the thought which is my nature" (On true and false ideas, 2, OAA, 38, p.184, Kremer, p.6 ; see also 3, OAA, 38, p.187, Kremer, p.9).
15 Voir en ce sens F. Bouillier, Histoire de la philosophie cartésienne, t.II, p.161 ; L. Ollé-Laprune, La philosophie de Malebranche, t.II ; L. Bridet, La théorie de la connaissance dans la philosophie de Malebranche, ch.7 ; E. Jacques, Les années d'exil d'Antoine Arnauld, p.452 ; A-R. Ndiaye, La philosophie d'Antoine Arnauld, p.160. J'ai résumé ici à grands traits l'interprétation "dominante" de la position d'Arnauld, celle qui se situe dans la lignée des lectures de Th. Reid et J. Laird : voir en ce sens (et en faisant abstraction des nuances propres à chacun de ces interprètes) : M. Cook, "Malebranche versus Arnauld" ; S. Nadler, "Reid, Arnauld and the Object of Perception" et Arnauld and the Cartesian Philosophy of Ideas, p.7-10 et 131-132 ; D. Radner, Malebranche, a Study of a Cartesian System, p.99 ; D. Schulthess, "Antoine Arnauld et Thomas Reid, défenseurs des certitudes perceptives communes et critiques des modalités représentatives" ; J. Yolton, Perceptual Acquaintance from Descartes to Reid, p.14-15 et 60-66. Cette interprétation dominante a récemment été discutée par des commentateurs qui contestent que la position d'Arnauld s'apparente à un réalisme direct, et reviennent à une lecture proche de celle de A.O. Lovejoy pour qui Malebranche et Arnauld are both representationalists : see E-J. Kremer, "Arnauld's Philosophical Notion of an Idea" et D. Moreau, Cartésiens. Une interprétation de la polémique entre Antoine Arnauld et Nicolas Malebranche, ch.5. Il faut enfin mentionner H.M. Bracken's provocative interpretation in "The Malebranche-Arnauld Debate : Philosophical or Ideological ?" : il est inutile de chercher une théorie arnaldienne des idées dans les textes écrits contre Malebranche, parce qu'il n'y en a pas (see e.g. p.44-45 : "Furious at Malebranche's dissent from Jansenism, Arnauld's attack is intended to destroy him. No coherent philosophical position supports this attack (...) The point is ideological").
16 See the essay of D. Rutherford in this volume, and my "Malebranche, le désordre et le mal physique. Et noluit consolari".
3 LE TRAITE DE LA NATURE ET DE LA GRACE : DESORDRE ET VOIES DIVINES, SAGESSE ET PUISSANCE
419-427 ; G. Dreyfus, La volonté selonMalebranche, ; H. Gouhier, La Philosophie de Malebranche., p.37-93 ; M. Gueroult, Malebranche, t.II, p.137-207 ; P. Riley, The General Will before Rousseau, notamment p. A. Robinet, Système et existence, Malebranche,
". Dans ce début du Livre I des Réflexions philosophiques et théologiques, les renvois d'Arnauld à des textes classiques d'Augustin (Confessions VII, De Ordine I et II, De vera religione, ch.40) sont redoublés par des références à des textes de Thomas d'Aquin équivalents. Voir le ch.2 qui cite Somme theologique, Ia,quest 22 art.2 et quest.49, art.12. 20 Il ne saurait être question de donner ici une présentation exhaustive des très longues discussions auxquelles ce thème du miracle a donné lieu. Les principaux textes d'Arnauld sur ce sujet sont : l'ensemble de la Dissertation sur (...) les miracles de l'ancienne loi ; les quatre premières des Neuf lettres, les chapitres 7 à 12 et 16-17 du Livre I des Réflexions.
34 Sur l'influence de Malebranche au XVIIIe siècle, see F. Alquié, Le cartésianisme de Malebranche, passim. 35 Arnauld et Malebranche sont d'accord pour dire que nos idées représentent les objets dont elles sont les idées. Mais Arnauld estime que la conception malebranchiste de l'idée rend impossible la compréhension de cette fonction représentative de l'idée : see DRVFI, OAA, 38, p.584-585 and Letter to Nicole du 27 april 1684, OAA 2 p.406-411 ; and R. Glauser, "Arnauld critique de Malebranche : le statut des idées". 36 Sur cette histoire de la notion de volonté générale (dont Arnauld est probablement l'inventeur !), see P. Riley, The General Will before Rousseau. Bibliographical supplement. La plupart des ouvrages sur la pensée de Malebranche consacrent des passages à sa polémique avec Arnauld. Dans les ouvrages (plus rares) consacrés à Arnauld, on pourra voir A.R. Ndiaye, La philosophie d'Antoine Arnauld, passim and especially p.83-154 ; C. Senofonte, Ragione moderna e teologia. L'uomo di Arnauld, ch.7 ; L. Verga, Il pensiero filosofico e scientifico di Antoine Arnauld, II,
See Réfl. I, OAA, 39,
Dissert., ch.7, p.734 et 737.
Voir Lettre à Mersenne du 27-05-1630, AT, t.I, p.153 : "Car c'est en Dieu une même chose de vouloir, d'entendre et de créer, sans que l'un précède l'autre, ne quidem ratione".
Je reprends ici une suggestion de J. Laporte dans La liberté selonMalebranche, p.205.
See B. Groethuysen, Origines de l'esprit bourgeois en France. Les oppositions dégagées in this book entre Dieu des jansénistes et Dieu des bourgeois pourraient fournir, sur le plan politico-idéologique, une grille d'interprétation de la polémique entre Malebranche et Arnauld.
See Malebranche, Dialogues, VII 16, OC 12 p.170 : "Il nous est de la dernière conséquence de tâcher d'acquérir quelque connaissance des attributs de cet Etre souverain, puisque nous en dépendons si fort".
Voir sur ce point l'analyse d'H. Gouhier, La pensée métaphysique de Descartes.
00410131 | ["sdv.ee", "math.math-pr", "math.math-ds"] | 2024/03/04 16:41:20 | 2009 | https://hal.science/hal-00410131/file/dyn_pop_var_20090817.pdf
Michel De Lara
Environmental Noise Variability in Population Dynamics Matrix Models
Keywords: environmental variability, matrix population models, growth rate, stochastic orders, log-convex functions
The impact of environmental variability on population size growth rate in dynamic models is a recurrent issue in the theoretical ecology literature. In the scalar case, R. Lande pointed out that results are ambiguous depending on whether the noise is added at arithmetic or logarithmic scale, while the matrix case has been investigated by S. Tuljapurkar. Our contribution consists first in introducing another notion of variability than the widely used variance or coefficient of variation, namely the so-called convex orders. Second, in population dynamics matrix models, we focus on how matrix components depend functionally on uncertain environmental factors. In the log-convex case, we show that, in a sense, environmental variability increases both mean population size and mean log-population size and makes them more variable. Our main result is that a specific analytical dependence coupled with an appropriate notion of variability leads to wide generic results, valid for all times and not only asymptotically, and requiring no assumptions of stationarity, normality, independence, etc. Though the approach is different, our conclusions are consistent with previous results in the literature. However, they make it clear that the analytical dependence on environmental factors cannot be overlooked when trying to tackle the influence of variability.
We recall here different observations and results in the theoretical ecology literature which point out the ambiguous role of environmental noise on population size in matrix population models, according to whether the noise is added at arithmetic or logarithmic scale.
1.1 Lande's comments on additive noise at arithmetic or logarithmic scale
R. Lande in [Lande, Stochastic population dynamics in ecology and conservation] comments on the influence of environmental noise on population size according to whether the noise is added at arithmetic or logarithmic scale. The evolution of population size N(t) in the absence of density-dependent effects may be described
• either on arithmetic scale with multiplicative growth rate λ(t) and dynamic N(t + 1) = λ(t)N(t),
• or on logarithmic scale with growth rate on the log scale r(t) = log λ(t) and dynamic on the log scale log N(t + 1) = r(t) + log N(t).
On the one hand, adding environmental noise to the multiplicative growth rate as in $\lambda(t) = \bar\lambda + \epsilon(t)$, where the noise is zero-mean ($E[\epsilon(t)] = 0$), gives the following mean of growth rate on the log scale:
$$\bar r = E[\log \lambda(t)] \approx \log \bar\lambda - \frac{\sigma_\epsilon^2}{2\bar\lambda^2} .$$
"Thus, demographic and environmental stochasticity reduce the mean growth rate of a population on the logarithmic scale, compared with that in the (constant) average environment" [Lande, Stochastic population dynamics in ecology and conservation].
On the other hand, adding environmental noise to the growth rate on the log scale as in $r(t) = \bar r + \epsilon(t)$ gives, in case $\epsilon(t)$ follows a Normal distribution $\mathcal{N}(\bar\epsilon, \sigma_\epsilon^2)$, the following mean of growth rate on the arithmetic scale
$$\bar\lambda = \exp\Big( \bar r + \bar\epsilon + \frac{\sigma_\epsilon^2}{2} \Big) .$$
Thus, Lande concludes that, "with the mean environmental effect equal to zero, ǫ = 0, then it would be found that environmental stochasticity increases the mean multiplicative growth rate, λ".
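A small numerical sketch may help fix ideas. The following Python snippet is illustrative only: the Normal noise, the values $\bar\lambda = 1.05$ and $\sigma = 0.2$, and the sample size are arbitrary choices, not taken from Lande. It contrasts the two placements of zero-mean noise.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 10**6
lam_bar = 1.05          # average multiplicative growth rate (arbitrary)
sigma = 0.2             # noise standard deviation (arbitrary)

# Noise added at arithmetic scale: lambda(t) = lam_bar + eps(t), E[eps] = 0.
eps = rng.normal(0.0, sigma, n_samples)
lam_arith = lam_bar + eps
lam_arith = lam_arith[lam_arith > 0]          # keep lambda positive so the log is defined
r_bar = np.mean(np.log(lam_arith))            # mean growth rate on the log scale
print("E[log lambda] =", r_bar, "<", np.log(lam_bar), "= log of the average lambda")

# Noise added at logarithmic scale: r(t) = log(lam_bar) + eps(t), E[eps] = 0.
r = np.log(lam_bar) + rng.normal(0.0, sigma, n_samples)
lam_log = np.exp(r)
print("E[lambda] =", np.mean(lam_log), ">", lam_bar, "= exp of the average r")
```

Both discrepancies are instances of Jensen's inequality: the logarithm is concave, so noise at arithmetic scale lowers the mean log-scale growth rate, while the exponential is convex, so noise at logarithmic scale raises the mean arithmetic-scale growth rate.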
Tuljapurkar's asymptotic approximation
S. Tuljapurkar considers a stationary sequence of random matrices $A_0, A_1, \ldots$ yielding the population vector $n(t) = A_{t-1} \cdots A_0 \, n(0)$ and the population size $N(t) = \| A_{t-1} \cdots A_0 \, n(0) \|$. Under general conditions (see [Tuljapurkar, Population Dynamics in Variable Environments] and [Caswell, Matrix Population Models]), there exists a deterministic stochastic growth rate $\lambda_s$ defined by
$$\log \lambda_s = \lim_{t \to +\infty} \frac{1}{t} \log N(t) = \lim_{t \to +\infty} \frac{1}{t} \log \| A_{t-1} \cdots A_0 \, n(0) \| .$$
Denoting by $\lambda_1$ the largest eigenvalue of the average matrix $\bar A$, Tuljapurkar obtains the approximation
$$\log \lambda_s \approx \log \lambda_1 - \frac{\tau^2}{2 \lambda_1^2} + \frac{\theta}{\lambda_1^2} ,$$
where $\tau^2$ is proportional to the variance $E[(A_t - \bar A) \otimes (A_t - \bar A)]$ (and $\theta$ is related to autocorrelation). In this case, environmental stochasticity reduces the mean growth rate of the population.
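A rough numerical check can be done by simulation. The sketch below is not from the cited works: the 2×2 Leslie-type matrix, the Normal noise on fecundity and the horizon are invented for illustration, and the environments are assumed i.i.d., so that θ is essentially zero.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 100_000

def random_leslie(rng):
    # 2x2 Leslie-type matrix with a noisy fecundity term (illustrative values).
    f = max(0.0, 1.2 + rng.normal(0.0, 0.4))   # keep fecundity nonnegative
    return np.array([[f, 1.5],
                     [0.5, 0.0]])

# Monte Carlo estimate of log(lambda_s) from a long product of random matrices.
n = np.array([1.0, 1.0])
log_size = 0.0
for _ in range(T):
    n = random_leslie(rng) @ n
    s = n.sum()
    log_size += np.log(s)
    n /= s                                     # renormalise to avoid overflow
log_lambda_s = log_size / T

# Dominant eigenvalue of the average matrix.
A_bar = np.array([[1.2, 1.5],
                  [0.5, 0.0]])
lambda_1 = max(np.linalg.eigvals(A_bar).real)

print("log lambda_s (simulated) ~", log_lambda_s)
print("log lambda_1 (average matrix) =", np.log(lambda_1))
```

With i.i.d. environments the simulation typically returns log λ_s slightly below log λ₁, in line with the negative variance term of the approximation above.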
A quest for generic results
The two above cases show that environmental noise has an ambiguous impact on population size in matrix population models. Our main objective is to contribute to clarifying this impact with generic mathematical results. For this, we shall first introduce in Sect. 2 a tool to measure variability, distinct from the widely used variance or coefficient of variation, and known as convex partial orders. Then, in Sect. 3, we shall provide generic results on environmental noise variability in population dynamics matrix models. We conclude in Sect. 4 by pointing out proximities and differences between our approach and those presented in Sect. 1.
Convex orders as tools for measuring variability
To a (square integrable) random variable X, one can attach the variance var(X). This latter scalar measures "variability", and any pair of random variables X and Y may be compared, with X being more variable than Y if var(X) ≥ var(Y ). The variance thus defines a total order.
Other orders are interesting for comparing pairs of random variables. However, they are generally not total: not all pairs may be ranked. Related to this is the fact that no single scalar, such as variance, may be attached to a random variable to measure its variability. In this vein, we shall present the so-called increasing convex and convex stochastic orders. Such orders can only rank random variables for which the primitives of their respective repartition functions never cross.
We think that these orders and many others referenced in the two main books [Müller and Stoyan, Comparison Methods for Stochastic Models and Risks] and [Shaked and Shanthikumar, Stochastic Orders] may be useful in the ecological modelling scientific community. Of course, for this, practical tests must be developed to compare empirical data as to their variability. This is not the object of this paper.
All random variables are defined on a probability space with probability P. To a random variable X, we shall attach its (right-continuous) repartition function F (x) = P(X ≤ x). We shall always consider random variables with finite means, with generic notation X and Y , and F and G for their respective repartition functions.
Increasing convex order
The increasing convex order compares random variables according both to their "location" and to their "variability" or "spread" [Shaked and Shanthikumar, Stochastic Orders]. We say that X is less than Y in increasing convex order, denoted by
X icx Y ,
if and only if one of the following equivalent conditions holds true
• the integrated survival function of X is always below that of Y: $\int_c^{+\infty} (1 - F(x))\,dx \le \int_c^{+\infty} (1 - G(x))\,dx$, for all $c \in \mathbb{R}$,
• E(ϕ(X)) ≤ E(ϕ(Y)) for all increasing and convex functions ϕ.
Roughly speaking, Y is more likely to take on extreme values than X. In a sense, X is both "smaller" and "less variable" than Y [Shaked and Shanthikumar, Stochastic Orders]. We have the important property that, when X icx Y, the means are ordered too: E(X) ≤ E(Y). However, nothing can be said of the variances. To compare variances, we need a stronger (more demanding) order.
Convex order
The convex order compares random variables according to their "variability" or "spread" [Shaked and Shanthikumar, Stochastic Orders]. We say that X is less than Y in convex order, denoted
X cx Y ,
if and only if one of the following equivalent conditions holds true:
• the means are equal and X is below Y for the increasing convex order, that is, E(X) = E(Y) and X icx Y,
• E(ϕ(X)) ≤ E(ϕ(Y)) for all convex functions ϕ.
Roughly speaking, Y is more likely to take on extreme values than X (see Figure 1). Notice that the convex order is more demanding than the increasing convex order since the class of "test functions" is larger: all convex functions and not only the increasing convex ones. This is why we obtain the stronger properties that, when X cx Y, the means are equal, E(X) = E(Y), and the variances are ordered, var(X) ≤ var(Y).
Some properties
• The icx and cx orders are stricter than the order defined by comparing variances: not all pairs of random variables may be ranked.
• Consider the class M µ,σ 2 of random variables having same mean µ and variance σ 2 . Elements of M µ,σ 2 cannot be compared with respect to cx. Indeed, if X cx Y and var(X) = var(Y ), then X and Y have the same distribution [4, p.57].
• Adding zero mean independent noise to a random variable increases variability: if Z is independent of X and has zero mean, then X is less than Y = X + Z in convex order. This is a consequence of Strassen's Theorem [4, p.23]. More generally, without assuming independence, X is less than Y = X + Z in convex order whenever the conditional expectation E[Z|X] = 0.
• Consider X following Normal distribution N (µ, σ 2 ) and Y following N (ν, τ 2 ). Then, X icx Y if and only if µ ≤ ν and σ 2 ≤ τ 2 , and X cx Y if and only if µ = ν and σ 2 ≤ τ 2 [4, p.62].
• For p > 0, let us introduce $CV_p(X) := E(X^p)^{1/p} / E(X)$ for a positive p-integrable random variable X. For p = 2, we obtain $CV_2(X) = \sqrt{E(X^2)}/E(X)$, an increasing function of the usual coefficient of variation $\sqrt{\mathrm{var}(X)}/E(X)$. If X cx Y, then $CV_2(X) \le CV_2(Y)$ (in fact $CV_p(X) \le CV_p(Y)$ for all $p \ge 1$).
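The properties just listed can be illustrated numerically. The sketch below uses hypothetical distributions and an arbitrary sample size: X is uniform and Y = X + Z with Z an independent zero-mean noise, so that X cx Y by the conditional-expectation property above; means should then coincide while variances, convex test functionals and CV₂ should all be larger for Y.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10**6

X = rng.uniform(1.0, 3.0, n)           # a positive random variable (arbitrary choice)
Z = rng.normal(0.0, 0.5, n)            # independent zero-mean noise
Y = X + Z                              # mean-preserving spread of X, hence X cx Y

print("means:", X.mean(), Y.mean())
print("variances:", X.var(), "<=", Y.var())

convex_tests = {
    "x^2": lambda x: x**2,
    "|x - 2|": lambda x: np.abs(x - 2.0),
    "exp(x)": np.exp,
}
for name, phi in convex_tests.items():
    print(f"E[{name}] :", phi(X).mean(), "<=", phi(Y).mean())

def cv2(v):
    # CV_2(V) = sqrt(E[V^2]) / E[V], as defined in the text.
    return np.sqrt(np.mean(v**2)) / np.mean(v)

print("CV_2:", cv2(X), "<=", cv2(Y))
```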
Increasing convex order and convex order for random vectors
We shall need to compare not only random variables but random vectors as in [4, p.98] and [5, p.323]. For this, we can no longer appeal to repartition functions. Let X = (X 1 , . . . , X n ) and Y = (Y 1 , . . . , Y n ) be random vectors with finite mean.
We say that X is less than Y in increasing convex order, written X icx Y , if and only if E(ϕ(X 1 , . . . , X n )) ≤ E(ϕ(Y 1 , . . . , Y n )) for any increasing convex function ϕ : R n → R.
We say that X is less than Y in convex order, written X cx Y , if and only if E(ϕ(X 1 , . . . , X n )) ≤ E(ϕ(Y 1 , . . . , Y n )) for any convex function ϕ : R n → R. In this case, X and Y have the same mean.
Consider X following the Normal distribution N(µ, Σ) and X′ following N(µ′, Σ′). Then, X cx X′ if and only if µ = µ′ and Σ′ − Σ is non-negative definite. The situation is not as clear-cut for the icx order. If µ_X ≤ µ_Y and Σ_Y − Σ_X is non-negative definite, then X icx Y. Conversely, if X icx Y, then µ_X ≤ µ_Y and a^T(Σ_Y − Σ_X)a ≥ 0 for all vectors a ≥ 0 [4, p.100].
Generic results on environmental noise variability in population dynamics matrix models
In what follows, we shall consider a population described at discrete times t = 0, . . . , T (where T is the horizon), either by a scalar n(t) ∈ ℝ or by a vector n(t) = (n₁(t), . . . , n_n(t)) ∈ ℝⁿ which may be abundances at ages or stages. The population size is
$$N = \| n \| = n_1 + \cdots + n_n .$$
The dynamical evolution of the population is supposed to be linear in the sense that
$$n(t + 1) = A(\varepsilon(t)) \, n(t) , \quad t = 0, \ldots, T - 1 , \qquad (1)$$
where the matrix A is independent of n(t) (no density-dependence effect; this is why we call such a model linear). On the other hand, the components $A_{ij}$ of the matrix A may depend on the environmental factors, a vector $\varepsilon(t) = (\varepsilon_1(t), \ldots, \varepsilon_p(t)) \in \mathbb{R}^p$ at time t.
For instance, the components of the matrix A may depend linearly on the environmental factors, as in the expression
$$A(\varepsilon) = \begin{pmatrix} A_{11} + \varepsilon_{11} & \cdots & A_{1n} + \varepsilon_{1n} \\ \vdots & & \vdots \\ A_{n1} + \varepsilon_{n1} & \cdots & A_{nn} + \varepsilon_{nn} \end{pmatrix} \qquad (2)$$
or may depend exponentially as in
$$A(\varepsilon) = \begin{pmatrix} \exp(A_{11} + \varepsilon_{11}) & \cdots & \exp(A_{1n} + \varepsilon_{1n}) \\ \vdots & & \vdots \\ \exp(A_{n1} + \varepsilon_{n1}) & \cdots & \exp(A_{nn} + \varepsilon_{nn}) \end{pmatrix} . \qquad (3)$$
In this latter case, the components of the matrix A are log-convex function of the environmental factors. Recall that f is a log-convex function if f > 0 and log f is convex. Otherwise stated, f is the exponential of a convex function (as a consequence, a log-convex function is also convex).
In [START_REF] Ives | General relationships between species diversity and stability in competitive systems[END_REF], different nonlinear models are recalled. When α = 0, they are matrix models without density-dependence. Model (2a) exhibits components which are exponential in the environmental factor, while they are linear in models (2c) and (2d). Calculation shows that model (2b) has matrix components which are log-convex functions of the environmental factor.
We shall use the term environmental scenario for a temporal sequence ε(·) = (ε(0), . . . , ε(T − 1)) of environmental factors.
Proposition 1 Consider two environmental scenarios, one being more variable in increasing convex order than the other:
(ε_M(0), . . . , ε_M(T − 1)) ⪰_icx (ε_L(0), . . . , ε_L(T − 1)) . Denote by N_M(T) = ‖A(ε_M(T − 1)) · · · A(ε_M(0)) n(0)‖ and N_L(T) = ‖A(ε_L(T − 1)) · · · A(ε_L(0)) n(0)‖ the corresponding population sizes.
Assume that the components A_ij(ε_1, . . . , ε_p) of the matrix A in (1) are nonnegative combinations of log-convex functions of the environmental factor (ε_1, . . . , ε_p). Then, the more variable the scenario, the more variable the population size, in the sense that
N_M(T) ⪰_icx N_L(T) and log N_M(T) ⪰_icx log N_L(T) .      (4)
As a consequence,
E[N_M(T)] ≥ E[N_L(T)] and E[log N_M(T)] ≥ E[log N_L(T)] .
In a sense, environmental variability increases both the mean population size and the mean log-population size, and makes them more variable.
Proof. The components of the vector n(T) = A(ε(T − 1)) · · · A(ε(0)) n(0) are sums of products of nonnegative combinations of log-convex functions of the environmental scenario. Therefore, by a property of log-convex functions [START_REF] Cohen | Convexity properties of products of random nonnegative matrices[END_REF], the components of the vector n(T) are also log-convex functions of the environmental scenario, and so is the population size. Thus, the logarithm log N(T) of the population size is convex in (ε(0), . . . , ε(T − 1)). For any increasing convex function ϕ : R → R, ϕ(log N(T)) is convex in (ε(0), . . . , ε(T − 1)), since convexity is preserved by left-composition with an increasing convex function. We end up by using the definition of increasing convex order for random vectors in §2.4: E[ϕ(log N_M(T))] ≥ E[ϕ(log N_L(T))]. This precisely means that log N_M(T) ⪰_icx log N_L(T).
Since a log-convex function is also convex, the population size N(T) is a sum of convex functions of the environmental scenario. The proof then follows as above.
Finally, we use the property that X ⪰_icx Y ⇒ E(X) ≥ E(Y) to compare the means. □
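A quick Monte Carlo sanity check of the flavour of Proposition 1 (not from the paper; it assumes NumPy, and the matrix and noise level are arbitrary placeholders): the constant scenario at the mean is less variable, in convex order, than an i.i.d. zero-mean scenario, and the mean population size is indeed larger under the more variable scenario when A depends exponentially (hence log-convexly) on the factors.

import numpy as np

rng = np.random.default_rng(2)
A0 = np.array([[-0.2, 0.4], [0.3, -0.1]])
n0 = np.array([1.0, 1.0])
T, n_rep = 10, 5000

def final_size(scenario):
    n = n0.copy()
    for eps in scenario:
        n = np.exp(A0 + eps) @ n          # exponential dependence, as in (3)
    return n.sum()                        # N(T) = n_1(T) + ... + n_n(T)

N_L = final_size(np.zeros((T, 2, 2)))                              # less variable scenario
N_M = np.mean([final_size(0.3 * rng.standard_normal((T, 2, 2)))
               for _ in range(n_rep)])                             # more variable scenario
print(N_L, N_M)                           # expect N_M >= N_L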
Instead of the total population, the result would still hold true with any positive weighted combination a_1 n_1 + · · · + a_k n_k where a_i ≥ 0, or with log(a_1 n_1 + · · · + a_k n_k) where a_i ≥ 0.
As an illustration, consider the following scalar dynamic equation for the population size, n(t + 1) = exp(r + ε(t)) n(t), for which we have
n(T) = exp(rT + ε(0) + · · · + ε(T − 1)) n(0) .
Hence, both n(T) and log n(T) are convex functions of the environmental scenario ε(·) = (ε(0), . . . , ε(T − 1)), so that environmental variability increases the mean population size, as may be seen in Figure 2. Indeed, the mean population size generated by a more variable environment lies above the one generated by a less variable environment, for all times.
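The following Monte Carlo sketch (not from the paper; it assumes NumPy, and the parameter values are arbitrary) reproduces the qualitative content of Figure 2 for this scalar model: a more variable zero-mean environment yields a larger mean population size at every time.

import numpy as np

rng = np.random.default_rng(3)
r, n0, T, n_rep = 0.05, 1.0, 50, 50000

def mean_trajectory(sigma):
    eps = sigma * rng.standard_normal((n_rep, T))               # zero-mean factors
    log_n = np.log(n0) + r * np.arange(1, T + 1) + np.cumsum(eps, axis=1)
    return np.exp(log_n).mean(axis=0)                           # E[n(t)] for t = 1, ..., T

less_variable = mean_trajectory(0.1)
more_variable = mean_trajectory(0.3)
print(np.all(more_variable >= less_variable))                   # True: it lies above at all times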
Conclusion
We have used another notion of variability than the widely used variance or coefficient of variation, namely the so-called convex orders. We think that such partial orders may be of interest in theoretical ecology beyond this specific application.
To compare our approach with the literature, notice that, though we consider matrix population models, we make no ergodic assumption on the stochastic process A_0, A_1, . . . However, we make separate assumptions, on the one hand on the environmental factors ε(0), . . . , ε(t) and, on the other hand, on the functional dependence A_t = A(ε(t)).
With this approach, we obtain generic results which are not asymptotic in time, but valid at any time t and for a large class of functional dependence on the uncertainties.
Though the approach is different, our conclusions are consistent with the cases presented in Sect. 2. We extend to matrix models Lande's observation that, when environmental noise is added to the growth rate on the log scale, environmental stochasticity increases the mean multiplicative growth rate. As to Tuljapurkar's asymptotic approximation, we arrive at a different conclusion because his assumptions correspond to a matrix A depending linearly on the environmental factors as in (2), and our result does not cover this case.
Our general conclusion is, therefore, that the analytical dependence on environmental factors cannot be overlooked when trying to tackle the influence of variability. However, as shown in this paper, a specific analytical dependence coupled with an appropriate notion of variability leads to broad generic results, valid for all times and not only asymptotically, and requiring no assumptions of stationarity, normality, independence, etc.
Figure 1: Illustration of the convex order: the primitives of the repartition functions do not cross each other.
Figure 2: Environmental variability increases the mean population size for a population model where the growth rate depends exponentially on the environmental factor.
Contents
1 Influence of environmental noise on population size
1.1 Lande's comments on additive noise at arithmetic or logarithmic scale
1.2 Tuljapurkar's asymptotic approximation
1.3 A quest for generic results
2 Convex orders as tools for measuring variability
2.1 Increasing convex order
2.2 Convex order
2.3 Some properties
2.4 Increasing convex order and convex order for random vectors
3 Generic results on environmental noise variability in population dynamics matrix models
4 Conclusion
Acknowledgements. Some years ago, Shripad Tuljapurkar encouraged me to develop the general ideas I exposed to him after one of his talks in Paris, and I thank him for this. I want to thank Michel Loreau and Claire de Mazancourt for welcoming me at the Department of Biology, McGill University, Montreal, Canada. Fruitful discussions with them and with the participants in a seminar in August 2008 helped me shape my ideas. I also want to thank Tim Coulson and the participants in the Ecology and Evolution Seminar Series, Silwood Park campus, United Kingdom, in March 2009.
04101329 | en | ["shs.phil"] | 2024/03/04 16:41:20 | 2011 | https://hal.science/hal-04101329/file/Clarifying%20the%20Concept%20of%20Salvation.pdf | Denis Moreau
CLARIFYING THE CONCEPT OF SALVATION: A PHILOSOPHICAL APPROACH TO THE POWER OF FAITH IN CHRIST'S RESURRECTION
In this paper, I develop a philosophical clarification of the statement "faith in the resurrection of Christ saves men from sin", using some of the main arguments and hypotheses of my recent book, The Ways of Salvation (Les Voies du salut, Paris, 2010). I begin with some remarks on the theme of salvation in contemporary language and philosophy. I then sketch a conceptual analysis of the concept of salvation, first in its general sense, then in its specifically Christian one. Finally, I offer a hypothesis on the modus operandi of salvation, or at least of one aspect of salvation as understood by Christianity.
I.
The concept of salvation still occurs regularly in ordinary language. It also appears, typically without being defined clearly, in a number of contemporary philosophical works far removed from Christianity.
It is striking how commonly the notion of salvation and related words (the verb 'to save', the nouns 'saviour', 'salvage') are used in most European languages. In French, people greet one another with the word "salut," in Italian they say "salve," or "ti saluto," in German they say "salü," (or "heil," "heil dich," in the past). Though people using the word in such situations may not know it, this recalls an ancient practice of wishing an interlocutor 'salvation' upon meeting. For instance, Pythagorean philosophers appear to have greeted each other with the word 'health!' ugiainein, (a greeting also found in the New Testament, at the beginning of The Third Letter of John), and Seneca's letters to Lucilius often begin with the formula: "Seneca Lucilio suo salutem dat." The themes of saviour, salvage, salvation, which are etymologically as well as conceptually related to that of salvation are also increasingly common in political discourse (such and such a person is considered the country's saviour), economic discourse (the salvage of a corporation), as well as computer discourse (we save or salvage data). Finally, on a funnier, but no less meaningful note, French supermarkets sell a shower gel called "Axe. Difficult Morning, anti-hangover." The product's packaging states quite clearly that it is intended for people who have a hard time waking up after partying, while the label describes its properties in terms that could come straight from a theology class: "miracle shower gel […] it will save your morning and bring you back to life after a short and restless night."
Of course, the very frequency with which the concept of salvation is used means that in a certain way it is spent, close to losing its meaning from being used in too many contexts. But it might also be fair to ask whether this frequency of use doesn't echo, albeit weakly, ancient questions, long-standing concerns. In fact, if someone wanted to develop a Christian apologetic on the basis of the contemporary world's language use and dominant concerns, this theme of salvation would probably be an interesting starting point, a 'good hold' as people use the word 'hold' in rock-climbing.
All the more so because, while this notion of salvation retains, in its technical use at least, strongly religious and more specifically Christian connotations, it crops up in a surprising way in the writings of philosophers who are not particularly known for their support of Christianity, or are even quite critical of it.
Nietzsche is a striking if ambiguous example. As everyone knows, he sees himself as a fierce opponent of Christianity. But in several texts, he advocates a system of thought that, like Christianity, will lead to salvation -as long as we interpret salvation in accordance with its etymology, as a healing, the conclusion of a struggle against disease and weakness that yields 'the great health' . 1 The word also occurs in Jean Paul Sartre, in the famous last page of his autobiography The Words: "My sole concern has been to save myself -nothing in my hands, nothing up my sleeve -by work and faith. As a result, my pure choice did not raise me above anyone.
Without equipment, without tools, I set all of me to work in order to save all of me. If I relegate impossible Salvation to the prop-room, what remains?" 2 Similarly, in a rather mysterious footnote at the end of the section in Being and Nothingness called "Second attitude toward others: indifference, desire, hate, sadism", Sartre adds: "These considerations do not exclude the possibility of an ethics of deliverance and salvation. But this can be achieved only after a radical conversion which we can not discuss here. " 3 Ludwig Wittgenstein, in a text from Culture and Value (1937), for his part, wrote: "If I am to be really saved [erlöst], what I need is certainty, not wisdom, dreams, or speculation […] For it is my soul with its passions, as it were with its flesh and blood, that has to be saved [erlöst], not my abstract mind. " 4 And finally, Michel Foucault declares, in a way that is both enigmatic and fascinating, "I know that knowledge has the power to transform us, that truth is not just a way of deciphering the world […], but that, if I know the truth, then I will be transformed, maybe even saved. Or else I will die. But I believe, in any case, that for me these two are the same. " 5 These texts have three things in common: the theme of salvation is, for different reasons, unexpected; we understand, as we read them, that it is an important notion, one that reflects a concern essential to the author who uses it; but neither the context of these texts, nor, often, the entire corpus of their authors, give us a clear idea of how we should interpret 'salvation' or 'being saved' . Such conceptual blurriness, if not legitimate, is at least acceptable in the realm of ordinary language. But it is more problematic in a philosophical discourse that aims at conceptual clarity and rigor. To remedy this situation, I propose here a short clarification of the concept of salvation.
2 Jean-Paul Sartre, The Words, translated from the French by Bernard Frechtman, Vintage Books, 1981, p. 255.
3 Jean-Paul Sartre, Being and Nothingness, translated and with an introduction by Hazel E. Barnes, Washington Square Press, 1956, p. 534, n. 13.
4 Culture and Value, translated by Peter Winch (Oxford: Basil Blackwell, 1974), pp. 32-33.
5 "Interview," by Stephen Riggins (1982; Dits et Ecrits, Paris, Gallimard, 2001), II, 1354.
II. CLARIFICATION OF THE CONCEPT OF SALVATION
Historically, among the Greeks and Romans, the word salvation first meant the state of being or remaining whole and in good health, "safe and sound. " To be saved, then, was to be healed, and salvation, in the practical sense, meant health -not just physical, but also moral and spiritual health. In a more abstract sense, salvation meant both having reached a desirable way of life, as well as the process of attaining it, by being either removed from a situation or freed from a danger that somehow separated us from it. In a general sense, then, salvation can be understood as the return to a desirable former state that had been lost (as when one is saved from a sickness or a shipwreck), the safeguarding of this state against a threat (as one saves one's freedom from a potential oppressor, or one's life from a danger), or, finally, the improvement attaining this state represents. The meaning of the word salvation can, in short, be analyzed into two parts. Understood in its negative aspect, to be saved means to be delivered and freed, rescued and ripped away from a dangerous situation where looms a serious menace. Understood in its positive aspect, to be saved means being granted some good, reaching a state seen as beneficial or desirable, progressing from trials and wretchedness to a state of happiness and fulfilment. Therefore, I think we should find two elements in any soteriology. (A) A pessimistic or lucid diagnosis of our present situation as one that is painful and dangerous, a state we are inevitably and structurally thrown into, and out of which we must claw our way. An optimistic theory that held that everything is naturally for the best and will continue that way could not be called a soteriology.
(B) A more optimistic assessment of whether it is possible to leave this grievous state behind. If a theory accepts the pessimistic diagnosis of the human condition described in (A), but judges that we are bound to remain in this state of wretchedness, decay, and misery, then it is not describing human existence from a soteriological point of view.
Within the framework of these two elements, we can highlight a number of criteria to distinguish different kinds of soteriologies. For instance, we can distinguish different types of soteriology based on:
(a) Whether salvation is achieved through oneself (auto-salvation) or through someone else, something external to the self (hetero-salvation).
I will return to this distinction, which plays an essential role in differentiating Christian soteriology from most other forms of soteriology developed in philosophical contexts.
(b) The manner of reaching salvation. Individualistic theories hold that it is individuals who reach salvation, and holistic views assume that salvation is achieved collectively, by a group (a community, a nation, a Church, humanity as a whole).
(c) How broadly the class of the saved is extended. Some theories include only a few or a small group among the saved, some include the greater part of humanity, and some universalist or even cosmic doctrines include all of humanity, or even the entire universe, among the saved.
(d) Where salvation will take place: immanent theories hold that salvation is attained in this world, while some reserve salvation for another world.
(e) The nature of the alleged saviour: it can be a god (theo-soteriology), a man or a group of men (anthropo-soteriology), or even something else (extra-terrestrial beings, etc.).
(f) What degree of salvation is attainable: some theories hold that salvation is partial, others that it is total, others integrate the two into a process of salvation in stages or degrees.
(g) The nature of salvation, its content: most often, it is happiness, but even if we leave aside the well-known difficulties in agreeing on a common definition of happiness6 , there is no logical obstacle to imagining a different content for salvation.
Let us consider, for instance, how the Marxism of the 19 th and 20 th centuries, interpreted as a theory of salvation (or rather a secularized transposition of a theology of salvation) 7 , fits many of these categories. Marxism combines (a) auto-salvation (it is human beings who save themselves) and (e) anthropo-soteriology: it relies on a group of men (the proletariat, or its educated avant-garde) who hold the function of saviour in a period of transition (until the foundation of a classless society) to achieve a salvation which (c) all men or humanity as a whole share. This is (b) the conclusion of a collective process, which consists in (g) a happiness that is (f) complete and (d) obtained in this world.
III. THE SPECIFICALLY CHRISTIAN CONCEPT OF SALVATION
The New Testament attests that Jesus, whose Hebrew name Yeshoua means 'God saves, ' was quickly recognized by his disciples as the 'saviour' (salvator, sôter), the one who saves (salvare, sôzô) or brings salvation. 8 These early Christian texts use different themes, different images, to describe the status of the one who is saved, and the nature of salvation. The saved man is an invalid cured by Christ, a slave he frees, a debtor whose debt he forgives, a man possessed whose demonic bonds he looses, a man condemned whom he pardons, a dead man he brings back to life, etc. 9 One feature distinguishes Christian soteriology very clearly from those that can be found in ancient wisdom, the philosophical classics (e.g., Spinoza) 10 , Nietzsche, even, when he advises "independents" to "get up on their own" 11 , and contemporary thought that emphasizes human or personal autonomy. Indeed, for all other soteriologies, salvation is something that man, or in some cases humanity understood collectively, can achieve on his own, by his own actions, by making the best use of his own strengths and natural powers -where the rational powers are often singled out -in a process that is clearly a form of auto-salvation. The end of chapter 9, book IV of Epictetus's Discourses neatly synthesizes this conception of salvation: "Look, you have been dislodged though by no one else but yourself. Christian salvation, by contrast, is "salvation from elsewhere" ("hetero-salvation"). Man cannot reach it on his own, and it requires an external, divine and supernatural intervention: from a saviour, or else from a revelation -if by revelation we understand a body of knowledge that can neither be found in oneself nor gathered by human intellectual capacities alone, but must be received. The Christian approach to the concept of salvation, then, introduces a notion of, if not passivity, at least receptivity or dependence on an other (or an Other), that is, in the broad sense, a notion of heteronomy.
Salvation as understood by Christianity should therefore be defined as "movement from a negative to a positive state brought about by an external agent [which presupposes] three elements: a terminus, a starting point, and a transforming agent;"13 or else, "a process whose beneficiaries are moved from a negative situation to a new fulfilled existence by the action of an external agent. "14 In this case, the external agent is Jesus Christ, who, according to different acceptable translations of a series of Greek verbs used in the New Testament "frees, " (eleutheroô), "saves" (sôzô), "delivers" (rhuomai, luô), "tears away" (exaireô) humanity so that it can reach a new way of life.
That Christ is the saviour and that he saves men from sin is an idea that is so obvious to those who accept the Christian faith, that, unlike other concepts also central to Christianity (the trinity, Christ's two natures, etc.), it has never been seriously contested or rejected by any important currents in Christian thought. As a consequence, the great councils that enabled Christians to clarify important but controversial aspects of their faith, and thus to discriminate between orthodox and heretical theses, never made it the object of a dogmatic clarification. 15 The creed of the councils of Nicaea and Constantinople tells us only that Christ came « for us men and for our salvation » (Τὸν δι' ἡμᾶς τοὺς ἀνθρώπους καὶ διὰ τὴν ἡμετέραν σωτηρίαν κατελθόντα ἐκ τῶν οὐρανῶν / Qui propter nos homines, et propter nostram salutem descendit de caelis).
How the process of salvation works, its modus operandi, was thus left so open that it has given rise to a number of speculative accounts, about which it is not always easy to decide whether they are rivals, or whether they bring to light, without fully coordinating or synthesizing their incomplete accounts, different aspects of the concept of salvation. In Latin Christianity, for the last thousand years, the dominant answer to the question of how salvation works was the satisfaction theory proposed by Anselm of Canterbury in the Cur Deus Homo, and developed later, most notably by Thomas Aquinas and later Thomists. 16 I find this theory quite powerful, and I think that conceptually it is still relevant. But I don't want to ignore the fact that this explanation has become almost foreign, so to speak, or "inaudible" for many of our contemporaries. It has in fact become commonplace, for Christians and non-Christians alike, to reject the satisfaction theory of salvation as "judicial", "sacrificial", "vengeful", and "vindictive". Our contemporaries criticize its characterization of salvation as a "compensatory transaction", and the way it seems to worship pain by concentrating all of Christ's saving work in his passion and the sufferings that accompany it. 17 Moreover, in focusing so exclusively on the passion and death of Christ, this theory doesn't really assign to other aspects of his existence -his incarnation and even more importantly, his resurrection considered in themselves -any significant role in the work of salvation.
15 Among Catholics, the "schemata from the preparatory sessions" of the first Vatican council (1869-70) had considered defining redemption, but the texts were neither discussed nor voted on by the council (which was cut short by the Italian army's arrival in Rome). See Jean Rivière, Le Dogme de la rédemption (Paris, Gabalda, 1931), 116-120. There is also a mention (without definition) of the theme of "satisfaction" in a text from the 6th session of the council of Trent.
16 See Summa Theologiae, III, questions 46-49; see also, for example, Eleonore Stump, "Atonement According to Aquinas" in Oxford Readings in Philosophical Theology, ed. Michael Rea (Oxford, Oxford UP, 2009), vol. 1, 267-293.
17 For this type of criticism among writers who attack Christianity, see for example Nietzsche, The Antichrist, § 41: "'how could God allow it!' [Jesus' death] To which the deranged reason of the little community [Jesus' disciples] formulated an answer that was terrifying in its absurdity: God gave his son as a sacrifice for the forgiveness of sins. At once there was an end of the gospels! Sacrifice for sin, and in its most obnoxious and barbarous form: sacrifice of the innocent for the sins of the guilty! What appalling paganism!". And in a Christian author, see for example René Girard, Des choses cachées depuis la fondation du monde (Paris, Grasset, 1978) 269: "God [would] not just be demanding a new victim, he [would] be demanding the most precious, most cherished victim: his own son. This claim has done more than anything else, no doubt, to discredit Christianity in the eyes of men of good will in the modern world. It might have been tolerable to the medieval mind, but it has become intolerable to ours, and has become the stumbling block par excellence for an entire world repelled by the concept of sacrifice."
Therefore, without rejecting the fundamental importance of satisfaction and atonement in the concept of salvation, it is tempting to look for another explanation, or, more modestly, a complementary explanation of the modus operandi of Christian salvation understood as Christ's freeing us from sin.
IV. AN ATTEMPT TO EXPLAIN THE STATEMENT "FAITH IN THE RESURRECTION OF CHRIST SAVES MEN FROM SIN"
It's just this type of explanation I'd like to offer. I have tried to take into account an important trend in contemporary theology, which is really a corollary of the current turn away from satisfaction theories: the rejection of an exclusively sacrificial or expiatory theory of salvation, and the desire to see that the thinking that focuses exclusively on Jesus's passion and marginalizes the resurrection stop dominating the discussion. This agenda, which has been, in a way, the height of fashion in European salvation theology since the 1950's, 18 has been clear in its critical project, clear about what it is rejecting. It has also been clear in its general theoretical aim: "to restore Christ's victory, and return Christ's resurrection to the central place in treatises on redemption it should never have lost"; "[to show that] the resurrection plays a fundamental role in salvation." 19 But when it comes to the constructive side of the agenda, to specific explanations in support of the general aim, the results have been more problematic, less successful. I am here attempting one such specific explanation of the statement "faith in the resurrection of Christ saves us from sin."
18 Examples of this tendency, born in the inter-war period among protestant theologians (like Karl Barth), include: François-Xavier Durrwell, Walter Kasper, Joseph Moingt, Jürgen Moltmann (at least in Theology of Hope), Wolhart Pannenberg, Karl Rahner, Bernard Sesboüe, Michel Deneken.
Here, now, is a summary of the argument I have developed to provide a philosophical clarification of this proposition. I will present it in four parts, each one corresponding to a section in my book.
(1) The first section offers a defence of a pragmatic approach to belief: beliefs should be considered not only with respect to their truth value, but also with respect to their effects, how they transform the believer and the world in which he acts. I say "not only…but also" because I don't think we should lose interest in the truth of beliefs, or reject Clifford's principle with its stipulation that the fundamental maxim of the ethics of belief is "It is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence. " But nothing stands in the way of our connecting our interest in the truth of beliefs to a pragmatic investigation of beliefs. We will then supplement our interest in a belief 's orthodoxy (how is its content theoretically righteous, speculatively correct) with a further question about its eudoxia (how is it beneficial for the believer who accepts it?). I offer a set of criteria for classifying beliefs from this pragmatic perspective, among which the two most fundamental are the magnitude and the value of a belief 's effects. We can thus distinguish between weak beliefs, which have a minimal impact on the believer's life ("beliefs with weak existential implications"), and highly effective ones, which have a significant impact on the life of those who come to believe them ("beliefs with strong existential implications"). We can also distinguish beliefs that produce correct or beneficial behaviour in those who accept them (eupraxic beliefs) from beliefs that produce incorrect or harmful behaviour (dyspraxic beliefs).
(2) Section II focuses on the notion of death. It takes as its starting point the classical view that any proposition about the nature of death (and more particularly of my death) can be an object of belief only, that positive knowledge or science about my death is impossible. If we accept this, then we are led to ask, even in this life: which among the available beliefs about death are beneficial and which are harmful? In the remainder of section II, I explain, without much original thought, that we spontaneously and naturally believe that death is the end of life, and that in most cases we are afraid of death so understood. As the expression 'fear of death' is a little bland and not always clear, I try to narrow it down by distinguishing different types or different degrees of fear of death. It ranges from the instinctual reaction we share with other animals, fleeing death as a fundamental threat, an annihilation of what we are insofar as we are alive, to highly intellectualized responses like the great artistic evocations of death (Mozart's Requiem, Molière's Don Juan) or philosophical investigations like Heidegger's study of anxiety. The general idea underlying this section is that we are naturally fearful of a death we interpret as the end of life.
(3) Section III shows that this standard belief about death is fundamentally dyspraxic, that is, it leads the believer to behave in ways that are bad and harmful, and which depending on one's lexical preferences, one can call faults, "bad deeds" or "sins". I will call this thesis, that there is a causal connection between the fear of death and sin, "the Lucretius hypothesis" because the idea is expressed, albeit without any detail about the precise nature of the connection, at the beginning of book III of De rerum natura.
and the old fear of Acheron driven headlong away, which utterly confounds the life of men from the very root, clouding all things with the blackness of death, and suffering no pleasure to be pure and unalloyed (…) Avarice and the blind craving for honours, which constrain wretched men to overleap the boundaries of right, and sometimes as comrades or accomplices in crime to struggle night and day with surpassing toil to rise up to the height of power-these sores in life are fostered in no small degree by the fear of death. For most often scorned disgrace and biting poverty are seen to be far removed from pleasant settled life, and are, as it were, a present dallying before the gates of death; and while men, spurred by a false fear, desire to flee far from them, and to drive them far away, they amass substance by civil bloodshed and greedily multiply their riches, heaping slaughter on slaughter. Hardening their heart they revel in a brother's bitter death, and hate and fear their kinsmen's board. In like manner, often through the same fear, they waste with envy that he is powerful, he is regarded, who walks clothed with bright renown; while they complain that they themselves are wrapped in darkness and the mire. Some of them come to ruin to win statues and a name; and often through fear of death so deeply does the hatred of life and the sight of the light possess men, that with sorrowing heart they compass their own death, forgetting that it is this fear which is the source of their woes, which assails their honour, which bursts the bonds of friendship, and overturns affection from its lofty throne. For often ere now men have betrayed country and beloved parents, seeking to shun the realms of Acheron. For even as children tremble and fear everything in blinding darkness, so we sometimes dread in the light things that are no whit more to be feared than what children shudder at in the dark, and imagine will come to pass. 20 In a number of analyses that I can't repeat here in any detail, I go on to show that the fear of death understood as the end of life leads to a series of evil actions: avarice or greed, gluttony, lust, homicide, disrespect to father and mother, and pride. By way of example, here is how we can establish a connection between fear of death and avarice, using a text from Karl Marx as support:
That which is for me through the medium of money -that for which I can pay (i.e., which money can buy) -that am I myself, the possessor of the money. The extent of the power of money is the extent of my power. Money's properties are my -the possessor's -properties and essential powers. Thus, what I am and am capable of is by no means determined by my individuality. I am ugly, but I can buy for myself the most beautiful of women. Therefore I am not ugly, for the effect of ugliness -its deterrent power -is nullified by money […] I am bad, dishonest, unscrupulous, stupid; but money is honoured, and hence its possessor. Money is the supreme good, therefore its possessor is good. Money, besides, saves me the trouble of being dishonest: I am therefore presumed honest. I am brainless, but money is the real brain of all things and how then should its possessor be brainless? Besides, he can buy clever people for himself, and is he who has power over the clever not more clever than the clever? Do not I, who thanks to money am capable of all that the human heart longs for, possess all human capacities?[...]That which I am unable to do as a man, and of which therefore all my individual essential powers are incapable, I am able to do by means of money. 21 The text reminds us of the specific function traditionally attributed to money: it is the universal mediator, the converter that makes all things commensurate by translating disparate realities and use-values into the same yard stick (the exchange value). To this classical analysis, Marx adds the idea that the act of buying, understood as an act of appropriation, causes the attributes of the property to be transferred to its owner. By merging the two driving ideas of this Marxist analysis, we can answer the question: in this mode of production and exchange, what is the object whose appropriation money fundamentally allows, and whose properties an owner claims for himself, at least at the level of fantasy?
It is time.
Wage-labour, after all, is the employer's use of his capital to "buy himself " his employees' time as well as the product of their work during that time. A commodity, likewise, is just the fruit of the work-time needed to produce it. It follows that buying and hoarding money (avarice in the strict sense), or objects (in particular, manufactured objects), in other words, being miserly in the broad sense, is really amassing human time, in so far as it is instantiated, and has been, in a certain sense, deposited in those objects. The more money we have, therefore, the more able we are to appropriate other people's time, buying it with wages, or buying it through the mediation of the commodities it has produced, and the more justified we feel in thinking of all this time as potentially our own.
Whether he consumes or saves, and whether his saving is an end in itself or the means to future consumption, the miser doesn't believe that "time is money"; he is rather moved by the belief that "money is time", that in the world in which he acts, having money means being able to acquire other people's time, literally, "saving time" or "buying time".
We can thus interpret the amassing of money as a more or less conscious fantasy promise of an indefinite heap of time, the illusory assurance that our existence will continue indefinitely, and so, as a fantasized attempt to escape our fear of death.
This, then, is how a causal and explanatory connection can be established between the fear of death on the one hand, and avarice or greed on the other. Using a number of different theoretical tools, I try to show in section III of my book that the same connection can be established between fear of death understood as the end of life on the one hand, and gluttony, lust, homicide, disrespect to mother and father, and pride on the other. Section III's general conclusion, then, is that the common belief about death has the characteristic effect of leading human beings to this type of evil act. The fear of death tends to land us in a kind of existential mediocrity, or, in the worst cases an existential incompetence. In other words, the standard belief about death is fundamentally and globally dyspraxic: it causes us to settle in a negative state that we can also call a state of sin, from which we need to be "saved", right now, in this life.
(4) In the fourth section, I finally turn to the question of salvation. To be saved from the negative state described in the previous section, one might adopt an orthopraxic belief about death -a belief that frees us from the evil acts we commit out of fear of death, and sets in motion a series of intellectual and emotional reactions that improve human existence.
Here my remarks turn avowedly Christian: for I believe that the Christian belief that death has been defeated by Christ is such a belief, orthopraxic in the highest degree.
I do not discuss the question of the truth or epistemic reliability of this belief in my book: I have nothing new to say on this subject.22 I accept this belief in a hypothetical way, following a method sometimes called philosophical theology. I ask: "If someone believes that Christ is resurrected, thereby signalling to us that death is not in fact the end of life, then what happens?"
All the elements I have discussed so far are falling into place to form a philosophical explication of how Christian salvation works. The belief in Christ's resurrection abolishes the ordinary representation of death as an absolute end, as well as the fearful relationship that follows. 23 This belief, then, is able to free us from the morally bad consequences (the sins) of the ordinary representation of death and our reaction to it. Thus, going back to the definition already spelled out, salvation is:
A process whose beneficiaries are moved from a negative situation… That is, humanity's situation as depicted in section III: "led astray" by the ordinary representation of death, and with a propensity to morally undesirable acts.
…to a new fulfilled existence…A man who no longer acts as described in section III has settled into a better realm of existence, one that is qualitatively superior to his former existence, not only because he is now rid of certain negative features, but also because he finds himself in a new set of circumstances conducive to leading a new life, one where dynamisms and capacities that could not be developed in his former existence can flourish. Ancient authors summarized it this way: "Christ killed the death that was killing man"; "he cast death's tyranny out of our nature completely by rising from the dead. " 24 …by the action of an external agent. Salvation is brought about by the power of the belief in Christ's resurrection. From an objective or historical standpoint, the external agent, the saviour, can be identified as Jesus Christ. If, on the other hand, we focus on the information contained in the proposition "Christ is resurrected, " then the external agency is a revealed body of faith. For this proposition cannot be deduced from 23 Cf. Athanasius of Alexandria, The Incarnation of the Word, 27 (Paris, Cerf, 1973, "Sources chrétiennes" n. 199) 362-365 : "Death has been destroyed, and the cross represents the victory won over it. It has no strength left, it is really dead. […] Ever since the Saviour resurrected his own body, death is no longer frightening. All those who believe in Christ […] really know that if they die, they do not perish but live. "
24 Melito of Sardis, Peri Pascha (Paris, Cerf, 1966 "Sources chrétiennes", n. 123), 96-97; Nicholas Cabasilas, The Life in Christ, III, 7 (Paris, Cerf, 1989-1990 "Sources chrétiennes" n. 355 et 361) 243. The text of this 14 th century Byzantine author deserves to be quoted more completely: "Thus, while men were cut off from God in three ways -through nature, through sin and through death -the saviour allowed them to meet him perfectly […], by removing one by one all the obstacles that kept them apart: [he removed the obstacle of] nature by sharing in humanity, the obstacle of sin by dying on the cross, and the last obstacle, the tyranny of death, he completely expelled from our nature by rising from the dead. " natural principles of human knowledge, nor can it be demonstrated a priori, and it is no doubt quite different from an ordinary piece of historical or experiential knowledge. It must therefore be a "revelation", which means that it presents for belief a body of knowledge that does not and cannot come from "me", but is given from elsewhere. Of course it belongs to the world, it appears in it, but from a source that is divine, or claimed to be so.
To use a different language: all the analyses I've developed so far now make it possible to explain the proposition "faith in the resurrection of Christ frees men from sin". If these earlier hypotheses are granted, then faith in the resurrection of Christ will, or should, set in motion in the individual who accepts it a series of intellectual and emotional transformations that improve his existence. If, as I have argued, the central problem of human life, what leads us astray and ruins our lives, is precisely a certain fearful relationship to death understood as the end of life, then the belief that death has been vanquished -a belief central and unique to Christianity25 -must be an excellent way to reach salvation -where salvation is understood as an improvement of existence that begins in this life, not in its eschatological sense (though, of course I don't reject that sense of the word).
CONCLUSION
In conclusion, let me make four qualifications to the thesis I am defending.
(a) I am not, of course, claiming that we are saved through knowledge, and that salvation is available only to the "experts" who think seriously about how salvation works. That would be Gnosticism, and I am not a Gnostic. It is faith that saves in my view, and it is "enough", so to speak, to believe that Christ is resurrected to enter into the process of salvation I describe. Someone who thinks seriously about the problem just adds an explanation of how the dynamics works, or tries to make the process intelligible.
(b) The thesis I am defending does assert that salvation or "justification" is brought about by faith, or, more precisely, by faith in Christ's resurrection with its message that death has been defeated. 26 I am here using the world faith in a strong sense, the sense of the credere in Deum, of strong conviction, a deep and sincere adherence that causes noticeable changes in the believer's interactions with the world. It is faith used in this sense that has certain consequences for the believer's salvation in my remarks above. It is again, faith used in this strong sense that contemporary authors call performative: "the Christian message was not only 'informative' but 'performative'"; "is the Christian faith […] 'performative' for us -is it a message which shapes our life in a new way, or is it just 'information' which, in the meantime, we have set aside and which now seems to us to have been superseded by more recent information?"27 If, by 'performative" (in the broad sense) 28 we understand the property whereby certain beliefs not only represent a state of affairs ("information"), but also produce a change by forming or transforming other beliefs and behaviours, then we can indeed speak of the saving character of performative faith in Christ's resurrection: the belief that death has been defeated has the characteristic effect of producing salvation. Being freed from sin -that is, being less tempted, better able to resist temptation, committing fewer or even none of the morally reprehensible acts described above -flows directly from the radical modification in the meaning of death that comes from accepting the belief in Christ's resurrection. c) On the other hand, in this defence of salvation through faith, I do not mean to commit myself to a specific position in the age-old (though nowadays mostly becalmed) theological debate about the respective roles of faith and works (i.e., individual actions) in salvation. Even if my thesis can nominally evoke the sola fide of the Reformation, since it gives faith the essential role in the production of salvation, it is in fact closer to the position generally thought of as the Catholic one: where the emphasis is on how man is brought into a situation where he can act well, and, using his freedom correctly, (subjectively) appropriate the salvation Christ brings (objectively). In this light, justification becomes the fact of finding oneself in a new practical context, one where obstacles and obstructions to right action no longer bind us when we act, so that it becomes possible for us to be just. On this interpretation, then, justification is more a journey than a consequence of faith in the resurrection that is acquired once and for all; it is not so much a state in which we find ourselves, as the lifelong tension required for becoming just. Collectively, it is less an event we can describe as having happened, than the process in which, according to Christianity, human beings have been involved since the discovery of the empty tomb. The idea that what leads to salvation is revealed implies a certain receptivity, or even passivity, as the intervention of a God or saviour implies heteronomy. But, it turns out, none of these are incompatible with the individual's having, or needing to have some personal agency in the process of his salvation, that is, with the possibility that the individual's actions will have the character of an auto-salvation. 
Receptivity and passivity are the starting point, necessary conditions for a new mode of action that an individual can't adopt on his own. d) Finally, I want to say more about what it means for this discussion of Christ's saving action to shift the focus from the passion to the resurrection -something scholastic theology often saw as nothing more than a happy ending (it's always better when the good guy wins) or else a "miracle" meant to elicit or strengthen faith. 29 First of all, the shift of focus does not imply that the darkness of Good Friday was useless or superfluous, that Jesus's dying on the cross after a degrading agony was irrelevant, or that nothing essential would be missing if he had died peacefully in his bed before being resurrected a few days later, or even more to the point, if he had proclaimed his victory over death without dying himself. First, as theologians in the first few centuries explained in their arguments against the docetist heresy, the reality of Christ's resurrection and humanity require the reality of his death. Second, the passion insofar as it is sorrowful and negative, and the resurrection, are like two facets of the same event -an event that brings salvation 30 and revelation -which can only be fully understood by focusing alternatively on one or the other of its facets. In its initial phase, this event reveals that salvation is not obtained easily; it is not obtained through the means usually promoted in the world (power, riches, honour, will to dominate), through violence, the desire to punish or to seek vengeance. Christ's passion shows that the road to salvation is hard, and that the fight against evil and everything that leads to it is sometimes painful and can require great sacrifice. The passion reminds us that "there is here an ordered sequence that Christ himself followed: first the passion, then the glorification. […] As long as our life here on earth lasts, suffering and death come before joy and resurrection. " 31 As the theologians say, it was "appropriate, " from this point of view, that Christ should die in the agony of the cross. However, the passion (and this includes its role in salvation) can be fully understood only in the light of the later event that gives it meaning. A story in which Jesus was only crucified would have a completely different meaning. Or it might not have any meaning at all, like a symbol of the absurdity and cruelty of the world. So, following Karl Barth, I want to warn those who think seriously about Christian justification and salvation against the Nordic Melancholy of a (good) Friday theology, abstractly focused on the cross alone and forgetful of Resurrection Sunday.
The seriousness with which we insist on the starting point of justification is a good and necessary thing, that is, on the fact […] that we can only go in one direction: from the death [of Jesus] on the cross to his resurrection. And so we must consider first what is past, that is, our death which he suffered, then what is future, that is, the life he received. […] But we must 30 Ultimately, Thomas Aquinas doesn't disagree: "as to efficiency, which comes of the Divine power, the Passion as well as the Resurrection of Christ is the cause of justification. " Summa Theologiae III, question 56, article 2 ad 4. This same question in the Summa parcels out each one's role by distinguishing two aspects in the "complete" concept of redemption: the passion and death of Christ cause the forgiveness of sins by providing satisfaction, while the resurrection institutes a new life. In every case, finding "the correct dose" of passion and resurrection respectively seems to be one of the central concerns of Christian soteriology.
see to it that this seriousness -there are examples of this both in Roman Catholic and also in Protestant circles -does not, at a certain point which is hard to define, become a pagan instead of a purely Christian seriousness, changing suddenly into a "Nordic morbidity", losing the direction in which alone it can have any Christian meaning, suddenly beginning to look backwards instead of forwards, transforming itself into the tragedy of an abstract Theologia crucis which can have little and finally nothing whatever to do with the Christian knowledge of Jesus Christ.
31 Hans Urs von Balthasar, Theodramatik, 5 vol., Fribourg-Einsiedeln, Johannes Verlag, 1973-1983, vol. 4.
[…] The knowledge of our justification as it has taken place in Him can not possibly be genuinely serious except in this joy, the Easter joy. [START_REF] Barth | Die Kirchliche Dogmatik[END_REF] Finally, I do not for a moment claim that this explanation of how salvation works is the only valid one or that it excludes all others. It is, for instance, perfectly compatible with the satisfaction or atonement theory. Ultimately, I think that the best description of the Christian conception of salvation is this: "Christ's incarnation, his life and acts, his passion, death and resurrection, are what make salvation from sin possible. " Of course, one could say that all this constitutes just one event of salvation, the "Jesus Christ event". But as soon as we try to explain how this salvation really works, we end up distinguishing different explanatory frameworks; some that focus more on the incarnation's saving power as is the case with so-called deification-theories, others like satisfaction theories, that focus more on the passion. As a philosopher, and following in the footsteps of some of Saint Paul's texts 33 , I have wanted to draw attention to the soteriological value of faith in the resurrection. 34 I don't think that anyone can be certain with categorical certainty, that the proposition "Christ is risen from the dead" is true. Accepting it will always imply an irreducible element of faith, something like a bet that answers the existential question "what may I hope?" as much as the historical one "what can I know?". But if this proposition is true, if death really has been conquered by a fully human person, this implies an unprecedented existential transformation, whose full array of consequences, in my view, contemporary philosophers, including those who have paid a lot of attention to the relation between life and death (phenomenology, for instance, in Heidegger) have not analyzed. I have tried here to sketch out a few of those consequences, while at the same time suggesting that there is a real existential benefit to the belief that Christ is risen from the dead, in betting, as Pascal understood the word, that he is really risen.
speaking, the fact of dying and, in an analogical sense, the "spiritual death" brought about by the breakdown in the relationship with God. If we take "death" in the strict and biological sense, achievements of modern science seem difficult to reconcile with this view (as, in all cases, the theme of a unique and temporally determined peccatum originale originans). My interpretation avoids this problem, considering that sin, as a situation, and from there the sins, as actions, follow from mortality as it is spontaneously understood as a fundamental characteristic, both biological and existential , of humanity. The point is not to identify "sin" and "finiteness", but to show how sin stems from a form of spiritual negativity inherent to a certain understanding of finiteness, probably de facto inevitable, but not insurmountable.
[…] Turn yourself away to return […] to freedom […]. And now, are you not willing to come to your own rescue? […] If you seek something better, go ahead and continue what you are doing. Not even a god can save you. " 12
See, for example, Ecce Homo, "Why I am so clever, " I; Thus Spoke Zarathustra, I and II.
Cf. Aristotle, Nichomachean Ethics, I, 2
See Karl Löwith, Meaning in History: The Theological Implications of the Philosophy of History (Chicago, University of Chicago Press, 1949), ch. 2
See, for example, Acts of the Apostles, 4:12, and 13:23; I John, 4:14; Gospel of Luke, 2:11.
For more fully developed typologies, as well as detailed studies on the theme of salvation in different New Testament texts, see, for example, Le Salut Chrétien. Unité et diversité des conceptions à travers l'histoire, ed. Jean-Louis Leuba(Paris, Desclée, 1995); Salvation in the New Testament, Perspectives on Soteriology, ed. Jan G. van den Watt(Leiden-Boston, Brill, 2005); Alloys Grillmeier, "Die Wirkung des Heilshandelns Gotte in Christus" in Mysterium Salutis, ed. Johannes Feiner and Dan Magnus Löhrer(Einsiedeln, Benziger, 1969), vol. III-2, 327-390.
See, for example, Jean Lacroix, Spinoza et le problème du salut,(Paris, PUF, 1970). The explicit goal of Spinoza's Ethics is to "lead, as if by the hand, to knowledge of the human mind and its supreme blessedness" (beginning of part II) which is identified as "salvation" (V, 36, scolie).
Posthumous text cited in Didier Franck, Nietzsche et l' ombre de Dieu (Paris, PUF, 1998) 427. Cf. Ecce Homo, "Why I am so wise, " § 2: "I took myself in hand and I healed myself. " We can also remember the taunt "Save yourself " shouted at Christ on the cross according to the Gospels (Matthew 27:40).
I am following, with some modifications, W. A. Oldfather's translation in Epictetus, The discourses as Reported byArrian, The Manual, and Fragments, LOEB Classical Library, volume II,
Paul-Évode Beaucamp, Supplément au Dictionnaire de la Bible(Paris, Letouzey et Ané), vol.11, col. 516.
The different components of this definition are taken from Raymond Winling, La Bonne nouvelle du salut en Jésus-Christ. Sotériologie du Nouveau Testament(Paris, Cerf, 2007).
In this order: Henri de Lubac, Le Mystère du surnaturel,([1965], Paris, Cerf,
2000) 20 ; Bernard Sesboüe, Jésus-Christ dans la tradition de l'Église([1982], Paris, Desclée de Brouwer, 2000) 238.
Lucretius, De natura rerum, III, translation Cyril Bayley
Marx, Economic and Philosophic Manuscripts of 1844. Third Manuscript, Translation Martin Milligan.
On this theme, see for example Richard Swinburne, The Resurrection of God Incarnate,(Oxford, Clarendon Press, 2003).
See the famous claim by Paul of Tarsus in I Corinthians, 15:14: "if Christ had not been raised, then our proclamation has been in vain and your faith has been in vain. "(New Revised Standard Version)
Cf. Saint Augustin, Contra Faustum, 16, 29, Migne PL, vol. 42, col. 336 : "The very resurrection in which we believe justifies us. "
Joseph Ratzinger/Benedict XVI, Encyclical Spe Salvi, §2 et §10.
In the narrow sense (that of J.L. Austin) used in the philosophy of language, only public utterances (and not mental states) that actualize what they describe, or "do what they say", are called 'performative' .
See, for example, Cajetan's commentary on Saint Paul (On Romans 4:25), Epistolae Pauli et aliorum apostolorum [...] juxta sensum literalem enarratae [1531].
Andrey Surzhykov
José Paulo Santos
Pedro Amaro
Paul Indelicato
Negative continuum effects on the two-photon decay rates of hydrogen-like ions
PACS numbers: 31.30.J-, 32.80.Wr
Two-photon decay of hydrogen-like ions is studied within the framework of second-order perturbation theory, based on relativistic Dirac's equation. Special attention is paid to the effects arising from the summation over the negative-energy (intermediate virtual) states that occurs in such a framework. In order to investigate the role of these states, detailed calculations have been carried out for the 2s 1/2 → 1s 1/2 and 2p 1/2 → 1s 1/2 transitions in neutral hydrogen H as well as for hydrogen-like xenon Xe 53+ and uranium U 91+ ions. We found that for a correct evaluation of the total and energy-differential decay rates, summation over the negative-energy part of Dirac's spectrum should be properly taken into account both for high-Z and low-Z atomic systems.
I. INTRODUCTION
Experimental and theoretical studies on the two-photon transitions in atomic systems have a long tradition. Following seminal works by Göppert-Mayer [1] and by Breit and Teller [2], a large number of investigations have been performed in the past which focused on the decay of metastable states of light neutral atoms and low-Z ions. These investigations have dealt not only with the total and energy-differential decay rates [3,4,5] but also with the angular distributions [6,7,8,9] and even polarization correlations between the two emitted photons [10,11,12]. Detailed analysis of these two-photon properties has revealed unique information about electron densities in astrophysical plasmas and thermal X-ray sources, highly precise values of physical constants [13], structural properties of few-electron systems including subtle quantum electrodynamical (QED) effects [14], as well as about the basic concepts of quantum physics such as, e.g., non-locality and non-separability [15].
Beside the decay of metastable states of low-Z systems, much of today's interest is focused also on the two-photon transitions in high-Z ions and atoms which provide a sensitive tool for improving our understanding of the electron-photon interactions in the presence of extremely strong electromagnetic fields [16]. In such strong fields produced by heavy nuclei, relativistic and retardation effects become of paramount importance and may strongly affect the properties of two-photon emission. To explore these effects, therefore, theoretical investigations based on Dirac's equation have been carried out for the total and energy-differential decay rates [17,18,19,20,21] as well as for the angular and polarization correlations [22,23,24]. In general, relativistic predictions for the two-photon total and differential properties have been found in good agreement with experimental data obtained for the decay of inner-shell vacancies of heavy neutral atoms [25,26] and excited states of high-Z few-electron ions [27].
Although intensive experimental and theoretical efforts have been undertaken recently to understand relativistic effects on the two-photon transitions in heavy ions and atoms, a number of questions still remain open. One of the questions, which currently attracts much interest, concerns the role of negative-energy solutions of Dirac's equation in relativistic two-photon calculations. Usually, these calculations are performed within the framework of second-order perturbation theory and, hence, require summation over the (virtual) intermediate ion states. Such a summation, running over the complete spectrum, should obviously include not only positive- (discrete and continuum) but also negative-eigenenergy Dirac states. One might expect, however, that since the energy release in two-photon bound-bound transitions is less than the energy required for electron-positron pair production, the contribution from the negative part of Dirac's spectrum should be negligible even for the decay of the heaviest elements. From a practical viewpoint, this assumption justifies the restriction of the intermediate-state summation to the positive-energy solutions only. Exclusion of the negative continuum would lead, in turn, to a significant simplification of the second-order relativistic calculations, especially for many-electron systems for which the problem of the (many-particle) negative continuum still remains unsolved.
Despite the (relatively) small energy of two-photon transitions, the influence of Dirac's negative continuum in second-order calculations should be further questioned because of the possibility of production and subsequent annihilation of virtual anti-particles. It has been argued, for example, that transitions involving positron states have to be taken into account for the proper description of Thomson scattering [Sakurai, Advanced Quantum Mechanics], the interaction of ions with intense electromagnetic pulses [Boca; 30] in the "undercritical" regime, as well as magnetic transitions in two-electron ions [31,32,33]. Moreover, the first step towards the analysis of negative-energy contributions to the two-photon properties has been done by Labzowsky and co-workers [34] who focused on E1M1 and E1E2 2p 1/2 → 1s 1/2 total decay probabilities. The relativistic calculations have indicated the importance of negative-energy contributions not only for high-Z but also for low-Z hydrogen-like ions.
In this work, we apply the second-order perturbation theory based on the relativistic Dirac equation in order to re-analyze atomic two-photon decay. We pay special attention to the influence of negative continuum solutions on the evaluation of the transition amplitudes and, hence, on the total and energy-differential decay rates. For the sake of clarity, we restrict our analysis to the decay of hydrogen-like ions for which both the positive- and negative-energy parts of Dirac's spectrum can still be studied in a systematic way by making use of a finite basis set method [19]. Implementation of this method for computing relativistic second-order transition amplitudes is briefly discussed in Sections II A and II B. Later, in Section II C, we consider an alternative, semiclassical, approach which allows analytical evaluation of the negative-energy contributions to the two-photon matrix elements and transition rates. These two approaches, semiclassical and fully relativistic, are used in Section III to calculate the energy-differential and total decay rates for several multipole terms in the 2s 1/2 → 1s 1/2 and 2p 1/2 → 1s 1/2 two-photon decay of neutral hydrogen as well as hydrogen-like xenon Xe 53+ and uranium U 91+ ions. Based on the results of our calculations, we argue that both the total transition probabilities and the photon energy distributions can be strongly affected by the negative-state contributions; this effect is most clearly observed for the non-dipole transitions not only in the high-Z but also in the (non-relativistic) low-Z domain. A brief summary of these findings and an outlook are finally given in Section IV.
II. THEORY

A. Differential and total decay rates
Not much has to be said about the basic formalism for studying the two-photon transitions in hydrogen-like ions. In the past, this formalism has been widely applied in order to investigate not only the total decay probabilities [17,18,19,34] but also the energy as well as angular distributions [23] and even the correlation in the polarization state of the photons [15,24]. Below, therefore, we restrict ourselves to a rather brief account of the basic expressions, just enough for discussing the role of negative-energy solutions of Dirac's equation in computing of the two-photon (total and differential) rates.
The properties of the two-photon atomic transitions are evaluated, usually, within the framework of the second-order perturbation theory.
When based on Dirac's equation, this theory gives the following expression for the differential in energy decay rate:
$$\frac{dw}{d\omega_1} = \frac{\omega_1\,\omega_2}{(2\pi)^3 c^3}\,\left|\sum_{\nu}\left(\frac{\langle f|A^*_2|\nu\rangle\,\langle\nu|A^*_1|i\rangle}{E_\nu - E_i + \omega_1} + \frac{\langle f|A^*_1|\nu\rangle\,\langle\nu|A^*_2|i\rangle}{E_\nu - E_i + \omega_2}\right)\right|^2 d\Omega_1\, d\Omega_2\,, \qquad (1)$$
where the transition operators A^*_j with j = 1, 2 describe the (relativistic) electron-photon interaction. For the emission of photons with wave vectors k_j and polarization vectors ê_j these operators read as:
$$A^*_j = \boldsymbol{\alpha}\cdot(\hat{e}_j + G\,\hat{k}_j)\,e^{-i\mathbf{k}_j\cdot\mathbf{r}} - G\,e^{-i\mathbf{k}_j\cdot\mathbf{r}}\,, \qquad (2)$$
where α is a vector of Dirac matrices and G is an arbitrary gauge parameter. In the calculations below, following Grant [35], we employ two different gauges that are known to lead to well-known non-relativistic operators. First, we use the so-called Coulomb gauge, when G = 0, which corresponds to the velocity form of the electron-photon interaction operator in the non-relativistic limit.
As the second choice we adopt G = (L + 1)/L in order to obtain the Babushkin gauge which reduces, for the particular case of L = 1, to the dipole length form of the transition operator. In Eq. (1), |i⟩ ≡ |n_i κ_i μ_i⟩ and |f⟩ ≡ |n_f κ_f μ_f⟩ denote solutions of Dirac's equation for the initial and final ionic states, while E_i ≡ E_{n_i κ_i} and E_f ≡ E_{n_f κ_f} are the corresponding one-particle energies. Because of energy conservation, E_i and E_f are related to the energies ω_{1,2} = ck_{1,2} of the emitted photons by:
$$E_i - E_f = \omega_1 + \omega_2\,. \qquad (3)$$
From this relation, it is convenient to define the so-called energy sharing parameter y = ω_1/(ω_1 + ω_2), i.e., the fraction of the energy which is carried away by the "first" photon.
As usual in atomic physics, the second-order transition amplitudes in Eq. (1) and, hence, the two-photon transition rates can be further simplified by applying the techniques of Racah's algebra if all the operators are presented in terms of spherical tensors and if the (standard) radial-angular representation of Dirac's wavefunctions is employed. For the interaction of the electron with the electromagnetic field, the spherical tensor components are obtained from the multipole expansion of the operator A^*_j (see Refs. [18,19] and Varshalovich, Quantum Theory of Angular Momentum, for further details). By using such an expansion, we are able to re-write Eq. (1) as a sum of partial multipole rates
$$\frac{dw}{d\omega_1} = \sum_{\Theta_1 L_1 \Theta_2 L_2} \frac{dW_{\Theta_1 L_1 \Theta_2 L_2}}{d\omega_1}\,, \qquad (4)$$
which describe the emission of two photons of electric (Θ_j = E) and/or magnetic (Θ_j = M) type carrying away the angular momenta L_1 and L_2. For the decay of an unpolarized ionic state |n_i κ_i⟩, in which the emission angles as well as the polarization of both photons remain unobserved, these partial multipole rates are given by [18]:
$$\frac{dW_{\Theta_1 L_1 \Theta_2 L_2}}{d\omega_1} = \frac{\omega_1\,\omega_2}{(2\pi)^3 c^3} \sum_{\lambda_{\Theta_1}\lambda_{\Theta_2}} \sum_{\kappa_\nu}\left[\,\bigl|S_{j_\nu}(1,2)\bigr|^2 + \bigl|S_{j_\nu}(2,1)\bigr|^2 + 2\sum_{\kappa'_\nu} d(j_\nu, j'_\nu)\,S_{j_\nu}(2,1)\,S_{j'_\nu}(1,2)\right]\,, \qquad (5)$$
where the angular coefficient d(j_ν, j'_ν) is defined by the phase factor and 6j Wigner symbol:
$$d(j_\nu, j'_\nu) = \sqrt{(2j_\nu+1)(2j'_\nu+1)}\;(-1)^{2j_\nu + L_1 + L_2}\,\begin{Bmatrix} j_f & j'_\nu & L_1 \\ j_i & j_\nu & L_2 \end{Bmatrix}\,, \qquad (6)$$
and the radial integral part is expressed in terms of the reduced matrix elements of the multipole (electric and magnetic) field operators:
$$S_{j_\nu}(1,2) = \sum_{n_\nu} \frac{\langle n_f \kappa_f \| \hat{a}^{\,\lambda_{\Theta_1}*}_{L_1} \| n_\nu \kappa_\nu\rangle\,\langle n_\nu \kappa_\nu \| \hat{a}^{\,\lambda_{\Theta_2}*}_{L_2} \| n_i \kappa_i\rangle}{E_\nu - E_i + \omega_2}\,. \qquad (7)$$
The summation over λ_{Θ_j} in Eq. (5) is restricted to λ_{Θ_j} = ±1 for the electric (Θ_j = E) and λ_{Θ_j} = 0 for the magnetic (Θ_j = M) photon transitions.
Until now, we have discussed the general expressions for the two-photon transition rates which are differential in energy ω 1 of one of the photons. By performing an integration over this energy one may easily obtain the total rate that is directly related to the lifetime of a particular excited state against the two-photon decay. As it follows from Eq. ( 4), such a total rate can be represented as a sum of its multipole components:
$$w_{\mathrm{tot}} = \sum_{\Theta_1 L_1 \Theta_2 L_2} W_{\Theta_1 L_1 \Theta_2 L_2} \equiv \sum_{\Theta_1 L_1 \Theta_2 L_2} \int_0^{\omega_t} \frac{dW_{\Theta_1 L_1 \Theta_2 L_2}}{d\omega_1}\, d\omega_1\,, \qquad (8)$$
where ω_t = E_i − E_f is the transition energy.
As seen from Eqs. ( 4)-( 8), any analysis of the differential as well as total two-photon decay rates can be traced back to the (reduced) matrix elements that describe the interaction of an electron with the (multipole) radiation field. Since the relativistic form of these matrix elements is applied very frequently in studying the various atomic processes, we shall not discuss here their evaluation and just refer the reader for all details to references [18,19,35]. Instead, in the next section we will focus on the summation over the intermediate states |n ν κ ν which appears in the second-order transition amplitudes (see Eq. ( 7)).
B. Summation over the intermediate states
The summation over the intermediate states in Eq. (7) runs over the complete one-particle spectrum |n_ν κ_ν⟩, including a summation over the discrete part of the spectrum as well as an integration over the positive- and negative-energy continuum. In practice, of course, performing such an infinite-state summation is a rather demanding task. A number of methods have been developed over the last decades in order to evaluate the second-order transition amplitudes consistently. Apart from the Green's function approach [23,36] which, in the case of a purely Coulomb potential, allows for the analytical computation of Eq. (7), the discrete-basis-set summation is widely used nowadays in two-photon studies [19]. A great advantage of the latter method is that it allows one to separate the contributions from the positive- and negative-energy solutions in the intermediate-state summation. Since the effects that arise from the negative-energy spectrum are in the focus of the present study, we apply for the calculations below the finite (discrete) basis solutions constructed from B-spline sets.
Although the B-spline basis set approach has been discussed in detail elsewhere [19,37,38], here we briefly recall its main features. In this way, we shall consider the ion (or atom) under consideration to be enclosed in a finite cavity with a radius R large enough to get a good approximation of the wavefunctions with some suitable set of boundary conditions, which allows for a discretization of the continua. Wavefunctions that describe the quantum states |ν⟩ ≡ |n_ν κ_ν⟩ of such a "particle in a box" system can be expanded in terms of basis set functions φ^i_ν(r) with i = 1, ..., 2N which, in turn, are found as solutions of the Dirac-Fock equation,
$$\begin{pmatrix} V(r) & c\left(\dfrac{d}{dr} - \dfrac{\kappa_\nu}{r}\right) \\[6pt] -c\left(\dfrac{d}{dr} + \dfrac{\kappa_\nu}{r}\right) & -2c^2 + V(r) \end{pmatrix}\,\phi^{\,i}_\nu(r) = \epsilon^{\,i}_\nu\,\phi^{\,i}_\nu(r)\,, \qquad (9)$$
where ε^i_ν = E^i_ν − mc² and V(r) is the Coulomb potential of a uniformly charged finite-size nucleus. For computational reasons, each function φ^i_ν(r) is expressed as a linear combination of B-splines, as originally proposed in Ref. [37] by Johnson and co-workers.
For each quantum state |ν⟩ the set of basis functions φ^i_ν(r) spans both positive- and negative-energy solutions. Solutions labeled by i = 1, ..., N describe the negative continuum with ε^i_ν < −2mc², while solutions labeled by i = N + 1, ..., 2N correspond to the first few states of the bound-state spectrum as well as to the positive continuum with ε^i_ν > 0. Thus, by selecting the proper sub-set of basis functions φ^i_ν(r) we may explore the role of the negative continuum in the computation of the properties of two-photon emission from hydrogen-like ions.
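To make the bookkeeping of Eq. (7) concrete, the following Python sketch illustrates how a finite-basis summation can be restricted to the positive- or negative-energy part of the spectrum. It is only a schematic illustration with synthetic inputs, not the code used by the authors; the array names and the rest-energy threshold convention are assumptions made here for illustration.

```python
import numpy as np

def second_order_sum(eps_nu, m_f_nu, m_nu_i, eps_i, omega, part="all"):
    """Finite-basis evaluation of the sum in Eq. (7).

    eps_nu : energies eps^i_nu = E^i_nu - mc^2 of the 2N basis states (a.u.)
    m_f_nu : reduced matrix elements <f||a*_L1||nu> for each basis state
    m_nu_i : reduced matrix elements <nu||a*_L2||i> for each basis state
    eps_i  : energy of the initial state (same convention as eps_nu)
    omega  : photon energy entering the denominator
    part   : "all", "positive" or "negative" part of Dirac's spectrum
    """
    mc2 = 137.035999 ** 2            # electron rest energy in atomic units
    if part == "negative":
        mask = eps_nu < -2.0 * mc2   # negative continuum: eps < -2mc^2 (Sec. II B)
    elif part == "positive":
        mask = eps_nu >= -2.0 * mc2  # bound states and positive continuum
    else:
        mask = np.ones_like(eps_nu, dtype=bool)
    # Rest energies cancel in the denominator: E_nu - E_i + omega = eps_nu - eps_i + omega.
    return np.sum(m_f_nu[mask] * m_nu_i[mask] / (eps_nu[mask] - eps_i + omega))

# Synthetic example: the full amplitude is the sum of the two restricted ones.
rng = np.random.default_rng(0)
eps = np.concatenate([-2 * 137.036**2 - rng.uniform(0, 10, 50), rng.uniform(-0.5, 10, 50)])
mf, mi = rng.normal(size=100), rng.normal(size=100)
s_all = second_order_sum(eps, mf, mi, -0.5, 0.1, "all")
s_pos = second_order_sum(eps, mf, mi, -0.5, 0.1, "positive")
s_neg = second_order_sum(eps, mf, mi, -0.5, 0.1, "negative")
print(np.isclose(s_all, s_pos + s_neg))  # True: the split happens at the amplitude level
```

The point of the mask is that the split into positive- and negative-energy contributions happens in the amplitude, before squaring; the rates W+, W- and Wt quoted in Tables I and II below are obtained from the corresponding restricted amplitudes.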
C. Semi-relativistic approximation
Based on the relativistic theory, the expressions obtained in the previous section allow us to study the influence of Dirac's negative continuum on the properties of two-photon emission from hydrogen-like ions with nuclear charge in the whole range 1 ≤ Z ≤ 92. For the low-Z ions, moreover, it is also useful to estimate the negative-energy contributions within the semi-relativistic approach as proposed in the work by Labzowsky and co-workers [34]. To perform such a semi-relativistic analysis let us start from Eq. (1) in which we retain the sum only over the negative-energy continuum states. Since the total energy of these states is E_ν = −(T_ν + mc²), the corresponding energy denominator of the second-order transition amplitude can be written as E_ν − E_i + ω_j ≈ −2mc², which leads to the following expression for the differential decay rate:

$$\frac{dw^{(-)}}{d\omega_1} = \frac{\omega_1\,\omega_2}{(2\pi)^3 c^3}\,\frac{1}{4(mc^2)^2}\,\left|\sum_{\nu\in(-)}\langle f|A^*_2|\nu\rangle\,\langle\nu|A^*_1|i\rangle + \langle f|A^*_1|\nu\rangle\,\langle\nu|A^*_2|i\rangle\right|^2 d\Omega_1\, d\Omega_2\,. \qquad (10)$$

FIG. 2: (Color online) Energy-differential decay rates for the (sum of the) 2M1, 2E2 and E2M1 2s 1/2 → 1s 1/2 multipole two-photon transitions in hydrogen and hydrogen-like ions. Relativistic calculations have been carried out by performing intermediate-state summation over the complete Dirac spectrum (solid line) as well as by restricting this summation to the positive- (dashed line) and negative-energy (dotted line) states only. Results of relativistic calculations are compared also with the semi-relativistic prediction (dot-dashed line) as given by Eq. (14).
For further simplification of this expression we shall make use of the multipole expansion of the electron-photon interaction operators (2). For the sake of simplicity, we restrict this semi-relativistic analysis to the case of the Coulomb gauge (G = 0) in which the operator A^*_j can be written as:
$$A^*_j = \boldsymbol{\alpha}\cdot\hat{e}_j\left(1 - i\,\mathbf{k}\cdot\mathbf{r} + \tfrac{1}{2}\,(-i\,\mathbf{k}\cdot\mathbf{r})^2 + \ldots\right)\,, \qquad (11)$$
if one expands the photon exponential exp(−ik·r) into a Taylor series.
In contrast to the "standard" spherical tensor expansion [18; Varshalovich, Quantum Theory of Angular Momentum], the series (11) usually does not allow one to make a clear distinction between the different multipole components of the electromagnetic field. For instance, while the first term in Eq. (11) describes, within the non-relativistic limit, the electric dipole (E1) transition, the term (−ik·r) gives rise both to the magnetic dipole (M1) and the electric quadrupole (E2) channels. Such an approximation, however, is well justified for our (semi-relativistic) analysis which just aims to estimate the role of negative continuum states in the different (groups of) multipole two-photon transitions in light hydrogen-like ions. In particular, by adopting A^*_j = −α·ê_j (ik·r) for both operators in Eq. (10) we may find the contribution from the negative spectrum to the 2M1, 2E2 and E2M1 2s 1/2 → 1s 1/2 transition probabilities:
$$\frac{dw^{(-)}_{M1,E2}}{d\omega_1} = \frac{\omega_1\,\omega_2}{(2\pi)^3 c^3}\,\frac{1}{4(mc^2)^2}\left|\sum_{\nu\in(-)}\langle f|\boldsymbol{\alpha}\cdot\hat{e}_2\,(\mathbf{k}_2\cdot\mathbf{r})|\nu\rangle\,\langle\nu|\boldsymbol{\alpha}\cdot\hat{e}_1\,(\mathbf{k}_1\cdot\mathbf{r})|i\rangle + \langle f|\boldsymbol{\alpha}\cdot\hat{e}_1\,(\mathbf{k}_1\cdot\mathbf{r})|\nu\rangle\,\langle\nu|\boldsymbol{\alpha}\cdot\hat{e}_2\,(\mathbf{k}_2\cdot\mathbf{r})|i\rangle\right|^2 d\Omega_1\, d\Omega_2\,. \qquad (12)$$
Here, the summation over the intermediate states |ν⟩ is restricted to the negative-energy solutions of the Dirac equation for the electron in the field of the nucleus. In the non-relativistic limit these states form a complete set of solutions of the Schrödinger equation for a particle in a repulsive Coulomb field [Landau and Lifshitz, Quantum Mechanics]. By employing a closure relation for such a set we re-write Eq. (12) in the form:
$$\frac{dw^{(-)}_{M1,E2}}{d\omega_1} = \frac{\omega_1\,\omega_2}{(2\pi)^3 c^3}\,\frac{1}{(mc^2)^2}\,\bigl|(\hat{e}_1\cdot\hat{e}_2)\,\langle f|(\mathbf{k}_1\cdot\mathbf{r})(\mathbf{k}_2\cdot\mathbf{r})|i\rangle\bigr|^2\, d\Omega_1\, d\Omega_2\,, \qquad (13)$$
where |i⟩ and |f⟩ denote now the solutions of the Schrödinger equation for the initial and final ionic states, respectively. For the particular case of the 2s 1/2 → 1s 1/2 two-photon transition, i.e., when |i⟩ = |2s⟩ and |f⟩ = |1s⟩, this expression finally reads:
$$\frac{dw^{(-)}_{M1,E2}}{d\omega_1} = \frac{2^{22}}{3^{13}}\,\frac{\alpha^{10}}{5\pi Z^4}\;\omega_1^3\,\omega_2^3\,, \qquad (14)$$
if one performs an integration over the photon emission angles as well as a summation over the polarization states (see Ref. [34] for further details). Eq. (14) provides the differential rate for the 2M1, 2E2 and E2M1 two-photon transitions as obtained within the non-relativistic framework and by restricting the summation over the intermediate spectrum |ν⟩ to the negative-energy states only. Being valid for low-Z ions, this expression may also help us to analyze the negative-energy contribution to the total decay rate,

$$w^{(-)}_{M1,E2} = \int_0^{\omega_t} \frac{dw^{(-)}_{M1,E2}}{d\omega_1}\, d\omega_1\,, \qquad (15)$$

where the integration over the photon energy ω_1 is performed.

FIG. 3: (Color online) Energy-differential decay rates for the (sum of the) E1M1 and E1E2 2p 1/2 → 1s 1/2 multipole two-photon transitions in hydrogen and hydrogen-like ions. Relativistic calculations have been carried out by performing intermediate-state summation over the complete Dirac spectrum (solid line) as well as by restricting this summation to the positive- (dashed line) and negative-energy (dotted line) states only. Results of relativistic calculations are compared also with the semi-relativistic prediction (dot-dashed line) as given by Eq. (16).
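For orientation, the integral in Eq. (15) can be carried out in closed form; the short derivation below is not spelled out in the paper, but it follows directly from Eq. (14) with the non-relativistic 2s–1s transition energy ω_t = (3/8)Z² a.u., and it reproduces the coefficient quoted later in Eq. (18):

$$\int_0^{\omega_t}\omega_1^3(\omega_t-\omega_1)^3\,d\omega_1=\frac{\omega_t^7}{140},
\qquad
w^{(-)}_{M1,E2}=\frac{2^{22}}{3^{13}}\,\frac{\alpha^{10}}{5\pi Z^4}\,\frac{\omega_t^7}{140}
=\frac{2\,(\alpha Z)^{10}}{700\cdot 3^{6}\,\pi}\approx 1.248\times 10^{-6}\,(\alpha Z)^{10}.$$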
Apart from the 2M1, 2E2 and E2M1 2s 1/2 → 1s 1/2 two-photon transitions, Eqs. ( 10) and (11) may also be employed to study other decay channels. For example, the negative energy contributions to the differential as well as total rates for the E1M1 and E1E2 2p 1/2 → 1s 1/2 decay read as:
$$\frac{dw^{(-)}_{E1M1,E1E2}}{d\omega_1} = \frac{2^{17}}{3^{12}}\,\frac{\alpha^{8}}{\pi Z^2}\;\omega_1\,\omega_2\,(\omega_1^2 + \omega_2^2)\,, \qquad (16)$$
and
$$w^{(-)}_{E1M1,E1E2} = (\alpha Z)^8\,\frac{2}{5\pi\,3^7} = 5.822\times 10^{-5}\,(\alpha Z)^8\,, \qquad (17)$$
respectively [34]. Together with Eqs. ( 14) and ( 15), we shall later use these non-relativistic predictions in order to check the validity of our numerical calculations in low-Z domain.
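As a quick sanity check on Eqs. (16) and (17), one can integrate the differential rate numerically and compare it with the closed-form coefficient; the following short Python snippet is an illustrative check added here, not part of the original calculations.

```python
import numpy as np
from scipy.integrate import quad

alpha, Z = 1 / 137.035999, 1.0
omega_t = (3 / 8) * Z**2  # non-relativistic 2p -> 1s transition energy (atomic units)

def dw_neg(omega1):
    """Differential negative-continuum rate of Eq. (16), atomic units."""
    omega2 = omega_t - omega1
    return (2**17 / 3**12) * alpha**8 / (np.pi * Z**2) * omega1 * omega2 * (omega1**2 + omega2**2)

w_numeric, _ = quad(dw_neg, 0.0, omega_t)
w_closed = 5.822e-5 * (alpha * Z) ** 8   # Eq. (17)
print(w_numeric, w_closed)               # the two values agree to the quoted precision
```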
III. RESULTS AND DISCUSSION
Having discussed the theoretical background for the two-photon studies, we are prepared now to analyze the influence of Dirac's negative continuum on the total as well as energy-differential decay rates. We shall start such an analysis from the 2s 1/2 → 1s 1/2 transition, which is well established both in theory [18,19,23] and in experiment. For all hydrogen-like ions this transition is dominated by the 2E1 decay channel while all the higher multipoles contribute by less than 0.5% to the decay probability. The energy-differential decay rate given by Eq. (5) for the emission of two electric dipole photons is displayed in Fig. 1 for the decay of neutral hydrogen (H) as well as hydrogen-like xenon Xe 53+ and uranium U 91+ ions. For these ions, relativistic second-order calculations have been done within the Coulomb gauge and by performing the intermediate-state summation over the complete Dirac spectrum (solid line) as well as over the positive- (dashed line) and negative-energy (dotted line) solutions only. As seen from the figure, the negative-energy contribution to the energy-differential decay rate is negligible for low-Z ions but becomes rather pronounced as the nuclear charge Z is increased. For the 2E1 decay of hydrogen-like uranium, for example, exclusion of the negative solutions from the intermediate-state summation in Eq. (7) leads to a reduction of the decay rate of about 20 % when compared with the "exact" result.
While for the leading, 2E1 2s 1/2 → 1s 1/2 transition the negative continuum effects arise only for rather heavy ions, they might strongly affect properties of the higher multipole decay channels in low-Z domain. In Fig. 2, for example, we display the energy distributions of photons emitted in 2M1 and 2E2 transitions. As seen from the upper panel of the figure corresponding to the decay of neutral hydrogen, negative energy part of the Dirac's spectrum gives the dominant contribution to the (sum of the) differential rates for these decay channels. With the increasing nuclear charge Z, the role of positive energy solutions also becomes more pronounced. However, these solutions allow one to describe reasonably well the differential rates (5) only if one of the photons is much more energetic than the second one, i.e., when either y < 0.1 or y > 0.9. For a nearly equal energy sharing (y ≈ 0.5), in contrast, accurate relativistic calculations of the 2M1 and 2E2 rates obviously require summation over both, the negative and the positive energy states.
Apart from the results of relativistic calculations, we also display in Fig. 2 the (sum of the) negative-energy contributions to the 2M1, 2E2, M1E2 and E2M1 2s 1/2 → 1s 1/2 transition probabilities as obtained within the semi-relativistic approach discussed in Section II C. As expected, for low-Z ions both the relativistic (dotted line) and semi-relativistic (dot-dashed line) results basically coincide and are well described by Eq. (14). As the nuclear charge Z is increased, however, the semi-relativistic treatment leads to a slight underestimation of the negative-energy contribution to the two-photon (differential) transition probabilities. For the 2s 1/2 → 1s 1/2 decay of the hydrogen-like uranium ion, for example, the result obtained from Eq. (14) is about 30 % smaller than the corresponding relativistic prediction.
Up to now, we have been considering the 2s 1/2 → 1s 1/2 two-photon decay of hydrogen-like ions. Apart from this experimentally well-studied transition, recent theoretical interest has been focused also on the 2p 1/2 → 1s 1/2 two-photon decay [34]. Although such a channel is rather weak compared to the leading one-photon E1 transition, its detailed investigation is highly desirable for future experiments on parity violation in simple atomic systems [Drukarev]. A number of calculations [34,42] have been performed, therefore, for the transition probabilities of the dominant E1M1 and E1E2 multipole components. In order to discuss the role of Dirac's negative continuum in these calculations, we display in Fig. 3 the energy-differential rate for the sum of the E1M1 and E1E2 2p 1/2 → 1s 1/2 two-photon transitions. Again, the calculations have been carried out within the Coulomb gauge for the electron-photon coupling and for three nuclear charges Z = 1, 54 and 92. As seen from the figure, the negative-energy summation in the second-order transition amplitude (7) is of great importance for an accurate evaluation of the 2p 1/2 → 1s 1/2 transition probabilities both for low-Z and high-Z ions. That is, restriction of the intermediate-state summation to the positive part of Dirac's spectrum results in an overestimation of the E1M1 and E1E2 differential-in-energy decay rates by factors of about 2 and 2.5 for neutral hydrogen and hydrogen-like uranium, respectively.
Similarly to the 2s 1/2 → 1s 1/2 multipole transitions, we make use of semi-relativistic formulae from Section II C to cross-check our relativistic computations for the negative-energy contribution to the E1M1 and E1E2 2p 1/2 → 1s 1/2 decay rates in low-Z domain. Again, while for neutral hydrogen both, semi-relativistic (16) and relativistic approximations produce virtually identical results, they start to differ as the nuclear charge Z is increased.
So far we have discussed the energy-differential decay rates both for the 2s 1/2 → 1s 1/2 and 2p 1/2 → 1s 1/2 two-photon transitions. Integration of these rates over the energy of one of the photons (see Eq. (8)) will yield the total decay rates. In Table I we display the total decay rates for the various multipole channels of the 2s 1/2 → 1s 1/2 two-photon decay. In contrast to the photon energy distributions from above, here the relativistic calculations have been performed in the Coulomb (velocity) as well as the Babushkin (length) gauge. In both gauges, the negative-energy contribution to the (total) probability of the leading 2E1 transition is about eight orders of magnitude smaller than the positive-energy term if the decay of low-Z ions is considered, but is significantly increased for higher nuclear charges. For hydrogen-like uranium, for example, the total 2E1 decay rate is enhanced from 2.9041×10 12 s -1 in the velocity gauge and 2.3939×10 12 s -1 in the length gauge to the gauge-independent "exact" value of 3.8256×10 12 s -1 if, apart from the positive-energy states, the Dirac states with negative energy are taken into account in the transition amplitude Eq. (7). These results clearly indicate the importance of the negative-state summation for the accurate evaluation of 2E1 2s 1/2 → 1s 1/2 total rates in both the velocity and length gauges. It is worth mentioning, however, that while for the velocity gauge our findings are in perfect agreement with the results reported in Ref. [34], some discrepancy was found for the calculations performed in the length gauge, for which Labzowsky and co-workers have argued that the contribution from Dirac's negative continuum is negligible even for the heaviest ions. The reason for this discrepancy is not apparent for the moment and, hence, further investigations are highly required.
In Table I, besides the leading 2E1 decay channel, we present the results of relativistic calculations for the higher multipole contributions to the 2s 1/2 → 1s 1/2 two-photon transition. The influence of Dirac's negative continuum is obviously different for the various multipole combinations. While, for example, the negative-energy contribution to the intermediate-state summation in the low-Z domain is negligible for the E1M2 decay, it becomes of paramount importance for the 2E2 and 2M1 decay channels; an effect that has already been discussed for the case of the energy-differential decay rates (see upper panel of Fig. 2). Moreover, the 2s 1/2 → 1s 1/2 transition with emission of two magnetic dipole (2M1) photons in light ions seems to happen almost exclusively via the negative-energy (virtual) intermediate states. The total decay rate for this transition together with the negative-energy contribution to the probability of the 2E2 channel (evaluated in the Coulomb gauge) gives, in atomic units:

$$w_{2M1} + w^{(-)}_{2E2} = 1.248\times 10^{-6}\,(\alpha Z)^{10}\,, \qquad (18)$$
which is in perfect agreement with the semi-relativistic formula (15). As mentioned above for the computation of the photon energy distributions in the low-Z domain, the negative-energy contribution to the intermediate-state summation is rather pronounced not only for the higher multipole terms of the 2s 1/2 → 1s 1/2 decay but also for the leading E1M1 and E1E2 (two-photon) channels of the 2p 1/2 → 1s 1/2 transition. Our relativistic calculations displayed in Table II indicate that one should account for the negative-continuum summation also for an accurate evaluation of the total decay rates for these two decay channels. For the decay of light elements, a sizable contribution from the negative-continuum intermediate states arises both in the length and velocity gauges. Again, these results partially question the predictions by Labzowsky and co-workers [34] who claimed a minor role of negative-energy terms for E1M1 and E1E2 calculations in the length gauge. For the velocity gauge, in contrast, our relativistic calculations:

$$w^{(-)}_{E1M1} + w^{(-)}_{E1E2} = 5.822\times 10^{-5}\,(\alpha Z)^{8}\,,$$

are in good agreement both with the semi-relativistic prediction (17) and the data presented in Ref. [34].
IV. SUMMARY AND OUTLOOK
In conclusion, the two-photon decay of hydrogen-like ions has been re-investigated within the framework of second-order perturbation theory, based on Dirac's relativistic equation. Special attention has been paid to the summation over the intermediate ionic states which occurs in such a framework and runs over the complete one-particle spectrum, including a summation over discrete (bound) states as well as an integration over the positive and negative continua. In particular, we discussed the role of the negative-energy continuum in an accurate evaluation of the second-order transition amplitudes and, hence, the energy-differential as well as total decay rates. Detailed calculations of these rates have been presented for the 2s 1/2 → 1s 1/2 and 2p 1/2 → 1s 1/2 two-photon transitions in neutral hydrogen as well as hydrogen-like xenon and uranium ions. As seen from the results obtained, both the total decay probabilities and the energy distributions of the simultaneously emitted photons can be strongly affected by the negative-state summation not only for heavy ions but also in the low-Z domain.
We demonstrate, however, that the role of Dirac's negative continuum becomes most pronounced for the higher (non-dipole) terms in the expansion of the electron-photon interaction; a similar effect has recently been reported for the theoretical description of hydrogen-like systems exposed to intense electromagnetic pulses [30].
In the present work, we have restricted our discussion of the negative energy contribution to the second-order calculations of the total and energy-differential decay rates. Even stronger effects due to the Dirac's negative continuum can be expected, however, for the angular and polarization correlations between emitted photons. Theoretical investigation of these correlations which requires also detailed analysis of interference terms between the various (two-photon) multipole combinations is currently underway and will be published soon.
FIG. 1: (Color online) Energy-differential transition rates for the 2E1 2s 1/2 → 1s 1/2 two-photon decay of hydrogen and hydrogen-like ions. Relativistic calculations have been carried out by performing intermediate-state summation over the complete Dirac spectrum (solid line) as well as by restricting this summation to the positive- (dashed line) and negative-energy (dotted line) states only.
TABLE I: Total rates (in s^-1) for the several multipole combinations of the 2s 1/2 → 1s 1/2 two-photon decay. Relativistic calculations have been performed within the velocity and length gauges and by carrying out the intermediate-state summation over the complete Dirac spectrum (Wt) as well as over the positive- (W+) and negative-energy (W-) solutions only.
TABLE II: Total rates (in s^-1) for the several multipole combinations of the 2p 1/2 → 1s 1/2 two-photon decay.

                     Z=1                         Z=54                        Z=92
               length       velocity       length       velocity       length       velocity
E1M1   W+      4.1934(-05)  3.2256(-05)    3.0381(+09)  2.2422(+09)    2.1280(+11)  1.4126(+11)
       W-      1.9355(-05)  9.6773(-06)    1.5701(+09)  7.7417(+08)    1.3745(+11)  6.5902(+10)
       Wt      9.6767(-06)  9.6767(-06)    6.3731(+08)  6.3731(+08)    3.8633(+10)  3.8633(+10)
E1E2   W+      3.6716(-05)  1.2339(-06)    2.6077(+09)  8.6699(+07)    1.7827(+11)  6.3367(+09)
       W-      4.5159(-05)  9.6769(-06)    3.1980(+09)  6.7698(+08)    2.1653(+11)  4.4598(+10)
       Wt      6.6117(-06)  6.6117(-06)    4.2942(+08)  4.2942(+08)    2.3584(+10)  2.3584(+10)
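A useful way to read Table II, noted here as an added observation rather than a statement from the paper: the restricted rates do not add up to the full rate, because the positive- and negative-energy terms are combined in the amplitude of Eq. (7) before squaring, so they interfere. A two-line check on the Z = 1, length-gauge E1M1 entries:

```python
# Values taken from Table II (Z = 1, length gauge, E1M1 channel), in s^-1.
w_plus, w_minus, w_total = 4.1934e-05, 1.9355e-05, 9.6767e-06
print(w_plus + w_minus, w_total)  # 6.13e-05 vs 9.68e-06: rates are not additive,
# the positive- and negative-energy contributions interfere at the amplitude level.
```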
Acknowledgements

A.S. acknowledges support from the Helmholtz Gemeinschaft and GSI under the project VH-NG-421. This research was supported in part by FCT Project No. POCTI/0303/2003 (Portugal), financed by the European Community Fund FEDER, by the French-Portuguese collaboration (PESSOA Program, Contract No. 441.00), and by the Acções Integradas Luso-Francesas (Contract No. F-11/09) and Luso-Alemãs (Contract No. A-19/09). Laboratoire Kastler Brossel is "Unité Mixte de Recherche du CNRS, de l'ENS et de l'UPMC No. 8552." P.A. acknowledges the support of the FCT, under Contract No. SFRH/BD/37404/2007. This work was also partially supported by Helmholtz Alliance HA216/EMMI.
Emmanuel Munch
Mobility is the link allowing the spatial and temporal coordination of daily activities. Based on the precepts set out in the Italian Turco Law of 2000, urban time policies seek to coordinate, reconcile and harmonise the operating hours of urban transport with all the other activities that make up urban life. According to this philosophy, for instance, the arrival time of trains at the station should be coordinated with bus schedules, which in turn are coordinated with the hours of hospital nurses, children's day-care centres, public services and shops so that the various activities can be performed one after the other.
Since the early 2000s, fragmented social rhythms have been a phenomenon with a profound effect on the lifestyles of Western populations. From the point of view of social temporalities, devices, individuals, and institutions seem to function increasingly autonomously and according to an increasingly "personalised" rhythm.
We are witnessing a phenomenon of widespread desynchronisation of daily and weekly temporalities: shops open Sundays and nights, atypical working hours, shorter lunch breaks, longer daily travel time. The synchronisation of social rhythms which segmented daily life in the industrial era is now dislocated in favour of increasingly dispersed travel timing.
However, when we look at these situations from a territorial perspective, the findings are much more ambiguous than the average statistics and leading narratives would suggest.
For example, in many cities in France, Switzerland, and the United States, the increased proportion of employees able to choose their working hours (i.e. autonomy in working hours) comes alongside an increase in the synchronisation of arrivals at work during the morning rush hour.
So we are far from finished with the phenomenon of rush hour.
URBAN TIME POLICIES AND MOBILITY
On the road to "Sustimability"

Since their inception, urban time policies have addressed the ambivalent impacts of empowered travel rhythms. On the one hand, they have sought to adapt the mobility offer to temporally evolved and diversified lifestyles: night buses, on-demand and intermodal transport… Strasbourg's "Revolution of Mobility", which can be found in this publication, offers an example of such diversification.
On the other hand, faced with the persistence of peak times, time offices are designing policies to manage transport demand. These take the form, among other things, of local policies to relieve peak-time transport congestion by staggering working hours. In Europe, time offices¹ were the first local authorities to successfully institute staggered working hours, mainly in universities. To our knowledge, the first such initiative took place in Poitiers in 2001. But without a doubt, the most impactful and most documented is a University of Rennes-led initiative from 2014².
Beyond these issues relating to the re-synchronisation of travel rhythms, now well mastered by time policies, I would like to point out a new issue for the temporal planning of urban spaces. It is today a blind spot in urban time policies, especially when we are interested in mobility.
Transport and (re-)synchronised urban rhythms

From "faster, cheaper" to "slower, closer"

Often, when an urban time office analyses the temporal dimension of mobilities, it is done, implicitly, with a view to 'saving time' and maximising opportunities for interactions and exchanges between individuals, goods, and capital. Time offices' work fits into the usual framework of local action geared towards acceleration and economic growth. In this sense, sometimes in spite of themselves, time offices follow the precepts of the functional city. When I surveyed French, Spanish and Italian cities³, heads of time offices frequently depicted the same tendency: "the community's vocation is to absorb ever more flows".

¹ Time offices are an institutional formulation for the departments in charge of coordinating urban times. This designation can be found in Italy, where they first appeared due to the "Turco Law" of the 2000s, and in France, since the "Aubry II" Act.
² Munch, E. (2014). "Could harmonised working times spell an end to the rush hour?". Métropolitiques.eu, 5. Available at: https://metropolitics.org/IMG/pdf/met-munch-en.pdf
On the upside, by enabling individuals to carry out activities previously hindered by time constraints, accelerated travel flows make it possible to diversify lifestyles. In this way, acceleration could also be conducive to a broadening of life horizons.
The downside, however, is that the effects of accelerated, and particularly motorised, mobility can be questioned in terms of road deaths and urban sprawl. Observed since the first suburban motorway constructions in the 1960s, sprawl takes people away from centres of activity rather than bringing them closer, and ultimately requires them to consume ever more fossil fuels or electricity to get around. Moreover, it is now clear that transport acceleration has a direct and intense impact on the ecological crisis we face in the 21st century. Transport speed, distance travelled, GDP growth, and CO2 consumption are historically closely linked.
Is it not time to move gradually from a model of always "faster, cheaper" to "slower, closer" to live better? The interest of this line of thought in urban spaces appears validated by the aspirations of their populations.
In 2015, an international survey⁴ on city dwellers' aspirations regarding their future lifestyles revealed that, irrespective of respondents' origin, aspirations to slow down the pace of life and return to close relationships were systematically ranked highest. More recent surveys in France confirm this⁵, and awareness-raising drawing on the Covid-19 pandemic would surely reinforce it. What form might mobility take to match these desires? Living at a less frantic pace with close relationships necessarily requires us to rethink our relationship with speed, productivism, and territorial organisation.
These readjustments can already be seen in the practices of certain categories of people, for whom, in 2023, shopping at local stores is likely more important than going to a distant shopping centre. In the same vein, others may now value going on holiday close to home, or a staycation. While we must be cautious about the reasons for this phenomenon, the latest national survey on household travel in France (2019) attests to this finding. It shows an increase in daily travel time, in parallel with a stagnation, or slight decrease, in distances travelled locally. For the first time in the history of national surveys, we are witnessing a slowdown in the average French person's travel speed.

⁴ Descarrega, B., Moati, P., 2016. « Modes de vie et mobilité. Une approche par les aspirations. Phase quantitative », Rapport de recherche, Forum Vies Mobiles, ObsoCo, Paris. Available at https://forumviesmobiles.org/recherches/3240/aspirations-liees-la-mobilite-et-auxmodes-de-vie-enquete-internationale
In the field of mobility policies, it must be noted that local politicians have been involved, in some cases for many years, in organising mobility with a view to easing urban travel, particularly in town centres (switching to walking, cycling, boating, limiting car speeds, etc.).
Slow mobility for desired sustainability: the quest for conviviality
Nevertheless, we lack a global planning model that clearly places the emphasis on preserving forms of slowness in urban metropolises, where intense paces of life are sometimes too pervasive.
The invitation, then, is to collectively define the rhythmic thresholds beyond which temporally intense activities become counterproductive for quality of life and the environment, both for individuals and the community. We are reminded of the seminal work of Ivan Illich (1974)⁶, for whom any travel exceeding 25 km/h wastes time for the community as a whole and therefore for the individual on average.
Where are the rhythmic thresholds of technical progress and economic growth, beyond which "acceleration" becomes too harmful for society? How can we define and evaluate sustainable urban rhythms based on maximum travel speed and limited accessibility? Because of their historical weight, urban time policies have an important role in creating a unified framework to regulate mobility times in the city. Embracing sustainable objectives means putting quality before quantity and shifting priority from material to temporal prosperity. It requires transforming our breathless functional cities into convivial ones, making clear room for a holistic vision of well-being and health in the city. This would necessarily put the spotlight on children, the elderly, and people with disabilities, who are often sidelined in urban life because they are too slow to fit into the ever-accelerating course.
Finally, this new framework for urban time policies means reconciling the political with the individual. It means bringing together collective 'constraints' and individual aspirations by making slower urban mobilities a project for the common good, desirable for both the individual and the collective.
⁶ Illich, I. 1974. Energy and Equity. Harper and Row, New York.

Emmanuel Munch is a chrono-urbanist and socio-economist of transport. His work focuses on the problem of rush hours, the acceleration of life rhythms and the socio-ecological bifurcation.
Amplified and pluri-disciplinary approaches in need
Conclusion
• Not only to re-establish certain historical facts about the relationship between these two traditions
• but more generally because a better understanding of the coexistence and exchanges between different traditions may help shed light upon the cultural plurality of Chosŏn society, a plurality that served to stimulate its social and cultural dynamism

"A rose by any other name would smell as sweet" (Shakespeare)
• Personal and social evolution
- Observation of a person's whole life
- Observation of sociocultural changes over the long term
Petru Mironescu
email: [email protected]
Sobolev maps to manifolds
Motivation. Program. Preliminary remarks.
Keywords: fractional Sobolev spaces, manifold constraint, density, trace theory, covering spaces, lifting, singularities, Jacobian.
Before proceeding further, let us note that a common feature of the above proofs is that the presence of topological invariants prevents the existence of extensions, or strongly approximating sequences, or other classical properties of scalar Sobolev spaces.
We now present a research program, in part initially developed by Bethuel in his groundbreaking contribution [ ], motivated by the pathologies exhibited in Proposition . .
General program
Strong density problems
i) Characterize W^{s,p}(Ω; N) having the strong density property (C^∞(Ω; N) is strongly dense in W^{s,p}(Ω; N)).
ii) If the density property fails, find a class R of maps u "as smooth as possible" dense in W^{s,p}(Ω; N).
iii) If the density property fails, characterize the closure of C^∞(Ω; N) in W^{s,p}(Ω; N).

(Sequential) Weak density problems
i) Characterize W^{s,p}(Ω; N) having the (sequentially) weak density property (C^∞(Ω; N) is (sequentially) weakly dense in W^{s,p}(Ω; N)).
ii) If the (sequentially) weak density property fails, characterize the (sequentially) weak closure of C^∞(Ω; N) in W^{s,p}(Ω; N).
Extension problems
Here, we assume that s is not an integer. (We could also let s be an integer when p = 2.)
i) Characterize W^{s,p}(Ω; N) having the extension property: ∀ u ∈ W^{s,p}(Ω; N), ∃ U ∈ W^{s+1/p,p}(Ω × (0, 1); N) such that tr U = u.
ii) If the extension property fails, characterize tr W^{s+1/p,p}(Ω × (0, 1); N).
Lifting problems
Let π : E → N be a non-trivial (locally isometric) covering map, with E a smooth embedded manifold.
i) Characterize W^{s,p}(Ω; N) having the lifting property, in the sense that for every u ∈ W^{s,p}(Ω; N) there exists some ϕ ∈ W^{s,p}(Ω; E) such that u = π • ϕ.
ii) If the lifting property fails, characterize π • W^{s,p}(Ω; E).
In full generality, this program is still partly open (especially for the weak density problems, which are the new frontier). In what follows, I will present some of the main results in these directions, some basic tools and elements of proofs, and indicate additional results that are beyond the scope of these notes. Before proceeding, let us discard two cases: (i) the easy case where W s,p → C 0 (i. e., when sp > N or s = N and p = 1); (ii) the relatively easy case where sp = N .
Proposition . . Assume that W^{s,p} → C^0. Then W^{s,p}(Ω; N) has the strong (and thus weak) density property, the extension property (provided s is not an integer), and the lifting property.
Proof. The proofs of all the properties are similar: they rely on smoothing and nearest point projection on N . We detail the extension property. Let ρ ∈ C ∞ c (B 1 (0); R + ) be a standard mollifier, and set V (x, ε) := u * ρ ε (x), ∀ ε > 0, ∀ x ∈ Ω ε := {x ∈ Ω; d(x, ∂Ω) < ε}.
( . )
By standard trace theory (see, e. g., [ , Proof of Lemma . ] and Section . ), we have V ∈ W^{s+1/p,p} on its domain. Let δ > 0 be such that the nearest point projection Π on N is smooth on the set {y ∈ R ; d(y, N) ≤ δ}. Let ε_0 be such that
[0 < ε < ε_0, x ∈ Ω_ε] =⇒ d(V(x, ε), N) ≤ δ.
( . )
(The existence of ε 0 follows from the embedding W s,p → C 0 and the definition of V .) Set
T (x, ε) := Π • V (x, ε), ∀ 0 < ε < ε 0 , ∀ x ∈ Ω ε . ( . )
Then T is clearly N -valued and belongs to W s+1/p,p (Theorems . and . ). Let us next note that T is defined on
W := {(x, ε); 0 < ε < ε 0 , x ∈ Ω ε }.
( . )
Picking a diffeomorphism Ψ : Ω × [0, 1] → W such that Ψ(x, 0) = (x, 0), ∀ x ∈ Ω, Ψ(Ω_ε × {ε}) = Ω × {ε}, ∀ 0 ≤ ε ≤ ε_0, we find that U := T • Ψ belongs to W^{s+1/p,p}(Ω × (0, 1); N). Finally, since U(x, ε) = Π • V • Ψ(x, ε) → Π • u • Ψ(x, 0) = u(x) as ε → 0, ∀ x ∈ Ω, we find that U has all the required properties.
QED
The limiting case sp = N is slightly more involved, and requires additional ingredients: the embedding W s,p → VMO (see Theorem . ) combined with a remarkable property of smoothing of VMO maps, made popular by Brezis and Nirenberg [ ] (see Lemma . below, with roots in Schoen and Uhlenbeck [ ] and Boutet de Monvel and Gabber [ ]).
Proposition . . Assume that sp = N . Then W s,p (Ω; N ) has the strong (and thus weak) density property and the extension property (provided s is not an integer).
Proof. We may assume that p > 1, since for p = 1 we have W N,1 → C 0 . We consider only the extension property. The proof is similar to the previous one. The main novelty stems in the proof of the existence of ε 0 satisfying ( . ) (see Lemma . ), since one cannot invoke anymore the continuity of u. Granted the existence of ε 0 , we construct U as above. To see that tr U = u, we argue as follows. (i) We clearly have tr V • Ψ = u. (Start by considering a smooth u, then pass to the limits, using trace theory.) (ii) Extend Π to a smooth compactly supported map, still denoted Π. By trace theory and Theorems . and . , for every map Y ∈ W s+1/p,p (Ω × (0, 1); R ), we have tr Π • Y = Π • (tr Y ). Applying (ii) to Y = V • Ψ and using (i), we find that, for our specific u, we have indeed tr U = u, as claimed. ( . )
Proof. For x ∈ Ω_ε, we have
$$\fint_{B_\varepsilon(x)} |u(y) - u*\rho_\varepsilon(x)|\, dy \le \int_{B_\varepsilon(x)} \fint_{B_\varepsilon(x)} |u(y) - u(z)|\, |\rho_\varepsilon(x-z)|\, dy\, dz \le \omega_N\, \|\rho\|_{L^\infty} \fint_{B_\varepsilon(x)} \fint_{B_\varepsilon(x)} |u(y) - u(z)|\, dy\, dz.$$
We find that there exists some y_0 ∈ B_ε(x) such that
$$|u(y_0) - u*\rho_\varepsilon(x)| \le \omega_N\, \|\rho\|_{L^\infty} \fint_{B_\varepsilon(x)} \fint_{B_\varepsilon(x)} |u(y) - u(z)|\, dy\, dz.$$
For such a y_0, we have
$$d(u*\rho_\varepsilon(x), F) \le |u*\rho_\varepsilon(x) - u(y_0)| \le \omega_N\, \|\rho\|_{L^\infty} \fint_{B_\varepsilon(x)} \fint_{B_\varepsilon(x)} |u(y) - u(z)|\, dy\, dz. \qquad ( . )$$
We conclude using the definition of VMO.
QED
In view of the above, in what follows, we assume, unless specified otherwise, that sp < N .
( . )
Also, in order to simplify the statements, in what follows, we assume, unless specified otherwise, that N ≥ 2.
( . )
Lecture # . Lifting
Recall that the implicit assumptions in this section are N ≥ 2 and sp < N.
Let π ∈ C ∞ (E , N ) be a Riemannian covering. We assume that: (i) E is connected, embedded into some R m ; (ii) π is locally isometric and non-trivial (i. e., π -1 (z) contains at least two points, ∀ z ∈ N ). A special important case is the one of the universal covering of a non-simply connected manifold N . Here are three prototypical examples.
1. N = S^1, E = R, π(t) = e^{it}.
2. N = RP^k, E = S^k (with k ≥ 2), π(t) = {t, −t}.
3. N = S^1, E = S^1 (viewed as subsets of C), π(t) = t^k, with k ∈ Z, |k| ≥ 2.
The last two examples belong to the compact case where E is compact, while the first one belongs to the non-compact case (E is non-compact).
We next discuss the seminorm we consider on W s,p (Ω; E ) when 0 < s < 1. Set
$$|\varphi|^p_{W^{s,p}} := \int_\Omega \int_\Omega \frac{[d_E(\varphi(x), \varphi(y))]^p}{|x - y|^{N+sp}}\, dx\, dy,$$
where d E is the geodesic distance on E . When E is compact, the above seminorm is equivalent to the one obtained by taking the Euclidean distance |ϕ(x) -ϕ(y)| in R m . This need not be the case in general.
We now present an important condition devised by Detaille [ ].
There exists some Φ ∈ C ∞ (R m , L (R , R m )) with bounded derivatives such that
Φ(t)((d t π)(τ )) = τ, ∀ t ∈ E , ∀ τ ∈ T t (E ). ( . )
This condition requires the global existence of a "controlled" left-inverse of the isometry d_t π : T_t(E) → T_{π(t)}(N).
Of importance for us is that this condition is satisfied by the universal covering π : R → S 1 (take Φ(t)(x 1 , x 2 ) := (-sin t)x 1 + (cos t)x 2 , ∀ (x 1 , x 2 ) ∈ R 2 ).
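As a quick check (added here; the verification is immediate but not written out in the text), the displayed Φ is indeed a left inverse of d_t π for π(t) = e^{it}:
$$d_t\pi(\tau) = \tau\,(-\sin t,\ \cos t), \qquad \Phi(t)\bigl(d_t\pi(\tau)\bigr) = (-\sin t)(-\tau\sin t) + (\cos t)(\tau\cos t) = \tau(\sin^2 t + \cos^2 t) = \tau.$$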
We have the following results (see [ ] for the universal covering of S 1 , Bethuel and Chiron [ ] for the non-compact case and partial results in the compact case, [ ] for the full result when 0 < s < 1, and Detaille [ ] for the role of the condition ( . )).
Theorem . . Let Ω = (0, 1)^N. Assume that s ≥ 1.
a) The lifting property fails when 1 ≤ sp < 2.
b) In the non-compact case, when s > 1 further assume that ( . ) holds. The lifting property holds when sp ≥ 2.
Theorem . . Let Ω = (0, 1)^N. Assume that 0 < s < 1.
a) The lifting property holds when sp < 1.
b) The lifting property fails when 1 ≤ sp < 2.
c) In the non-compact case, the lifting property fails when 1 ≤ sp < N.
d) In the compact case, the lifting property holds when sp ≥ 2.
Proofs.
Step . The lifting property fails when 1 ≤ sp < 2. Fix some point z_0 ∈ N. Assume, for simplicity, that z_0 = 0.
We first explain a gluing construction, valid for each integer 2 ≤ k ≤ N. (In our specific case, we will take k = 2.) Consider, for some 0 < a < 1, the cone C := {x = (x_1, x') ∈ R^k; |x'| ≤ a x_1}. Consider a sequence of maps u_j = u_j(x), smooth in R^k \ {0}, such that u_j(x) = 0 if x ∉ C and u_j ∈ W^{s,p}(B_1(0)). Write a point in R^N in the form x = (x', y) ∈ R^k × R^{N−k}. We will define inductively points b_j ∈ (0, 1)^k and maps w_j, such that:
(i) The truncated cones (C + b j ) ∩ [0, 1] k are mutually disjoint.
(ii) {1} × (0, 1)^{k−1} ⊄ ∪_{i≤j} (C + b_i), ∀ j.
(iii) w_j(x) = u_j(x − b_j), ∀ j, ∀ x ∈ Ω.
(iv) ||w_j||_{W^{s,p}} ≤ 2^{−j}, ∀ j ≥ 2.
Let b_1 = 0. Assume that we have chosen b_1, . . . , b_j. Pick some point (1, x') ∈ {1} × (0, 1)^{k−1} \ ∪_{i≤j} (C + b_i). Then, for sufficiently small ε, the vertex b_{j+1} := (1 − ε, x') has all the required properties.
Moreover, clearly, by construction, we also have
(v) Ω \ ∪_{j≥1} [(C + b_j) × (0, 1)^{N−k}] is connected.
(vi) The map
w := w_j in (C + b_j) × (0, 1)^{N−k}, and w := 0 in Ω \ ∪_{j≥1} [(C + b_j) × (0, 1)^{N−k}],
belongs to W^{s,p} and is smooth in
Ω \ ∪_{j≥1} [{b_j} × (0, 1)^{N−k}].
We now specialize to our situation. Recall that we have assumed that z_0 = 0. Let (t_j)_{j≥1} be an enumeration (possibly with repetitions) of π^{−1}(0). Let 0 < b < π/4. By Lemma . , there exists some v_j ∈ C^∞(R; N) such that v_j(θ) = 0, ∀ j, ∀ |θ| ≥ b, and there exists no lifting
ζ ∈ C([−b, b]; E) of v_j on [−b, b] such that ζ(−b) = ζ(b) = t_j. Set u_j(re^{iθ}) := v_j(θ), ∀ r > 0, ∀ θ ∈ R.
Then, clearly, u j satisfies the assumptions at the beginning of this step (the property u j ∈ W s,p (B 1 (0)) following from Lemma . c)). Let w be as above. We claim that w has no li ting ϕ ∈ W s,p . Argue by contradiction. By Lemma . , ϕ is continuous in the connected set U := Ω \ ∪ jľ1 [{b j } × (0, 1) N -2 ]. Since w = 0 in U , there exists some j such that ϕ = t j in U . By continuity, ϕ = t j on the set [(∂C + b j ) ∩ Ω] \ [{b j } × (0, 1) N -2 ]. In particular, for small ε > 0, ϕ(b j + εe ±ıb ) = t j . Going back to the definition of w j , we find that v j has, on [-b, b] For pedagogical reasons, the case s = 1 is split into two sub-cases.
Step . The lifting property holds when s = 1 and p ≥ 2: the compact case. Let first u be in the class R in ( . ), with := N − p − 1 ≤ N − 3. With A as in ( . ), the open set U := Ω \ A is simply connected (Lemma . ). Therefore, u has a smooth lifting ϕ in U. Since |∇ϕ| = |∇u| pointwise, we find that ϕ ∈ W^{1,p}(Ω; E) and ||∇ϕ||_p = ||∇u||_p. Moreover, E being compact, we have
||ϕ|| p ≤ C := max{|t|; t ∈ E }. Let now u ∈ W 1,p (Ω; N ). Consider a sequence (u j ) ⊂ R such that u j → u in W 1,p (cf Theorem . ). The corresponding sequence (ϕ j ) of liftings is bounded in W 1,p (Ω; E ). If ϕ ∈ W 1,p (Ω; E ) is such that, up to a subsequence, ϕ j ⇀ ϕ, then ϕ ∈ W 1,p
and ϕ is a lifting of u.
Step . The lifting property holds when s = 1 and p ≥ 2: the non-compact case. The additional issue is passing to the weak limit in the ϕ j 's, since boundedness in L p is not guaranteed anymore. This is achieved by induction on the space dimension, via the following result. Let N ≥ 2 and let (u j ) ⊂ W 1,p (Ω; N ) be a convergent sequence. Then there exists a bounded sequence (ϕ j ) ⊂ W 1,p (Ω; E ) such that π • ϕ j = u j , ∀ j. (Actually, this result also holds when E is compact, but the proof in Step allows us to avoid its use.)
Step . . The induction process. Assume that the above property holds for N − 1. Let u ∈ W 1,p (Ω; N ). Consider a sequence (u j ) ⊂ R such that u j → u in W 1,p . Let A j be the corresponding singular sets as in ( . ). Possibly after passing to a subsequence, for a.e. θ ∈ (0, 1), the partial map u j,θ := u j (•, θ) belongs to W 1,p ((0, 1) N -1 ; N ), the set A j,θ := {x ∈ (0, 1) N -1 ; (x , θ) ∈ A j } is a finite union of (N − p − 2)-planes (this condition being empty when N < p + 2) and u j,θ → u(•, θ) in W 1,p . Pick such θ. Let ψ j ∈ W 1,p ((0, 1) N -1 ; E ) be a bounded sequence of liftings of u j,θ (cf the induction assumption). By Lemma . , ψ j is smooth in (0, 1) N -1 \ A j,θ . Fix some x 0 ∈ (0, 1) N -1 \ A j,θ and let ϕ j be the smooth lifting of u j in Ω \ A j such that ϕ j (x 0 , θ) = ψ j (x 0 ). (The existence of ϕ j follows from Lemma . .) Since |∇ϕ j | = |∇u j | in Ω \ A j , we find that ϕ j ∈ W 1,p (Ω). On the other hand, we clearly have tr (0,1) N -1 ×{θ} ϕ j = ψ j . By the induction assumption and the standard inequality
||ϕ j || p À ||ψ j || p + ||∇ϕ j || p ,
we find that (ϕ j ) is bounded in W 1,p . As in Step , we obtain that u has a lifting ϕ ∈ W 1,p .
Step . . The case N = 2. Let (u j ) ⊂ W 1,p ((0, 1) 2 ; N ) be a convergent sequence. We let, as in the proof of Proposition . , V j (x, ε) := u j * ρ ε (x). We claim that there exists an ε 0 (depending on the sequence (u j )) such that ( . ) holds uniformly in j. When p > 2, this is clear, by Morrey's embedding W 1,p → C 0,1-2/p . When p = 2, the claim follows from an inspection of the proof of Lemma . . Extend u j by reflection across ∂Ω. Continuing the calculation ( . ), we find, for small ε, that
d(u j * ρ ε (x), F ) ≲ ffl Bε(x) ffl Bε(x) |u j (y) − u j (z)| dydz ≲ ε -2N ˆB2ε (0) ˆBε(x) |u j (z + h) − u j (z)| dzdh ≤ ε -2N ˆB2ε (0) ˆB3ε (x) |h| |∇u j (y)| dydh ≲ ε 1-N ˆB3ε (x) |∇u j (y)| dy ≲ ε 1-N +N (p-1)/p ||∇u j || L p (B 3ε (x)) .
Recalling that, in our case, N = 2 and p = 2, the claim follows from the above calculation, the assumption that (∇u j ) converges in L 2 , and Lebesgue's lemma.
Associate now to u j the map T j as in ( . ). Fix some M < ∞ such that π(B M (0) ∩ E ) = N . Fix some x 0 ∈ Ω and consider some t j ∈ E such that |t j |ĺ M and π(t j ) = T j (x 0 , ε 0 ). By construction, T j is smooth in Ω × (0, ε 0 ), and Lipschitz (with a Lipschitz constant independent of j) on Ω × {ε 0 }. In addition, we have
|∇ x T j (•, ε)| p À ||∇u j || p ( . )
(with constants independent of j and 0 < ε ĺ ε 0 ) and, by standard trace theory,
||∇T j || L p (Ω×(0,ε 0 )) À |u j | W 1-1/p,p À ||∇u j || p . ( . )
Let ζ j be the smooth lifting of T j on Ω × (0, ε 0 ) with ζ j (x 0 , ε 0 ) = t j . By the above, ζ j (•, ε 0 ) is Lipschitz, with controlled Lipschitz constant, and uniformly bounded at x 0 , and thus ||ζ j (•, ε 0 )|| p ≲ 1. This, together with the L p bound ( . ), implies that
||ζ j (•, ε)|| p À ||∇u j || p + 1, ∀ j, ∀ 0 < ε < ε 0 .
Combining this with ( . ), we find that ζ j (•, ε) is uniformly bounded in W 1,p (Ω). We obtain the desired conclusion by letting ϕ j be any weak limit of a sequence of the form (ζ j (•, ε k )), with ε k → 0.
Step . The lifting property holds when s > 1 and sp ≥ 2. Let u ∈ W s,p (Ω; N ) ⊂ W 1,sp (Ω; N ) (Corollary . ). By Step , there exists some ϕ ∈ W 1,sp (Ω; E ) such that u = π • ϕ. We find that d x u = d ϕ(x) π d x ϕ, for a.e. x ∈ Ω, and thus
Dϕ = (Φ • ϕ) Du, ( . )
where Φ is as in ( . ). We complete this step by proving that
[s ≥ 1, ϕ ∈ W 1,sp , u ∈ W s,p ∩ L ∞ , ( . ) holds] =⇒ ϕ ∈ W s,p . ( . )
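To fix ideas, for the covering π : R → S 1 , π(t) = e ıt , the identity ( . ) reads ∇ϕ = u 1 ∇u 2 − u 2 ∇u 1 , the familiar formula for the gradient of a phase ϕ of u = e ıϕ ; in particular, the right-hand side of ( . ) involves ϕ only through the bounded factor Φ • ϕ, which is what makes the implication ( . ) plausible.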
Step . . Proof of ( . ) when s is an integer. The proof is by induction on s, the case s = 1 being clear.
The key fact is that ( . ) allows us to express D s ϕ in terms of Dϕ, . . . , D s-1 ϕ and of the derivatives of u. Let, e.g.,
s = 2. We claim that, if u ∈ W 2,p ∩ L ∞ and v ∈ W 1,2p ∩ L ∞ , then v Du ∈ W 1,p . When u, v are smooth, this follows from the Gagliardo-Nirenberg embedding W 2,p ∩ L ∞ → W 1,2p
combined with the identity (with loose notation) D(vDu) = DuDv + vD 2 u. The general case follows by a standard limiting procedure. Combining this with ( . ), we find that ( . ) holds when s = 2. Moreover, we find that
|D 2 ϕ|À |D 2 u|+|Du| 2 .
The general case is obtained by an obvious argument. Let u ∈ W s,p ∩ L ∞ . By differentiating ( . ) (s − 1) times, we find that
|D s ϕ| ≲ Σ j≥1 Σ s 1 +•••+s j =s |D s 1 u| • • • |D s j u|.
( . )
(again, first formally, then using a limiting procedure). In the process, we use the assumption ( . ). We conclude using the fact that, by the Gagliardo-Nirenberg embeddings
W s,p ∩ L ∞ → W k,sp/k , ∀ 1 ≤ k ≤ s − 1, the right-hand side of ( . ) is in L p .
Step . . Proof of ( . ) when s is not an integer. Write s = k + σ, with 0 < σ < 1. By Step . and the Gagliardo-Nirenberg embedding W s,p ∩ L ∞ → W k,sp/k , we have ϕ ∈ W k,sp/k . By Theorem . and ( . ), we have J • ϕ ∈ W k,sp/k ∩ L ∞ , while, clearly, Du ∈ W s-1,p ∩ L sp . By ( . ) and Lemma . (with f := J • ϕ, g := Du, s 1 := k, s 2 := s − 1, p 1 := sp/k, p 2 = p, r = sp), we have Dϕ ∈ W s-1,p , whence the conclusion.
Step . The lifting property fails in the non-compact case when 0 < s < 1 and 1 ≤ sp < N . We first present the simple special case of the universal cover π : R → S 1 . Assume for simplicity that Ω = B 1 (0). Let α > 0 and set ζ(x) := |x| -α and u := e ıζ . By Lemmas . b) and . , if
(N − sp)/p ≤ α < (N − sp)/(sp), ( . )
then ζ ∉ W s,p , but u ∈ W s,p . Argue by contradiction and assume that u has a W s,p -lifting ϕ. Then ϕ is continuous in B 1 (0) \ {0}, and thus there exists some k ∈ Z such that ϕ = ζ + 2kπ, a contradiction with the fact that ζ ∉ W s,p . We next explain how to treat the general case. As in Step , we rely on a gluing construction. We explain here the idea and postpone the explicit construction to Section (Lemma . ). Fix some z ∈ N and let π -1 ({z}) = {t j ; j ∈ J}. We will construct points x j ∈ Ω, radii r j > 0, j ∈ J, and maps ζ j : B 2r j (x j ) → E such that:
(i) The balls B 3r j (x j ) are mutually disjoint and contained in Ω.
(ii) ζ j ∈ C ∞ (B 2r j (x j ) \ {x j }). (iii) ζ j = t j in B 2r j (x j ) \ B r j (x j ). (iv) ζ j ∉ W s,p (B 2r j (x j )). (v) The map u defined by u := π • ζ j in B 2r j (x j ) and u := z in Ω \ ∪ j B 2r j (x j ) belongs to W s,p .
Granted the existence of such maps ζ j , we argue as follows. Assume, by contradiction, that u has a lifting ϕ ∈ W s,p . By Lemma . , ϕ is continuous in the connected set
V := Ω \ ∪ j {x j }. Let U := Ω \ ∪ j B 2r j (x j ), which is a connected set contained in V . Note that u = z in U .
Let j be such that ϕ -1 (t j ) ∩ U is non-empty. By connectedness of U and continuity of ϕ in U , we have ϕ = t j in U , and, again, by continuity and connectedness, ϕ = ζ j in B 2r j (x j ) \ {x j }. This contradicts item (iv).
Step . The lifting property holds in the compact case when 0 < s < 1 and sp ≥ 2. We argue by density.
It suffices to find, for u ∈ R, a lifting ϕ such that |ϕ| W s,p ≲ |u| W s,p . (Then we may pass to weak limits.) Set
ν := N − sp − 1 ≤ N − 3. Let u ∈ R. Then u is smooth in U := Ω \ A,
where A is a union of ν-planes parallel to the ν-coordinate planes. By Lemma . , U is simply connected, and thus u has a smooth lifting ϕ in U . Note that the complement of the set U N := {x ∈ (0, 1) N -1 ; [{x } × (0, 1)] ∩ A = ∅} is a null set, and the same holds for the complement of
U j := {(x , x ) ∈ (0, 1) j × (0, 1) N -j-1 ; [{x } × (0, 1) × {x }] ∩ A = ∅}, 1 ≤ j ≤ N − 1.
Consider the partial functions
x N → v := u(x , x N ), x N → ψ := ϕ(x , x N ), with x ∈ U N and x N ∈ (0, 1). Let 0 < θ 1 < θ 2 < 1. Let ρ := inj(N ) > 0 (the injectivity radius of N ). If v(θ) ∈ B ρ (v(θ 1 )), ∀ θ 1 ≤ θ ≤ θ 2 , then d E (ψ(θ), ψ(θ 1 )) = d N (v(θ), v(θ 1 )), ∀ θ 1 ≤ θ ≤ θ 2 .
( . )
Combining ( . ) with the fact that ϕ is uniformly bounded (since E is compact), we find that, for every θ 1 , θ 2 , we have
d E (ψ(θ 2 ), ψ(θ 1 )) ≲ |v| W 0,∞ ((θ 1 ,θ 2 )) := ||v − ffl (θ 1 ,θ 2 ) v|| L ∞ ((θ 1 ,θ 2 )) .
( . )
From ( . ) and Corollary . , we obtain the linear estimate
|ψ| p W s,p ((0,1)) ≲ |v| p W s,p ((0,1)) , ∀ x ∈ U N , ( . )
and similar estimates hold for each U j .
Combining ( . ) with slicing (Theorem . ), we obtain the linear estimate |ϕ| W s,p À |u| W s,p , which allows us to complete Step .
Step . The lifting property holds when sp < 1. We argue again by density. Consider a grid of size ε with faces parallel to the coordinate hyperplanes having the origin as a vertex, and N -valued maps constant on each cube of the grid. By (the proof of) Theorem . , the restrictions to Ω of such maps are dense in W s,p (Ω; N ). It suffices to obtain, for each such map, a W s,p lifting with a norm control. In order to further simplify the presentation, we assume that ε = 2 -J for some integer J. (This is not relevant for the validity of the final result.) We may now formalize our program. For k ≥ 0, let P k denote the collection of dyadic cubes Q k of size 2 -k in Ω. We let F k denote the set of the (step) functions constant on each Q k . We will complete Step by proving the following: for every J ≥ 0 and every u : Ω → N , u ∈ F J , there exists some lifting ϕ ∈ F J of u such that
||ϕ|| W s,p À 1 + |u| W s,p . ( . )
The construction is relatively involved. We first construct approximations of u at the larger scales 2 -k , 0 ≤ k < J, as follows. Fix once and for all some point z * ∈ N . Let δ > 0 be such that the nearest point projection Π on N is well-defined and smooth in the δ-neighborhood
N δ of N . Let E k (x) := ffl Q k u, ∀ Q k ∈ P k , ∀ x ∈ Q k , and set u k (x) := Π(E k (x)) if d(E k (x), N ) ≤ δ, and u k (x) := z * if d(E k (x), N ) > δ.
Note that E k and u k belong to F k , ∀ k.
We next construct, inductively, a lifting ϕ k of u k , 0 ≤ k ≤ J, and finally set ϕ := ϕ J . The construction goes as follows. Fix once and for all some t * ∈ π -1 (z * ). Let z 0 be the value of u 0 and let ϕ 0 ∈ E be a point in π -1 ({z 0 }) nearest to t * . Inductively, given Q j+1 ∈ P j+1 , let Q j ∈ P j be such that Q j+1 ⊂ Q j . If t j is the value of ϕ j on Q j and z j+1 is the value of u j+1 on Q j+1 , then the value of ϕ j+1 on Q j+1 is a point in π -1 ({z j+1 }) nearest to t j . Clearly, ϕ k ∈ F k and π • ϕ k = u k .
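To fix ideas, consider the universal covering π : R → S 1 , π(t) = e ıt . If ϕ j takes the value t on Q j ⊃ Q j+1 and u j+1 takes the value e ıθ on Q j+1 , then π -1 ({e ıθ }) = θ + 2πZ and ϕ j+1 is the point of θ + 2πZ nearest to t; hence |ϕ j+1 − ϕ j | = d R (t, θ + 2πZ) = d S 1 (e ıt , e ıθ ), which is the prototype of the inequalities ( . )-( . ) below.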
In order to estimate the W s,p -norm of ϕ, we rely on the following inequalities:
d E (t, π -1 ({z})) ≤ d N (π(t), z), ∀ t ∈ E , ∀ z ∈ N , ( . )
d N (u k (x), u k-1 (x)) ≲ f k (x) := |u(x) − E k (x)| + |u(x) − E k-1 (x)|, ∀ k ≥ 1, ∀ x, ( . )
d N (u 0 (x), z * ) ≲ f 0 (x) := |z * − E 0 (x)| + |u(x) − E 0 (x)| ≲ 1 + |u(x) − E 0 (x)|, ∀ x. ( . )
The first property is clear, since a geodesic γ of length L from π(t) to z lifts to a curve of length L from t to some point in π -1 ({z}). For the second one, if both E k (x) and E k-1 (x) are in N δ , then ( . ) holds, since
d N (u k (x), u k-1 (x)) À|u k (x) -u k-1 (x)|= |Π(E k (x)) -Π(E k-1 (x))|ĺ |E k (x) -E k-1 (x)| ĺ|u(x) -E k (x)|+|u(x) -E k-1 (x)|
(the first inequality following from the fact that the geodesic distance and the Euclidean distance are equivalent on N ).
On the other hand, if, say, E k (x) ∉ N δ , then |u(x) − E k (x)| ≥ δ, so that the right-hand side of ( . ) is at least δ, while the left-hand side of ( . ) is dominated by sup{d N (z, w); z, w ∈ N } < ∞. Thus ( . ) holds in all cases.
The proof of ( . ) is similar to the one of ( . ). Going back to the construction of ϕ, let us note that f k ∈ E k , ∀ k, and that, by combining ( . )-( . ) with the construction of the ϕ k 's we have, for every j, every Q j ∈ P j , and every x, y ∈ Q j :
d E (ϕ(x), ϕ(y)) =d E (ϕ J (x), ϕ J (y)) ĺ 1ĺkĺJ [d E (ϕ k (x), ϕ k-1 (x)) + d E (ϕ k (y), ϕ k-1 (y))] = j<kĺJ [d E (ϕ k (x), ϕ k-1 (x)) + d E (ϕ k (y), ϕ k-1 (y))] := j<kĺJ [g k (x) + g k (y)], ( . )
where
g k (x) := d E (ϕ k (x), ϕ k-1 (x)) À d N (u k (x), u k-1 (x)) À f k (x), ∀ k ľ 1. ( . )
Using ( . ) and ( . ), the fact that g k ∈ F k , ∀ k, the assumption sp < 1, and, successively, Lemmas . and . in Section , we find that
|ϕ| p W s,p À kľ1 2 spk ||g k || p p À kľ0 2 spk ||u -E k || p p À |u| p W s,p . ( . )
On the other hand, we have, by Hölder's inequality, the construction of the ϕ k 's, ( . )-( . ), ( . ), and Lemma . ,
||ϕ|| p p ĺ ˆ||ϕ 0 || p + kľ0 ||ϕ k+1 -ϕ k || p ˙p À ||ϕ 0 || p p + kľ0 2 spk ||ϕ k+1 -ϕ k || p p À1 + kľ0 2 spk ||u -E k || p p À 1 + |u| p W s,p . ( . )
Step follows from ( . ) and ( . ). The proof of Theorem . is complete.
QED
We next investigate the existence of lifting in the limiting case sp = N , which was left apart in the previous section. We also consider the case where N = 1, which is of interest here. By Theorems . and . , a W s,p -lifting does exist when s ≥ 1 or, in the compact case, when N ≥ 2. We may thus assume that we are in the cases not covered by the previous results, i. e., Assume that 0 < s < 1 and, if N ≥ 2, that E is non-compact.
( . )
Let k ≥ 1 be the least integer such that s + k/p ≥ 1. We make a second assumption.
If s + k/p > 1 and E is non-compact, then ( . ) holds.
( . )
Proposition . . Assume ( . )-( . ). Then the lifting property holds.
Proof. Let u ∈ W s,p (Ω; N ). We first construct successive extensions of u until we reach the framework of Theorem . . This goes as follows. Arguing as in the proof of Proposition . , u has a smooth extension u 1 ∈ W s+1/p,p (Ω × (0, 1); N ). Continuing inductively as above, if k ≥ 1 is the least integer such that s + k/p ≥ 1, the final map u k is smooth in Ω × (0, 1) k , and belongs to W s+k/p,p (Ω × (0, 1) k ; N ). Since s + k/p ≥ 1 and (s + k/p)p ≥ 2, we are in a position to apply Theorem . (here, when s + k/p > 1 we use the assumption ( . )), and obtain that u k has a phase ϕ k ∈ W s+k/p,p (Ω × (0, 1) k ; E ). By Lemma . , ϕ k is smooth. By trace theory adapted to E -valued maps (see Lemma . and Chiron [ , Section . ]), we have the uniform estimate
||ϕ k (•, x )|| W s,p (Ω) À ||ϕ k || W s+k/p,p (Ω×(0,1) k ) , ∀ x ∈ (0, 1) k . ( . )
By ( . ), the obvious embedding W s,p (Ω; E ) → W s,p (Ω; R m ), standard trace theory, and the compactness of the embedding W s,p (Ω; R m ) → L p (Ω; R m ), we obtain the existence of a sequence x j → 0 and of some ϕ ∈ W s,p (Ω; E ) such that ϕ k (•, x j ) → ϕ a.e. and ϕ is a lifting of u.
QED
Finally, we prove that, in the proof of Proposition . , the assumption ( . ) is just an artefact of the proof, and can be removed using a different approach, as in [ ].
Proposition . . Assume ( . ). Then the lifting property holds.
Proof. Let V , Ω ε , and T be as in ( . )-( . ), and Ψ be a diffeomorphism as in the proof of Proposition . . Let ε 0 be such that ( . ) holds. In the simply connected set W given by ( . ), T has a lifting Φ such that
|∇Φ|= |∇T |À |∇V |.
( . )
In order to explain our proof, we first invoke the following local version of the theory of weighted Sobolev spaces (see [ , proof of Lemma . ] for a proof when Ω = T N ; the argument there can be adapted to any Lipschitz bounded domain)
|Φ(•, θ)| p W s,p (Ω) ĺ C(ε 0 ) ˆΩ ˆε0 0 ε p(1-s)-1 |∇Φ(x, ε)| p dεdx, ∀ 0 < θ < ε 0 . ( . )
We call the attention of the reader to the fact that, in ( . ), the | | W s,p seminorm is calculated with respect to the Euclidean distance in R m . However, the proof of ( . ) is obtained starting from
|Φ(x + h, θ) -Φ(x, θ)|ĺ|Φ(x + h, θ) -Φ(x + h/2, θ + |h|/2)| ( . ) + |Φ(x + h/2, θ + |h|/2) -Φ(x, θ)| ĺ ˆ1 0 d dτ Φ(x + h/2 + τ h/2, θ + |h|/2 -τ |h|/2) dτ + ˆ1 0 d dτ Φ(x + τ h/2, θ + τ |h|/2) dτ, ∀ (x, h) ∈ Ω × R N s. t. |h|ĺ ε 0 2 and [x, x + h] ⊂ Ω.
Clearly, we may replace, in ( . ), the Euclidean distance with the geodesic distance d E on E , and find that ( . ) still holds for the adapted W s,p -seminorm in W s,p (Ω; E ). By ( . ), ( . ), and standard inverse trace theory (see [ ]), we have
|Φ(•, θ)| p W s,p (Ω) ĺ C(ε 0 ) ˆΩ ˆε0 0 ε p(1-s)-1 |∇T (x, ε)| p dεdx À |u| s W s,p , ∀ 0 < θ < ε 0 . ( . )
Using ( . ) and a standard limiting procedure, we find that Φ(•, θ) has a weak limit ϕ ∈ W s,p (Ω; E ) as θ → 0, satisfying π • ϕ = u.
QED
Lecture # . Strong density
Recall the implicit assumption sp < N . In this section, we let Ω = (0, 1) N .
If 0 ≤ ν ≤ N − 1 is an integer, let R = R ν := {u : Ω → N ; ∃ ε > 0, ∃ a finite union A of ν-planes parallel to the ν-coordinate planes, ∃ U ∈ C ∞ ([-ε, 1 + ε] N \ A; N ) such that u = U |(0,1) N and |D k U (x)| ≤ C k [d(x, A)] -k , ∀ k ≥ 0}.
( . )
The importance of the class R, devised by Bethuel [ ], is illustrated by the following result (see Bethuel [ ] for s = 1, Bousquet, Ponce, and Van Schaftingen [ ] for s = 2, 3, . . ., [ ] for 0 < s < 1, and Detaille [ ] for the remaining cases).
Theorem . . Let ν := N − sp − 1. The class R ν is dense in W s,p (Ω; N ).
Theorem . is complemented by the following result (same references as above).
Theorem . . C ∞ (Ω; N ) is dense in W s,p (Ω; N ) if and only if π sp (N ) is trivial.
The full proofs of the above results require more than a hundred pages. We will present here only four elements of proof:
. The necessity of the assumption that π sp (N ) is trivial in Theorem . (following essentially Schoen and Uhlenbeck [ ]).
. Approximation with homogeneous maps when 0 < s < 1 and 1 ≤ sp < N (following [ , Section ]).
. Smoothing of homogeneous maps when s = 1 (following essentially Hang and Lin [ , Sections . and ]).
Proof. We actually establish the stronger result that C ∞ (Ω; N ) ∩ W s,p (Ω; N ) is not dense in W s,p (Ω; N ). For simplicity of the formulas, we work in B 1 (0) instead of (0, 1) N .
Let k := sp ≤ N − 1. Assume first that k ≥ 1. Consider some v ∈ C ∞ (S k ; N ) that is not null-homotopic. Let u(x) := v((x 1 , . . . , x k+1 )/|(x 1 , . . . , x k+1 )|). By Lemma . c), we have u ∈ W s,p (Ω; N ). We claim that u cannot be approximated with smooth N -valued maps. Argue by contradiction and let (u j ) ⊂ C ∞ (Ω; N ) ∩ W s,p (Ω; N ) be such that u j → u in W s,p . By slicing (Corollary . ), up to a subsequence and for some 0 < r < 1/2 and x ∈ R N -k-1 such that |x | < 1/2, we have u j (•, x ) → u(•, x ) in W s,p (rS k ; N ). By Corollary . , for large j, u j (•, x ) and u(•, x ) are homotopic as continuous functions from rS k to N . However, on the one hand u(•, x ) is not null-homotopic (for otherwise, v would also be), while u j (•, x ) is always null-homotopic (by a homotopy argument, since u j is smooth in B 1 (0)). The contradiction completes the proof when k ≥ 1.
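The model case is N = S k and v = id S k , which gives u(x) = (x 1 , . . . , x k+1 )/|(x 1 , . . . , x k+1 )|; for k = 2, s = 1, p = 2, N = 3 this is the classical example x/|x| ∈ W 1,2 (B 1 (0); S 2 ) of Schoen and Uhlenbeck of a Sobolev map that cannot be approximated by smooth sphere-valued maps.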
Assume next that k = 0, i. e., that sp < 1. Since we work with connected N 's, π 0 (N ) is trivial. For the record, let us note that, if N is not connected, then density fails. To see this, let
C 1 , . . . , C k , k ≥ 2, be the components of N . Let a m ∈ C m , m = 1, 2. Consider a ball B with B ⊂ Ω. Let u := a 1 in B and u := a 2 in Ω \ B. Then u ∈ W s,p (Ω; N ) (Lemma .
). We claim that u cannot be approximated with a sequence (u j ) of smooth N -valued maps. Argue by contradiction. Up to a subsequence, we have u j → u a.e., and in particular, for a given ε > 0 and large j, the sets {x ∈ Ω; |u j (x) − a m | < ε} have positive measure. If we let, in particular, ε < min{d(C i , C m ); i ≠ m}, we find that, for large j, u j has to take values both in C 1 and C 2 . However, this cannot happen, since the image of u j is connected.
QED
. . Approximation with homogeneous maps when 0 < s < 1
In this section, | | stands for the || || ∞ norm.
We start by describing a procedure for constructing homogeneous maps on R N . Fix some ε > 0 and t ∈ R N . Consider the mesh C N = C N,t = C N,t,ε of N -dimensional cubes (with edges parallel to the coordinate axes) of side-length 2ε having t as one of the centers. (Thus, cubes in C N are of the form t
+ 2εk + [-ε, ε] N , with k ∈ Z N .) Let C N -1 = C N -1,
t be the (N -1)-dimensional skeleton associated with this mesh, i. e., C N -1 is the union of the boundaries of the cubes in C N . Let H N be the mapping that associates with every g :
C N -1 → R its homogeneous extension (on each cube of C N ) to R N . Analytically, if C is a cube in C N , of center u, then H N (g)(x) = g(u + ε(x − u)/|x − u|), ∀ x ∈ C.
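For instance, when N = 2 and C is the cube of center u and half side ε, the formula reads H 2 (g)(x) = g(u + ε(x − u)/max{|x 1 − u 1 |, |x 2 − u 2 |}): the value of g at a boundary point is propagated as a constant along the ray joining that point to the center, so H 2 (g) is as regular as g away from the center, where a point singularity may appear.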
In order to keep notation reasonably simple, we will identify C j with the union of its cubes, so that we write both C ∈ C j and, if x ∈ C, x ∈ C j .
We next consider a more general situation. We start by defining the lower dimensional skeletons and cubes associated with C N . This is done by backward induction:
C N -2 = C N -2,t = C N -2,t,ε is the union of the (N -2)-dimensional boundaries of the cubes in C N -1 = C N -1,t = C N -1,t,ε
, and so on. A cube in C N is any cube of the mesh C N . A cube in C N -1 is any of the 2N faces of a cube in C N . For j ≤ N − 2, a cube in C j is any of the 2(j + 1) faces of any cube in C j+1 .
For g : C j → R , let H j+1 (g) be its homogeneous extension to C j+1 .
Let 0 ≤ j < N . For ε > 0 and t ∈ R N , we associate with each map f : R N → R a map
f t = f t,ε = f t,ε,j : R N → R through the formula f t = g j := H N • H N -1 • • • • • H j+1 • g; here, we set g := f | C j .
( . )
More generally, given any map g : C j → R , the map g j given by the right-hand side of ( . ) is referred to as a j-homogeneous map or the j-homogeneous extension of g.
Here is our main result in this section.
Theorem . . Let 0 ≤ j < N , 0 < s < 1, sp < j + 1, and let f ∈ W s,p (R N ; R ). Then there exist sequences ε k → 0 and (t k ) ⊂ R N such that f t k ,ε k → f in W s,p (R N ).
For the sake of simplicity, we prove Theorem . under the extra assumptions
1 < p < ∞, j ≥ 1.
( . ) (Theorem . holds without these assumptions, but the treatment of the remaining cases is more involved.) Under these assumptions, we will obtain the following improvement of Theorem . .
Theorem . . Assume 1 ≤ j < N , 0 < s < 1, 1 < p < ∞, and sp < j + 1. Let f ∈ W s,p (R N ; R ). Then, for each ε > 0, there exists some t ε ∈ R N such that f tε,ε → f in W s,p (R N ) as ε → 0.
Before proceeding to the proof of Theorem . , let us note the following consequence of Theorem . .
Corollary . . Assume that 0 < s < 1 and sp < N . Let F ⊂ R be an arbitrary set. Let f ∈ W s,p (Ω; F ). Then there exists a sequence of j-homogeneous maps (f k ) ⊂ W s,p (R N ; R ) such that f k → f in W s,p (Ω) and f k is F -valued in (-1, 2) N .
Proof. Extend f , by reflections, to a map r f ∈ W s,p ((-2, 3) N ; F ). Then extend r f to a map h ∈ W s,p (R N ; R ). (We do not claim that h is F -valued.) Finally, let f k := h t k ,ε k , with ε k , t k as in Theorem . (applied to h). Then the f k 's have, for large k (and thus small ε k ), the desired properties.
QED
Proof of Theorem . . We start by introducing some useful notation. Set Q ε := [-ε, ε] N . In order to keep notation easier to follow, we will sometimes denote a point in Q ε by x N rather than x. We denote by x N -1 the radial projection (centered at 0) of x N onto the (N -1)-skeleton (thus the boundary) of Q ε ; this projection is defined except when x N = 0. With an abuse of notation, x N -1 also denotes a "generic" point of ∂Q ε . We next let x N -2 denote the radial projection of x N -1 onto the (N -2)-skeleton of Q ε . The point x N -2 is obtained as follows: if x N -1 ∈ ∂Q ε belongs to an (N -1)-dimensional face F of ∂Q ε , and if x N -1 is not the center C of F , then the radial projection (centered at C) of x N -1 on ∂F is well-defined, and yields x N -2 . By backward induction, we define x j , 0 ĺ j ĺ N -1, as the radial projection of x j+1 onto ∂Q ε ∩ C j,0 ; this is defined for all but a finite number of x j+1 's. Again, with an abuse of notation, x j is the "generic" point of ∂Q ε ∩ C j,0 . Note that x 0 is one of the vertices of Q ε . When x j is obtained starting from x N , we will denote x j as the radial projection of x N (onto ∂Q ε ∩ C j,0 ). This projection is defined except on a set of finite H N -j-1 measure.
More generally, let j < k ĺ N . We identify x k with a "generic" point of
∂Q ε ∩ C k,0 . Then x j is the projection of x k onto ∂Q ε ∩ C j,0 (except for a set of x k 's of finite H k-j-1 measure). Let k ∈ Z N and set u = t + 2εk. Then the radial projection of u + x N onto C j is u + x j . If j < k ĺ N , then for H k -a.e. x k ∈ ∂Q ε ∩ C k,0 , the projection of u + x k onto C j is u + x j .
With the above notation, formula ( . ) is equivalent to
f t (t + 2εk + x N ) = f (t + 2εk + x j ), ∀ k ∈ Z N , for H N -a.e. x N ∈ Q ε . ( . )
We now proceed to the proof of the theorem. Set
F ε (f )(t, x) := f t,ε (x), ∀ t ∈ Q ε , ∀ x ∈ R N .
Step . An L q -estimate for F ε (f ). Let 1 ≤ q < ∞ and f ∈ L q (R N ). We claim that
lim ε→0 (1/ε N ) ˆQε ||f − f t,ε || q q dt = 0 ( . )
and
||F ε (f )|| q ≤ C ε N/q ||f || q , with C independent of ε or f . ( . )
(Here, we do not require j ≥ 1.)
Set Q ε (x) := x + Q ε , ∀ x ∈ R N . Using the facts that: (i) (Q ε (t + 2εk)) k∈Z N is an a.e. partition of R N ; (ii) f t,ε = f t+2εk,ε for t ∈ R N and k ∈ Z N (thanks to ( .
)), and (iii) the "change of variable"
t = x + y N , ∀ t ∈ Q ε (x) (with y N ∈ Q ε ), we have 1 ε N ˆQε ||f -f t,ε || q q dt = 1 ε N ˆQε k∈Z N ˆQε(t+2εk) |f (x) -f t,ε (x)| q dxdt = 1 ε N ˆRN ˆQε(x) |f (x) -f t,ε (x)| q dtdx = 1 ε N ˆRN ˆQε ˇˇf (x) -f `x + y N -y j ˘ˇq dydx = 1 ε N ˆQε f (•) -f (• + y N -y j ) q q dy. We next note that y N ∈ Q ε =⇒ y N -y j ∈ Q ε . Therefore, 1 ε N ˆQε ||f -f t,ε || q q dt ĺ 2 N sup{||f (•) -f (• + z)|| q q ; |z|ĺ ε}. ( . )
Finally, we note that ( . ) implies both ( . ) and ( . ).
Step . A W 1,r estimate for F ε (f ). Let 1 ≤ j ≤ N − 1, 1 ≤ r < j + 1, and f ∈ W 1,r (R N ). We claim that
||F ε (f )|| L r (Qε;W 1,r (R N )) ≤ C ε N/r ||f || W 1,r , with C independent of ε or f . ( . )
In view of Step , in order to obtain ( . ) it suffices to establish, with C = C(N, j, r), the estimate
ˆQε ˆRN |∇f t,ε (x)| r dxdt ĺ Cε N ˆRN |∇f (x)| r dx. ( . )
We next observe that it suffices to prove ( . ) when f ∈ C ∞ c . Indeed, assuming for the moment that ( . ) holds for such f , Step combined with ( . ) for f ∈ C ∞ c and with a standard limiting argument implies that ( . ) holds for every f ∈ W 1,r .
We finally turn to the proof of ( . ) when f ∈ C ∞ c . Let, for t ∈ R N and ε > 0, v := t + (ε, . . . , ε) and C := C N -j-1,v,ε . Then (see Lemma . ) the projection
R N \ C ∋ t + 2εk + x N ↦ Ψ(t + 2εk + x N ) := t + 2εk + x j ∈ C j,t,ε , ∀ k ∈ Z N , ∀ x N ∈ Q ε
is well-defined, locally Lipschitz, and satisfies
|∇Ψ(x)| ≲ ε/d(x, C ). ( . )
It follows from ( . ) and the fact that C is locally a finite union of (N -j -1)-planes that
|∇Ψ| ∈ L r loc (R N ), ∀ 1 ≤ r < j + 1. ( . )
Combining ( . ), the fact that f ∈ C ∞ c , Lemma . , and the observation that
f t,ε = f • Ψ, we find that f t,ε ∈ W 1,r (R N ), ∀ 1 ≤ r < j + 1, and df t,ε = [(df ) • Ψ] dΨ in the sense of distributions.
After these preliminary remarks, we proceed to the proof of ( . ). By symmetries of the formula defining Ψ, it suffices to establish ( . ) when R N is replaced by
R N * := ∪ k∈Z N (t + 2εk + Q * ε ), with Q * ε := {x N ∈ Q ε ; x 1 ≥ • • • ≥ x N -j ≥ |x m |, ∀ m > N − j}.
We note that, when
x N ∈ Q * ε \ C , we have, ∀ k ∈ Z N , Ψ(t + 2εk + x N ) =t + 2εk + z(x N ), with z(x N ) := (ε, . . . , ε, εx N -j+1 /x N -j , . . . , εx N /x N -j ), ( . )
and
|∇Ψ(t + 2εk + x N )| ≲ ε/x N -j , ∀ x N ∈ Q * ε . ( . )
Using ( . ) and ( . ), we find that
ˆQε ˆRN * |∇f t,ε (x)| r dxdt = ˆQε Σ k∈Z N ˆt+2εk+Q * ε |∇f t,ε (x)| r dxdt ≲ ˆQε Σ k∈Z N ˆt+2εk+Q * ε (ε/x N -j ) r |∇f (t + 2εk + z(x))| r dxdt = ||∇f || r r ˆQ * ε (ε/x N -j ) r dx ∼ ε N ||∇f || r r ,
where the last line uses the definition of Q * ε and the fact that r < j + 1. We find that ( . ) holds for R N * and thus, as explained above, for R N .
Step is now completed.
Step . Average estimate for f − f t,ε and conclusion. (Here, we use 1 < p < ∞ and j ≥ 1.) Let 0 < s < 1, 1 < p < ∞, and 1 ≤ j ≤ N − 1 be such that sp < j + 1. We claim that there exist q, r such that
1 < q < ∞, 1 < r < j + 1, 1/p = s/r + (1 − s)/q. ( . )
Indeed, the existence of r and q as in ( . ) is equivalent to
s/(j + 1) + (1 − s)/∞ < 1/p < s/1 + (1 − s)/1,
which clearly holds (the first inequality amounts to sp < j + 1, the second one to p > 1).
We next recall three classical interpolation results. Given two Banach spaces X and Y , we use the standard notation [X, Y ] s,p ; see e. g. [ , Section . ]. First, when ( . ) holds we have [ , Section . . , Theorem (a), eq. ( ), p. ]
[W 1,r , L q ] s,p = W s,p . ( . )
Next, if X and Y are Banach spaces and s, p, q, r are as above, then [ , Section . . , Theorem, eq. ( ), p.
]
[L r (Ω; X), L q (Ω; Y )] s,p = L p (Ω; [X, Y ] s,p ). ( . )
By ( . ) and ( . ),
∀ r, q as in ( . ), [L r (Q ε ; W 1,r (R N )), L q (Q ε ; L q (R N ))] s,p = L p (Q ε ; W s,p (R N )). ( . )
Final classical result. Let s, p, q, r, X, and Y be as above. Let F be a linear continuous operator from X into L r (Ω; X) and from
Y into L q (Ω; Y ). Then F is linear continuous from [X, Y ] s,p into L p (Ω; [X, Y ] s,p
) and satisfies the norm inequality
F L ([X,Y ]s,p;L p (Ω;[X,Y ]s,p)) ĺ F s L (X;L r (Ω;X)) F 1-s L (Y ;L q (Ω;Y )) . ( . )
By ( . ), ( . ), and ( . ), we find that
F ε (f ) L p (Qε;W s,p (R N )) ĺ C ε N/p f W s,p (R N ) , with C independent of ε. ( . )
(In principle, the constant C in ( . ) may depend on ε, since we apply the interpolation result ( . ) in an ε-dependent domain. The fact that C does not depend on ε is obtained by a straightforward scaling argument: we consider, instead of F ε , the map
G ε (f ) : Q 1 × R N → R , G ε (f )(t, x) = f εt,ε (x).
We obtain ( . ) by applying ( . ) to
G ε (f ) in Q 1 .) A clear consequence of ( . ) is 1 ε N ˆQε ||f t,ε -f || p W s,p (R N ) dt ĺ C||f || p W s,p (R N ) . ( . )
Arguing as above and using ( . ) instead of ( . ), we improve ( . ) to
lim ε→0 1 ε N ˆQε ||f t,ε -f || p W s,p (R N ) dt = 0. ( . )
Clearly, ( . ) and a mean-value argument yield the conclusion of Theorem . : for each ε, we may pick t ε ∈ Q ε such that ||f tε,ε − f || p W s,p (R N ) does not exceed the average in ( . ), and this average tends to 0.
QED
. . Smoothing in W 1,p
We start by explaining how Theorem . is used in the proof of Theorem . (when
0 < s < 1). Let 0 < s < 1, 1 < p < ∞, 1 ≤ j ≤ N − 1 be such that sp < j + 1. (Here and in what follows, j is fixed.) Let f ∈ W s,p (R N ; R ).
It will be convenient here to consider f as an everywhere defined Borel map (rather than an equivalence class). By ( . ), for a.e. t ∈ Q ε we have
f t,ε,j ∈ W s,p . ( . )
On the other hand (by a "generalized slicing" argument, see e. g. [ , Lemma . ]) for a.e. t ∈ Q ε we have
f |Cm,t,ε ∈ W s,p (C m,t,ε ), ∀ 0 ĺ m ĺ N -1.
( . )
(The discussion here being rather informal, we do not give the precise definition of the space W s,p on a skeleton. We will be precise in the case s = 1 detailed below.) Moreover, when sp > 1, we have, for a.e.
t ∈ Q ε [ , Appendix E], tr(f |Cm,t,ε ) = f |C m-1,t,ε , ∀ 1 ≤ m ≤ N . ( . )
Note the assumption sp > 1, which implies that trace theory makes sense in W s,p . (The assumption sp > 1 can be relaxed to sp ≥ 1, provided we replace, when sp = 1, the notion of trace with the one of good restriction; see [ , Appendix B, Appendix E].) Combining these facts with ( . ), we find that there exists t = t ε ∈ Q ε such that ( . )-( . ) hold and, in addition, f t,ε,j → f in W s,p as ε → 0.
Assume now that we start from u ∈ W s,p (Ω; N ), which we first extend by reflections to (-2, 3) N , next to a map f ∈ W s,p (R N ; R ). Then, by construction, for small ε,
f t,ε,j is N -valued in (-1, 2) N .
( . )
Up to now, the fact that N is a manifold was irrelevant. The next step consists of taking advantage of the smoothness of N and of the properties ( . )-( . ). More specifically, if: (i) 1 ≤ j ≤ sp < j + 1; (ii) ε is fixed; (iii) f ∈ W s,p and t are such that ( . )-( . ) hold, then one may prove that it is possible to approximate, in W s,p ((-1/2, 3/2) N ), f t,ε,j with a map v:
(j) N -valued; (jj) locally Lipschitz in [-1/2, 3/2] N \ C N -j-1,v,ε ; (jjj) satisfying |∇v(x)| ≤ C(ε)/d(x, C N -j-1,v,ε ).
By Lemma . , we have v ∈ W 1,q , ∀ q < j + 1. Granted the existence of v as above, we thus obtain that: under the assumptions 0 < s < 1, 1 ≤ sp < N , and 1 ≤ q < sp + 1, each u ∈ W s,p ((0, 1) N ; N ) can be approximated in W s,p with maps v ∈ W 1,q ((0, 1) N ; N ) such that each v is locally Lipschitz outside a finite union of (N − sp − 1)-planes. Then a rather standard smoothing procedure (see, e. g., Brezis and Li [ , Proposition A. ] when s = 1) allows us to further smooth the v's and obtain approximation with maps in the class R.
To summarize, the heart of the transition from f t,ε,j to maps in the class R is the construction of v as above. When 0 < s < 1, this is performed in [ , Section ], following a scheme conceptually similar to the one of Hang and Lin for s = 1 [ , Section . , Section ]. In order to keep the presentation technically simple but yet relevant concerning the main ideas, we present here the s = 1 counterpart of the above, consistent with the schemes in [ , ]. More specifically, we will prove the following.
Proposition . . Let ε = 1/2 and t = 0. Let M ∈ N and Ω = (-M − 1/2, M + 1/2) N . Let 1 ≤ j < p < j + 1 ≤ N . Let u ∈ W 1,p (Ω; N ) be such that u |C m,0,1/2 ∈ W 1,p (C m,0,1/2 ), ∀ 1 ≤ m ≤ j, ( . ) tr(u |C m,0,1/2 ) = u |C m-1,0,1/2 , ∀ 1 ≤ m ≤ j. ( . )
Then, for every λ > 0, there exists some Lipschitz map g : C j,0,1/2 → N such that the j-homogeneous extension v = g j of g (given by ( . )) satisfies ||u 0,1/2,j − v|| W 1,p < λ. † Before proceeding to the proof of Proposition . , let us make precise some notation and assumptions. With an abuse of notation, C j = C j ∩ C , 0 ≤ j ≤ N − 1, where C := ∪ k∈Z N ,|k|≤M Q 1/2 (k) is the part of the grid C N = C N,0,1/2 corresponding to Ω. C denotes a generic cube in C m . The meaning of ( . )-( . ) is that, for each 1 ≤ m ≤ j and each cube C of C m , u |C belongs to W 1,p (C) and, in addition, the trace of u |C on ∂C is u |∂C . (Recall that we consider everywhere defined maps.) We naturally define
||u|| p L p (Cm) := C∈Cm ||u|| p L p (C) , |u| p W 1,p (Cm) = ||∇u|| p L p (Cm) := C∈Cm ||∇u|| p L p (C) , ||u|| W 1,p (Cm) := ||u|| L p (Cm) + |u| W 1,p (Cm) .
In view of Lemma . , the conclusion of Proposition . follows from the following fact, which we will establish below: if 1 ≤ j < N , j < p < ∞, and g :
C j → N satisfies g |C m,0,1/2 ∈ W 1,p (C m,0,1/2 ), ∀ 1 ĺ m ĺ j, ( . ) tr(g |C m,0,1/2 ) = g |C m-1,0,1/2 , ∀ 1 ĺ m ĺ j, ( . ) then ∀ λ > 0, ∃ r g ∈ Lip(C j ; N ) s. t. ||r g -g|| W 1,p (C j ) < λ. ( . )
(Note the wider range j < p < ∞ instead of j < p < j + 1.) † Proposition . still holds when p = j, but when j ≥ 2 the case p = j requires a separate argument (in the spirit of the proof of Proposition . ), since the embedding W 1,p (R j ) → C 0 holds for p > j, but fails for p = j. For simplicity, we do not consider here the case p = j.
Proof of Proposition . . Step . Choice of a continuous representative. Assume that 1 ≤ j ≤ N and p > j. Assume that g satisfies ( . )-( . ). We claim that there exists a continuous function r g : C j → N such that r g |Cm = g |Cm H m -a. e., ∀ 0 ≤ m ≤ j. The construction of r g is performed successively on each C m , by induction on m. For m = 0, we simply let r g = g. Assuming r g constructed on C m-1 , with 1 ≤ m ≤ j, we let, for C ∈ C m , r g be the continuous representative of g on C. It is easy to see that r g |C m-1 agrees with the map already constructed on C m-1 , is continuous, and that the final map constructed on C j has all the required properties. From now on, we assume that g is continuous.
Step . Reduction to almost N -valued maps. Fix some small δ > 0 such that the projection Π : N δ → N is well-defined and smooth in the δ-neighborhood N δ of N . Assume that we are able to construct a Lipschitz map r g :
C j → N δ such that ||r g -g|| W 1,p (C j ) < λ. Then, clearly, Π • r
g is Lipschitz and, by Lemma . , ||(Π • r g) j − g|| W 1,p < F (λ), for some function F such that lim λ→0 F (λ) = 0. In conclusion, it suffices to prove ( . ) in the apparently weaker form
∀ λ > 0, ∃ r g ∈ Lip(C j ; N δ ) s. t. ||r g -g|| W 1,p (C j ) < λ. ( . )
Step . Approximation on a fixed cube.
Let g ∈ W 1,p (C j ; R ). Let C be a cube in C j , of center 0 C . The projection of the point 0 C + x j ∈ C on ∂C is 0 C + x j-1 , where x j-1 = x j /(2|x j |).
We first define convenient approximations of g as follows. For 0 < µ < 1, we set, with the above notation,
g µ (0 C + x j ) := g(0 C + x j-1 ) if |x j | ≥ (1 − µ)/2, and g µ (0 C + x j ) := g(0 C + x j /(1 − µ)) if |x j | < (1 − µ)/2.
Note that g µ is continuous on C j , N -valued, and clearly satisfies ( . )-( . ). The following fact is straightforward.
[1 ĺ p < ∞, 1 ĺ j ĺ N, C ∈ C j , g ∈ W 1,p (C)] =⇒ g µ → g in W 1,p (C) as µ → 0. ( . ) Let ρ ∈ C ∞ c ((-1/2, 1/2) j ) be a standard mollifier. Given h ∈ L 1 (C j ; R ), the convolution h * ρ t is well-defined and smooth in the set ∪ C∈C j {0 C + x j ; |x j |ĺ (1 -t)/2}. (Here, we naturally identify each C with a subset of R j .) Fix some function η ∈ C ∞ c ([0, 1/2); [0, 1]
) such that η(θ) = 1 for small θ. For small t, the map
C 0 C + x j → g t (0 C + x j ) := η(|x j |) g * ρ t (0 C + x j )
is well-defined and smooth in C.
We also set
g 0 (0 C + x j ) := η(|x j |) g(0 C + x j ).
The following is straightforward.
[1 ĺ p < ∞, 1 ĺ j ĺ N, C ∈ C j , g ∈ W 1,p (C)] =⇒ g t → g 0 in W 1,p (C) as t → 0. ( . )
Given f : ∂C → R , set (with H j the homogeneous extension from C j-1 to C j )
T (f )(0 C + x j ) := (1 -η(|x j |)) H j (f )(0 C + x j ), ∀ 0 C + x j ) ∈ C. ( . )
We note the following consequence of (the proof of) Lemma . .
[1 ĺ p < ∞, 1 ĺ j ĺ N, C ∈ C j ] =⇒ [W 1,p (∂C) f → T (f ) ∈ W 1,p (C) is continuous]. ( . )
Finally, let us note the following consequence of the embedding W 1,p (C) → C 0 , valid when p > j.
[1 ĺ j < p < ∞, 1 ĺ j ĺ N, C ∈ C j , g ∈ W 1,p (C; N )] =⇒ ∃ t 0 s. t. [t ĺ t 0 , |x j |ĺ (1 -t)/2 =⇒ g * ρ t (0 C + x j ) ∈ N δ ]. ( . )
(The validity of ( . ) when p = j requires a separate argument, relying on Lemma . .)
Step . Proof of ( . ) when j = 1. By ( . ), it suffices to prove ( . ) when g is replaced by g µ . Since j = 1 and thus C 0 is a finite collection of points, we may assume that g is constant near each point in C 0 :
∃ 0 < µ < 1 s. t. [C ∈ C 1 , |x 1 | ≥ (1 − µ)/2] =⇒ g(0 C + x 1 ) = g(0 C + x 0 ). ( . )
Let now η ∈ C ∞ ([0, 1/2); [0, 1]) be such that η(θ) := 1 if 0 ≤ θ ≤ 1/2 − µ/4, and η(θ) := 0 if θ > 1/2 − µ/6. ( . )
When 0 < t ≤ µ/6, the map
C 1 ∋ x = 0 C + x 1 ↦ G t (x) := η(|x 1 |) g * ρ t (0 C + x 1 ) + (1 − η(|x 1 |)) g(0 C + x 0 )
is well-defined everywhere on C 1 , and is Lipschitz. Moreover, by ( . ) and the choice of η, we clearly have G t → g in W 1,p as t → 0.
It remains to prove that, for small t, we have
G t (0 C + x 1 ) ∈ N δ , ∀ C ∈ C 1 , ∀ 0 C + x 1 ∈ C. ( . )
By ( . ) and ( . ), property ( . ) holds when |x 1 |ĺ 1/2-µ/5. Clearly, ( . ) holds also when |x 1 |ľ 1/2 -µ/6. Finally, when 1/2 -µ/5 ĺ |x 1 |ĺ 1/2 -µ/6 and t ĺ µ/6, we have
G t (0 C + x 1 ) = g * ρ t (0 C + x 1 ) = g(0 C + x 0 ) ∈ N .
This completes Step .
Step . Proof of Proposition . by induction on j. (Here, we use the assumptions ( . )-( . ) at all dimensions 1 ≤ m ≤ j.) Let 2 ≤ j ≤ N . Let f be the restriction of g to C j-1 . By ( . ), we may assume that there exists some µ ∈ (0, 1) such that
g(0 C + x j ) = f (0 C + x j-1 ), ∀ C ∈ C j , ∀ 0 C + x j ∈ C s. t. |x j |ľ (1 -µ)/2.
( . ) By ( . )-( . ) and the induction hypothesis, the map f is the limit in W 1,p of a sequence (F k ) ⊂ Lip(C j-1 ; N ). With η as in ( . ) and 0 < t ĺ µ/6, we define, everywhere on C j , the Lipschitz maps
C j 0 C + x j → G k,t (0 C + x j ) :=η(|x j |) g * ρ t (0 C + x j ) + (1 -η(|x j |)) F k (0 C + x j-1 ).
By ( . ) and ( . ), we have
lim k→∞ lim t 0 G k,t = g in W 1,p (C j ).
In order to complete Step and the proof of Proposition . , it remains to prove that, for large k and sufficiently small t (possibly depending on k) we have
G k,t (0 C + x j ) ∈ N δ , ∀ C ∈ C j , ∀ 0 C + x j ∈ C. ( . )
As in Step , ( . ) holds when |x j | ≤ 1/2 − µ/5 or |x j | ≥ 1/2 − µ/6. When 1/2 − µ/5 ≤ |x j | ≤ 1/2 − µ/6, we argue as follows. By the Sobolev embeddings, we have
F k → f uniformly. Let k 0 be such that ||F k − f || ∞ ≤ δ, ∀ k ≥ k 0 .
( . ) By ( . ) and the continuity of f , for every fixed k we have
lim t↘0 G k,t (0 C + x j ) = η(|x j |) f (0 C + x j-1 ) + (1 − η(|x j |)) F k (0 C + x j-1 )
uniformly in the set
∪ C∈C j {0 C + x j ; 1/2 − µ/5 ≤ |x j | ≤ 1/2 − µ/6}. ( . )
We complete the proof of ( . ) using ( . ) and ( . ).
QED
. . Singularities removing technique in W 1,p
One of our purposes here is the proof of Theorem . when 0 < s < 1, under the necessary condition that π sp (N ) is trivial. We have seen in Sections . and . that maps of the form g j , where g ∈ Lip(C j ; N ), are dense in W s,p (Ω; N ), at least when 1 ≤ j < sp < j + 1 ≤ N .
We have already noted that g j actually belongs to the space W 1,q (Ω; N ), ∀ 1 ≤ q < j + 1. We will prove below that g j can be approximated, in W 1,q (Ω), ∀ 1 ≤ q < j + 1, with Lipschitz N -valued maps. This fact, combined with the Gagliardo-Nirenberg inequalities (Corollary . ) and a straightforward smoothing argument, implies Theorem . when 0 < s < 1.
After these introductory remarks, we present and prove the main result of this section (see Bethuel [ ], with roots in White [ , Section ], for the main idea of the proof (Step below) and, for the presentation we give here, also Hang and Lin [ , Section ] and Bousquet, Ponce, and Van Schaftingen [ , Section ]). The result is stated, with no loss of generality, in Ω = (-M − 1/2, M + 1/2) N .
Proposition . . Let 1 ≤ j ≤ N − 1 and 1 ≤ q < j + 1. Assume that π j (N ) is trivial. Then, for every g ∈ Lip(C j ; N ), the map g j is the strong limit in W 1,q of maps in Lip(Ω; N ).
Proof.
Step . Construction of a Lipschitz N -valued extension h of g to C j+1 . (Here, we use the assumption on π j (N ).) Let C ∈ C j+1 . Since ∂C is bi-Lipschitz homeomorphic with S j , and, by assumption, π j (N ) is trivial, there exists a homotopy
G C : ∂C × [0, 1] → N such that G C (x, 0) = g(x), ∀ x ∈ ∂C, and G C (x, θ) = b C , ∀ x ∈ ∂C, ∀ θ ľ 1/2, for some constant b C ∈ N .
Moreover, by a smoothing argument, we may assume that G C is Lipschitz. The map
defined on C j+1 by h(0 C + x j+1 ) := G C (0 C + x j , 1 − 2|x j+1 |) if |x j+1 | ≥ 1/4, and by h(0 C + x j+1 ) := b C if |x j+1 | ≤ 1/4, is a Lipschitz N -valued extension of g to C j+1 .
Step . Construction of a Lipschitz N -valued extension k of g to C N . (Here, we use the existence of the map h from the previous step.) We rely on the following geometrically obvious fact (see Lemma . for a formal proof). There exists a Lipschitz homotopy G = G(x, θ) :
C N × [0, 1] → C N such that: a) G(x, 0) = x, ∀ x ∈ C N . b) G(x, θ) = a, for some (fixed) point a ∈ C j+1 , ∀ x ∈ C N , ∀ θ ľ 1/2. c) G(x, θ) ∈ C j+1 , ∀ x ∈ C j , ∀ θ.
Granted the existence of G, and with h as in Step , we let, ∀ 0 C + x N ∈ C ∈ C N , k(0 C + x N ) := h(G(0 C + x j , 2d(0 C + x N , C j ))) if d(0 C + x N , C j ) ≤ 1/4, and k(0 C + x N ) := h(a) if d(0 C + x N , C j ) ≥ 1/4.
Clearly, k is a Lipschitz N -valued extension of g to C N .
Step . Approximation of g j . (Here, we use the assumption q < j + 1.) For 0 < µ < 1/2, consider the following sets and functions:
U µ := {x ∈ C N ; d(x, C j ) ≤ 1/2 − µ}, V µ := {x ∈ C N ; 1/2 − µ ≤ d(x, C j ) ≤ 1/2 − µ/2}, W µ := {x ∈ C N ; d(x, C j ) ≥ 1/2 − µ/2},
f 1 : [1/2 − µ, 1/2] → [0, 1], f 1 (θ) := 0 if θ ≥ 1/2 − µ/2, and f 1 (θ) := (1 − 2θ)/µ − 1 if 1/2 − µ ≤ θ ≤ 1/2 − µ/2,
f 2 : [1/2 − µ, 1/2] → [0, 1], f 2 (θ) := 1 if θ ≥ 1/2 − µ/2, and f 2 (θ) := −(1 − 2θ)/µ + 2 if 1/2 − µ ≤ θ ≤ 1/2 − µ/2,
d j (x) := d(x, C j ).
We define the following approximation of g j :
C N ∋ x = 0 C + x N ↦ F µ (x) := g j (x) = g(0 C + x j ) if x ∈ U µ , and F µ (x) := k(f 1 (d j (x))(0 C + x j ) + f 2 (d j (x))x) if x ∈ V µ ∪ W µ .
We note that F µ is well-defined, Lipschitz, N -valued, equals g j in U µ , and is Lipschitz (with Lipschitz constant independent of µ) in W µ . Since |W µ | → 0 as µ → 0, in order to prove that F µ → g j in W 1,q (Ω) as µ → 0 it remains to prove that ||∇F µ || L q (Vµ) → 0 as µ → 0. In turn, this follows from the fact that, from the definition of F µ and the fact that k is Lipschitz, we have
ˆVµ |∇F µ | q ≲ (1/µ q ) |V µ | ∼ µ j+1 /µ q → 0 as µ → 0.
QED
Lecture # . Hearing singularities
Let us return to the all-purpose counterexample in Proposition . . It relies on the existence of a non-trivial topological invariant (in that case, the winding number of maps f ∈ C 0 (S 1 ; S 1 )) and on the construction of a map "carrying" the topological invariant around a singular point. This raises several questions. In this section, we discuss the best understood situation, the one of sphere-valued maps. In order to further simplify the presentation and focus on analytical (rather than geometric measure theory) issues, we first assume that the space dimension N and the dimension k of the sphere are related by N = k + 1. In Section . , we provide a glimpse of the general case and of the additional difficulties it raises.
. . The distributional Jacobian
We let, in Sections . -. , u : Ω → S N -1 , where Ω = (0, 1) N and N ≥ 2. Recall that we always assume that sp < N . If sp < N − 1, then C ∞ (Ω; S N -1 ) is dense in W s,p (Ω; S N -1 ) (by Theorem
. and the fact that π j (S N -1 ) is trivial when j < N -1). Therefore, the interesting range is
N -1 ĺ sp < N. ( . )
For such s and p, maps in the class R = R 0 (i. e., maps as in ( . ), with A a finite subset of Ω) are dense in W s,p (Ω; S N -1 ) (Theorem . ). When u ∈ R, one can define the singular set simply as A. However, this is not a tractable definition, since it is not clear how to pass to the limit with sets of points. The appropriate substitute is the distribution
Ju := C N Σ a∈A deg(u, a) δ a ∈ D (Ω), ( . )
where
C N := |B 1 (0)| (the volume of the unit ball in R N ).
Here, deg(u, a) is the (Brouwer) degree of the map u |Sε(a) : S ε (a) → S N -1 , for small ε. Clearly, this integer does not depend on (small) ε.
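For instance, when N = 2, identifying R 2 with C: near a singular point a, the map u(x) = ((x − a)/|x − a|) d (d ∈ Z) satisfies deg(u, a) = d; in particular, the basic singularity x ↦ (x − a)/|x − a| has degree 1.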
The main result here is the following (see [ ] for the full result, and, for special cases, Bethuel, Brezis, and Coron [ ], Jerrard and Soner [ , ], Hang and Lin [ ]).
Theorem . . Assume ( . ). Then the map
R ∋ u ↦ Ju ∈ D (Ω) ( . )
has a unique extension by continuity, still denoted J, to W s,p (Ω; S N -1 ).
In addition, Ju belongs to the space [Lip 0 (Ω)] * , the mapping W s,p (Ω; S N -1 ) ∋ u ↦ Ju ∈ [Lip 0 (Ω)] * is continuous, and we have the estimate
||Ju|| [Lip 0 (Ω)] * À |u| (N -1)/s W s,p .
( . )
Proof.
Step . A first convenient formula for Ju. We first derive a tractable formula for Ju when u ∈ R. This formula (which will explain the title of this section) appears in Brezis, Coron, and Lieb [ ], with roots in Ball [ ] and Morrey [ ]. To start with, we note that R ⊂ W 1,q , ∀ q < N . (For this step, R ⊂ W 1,N -1 suffices.) Let ω = ω N -1 be the standard volume form on S N -1 , given by
ω N -1 := Σ N j=1 (-1) j-1 x j dx 1 ∧ . . . ∧ dx j-1 ∧ dx j+1 ∧ . . . ∧ dx N . ( . )
Denoting by u ω the pullback of ω by u, i. e.,
u ω = Σ N j=1 (-1) j-1 u j du 1 ∧ . . . ∧ du j-1 ∧ du j+1 ∧ . . . ∧ du N ∈ L 1 (Ω; Λ N -1 ),
we claim that
Ju = (1/N ) d(u ω) in D (Ω), ( . )
where, in ( . ), we have identified a scalar distribution (the left-hand side) with an N -form whose density is a distribution (the right-hand side).
To justify ( . ), a first important fact is that, when u is C 2 in some open set V ⊂ Ω, we have, in V ,
d(u ω) = N du 1 ∧ • • • ∧ du N = N (Jac u) dx 1 ∧ • • • ∧ dx N = 0, ( . )
where Jac stands for the Jacobian determinant. The first equality is a clear consequence of the exterior calculus rules, and justifies the designation of d(u ω) as (up to a constant factor N ) distributional Jacobian. The last equality is justified by the fact that the N vectors ∂ 1 u(x), . . . , ∂ N u(x),
x ∈ V , are all in the (N − 1)-dimensional tangent space T u(x) S N -1 , and thus Jac u(x) = 0, ∀ x ∈ V .
A second important fact is Kronecker's formula (see, e. g., Dinca and Mawhin [ , Section . , Section . ]): if S is any sphere in R N (with the usual orientation), then
deg(v, S) = (1/|S N -1 |) ˆS v ω, ∀ v ∈ C 1 (S; S N -1 ). ( . )
Let now u ∈ R. Combining: (i) the definition of the distributional derivative; (ii) the fact that u ∈ W 1,N -1 ; (iii) ( . ); (iv) the divergence theorem; (v) the fact that u ∈ R; (vi) ( . ), we find that, ∀ ϕ ∈ C ∞ c (Ω),
d(u ω)(ϕ) = − ˆΩ dϕ ∧ (u ω) = − lim ε→0 ˆΩ\∪ a∈A Bε(a) dϕ ∧ (u ω) = − lim ε→0 ˆΩ\∪ a∈A Bε(a) d[ϕ(u ω)] = lim ε→0 Σ a∈A ˆSε(a) ϕ (u ω) = |S N -1 | Σ a∈A deg(u, a) ϕ(a) = N Ju(ϕ),
since |S N -1 | = N |B 1 (0)| = N C N . For further use, let us note that we have proved that, when u ∈ R, we have
Ju(ϕ) = − (1/N ) ˆΩ dϕ ∧ (u ω), ∀ ϕ ∈ C ∞ c (Ω), ( . )
and that, if u ∈ C 2 (Ω; R N ), we have
ˆΩ (Jac u)ϕ = − (1/N ) ˆΩ dϕ ∧ (u ω), ∀ ϕ ∈ C ∞ c (Ω; R). ( . )
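To fix ideas, when N = 2 and u = (u 1 , u 2 ): ω 1 = x 1 dx 2 − x 2 dx 1 , u ω = u 1 du 2 − u 2 du 1 , and ( . ) reads Ju(ϕ) = − (1/2) ˆΩ dϕ ∧ (u 1 du 2 − u 2 du 1 ). For smooth R 2 -valued u this equals ˆΩ (Jac u) ϕ, while for the S 1 -valued map u(x) = (x − a)/|x − a| (with a ∈ Ω) one obtains Ju = π δ a , consistent with C 2 = |B 1 (0)| = π and deg(u, a) = 1.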
Step . The easy case s ≥ 1. The right-hand side of ( . ) is clearly continuous (with respect to the u's satisfying |u| ≤ 1) in W 1,N -1 . We complete this step by noting that, when s ≥ 1 and sp ≥ N − 1, we have W s,p ∩ L ∞ → W 1,N -1 (Corollary . ).
In the remaining part of the proof, we assume that 0 < s < 1.
Step . A second convenient formula for Ju. This step appears in [ ], but was essentially known before (see Dunford and Schwartz [ , p. ]).
Let u ∈ C 2 (Ω; R N ), respectively ϕ ∈ C ∞ c (Ω; R). Let W = W (u) ∈ C 2 (Ω × [0, 1); R N ) be an extension of u, respectively Φ = Φ(ϕ) ∈ C 1 c (Ω × [0, 1
); R) be an extension of ϕ. Let, for each j ∈ {1, . . . , N + 1}, E j = E j (W ) denote the determinant whose columns are the N partial derivatives ∂ 1 W, . . . , ∂ N +1 W , with ∂ j W omitted. We claim that
Σ N +1 j=1 (-1) j-1 ∂ j E j = 0. ( . )
Indeed, identifying (E 1 , . . . , E N +1 ) with the N -form ζ := dW 1 ∧ • • • ∧ dW N , ( . ) amounts to the trivial equality dζ = 0 (see [ , Lemma . ] for details).
Combining: (i) ( . ); (ii) the divergence theorem; (iii) the fact that, on Ω × {0}, we have E N +1 Φ = (Jac u)ϕ; (iv) ( . ), we find that
ˆΩ×(0,1) Σ N +1 j=1 (-1) N +j E j ∂ j Φ = ˆΩ×(0,1) Σ N +1 j=1 ∂ j ((-1) N +j E j Φ) = ˆΩ×{0} E N +1 Φ = ˆΩ (Jac u)ϕ = − (1/N ) ˆΩ dϕ ∧ (u ω),
so that
(1/N ) ˆΩ dϕ ∧ (u ω) = − ˆΩ×(0,1) Σ N +1 j=1 (-1) N +j E j ∂ j Φ. ( . )
At this stage, we know that ( . ) holds when u ∈ C 2 and W ∈ C 2 . By a straightforward argument, ( . ) still holds provided ϕ ∈ Lip 0 (Ω), Φ ∈ Lip 0 (Ω × [0, 1)) and
u ∈ W 1,N -1 loc (Ω; R N ) ∩ L ∞ , W ∈ W 1,N loc (Ω × (0, 1); R N ) ∩ L ∞ , W (•, ε) → u in W 1,N -1 loc (Ω) as ε → 0. ( . )
Combining the first two steps, we find the useful identity
Ju(ϕ) = - ˆΩ×(0,1) N +1 j=1 (-1) N +j E j ∂ j Φ, ∀ u ∈ R, ∀ W as in ( . ), ( . )
and also the fact that the right-hand side of ( . ) is well-defined (under the assumption ( . )) for Φ ∈ Lip 0 (Ω × [0, 1)).
The heart of the proof of Theorem . consists of proving the existence, for each u ∈ R, of a convenient extension W = W (u) such that the right-hand side of ( . ) is continuous in W s,p (with respect to u).
Step . The main geometric estimate. (Here, we do not use the assumption sp ≥ N − 1.) Consider a linear continuous operator W s,p ∋ u = u(x)
→ U = U (x, ε), x ∈ Ω, 0 < ε ĺ 1 such that: (i) U ∈ C ∞ ; (ii) U ∈ W s+1/p,p ; (iii) tr U = u; (iv) |U | W s+1/p,p À |u| W s,p ; (v) ||U || ∞ À ||u|| ∞ if u ∈ L ∞ ; (vi) if u ∈ W σ,q , then U (•, ε) ∈ W σ,q and |U (•, ε)| W σ,q À |u| W σ,q (see
lim ε→0 U (x, ε) = u(x) ∈ S N -1 ; (jj) U (x, •) ∈ W s+1/p,p ((0, 1)) → C s ([0, 1]). Define the function d(x) := inf{0 < ε ≤ 1; |U (x, ε)| ≤ 1/2}, with the convention that inf ∅ = ∞. We claim that ˆΩ 1/[d(x)] sp dx ≲ |u| p W s,p . ( . )
Indeed, if x is such that (j) and (jj) hold and d(x) < ∞, then d(x) > 0 and |U (x, d(x))| = 1/2, and therefore
1/2 ĺ |u(x) -U (x, d(x))|À [d(x)] s |U (x, •)| C s ([0,1]) À [d(x)] s |U (x, •)| W s+1/p,p ((0,1)) . ( . )
Using ( . ), slicing, and (iv), we find that
ˆΩ 1 [d(x)] sp dx À ˆΩ|U (x, •)| p W s+1/p,p ((0,1)) dx À |u| p W s,p ,
so that ( . ) holds, as claimed.
Step . Construction of W (u) and Φ(ϕ). Fix some
ζ ∈ C 1 c ([0, 1); R) such that ζ(0) = 1. Let Φ(ϕ)(x, ε) := ζ(ε)ϕ(x), ∀ x ∈ Ω, ∀ ε ∈ [0, 1]. Clearly, ϕ → Φ is linear and continuous from Lip 0 (Ω) into Lip 0 (Ω × [0, 1)).
Let Π ∈ C ∞ (R N ; R N ) be such that Π(x) = x/|x| when |x| ≥ 1/2. (Here, | | stands for the Euclidean norm.) Let U be as in the previous step. Set W := Π • U . If u ∈ R, then u ∈ W 1,N -1 and, by the construction of U , we have ||∇U (•, ε)|| N -1 ≲ ||∇u|| N -1 . By the formula for W , we also have ||∇W (•, ε)|| N -1 ≲ ||∇u|| N -1 . On the other hand, we have U (•, ε) → u in W 1,N -1 as ε → 0 and U ∈ L ∞ . By Theorem . and the fact that Π • u = u, we find that W (•, ε) → u in W 1,N -1 as ε → 0. Therefore, ( . ) holds, and thus ( . ) holds when u ∈ R and W = W (u).
Step . Conclusion. It remains to prove that the right-hand side of ( . ) is continuous from R (with the distance inherited from W s,p ) into [Lip 0 (Ω)] * . Consider the open set
V = V (u) :={(x, ε) ∈ Ω × (0, 1); |U (x, ε)|> 1/2} ⊂{(x, ε) ∈ Ω × (0, 1); 0 < ε < min{d(x), 1}}.
In V , we have |W |= 1, and thus E j = 0, ∀ j (see the proof of ( . )). On the other hand, for (x, ε) ∈ V , we have, by the construction of U (see Section . )
|∇W (x, ε)| ≲ |∇U (x, ε)| ≲ (1/ε) ||u|| ∞ ≲ 1/ε. ( . )
Using: (i) ( . ); (ii) ( . ); (iii) the fact that, when (x, ε) ∉ V , we have ε ≥ d(x); (iv) ( . ), we find that
|Ju(ϕ)| ≲ ||∇Φ|| ∞ ˆΩ ˆε≥d(x) (1/ε N ) dεdx ≲ ||∇ϕ|| ∞ ˆΩ 1/[d(x)] N -1 dx ≲ ||∇ϕ|| ∞ |u| (N -1)/s W s,p , ( . )
where the last line uses ( . ), Hölder's inequality, and the assumption sp ≥ N − 1.
In order to complete the proof, it suffices to prove the continuity of R ∋ u ↦ Ju(ϕ) for a fixed ϕ. Consider a sequence (u j ) ⊂ R converging in W s,p to some u. By: (i) trace theory; (ii) slicing; (iii) the converse to the dominated convergence theorem, there exist a subsequence, still denoted (u j ), and a function F ∈ L p (Ω) such that |U j (x, •)| W s+1/p,p ≤ F (x) for each j and a. e. x ∈ Ω. An inspection of the proof of ( . ) shows that, for each j and a. e. x, we have d(x) ≳ [F (x)] -1/s , and therefore the corresponding sets V (u j ) satisfy
[Ω × (0, 1)] \ V (u j ) ⊂ Z := {(x, ε) ∈ Ω × (0, 1); ε Á [F (x)] -1/s }.
( . )
(Note that Z does not depend on the (sub)sequence (u j ).) Using: (i) ( . ); (ii) the fact that, clearly,
W (u j )(x, ε) → W (u)(x, ε), ∀ x, ∀ ε; (iii) ( . ); (iv) the fact that (x, ε) → 1/ε N ∈ L sp/(N -1) (Z) ⊂ L 1 (Z); (v)
dominated convergence, we find that (possibly along a subsequence) (J(u j )(ϕ)) converges to the right-hand side of ( . ) corresponding to u. Finally, the uniqueness of the limit implies that convergence holds for the full original sequence.
Moreover, using the above domination and the explicit construction of Φ(ϕ), the continuity of W s,p (Ω; S N -1 ) u → Ju ∈ [Lip 0 (Ω)] * is routine. The estimate ( . ) easily follows from ( . ) and a limiting argument.
QED
. . The range of the distributional Jacobian
Recall that we consider maps u : Ω ⊂ R N → S N -1 , with N ≥ 2. The main result here is the following.
Theorem . . Assume ( . ).
. If u ∈ W s,p (Ω; S N -1 ), then there exist points P j , N j ∈ Ω, j ≥ 1, such that
Σ j |P j − N j | ≲ |u| (N -1)/s W s,p , ( . )
Ju = C N Σ j (δ P j − δ N j ) in D (Ω). ( . )
. Conversely, given points P j , N j ∈ Ω satisfying ( . ) and Σ j |P j − N j | < ∞, there exists u : Ω → S N -1 such that, for every s, p satisfying sp = N − 1:
(i) u ∈ W s,p (Ω; S N -1 ); (ii) ( . ) holds; (iii) |u| p W s,p ≲ inf{ Σ k | r P k − r N k | ; Σ k (δ r P k − δ r N k ) = Σ j (δ P j − δ N j ) in D (Ω) }.
See [ , ] for the general case, and [ , , ] for special cases. When s = 1 and p = N -1, the above theorem is a special case of the main result in Alberti, Baldo, and Orlandi [ , Theorem . ], but obtaining Theorem . from [ , Theorem . ] requires an additional argument. The proof involves two important ingredients: a duality formula and a dipole construction, both due to Brezis, Coron, and Lieb [ ], complemented with a dipole insertion technique due to Bethuel [ ].
Remark . . Note the range N − 1 ≤ sp < N in item , and the range sp = N − 1 in item . Thus, item is not the exact converse of item . When N = 2 (i. e., we consider S 1 -valued maps), the exact converse of item is known (see Bousquet [ ]), i. e., when 1 < sp < 2, it is possible to characterize the set {Ju; u ∈ W s,p (Ω; S 1 )}. The counterpart of the result in [ ] is not known when N > 2.
Elements of proof of Theorem . . Step . A pseudometric and a duality formula. Set, for P, N ∈ Ω,
d(P, N ) = min{|P -N |, dist(P, ∂Ω) + dist(N, ∂Ω)}. ( . )
Clearly, d is a pseudometric, and, for each P, N ∈ Ω: (i) either d(P, N ) = |P -N | and the interior of the segment [P, N ] is completely contained in Ω, or: (ii) there exist points
P 1 , N 1 ∈ ∂Ω such that |P -N 1 |= d(P, ∂Ω), |P 1 -N |= d(N, ∂Ω), and d(P, N ) = |P -N 1 |+|P 1 -N |.
Moreover, in the latter case, if, for example, P ∈ Ω, then the line segment [P, N 1 ] is normal to ∂Ω at N 1 , and its interior is completely contained in Ω.
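For instance, in Ω = (0, 1) N , take P = (δ, 1/2, . . . , 1/2) and N = (1 − δ, 1/2, . . . , 1/2) with 0 < δ < 1/4. Then |P − N | = 1 − 2δ, while dist(P, ∂Ω) + dist(N, ∂Ω) = 2δ, so that d(P, N ) = 2δ: it is "cheaper" to connect each point to the boundary than to connect the two points to each other inside Ω, and this is exactly what the minimal connection L introduced below takes into account.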
Given P j , N j ∈ Ω, 1 ĺ j ĺ m, set L((P j ), (N j )) := min σ∈Sm j d(P j , N σ(j) ).
( . )
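For orientation, ( . ) is a finite assignment problem over the pseudometric d, so for a small number of points it can be evaluated by brute force. Here is a minimal illustrative sketch (ad hoc names; Ω is assumed to be the unit ball, so that dist(x, ∂Ω) = 1 - |x|):

    from itertools import permutations
    import numpy as np

    def dist_to_boundary(x):
        # distance to the boundary of the (assumed) unit ball
        return 1.0 - np.linalg.norm(x)

    def d(P, N):
        # the pseudometric ( . ): connect P to N directly, or each point to the boundary
        return min(np.linalg.norm(P - N), dist_to_boundary(P) + dist_to_boundary(N))

    def minimal_connection(Ps, Ns):
        # L((P_j), (N_j)) from ( . ), by brute force over all permutations
        m = len(Ps)
        return min(sum(d(Ps[j], Ns[sigma[j]]) for j in range(m))
                   for sigma in permutations(range(m)))

For larger collections, the same minimum can be computed with a linear assignment solver (e. g., scipy.optimize.linear_sum_assignment applied to the cost matrix (d(P i , N j )) i,j ).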
It is clear from the definitions ( . ) and ( . ) that, given initial collections (P j ), (N j ), 1 ĺ j ĺ m, we may find new collections, still denoted, for simplicity, (P k ), (N k ) (containing, possibly, more points), such that new points
(δ P k -δ N k ) = initial points (δ P j -δ N j ) in D (Ω), ( . ) L((P j ), (N j )) = L((P k ), (N k )) = k |P k -N k |, ( . )
for each k, the points P k , N k are distinct, at least one of them is in Ω, and, if
P k ∈ ∂Ω or N k ∈ ∂Ω, the segment [P k , N k ] is normal to ∂Ω. ( . )
Note that, if ϕ ∈ Lip 0 (Ω) and P, N ∈ Ω, then
ϕ(P ) -ϕ(N ) ĺ d(P, N )|ϕ| Lip , ( . )
For further use, let us note the following consequence of ( . ) combined with ( . ) below and ( . )-( . ):
||Ju|| [Lip 0 (Ω)] * = C N min k |P k -N k |; P k , N k ∈ Ω,Ju = C N (δ P k -δ N k ) , ∀ u ∈ R 0 .
( . )
Step . Proof of item . We start with a preliminary remark. If u ∈ R then (in view of the definition ( . ) of Ju), possibly after adding fictitious points on ∂Ω, we may always write
Ju = C N j (δ P j -δ N j ) in D (Ω) ( . )
(where the sum contains a finite number of terms).
Let now u ∈ W s,p (Ω; S N -1 ). Let u 0 be any constant in S N -1 . Consider a sequence
(u i ) iľ1 ⊂ R such that u i → u in W s,p , |u 1 | W s,p À |u| W s,p , and ||Ju i+1 -Ju i || [Lip 0 (Ω)] * ĺ 2 -i |u| (N -1)/s W s,p
, ∀ i ľ 1 (see Theorem . ). Combining the observation ( . ) with ( . ), ( . )-( . ), and the estimate ( . ), we find that there exist sequences
(P k,i ) k , (N k,i ) k such that Ju i+1 -Ju i = C N k (δ P k,i -δ N k,i ) in D (Ω), ∀ i ľ 0, ( . ) k |P k,0 -N k,0 |À |u| (N -1)/s W s,p , ( . ) k |P k,i -N k,i |À 2 -i |u| (N -1)/s W s,p , ∀ i ľ 1. ( . )
Combining ( . )-( . ) with the continuity of J, we find that ( . )-( . ) hold.
Step . Partial proof of item : setting and strategy. We present the proof of a weaker result: we let N ľ 3 and we fix 0 < s < N -1 and 1 < p < ∞ such that sp = N -1. (For an "all couples" (s, p) procedure, based on a diagonal process and Gagliardo-Nirenberg, in a similar context, see [ , proof of Theorem . ].) For such s, p, and N , and all sequences (P j ), (N j ) as in item , we prove the existence of a map u satisfying ( . ). For the proof of item in full generality, we refer the reader to [ ].
For pedagogical purposes, we temporarily assume that
k |P k -N k | 1/p < ∞; ( . )
we will remove this assumption in the final step.
It will be more convenient to work in the full space R N , N ľ 3. More precisely, given sequences
(P k ) kľ1 , (N k ) kľ1 ⊂ R N such that k |P k -N k |< ∞ and P k = N k , ∀ k,
and a point a ∈ S N -1 , we will construct a map u : R N → S N -1 such that:
a) u -a ∈ W s,p (R N ). b) Ju = C N k (δ P k -δ N k ) in D (R N ). c) ||u -a|| p W s,p À k |P k -N k |< ∞.
Clearly, the existence of such a map implies item of the theorem.
The map u will be obtained as the limit of a sequence of maps, each iterative step consisting of dipoles insertions.
Step . The dipole construction. We fix a map
f ∈ C ∞ ([0, 1]; [0, 1]) such that f (0) = f (1) = 0, f ′ (0) > 0, f ′ (1) < 0, and f (θ) > 0, ∀ θ ∈ (0, 1).
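(For instance, f (θ) := θ(1 -θ) is one admissible choice of profile.)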
Given a line segment S in R N , say, in order to simplify the statement, S = [0, Le N ], and 0 < ε ĺ L, there exists a map
u ε ∈ C ∞ (R N \ {0, Le N }; S N -1 ) such that
u ε ∈ R 0 , ∀ ε, ( . )
|u ε | q W σ,q À L, ∀ σ, q such that σq = N -1, ∀ ε, ( . )
Ju ε = C N (δ 0 -δ Le N ) in D (R N ), ∀ ε, ( . )
supp(u ε -a) ⊂ {(x , x N ) ∈ R N -1 × R; 0 ĺ x N ĺ L, |x |ĺ Lεf (x N /L)}, ∀ ε ( . )
(see Lemmas . and . ). A similar conclusion holds for an arbitrary segment. A noticeable fact is that the estimate ( . ) involves the length of the segment.
Step . The iterative construction. We will construct a sequence (v k ) such that v 0 = a and
Jv m = C N nĺm (δ Pn -δ Nn ) in D (R N ), ∀ m, ( . ) ||v m -v m-1 || p W s,p À |P m -N m |, ∀ m ľ 1, ( . ) ||v m || p W s,p -||v m-1 || p W s,p À |P m -N m |, ∀ m ľ 1. ( . )
Assuming ( . )-( . ) and using the temporary assumption ( . ), we find that the limiting map u := a + mľ1 (v m -v m-1 ) has all the desired properties.
To start with, we let, as in the dipole construction,
v 1 ∈ C ∞ (R N \ {P 1 , N 1 }; S N -1 ) satisfy Jv 1 = C N (δ P 1 -δ N 1
) and the estimates ( . )-( . ). (This is possible, for su ficiently small ε, since P 1 = N 1 .) Assume next that we were able to construct v 1 , . . . , v k-1 such that ( . )-( . ) hold, and, in addition, there exists an increasing sequence of finite sets
A m ⊂ R N , 1 ĺ m ĺ k-1, such that: v m is smooth in R N \ A m , ( . ) v m ∈ R 0 , ( . ) for each x ∈ A m , there exists a non-empty open conical cap C x with vertex x such that v m (x) = a in C x . ( . )
Note that these assumptions are satisfied when m = 1, with A 1 = {P 1 , N 1 }. We next construct v k according to the position of P k and N k with respect to the set A k-1 .
Case . [P k , N k ] ∩ A k-1 = ∅. We first modify v k-1 in a convenient small open neighborhood V of [P k , N k ], such that the modified map, still denoted v k-1 , continues to satisfy ( . )-( . ), and v k-1 = a in V . (Intuitively, this is possible since a segment in R N , with N ľ 3, has zero W s,p -capacity.) The rigorous existence of such a modified map is established in Lemma . . We next consider, with an abuse of notation, the map
v k = v k-1 , in R N \ V u ε , in V,
where u ε is the map in the dipole construction corresponding to the singularities P k , N k . Clearly, in view of ( . )-( . ) and of the Brezis-Lieb type Lemma . , for small ε the map v k satisfies ( . )-( . ), with
A k := A k-1 ∪ {P k , N k }.
Case .
[P k , N k ] ∩ A k-1 ≠ ∅.
In this case, we may construct a finite chain
D = [Q 1 , Q 2 ] ∪ . . . ∪ [Q t-1 , Q t ]
without self intersections and such that:
(i) If x ∈ D \ {P k , N k }, then x ∉ A k-1 ∪ {P k , N k }. In particular, Q 2 , . . . , Q t-1 ∉ A k-1 ∪ {P k , N k }.
(ii) If P k ∈ A k-1 , then, near P k , the segment [Q 1 , Q 2 ] is contained in C P k , where C P k is as in ( . ). Similarly for N k .
(iii) j |Q j+1 -Q j |À |P k -N k |.
We next modify v k-1 in a neighborhood of D \ {P k , N k } such that: (j) ( . )-( . ) still hold; (jj) v k-1 equals a in a neighborhood of D \ {P k , N k }.
The construction of the modified map and the corresponding estimates are established in Lemma . . Finally, we insert (t-1)-dipoles u j,ε j ,
1 ĺ j ĺ t -1, satisfying Ju j,ε j = C N (δ Q j+1 -δ Q j )
. By the multi-sequences Brezis-Lieb lemma . , for convenient small ε j , the new map
v k (x) = a + u j,ε j -a, in supp(u j,ε j -a) v k-1 -a, in R N \ ∪ j supp(u j,ε j -a)
has all the required properties, with
A k = A k-1 ∪ {Q 1 , . . . , Q t }.
Step . Removing the assumption ( . ). Let S :
= k |P k -N k |. We consider integers 1 = j 0 < j 1 < j 2 < . . . such that j k-1 ĺj<j k |P j -N j |À 2 -k S, ∀ k.
We let v 0 = a and construct, as explained in
Step , Case (using several chains and the multi-sequences Brezis-Lieb Lemma . ), a sequence
(v k ) such that Jv k = C N j<j k (δ P j -δ N j ) in D (R N ), ∀ k, ||v k -v k-1 || p W s,p À j k-1 ĺj<j k |P j -N j |À 2 -k S, ∀ k ľ 1, ||v k || p W s,p -||v k-1 || p W s,p À j k-1 ĺj<j k |P j -N j |À 2 -k S, ∀ k ľ 1, v k is smooth in R N \ A k , for some finite A k , v k ∈ R 0 , for each x ∈ A k , there exists a non-empty open conical cap C x with vertex x such that v k (x) = a in C x .
Then (v k -a) converges in W s,p to some map v with v -a ∈ W s,p and such that Jv = C N j (δ P j -δ N j ).
QED
. . Inserting singularities
Recall that we consider maps u : Ω ⊂ R N → S N -1 , with N ľ 2. The main result here is due to Bethuel when s = 1 and N ľ 3 [ ].
Theorem . . Let u ∈ R 0 .
. Let N ľ 3. There exists some map v ∈ R 0 such that Jv = 0 in D (Ω) and
|v -u| p W s,p À ||Ju|| [Lip 0 (Ω)] * , ∀ 0 < s < N -1, 1 < p < ∞ s. t. sp = N -1. ( . )
. Let N = 2. There exists some map v ∈ R 0 such that Jv = 0 in D (Ω) and
|v -u| p W s,p À ||Ju|| [Lip 0 (Ω)] * , ∀ 0 < s ĺ 1, 1 ĺ p < ∞ s. t. sp = 1. ( . )
Elements of proof. For simplicity, we consider only the case where N ľ 3, and we prove the estimate ( . ) for a fixed couple (s, p). (For N = 2, see [ , Proposition . , Proposition . ].) The proof is very similar to the one of item in Theorem . . Let u ∈ R 0 and the sets U , A as in ( . ). We may assume that Ju = 0. Write, as in ( . ),
Ju = C N k (δ P k -δ N k )
, where the points P k , N k satisfy ( . )-( . ).
Step . Modification of u near its singularities. The purpose of this step is to obtain a new map, r u ∈ R 0 , such that
J r u = Ju in D (Ω), ( . )
||r u -u|| p W s,p À ||Ju|| [Lip 0 (Ω)] * , ( . )
near each of its singularities in Ω, r u satisfies the assumption (i) of Lemma . . ( . )
This modification is performed in Lemma . . For this step, we require 0 < s < N -1 and we exclude the couple (s, p) = (N -1, 1). For the record, it is possible to extend the validity of Lemma . to this couple, if, for each singularity x ∈ A, we have deg(u, x) = 0.
Step . Dipole insertion, and conclusion. By Step , we may assume that ( . ) holds for u (instead of r u). We next construct a map v ∈ R 0 such that
Jv = 0 in D (Ω), ||v -u|| p W s,p À k |P k -N k |. ( .
)
The construction is performed using the procedure explained in Steps and of the proof of Theorem . , by inserting, at each singularity P k (respectively N k ) a dipole of degree -1 (respectively +1).
We complete the proof by noting that ( . ), ( . ), and ( . ) imply ( . ). QED
Theorem . . Let N ľ 1, 0 < s ĺ 1, 1 ĺ p < ∞ be such that N -1 ĺ sp < N . Then, for u ∈ W s,p (Ω; S N -1 ), we have
u ∈ C ∞ (Ω; S N -1 ) W s,p ⇐⇒ Ju = 0.
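For instance, for N -1 ĺ sp < N , the map u(x) := x/|x| belongs to W s,p (B 1 (0); S N -1 ) (see the examples of homogeneous maps in Appendix # ) and Ju is a non-zero multiple of δ 0 (the origin is a point singularity of degree 1); by the theorem, such a u cannot be approximated in W s,p by maps in C ∞ (B 1 (0); S N -1 ).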
Elements of proof. The implication " =⇒ " follows from Theorem . and the fact that, when u ∈ C ∞ (Ω; S N -1 ), we have Ju = 0. The remaining part of the proof is devoted to the reverse implication. In Step , we limit ourselves to the case 0 < s < 1, since we rely on our constructive proof of Theorem . for 0 < s < 1. However, as explained in [ ], we could have completed Step even for s = 1, by combining our argument with Bethuel's constructive proof of Theorem . when
s = 1 [ ].
Step . " ⇐= " holds when: (i) sp = N -1; (ii') 0 < s < N -1 when N ľ 3; (ii") 0 < s ĺ 1 when N = 2. By Theorem . , there exists a sequence (u i ) ⊂ R 0 such that u i → u in W s,p . By Theorem . and the assumption Ju = 0, we have Ju i → 0 in [Lip 0 (Ω)] * as i → ∞. By Theorem . , there exists a sequence (v i ) ⊂ R 0 such that Jv i = 0, ∀ i, and v i → u in W s,p . In order to complete this step, it remains to prove that
[v ∈ R 0 , Jv = 0] =⇒ v ∈ C ∞ (Ω; S N -1 ) W s,p .
( . )
In order to prove ( . ), we argue as in the proof of Lemma . . Since deg(v, x j ) = 0 near each singularity x j of v, we find that the restriction of v to a small sphere S δ (x j ) around x j is homotopic to a fixed constant a ∈ S N -1 , and then, for every µ > 0, we may construct, as in the proof of Lemma . , a smooth map r v : Ω → S N -1 such that ||r v -v|| W s,p ĺ µ and, near each x j , r v = a. This construction completes Step .
Step . " ⇐= " holds when 0 < s < 1 and N -1 < sp < N . (Sketch of proof.) We work with |x|:= ||x|| ∞ . Let Ω δ := {x ∈ Ω; d(x, ∂Ω > δ)}. We will prove that, for each δ > 0, u |Ω δ can be approximated in W s,p with maps in C ∞ (Ω δ ; S N -1 ). The same conclusion on Ω will then follow by a standard argument based on domain di feomorphisms. In order to simplify the formulas, we work in Ω rather than Ω δ , and then we may assume that u ∈ W s,p (U ; S N -1 ) and Ju = 0 in D (U ), where U is the larger domain {x ∈ R N ; d(x, Ω) < δ}. By Step , we have
u ∈ C ∞ (U ; S N -1 ) W σ,q , ∀ 0 < σ < N -1, 1 < q < ∞ s. t. σq = N -1. ( . )
Consider an extension of u, denoted f , to R N , such that f ∈ W s,p (R N ; R N ). (We do not claim that the extension is S N -1 -valued.) Let 0 < ε < δ/2. Using the notation in the proof of Theorem . , formula ( . ) holds for f (by Theorem . ). Moreover, as explained at the beginning of the Section . , for a. e. t ∈ Q ε , f satisfies ( . ) with j = N -1 and ( . ) with m = N -1. By
Step , there exists a sequence (u i ) ⊂ C ∞ (U ; S N -1 ) such that u i → u in W (N -1)/p,p (U ; S N -1 ). By slicing [ , Lemma . ], possibly after passing to a subsequence, still denoted (u i ), for a. e. t ∈ Q ε , we have
u i|C N -1,t,ε ∩U → u |C N -1,t,ε ∩U in W s,p .
( . )
Consider now a t ∈ Q ε such that ( . ), ( . ), and ( . ) hold, and any fixed cube C ∈ C N,t,ε contained in U . Let v i := u i|∂C , v := u |∂C . Note that v has a continuous representative (since ( . ) holds with j = N -1). By the above and a homotopy argument, we have
0 = lim i deg(u i , S ε (t)) = deg(v, S ε (t)), ( .
)
where the second equality follows from the embedding W s,p (∂C) → C 0 and the stability of the Brouwer degree under uniform convergence.
Consider now the smoothing process described in Section . : on the cubes C ∈ C N,t,ε contained in U and such that ( . ), ( . ), and ( . ) hold, we may approximate u with a (N -1)-homogeneous map w such that its restriction on the boundary of each C is Lipschitz and (by the above stability argument) has zero degree. By Lemma . , we may approximate, in W s,p , w (and thus u) with Lipschitz S N -1 -valued maps. By an additional smoothing argument, we may approximate, in W s,p , u with maps in C ∞ (Ω; S N -1 ).
QED
For the record, let us note that the second step above has little to do with sphere-valued maps. It reveals a more general scheme that we formalize in the next statement.
Proposition . . Let:
(i) N ľ 2.
(ii) 0 < s ĺ 1, 1 ĺ p < ∞, 1 ĺ j < N such that j < sp < j + 1.
(iii) If j = N -1, Ω is any Lipschitz bounded domain. If j < N -1, we take Ω = (0, 1) N .
Set σ := j/p < s. Let u ∈ W s,p (Ω; N ). Then
u ∈ C ∞ (Ω; N ) W s,p ⇐⇒ u ∈ C ∞ (Ω; N ) W σ,p
. Sketch of proof. " =⇒ " is clear. For the reverse implication, we argue essentially as in Step above, and let U as there. First, we introduce an ad hoc notation. Let r
C N = r C t,N,ε := ∪{C ∈ C N ; C ⊂ U } and, for 0 ĺ j ĺ N -1, r C j := C j ∩ r C N . Let t ∈ Q ε be such that ( . )-( . ) hold and u i | r C j → u | r C j in W s,p , ∀ 0 ĺ j ĺ N -1. ( . )
By the stability argument leading to ( . ), for every face C ∈ r C j+1 , u |∂C : ∂C → N is null homotopic. By the smoothing process described in Section . and the multi-sequences Brezis-Lieb Lemma . , we may approximate in W s,p , on r C j , u | r C j with Lipschitz maps, null homotopic on each ∂C with C as above. We now invoke Lemma . and approximate, on each C, the homogeneous extension of u |∂C to C with a Lipschitz map. Then apply again Lemma . to obtain a global W s,p -approximation. To summarize, we have sketched the argument of the fact that, for a "generic" t ∈ Q ε , the homogeneous extension H j+1 of u | r C j to r C j+1 can be approximated, in W s,p , with Lipschitz maps. If j = N -1, then we found that u itself may be approximated with Lipschitz (and then smooth) N -valued maps, and we are done. When j < N -1, we use the fact that r C N is, up to an affine transformation, a cube, and apply the singularities removing technique described in Section . to approximate, in W 1,q (and thus in W s,p , by Gagliardo-Nirenberg),
H N • • • • • H j+1 (u | r C j )
with Lipschitz N -valued maps and thus, finally, u with smooth N -valued maps. QED
. . Overview of the higher co-dimensional case
We let here N > k ľ 1 and consider maps u : Ω ⊂ R N → S k . The case where N = k + 1 corresponds to Sections . -. ; here, we rather focus on the case where N ľ k + 2. For the exposition in this section and beyond, we refer the reader to Alberti [ ] and[
. . Jacobian and singularities
We will consider a slightly different route from the one in Section . . We start with the analytical definition of the distributional Jacobian (analogue of ( . )) rather than its geometric definition (analogue of ( . )). For
u ∈ W 1,k (Ω; S k ), set u ω k := k+1 j=1 (-1) j-1 u j du 1 ∧ . . . ∧ du j ∧ . . . ∧ du k+1 ∈ L 1 (Ω; Λ k ), ( . )
where ω k is the standard volume form ω k on S k (ω k corresponds to the choice N = k + 1 in ( . )). Note that, in particular, the definition ( . ) makes sense for u ∈ R N -k-1 . Then define
Ju := 1 k + 1 d(u ω k ) ∈ D (Ω; Λ k+1 ). ( . )
The above definitions are consistent with the ones in Section . . Let us note that, in the previous sections, it turned out to be more convenient to identify the N -form Ju with its density (a scalar distribution). The same situation occurs in any dimension: it will be more convenient to work with the (N -k -1)-form * Ju (where * stands for the Hodge operator) rather than with the (k + 1)-form Ju. In this perspective, and with the notation of the present section, we should have written, on the left-hand side of ( . ), * Ju(ϕ) rather than Ju(ϕ).
We start with a fundamental example connecting Jacobians and singularities; see Jerrard and Soner [ , Section ] (and, also, Bousquet [ ]) for the first item, and Alberti, Baldo, and Orlandi [ , Theorem . ] for the second one.
Theorem . . . Let Γ be a smooth connected oriented (N -k -1)-submanifold Γ without boundary (in Ω). Let u ∈ W 1,k (Ω; S k ) ∩ C(Ω \ Γ). Then, with m := deg(u, Γ), we have * Ju = C k+1 m Γ in D (Ω; Λ N -k-1 ), ( . )
where C k+1 is the volume of the unit ball in R k+1 and Γ is identified, as usual, with an (Nk -1)-current.
. Given any connected Γ of the form Γ = r Γ ∩ Ω, with r Γ ⊂ R N a smooth closed oriented (N -k -1)-manifold, and any m ∈ Z, there exists u ∈ W 1,k (Ω; S k ) such that ( . ) holds.
Remark . . Some comments are in order concerning the statement of Theorem . . a) The requirement that Γ is oriented is important only when N -k -1 ľ 2. Indeed, points and curves are always orientable, but not, for example, surfaces in R 4 .
b) Given an oriented submanifold Γ in R N , we may always choose a coherent orientation of the normal spaces
: if {e 1 , . . . , e N -k-1 } is a direct basis of T x Γ, then a basis {e N -k , . . . , e N } of N x Γ is direct if {e 1 , . . . , e N } is a direct basis of R N .
c) The degree m = deg(u, Γ) is defined as follows. Let x ∈ Γ. Consider a (small) k-dimensional (hyper)sphere, denoted S ε (x), of radius ε, in the (geometric) normal (k + 1)-plane to Γ at x. The map u |Sε(x) : S ε (x) → S k is continuous. By a homotopy argument, its Brouwer degree, denoted here m, does not depend on x or (small) ε.
d) The meaning of ( . ) is the following:
ˆΩ dϕ ∧ (u ω k ) = (-1) N -k (k + 1)C k+1 m ˆΓ ϕ, ∀ ϕ ∈ Lip 0 (Ω; Λ N -k-1 ). ( . )
e) Item cannot be an exact converse to item , in the sense that the condition that Γ has no boundary in Ω is not sufficient for the existence of u as in item . Here is an example. Let Ω := B 10 (0) \ B 1 (0) and let Γ be the oriented segment from (0, 0, 1) to (0, 0, 10). Then there exists no u ∈ C(Ω \ Γ; S 1 ) such that deg(u, Γ) ≠ 0 (and thus, in particular, for this Γ and for m = 1, the conclusion of item does not hold). Indeed, argue by contradiction, and consider some u ∈ C(Ω \ Γ; S 1 ), such that deg(u, Γ) = m ≠ 0. We note that every circle
C ε (x) as in the definition of deg(u, Γ) is homotopic, in Ω \ Γ, to the circle x 2 1 + x 2 2 = 1 x 3 = -2
, which in turn is homotopic to a point. Via a homotopy argument, we find that m = 0 -a contradiction.
We next present a limitation of the use of the Jacobian as a "singularities detector" (see [ , Section . . ]).
Proposition . . Let u ∈ W 1,k (Ω; S k ) ∩ C(Ω \ Γ), where Γ ⊂ Ω is a closed set such that H N -k-1 (Γ) = 0. Then Ju = 0.
Remark . . Proposition . implies that, if u(x) = H(x/|x|), where x ∈ Ω ⊂ R 4 , 0 ∈ Ω, and H ∈ C 1 (S 3 ; S 2 ), then Ju = 0 in D (Ω). (Note that this u belongs to W 1,p (Ω; S 2 ), ∀ p < 4, by Lemma . a).) This implies that Ju does not detect lower dimensional "topological singularities," since H may carry a non-trivial Hopf degree.
On the other hand, if H is topologically non-trivial, then, by the argument in Section . , u cannot be strongly approximated, in W 1,p (Ω; S 2 ), 3 ĺ p < 4, with smooth maps. This implies that the condition Ju = 0 is not sufficient for approximability with smooth maps.
. . The distributional Jacobian. Disintegration (slicing)
We present here the higher-dimensional counterpart of Section . . We assume that
0 < s < ∞, 1 ĺ p < ∞, k ĺ sp < k + 1. ( . )
The main result here is the following (see [ ]).
Theorem . . Assume ( . ). Then the map
R N -k-1 u J Þ -→ Ju ∈ D (Ω; Λ k+1 ) ( . )
has a unique extension by continuity, still denoted J, to W s,p (Ω; S k ).
In addition, Ju belongs to the space [Lip 0 (Ω; Λ N -k-1 )] * , the mapping W s,p (Ω; S k ) u → Ju ∈ [Lip 0 (Ω; Λ N -k-1 )] * is continuous, and we have the estimate
||Ju|| [Lip 0 (Ω;Λ N -k-1 )] * À |u| k/s W s,p .
( . )
The proof follows essentially Steps -of the proof of Theorem . .
We next connect the distributional Jacobian defined above with the definition in Theorem . . For simplicity, we let Ω = (0, 1) N . Set
I(N -k -1, N ) := {α ⊂ {1, . . . , N }; Card α = N -k -1}. For α ∈ I(N -k -1, N ), set α := {1, . . . , N } \ α ∈ I(k + 1, N ). Let ϕ ∈ C ∞ c (Ω; Λ N -k-1
). Then we may write
ϕ = α∈I(N -k-1,N ) ϕ α dx α = α∈I(N -k-1,N ) (ϕ α ) xα (x α ) dx α .
Here, dx α denotes the canonical (N -k -1)-form induced by the coordinates x j , j ∈ α, and
(ϕ α ) xα (x α ) := ϕ α (x α , x α ) belongs to C ∞ c ((0, 1) N -k-1 ; R) (for fixed x α ). Given u ∈ W s,p
(Ω; S k ), by slicing (Corollary . ), for a. e. x α ∈ (0, 1) N -k-1 the partial map (u α ) xα := x α → u(x α , x α ) belongs to W s,p ((0, 1) k+1 ; S k ). Assuming that s, p satisfy ( . ), for such x α the distributional Jacobian J(u α ) xα (or rather, as we have explained, * J(u α ) xα ) is welldefined (via Theorem . ) as an element of D (Ω).
We have the following disintegration result.
Proposition . . Assume ( . ). Let Ω = (0, 1) N and u ∈ W s,p (Ω; S k ). Then, with appropriate ε(α) ∈ {-1, 1} depending only on k, N , and
α ∈ I(N -k -1, N ), we have * Ju(ϕ) = α∈I(N -k-1,N ) ε(α) ˆ(0,1) N -k-1 * J(u α ) xα ((ϕ α ) xα ) dx α . ( . )
When s ľ 1, ( . ) follows from the Fubini theorem. The case where 0 < s < 1 is more delicate. See [ , Lemma . ] for a proof when s = 1/p and k = 1. (The argument there works in the general case.)
. . The range of the distributional Jacobian
I will try not to appeal here to the language of geometric measure theory. For the same story (moderately) using this language, see [ , Chapter , Chapter ]. Let n := N -k ĺ N -1. Consider a C 1 oriented n-dimensional submanifold Σ of Ω, and a Borel set A ⊂ Σ such that H n (A) < ∞. Then A acts by integration on smooth compactly supported n-forms, through the formula
A(ζ) := ˆA ζ, ∀ ζ ∈ C ∞ c (Ω; Λ n ). ( . )
Note that the integral makes sense since A is oriented.
This allows us to identify A with a linear object (a distribution, or rather a current), and thus allow operations as (infinite) sums.
Consider the set
F n := T = j A j ; A j ⊂ Ω is a Borel subset of a C 1 oriented n-dimensional submanifold Σ j of Ω, j H n (A j ) < ∞ . ( . )
Given T ∈ F n (or, more generally, a distribution acting on smooth compactly supported n-forms), we define the boundary ∂T of T through the formula
∂T (ϕ) := T (dϕ), ∀ ϕ ∈ C ∞ c (Ω; Λ N -1 ). ( .
)
The terminology is justified by the fact that, when T is the integration over a compact oriented manifold with boundary, ∂T is the integration over the geometric boundary ∂T of T . (In this case, ( . ) is simply the Stokes theorem.)
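As a simple illustration of ( . ): if T is integration over an oriented segment going from a point N to a point P (so that T ∈ F 1 ), then, for ϕ ∈ C ∞ c (Ω), ∂T (ϕ) = T (dϕ) = ´[N,P ] dϕ = ϕ(P ) -ϕ(N ), i. e., ∂T = δ P -δ N ; sums of such boundaries are exactly the dipole combinations appearing in Theorem . when N = k + 1.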
We may now present the higher co-dimensional counterpart of Theorem . .
Theorem . . Assume ( . ).
. If u ∈ W s,p (Ω; S k ), then there exists some
T = j A j ∈ F N -k such that j H N -k (A j ) À |u| k/s W s,p , ( . ) * Ju = C k+1 ∂T in D (Ω; Λ N -k-1 ). ( . )
. Conversely, given T = j A j ∈ F N -k , there exists u : Ω → S k such that, for every s, p
satisfying sp = N -1: (i) u ∈ W s,p (Ω; S k ); (ii) ( . ) holds; (iii) |u| p W s,p À inf k H N -k ( r A k ); ∂ k r A k = ∂ j A j in D (Ω; Λ N -k-1 ) . ( . )
Already the fact that Theorem . is the special case of Theorem . with N = k + 1 is not obvious (but not difficult to prove). When s = 1, item is due to Alberti, Baldo, and Orlandi [ , Theorem . ], who extended to general maps an argument relying on the co-area formula devised by Almgren, Browder, and Lieb [ ]. This main idea is illustrated, when u is sufficiently smooth, in [ , Section . ]. Item with s = 1 is also due to Alberti, Baldo, and Orlandi [ , Theorem . ]. It relies on a delicate dipole insertion technique, reminiscent of the one in Brezis, Coron, and Lieb [ ], but technically much more involved. The general case (arbitrary s, p) was obtained in [ ].
. . Characterization of the closure of smooth maps
We have the following counterpart of Theorem . (with the same references as for Theorem . ).
Theorem . . Assume that 0 < s ĺ 1, 1 ĺ p < ∞, k ĺ sp < k + 1. Moreover, when k < sp < k + 1 and N > k + 1, assume that Ω = (0, 1) N . Then, for u ∈ W s,p (Ω; S k ), we have
u ∈ C ∞ (Ω; S k ) W s,p ⇐⇒ Ju = 0.
Appendix # . Standard & less standard properties of Sobolev spaces
We present here, mostly without proofs, some of the basic properties of Sobolev maps that we use in the main text. For the full proofs, some useful general references are Triebel [ , ]. In what follows, Ω ⊂ R N is a Lipschitz bounded domain. Occasionally, it could be R N or a half space. Unless specified otherwise, the Sobolev spaces W s,p and the corresponding norms are considered with respect to Ω.
Contents
Slicing and characterization via differences
For simplicity, we only consider the case of one-dimensional slices, i. e., given a map u = u(x 1 , . . . , x N ), we connect its regularity with the one of its one-dimensional slices u(x 1 , . . . , x k-1 , •, x k+1 , . . . , x N ), k = 1, . . . , N , but similar results are available for higher-dimensional slices.
Theorem . . Let s > 0 be non-integer. Then
||u|| p W s,p ∼ ||u|| p p + N k=1 ˆ1 -1 ˆˆ|δ M te k u(x)| p |t| 1+sp dx ˙dt. ( . )
Here:
. t is a one-dimensional variable.
. M is any integer satisfying M > s.
. The integral in x is computed over the set {x ∈ Ω; [x, x + M te k ] ⊂ Ω}.
. δ h u(x) := u(x + h) -u(x), and
δ M h := δ h • . . . • δ h l jh n M times .
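For orientation, here is a rough numerical rendering of the right-hand side of ( . ) in the simplest case N = 1, M = 1 (so 0 < s < 1), for a function u on (0, 1); this is only an illustrative sketch, with ad hoc names and a crude Riemann-sum discretization:

    import numpy as np

    def difference_seminorm(u, s, p, n=400):
        # approximates the double integral in ( . ) with N = 1, M = 1;
        # the x-integral is over {x; [x, x + t] ⊂ (0, 1)}
        x = (np.arange(n) + 0.5) / n
        dx = 1.0 / n
        total = 0.0
        for t in x:                      # t > 0; negative t contributes the same amount
            mask = x + t < 1.0
            diff = u(x[mask] + t) - u(x[mask])
            total += np.sum(np.abs(diff) ** p) / t ** (1 + s * p) * dx * dx
        return 2.0 * total

    # example: a smooth function has a finite seminorm for every 0 < s < 1
    print(difference_seminorm(lambda x: np.sin(np.pi * x), 0.4, 2.0))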
For a proof, see [ , Section . . ], [ , Section . . ]. An immediate consequence of ( . ) is the following Corollary . . Let s > 0 be non-integer and 1 ĺ p < ∞. Then
u p W s,p (R N ) ∼ N k=1 ˆ u(x 1 , . . . , x k-1 , •, x k+1 , . . . , x N ) p W s,p (R) dx k . ( . )
In particular, for a.e. (x 1 , . . . , x N -1 ) ∈ R N -1 , we have u(x 1 , . . . , x N -1 , •) ∈ W s,p (R).
Here, dx k := dx 1 . . . dx k-1 dx k+1 . . . dx N .
A straightforward consequence of ( . ) and of the "converse" to the dominated convergence theorem is the following Fubini type convergence result.
Corollary . . Let s > 0 be a non-integer and 1 ĺ p < ∞. Assume that u j → u in W s,p (R N ). Then, possibly up to a subsequence, we have
u j (x 1 , . . . , x N -1 , •) → u(x 1 , . . . , x N -1 , •) in W s,p (R) for a.e. (x 1 , . . . , x N -1 ) ∈ R N -1 .
A variant of ( . ) holds for both fractional and integer Sobolev spaces. Let u : R N → R. If ω ∈ S N -1 , let ω ⊥ denote the hyperplane orthogonal to ω, and consider the partial functions ω ⊥ x → u x ω , with u x ω (t) := u(x + t ω), ∀ t ∈ R. Then we have the following Proposition . . Let s ľ 0 and 1 ĺ p < ∞. Let u : R N → R. Then
u p W s,p (R N ) ∼ ˆSN-1 ˆω⊥ u x ω p W s,p (R) dx dH N -1 (ω). ( . )
For a proof, see [ , Lemma ].
. . Sobolev embeddings
Optimal Sobolev embeddings are of the form W s,p (Ω) → W r,q (Ω), where s, r, p, q satisfy s > r ľ 0, 1 ĺ p < q ĺ ∞, s -N p = r -N q .
( . )
Note that we allow the value q = ∞.
The following result incorporates the classical Sobolev, Morrey, and Gagliardo-Nirenberg embeddings.
Theorem . . Let s, r, p, q, N satisfy ( . ). Then we have
W s,p (Ω) → W r,q (Ω) ( . )
with the following exceptions, when ( . ) does not hold.
(a) When N = 1, s is an integer ľ 1, p = 1, 1 < q < ∞, and
r = s -1 + 1/q, ( . )
we have
W s,1 (Ω) ↛ W s-1+1/q,q (Ω). ( . )
In particular, we have
W 1,1 ((0, 1)) ↛ W 1/q,q ((0, 1)), 1 < q < ∞. ( . )
(b) When N ľ 1, 1 < p < ∞, q = ∞.
. . Gagliardo-Nirenberg inequalities
Consider the estimate
u W s,p À u θ W s 1 ,p 1 u 1-θ W s 2 ,p 2 , ∀ u ∈ W s 1 ,p 1 ∩ W s 2 ,p 2 , ( .
)
where s 1 , s 2 , s ľ 0 and 1 ĺ p 1 , p 2 , p ĺ ∞ are related by
s = θs 1 + (1 -θ)s 2 1 p = θ p 1 + 1 -θ p 2
for some θ ∈ (0, 1).
( . )
With no loss of generality, we may assume that
s 1 ĺ s 2 . ( . )
The following condition plays an essential role in the validity of ( . ):
s 2 is an integer ľ 1, p 2 = 1 and s 2 -s 1 ĺ 1 - 1 p 1 . ( . )
(The latter condition can also be written in the more symmetric form s 1 -1/p 1 ľ s 2 -1/p 2 .)
We have the following result [ ] incorporating the classical Gagliardo-Nirenberg inequalities [ , ].
Theorem . . Let s, s 1 , s 2 , p, p 1 , and p 2 satisfy ( . ) and ( . ). Then, ( . ) holds if and only if ( . ) fails.
More precisely, we have . If ( . ) fails, then, for every θ ∈ (0, 1),
u W s,p À u θ W s 1 ,p 1 u 1-θ W s 2 ,p 2 , ∀ u ∈ W s 1 ,p 1 ∩ W s 2 ,p 2 . ( . )
Moreover, if s 1 < s < s 2 , then we have (in a bounded domain) the estimate
|u| W s,p À u θ W s 1 ,p 1 |u| 1-θ W s 2 ,p 2 , ∀ u ∈ W s 1 ,p 1 ∩ W s 2 ,p 2 . ( . )
. If ( . ) holds, there exists some u ∈ W s 1 ,p 1 ∩ W s 2 ,p 2 such that u ∉ W s,p , ∀ θ ∈ (0, 1).
Here is the special case of the above theorem we use the most often in this text.
Corollary . . The embedding
W s,p ∩ L ∞ → W θs,p/θ , ∀ 0 < θ < 1,
and the corresponding estimate ( . ) hold, ∀ s > 0, ∀ 1 ĺ p ĺ ∞, except when s = p = 1.
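For instance, taking s = 1 and p = N -1 (with N ľ 3), the corollary gives W 1,N -1 ∩ L ∞ → W θ,(N -1)/θ for every 0 < θ < 1; in other words, a bounded map of class W 1,N -1 belongs to W σ,q for every couple (σ, q) with σq = N -1 and 0 < σ < 1.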
. . Trace theories
We consider, to simplify the presentation, only maps u : R N → R. In order to obtain the corresponding results on domains, either we extend maps from domains to R N , or we work directly in domains and then deform the domain {(x, ε); 0 < ε < ε 0 , x ∈ Ω, d(x, ∂Ω) > ε} to the cylinder Ω × (0, ε 0 ) (as explained in the proof of Proposition . ). The classical results presented here are due to several authors, including Gagliardo [ ] and Uspenskiȋ [ ]. For the proofs, see, e. g., [ , Section ], [ ], [ , Lemma . ].
Let u : R N → R, ρ be a standard mollifier, and set U (x, ε)
:= u * ρ ε (x), ∀ x ∈ R N , ∀ ε > 0.
We first present trace theory in weighted Sobolev spaces.
Theorem . . (Inverse trace theory in weighted Sobolev spaces) Let 0 < s < m, with s non-integer and m an integer. We have, with C = C(m, s, p, N ),
|α|=m ˆRN ˆ∞ 0 ε p(m-s)-1 |∂ α U (x, ε)| p dεdx ĺ C|u| p W s,p , ∀ u : R N → R. ( . )
Theorem . . (Direct trace theory in weighted Sobolev spaces) Let 0 < s < m, with s non-integer and m an integer.
Let V ∈ C ∞ (R N × (0, ∞); R). We have, with C = C(m, s, p, N ), ∀ 0 < ε < 1/2, ||V (•, ε)|| p W s,p ĺ C |α|=m ˆRN ˆ∞ 0 ε p(m-s)-1 |∂ α V (x, ε)| p dεdx + C||V || p L p (R N ×(0,1)) . ( . )
Moreover, if the right-hand side of ( . ) is finite, then the limit u := lim ε→0 V (•, ε) exists in W s,p and satisfies
||u|| p W s,p ĺ C |α|=m ˆRN ˆ∞ 0 ε p(m-s)-1 |∂ α V (x, ε)| p dεdx + C||V || p L p (R N ×(0,1)) . ( . )
We next present trace theory in fractional Sobolev spaces. (The two theories coincide when s + 1/p is an integer.)
Let V ∈ C ∞ (R N × (0, ∞); R). We have, with C = C(s, p, N ), ∀ 0 < ε < 1/2, ||V (•, ε)|| W s,p ĺ C|V | W s+1/p,p +C||V || L p (R N ×(0,1)) . ( . )
Moreover, if the right-hand side of ( . ) is finite, then the limit u := lim ε→0 V (•, ε) exists in W s,p and satisfies
||u|| W s,p ĺ C|V | W s+1/p,p +C||V || L p (R N ×(0,1)) .
( . )
. . Superposition operators
Given Φ : R → R and u : Ω → R (or Φ : R M → R K and u : Ω → R M ), set
F (u) := Φ • u. ( . )
Here are the results on the continuity of F we rely on.
Theorem . . Assume that Φ is Lipschitz. Let 0 < s ĺ 1, 1 ĺ p < ∞. Then F (given by ( . )) is continuous from W s,p in itself.
The non-trivial part of the above result is continuity. For a proof when s = 1, see Marcus and Mizel [ ]. For the case 0 < s < 1, see, e. g., [ , proof of ( . )].
Theorem . . Let s > 1 and 1 ĺ p < ∞. Let m denote the first integer ľ s. Let Φ ∈ C m . Then F (given by ( . )) maps W s,p ∩ L ∞ into W s,p and is continuous in the following sense:
if u n → u in W s,p and u n L ∞ ĺ C, then Φ(u n ) → Φ(u) in W s,p .
For an elementary proof of the above result for arbitrary s, see Escobedo [ ].
Theorem . . Let s > 1 and let 1 ĺ p < ∞. Let m be the least integer ľ s. Let Φ ∈ C m (R M ; R K ) have bounded derivatives of order ĺ m. Then, for every u ∈ W s,p ∩ W 1,sp (Ω; R M ), we have F (u) ∈ W s,p (Ω; R K ). In addition, F (given by ( . )) is continuous from W s,p ∩ W 1,sp to W s,p .
For a proof, see [ ]. (The result there is stated for p > 1, but exactly the same proof applies when p = 1.) See also Maz'ya and Shaposhnikova [ ]. Corollary . . Let 0 < s < ∞, 1 ĺ p < ∞. If either s ĺ 1 or sp ľ N , then ϕ → e ıϕ acts continuously from W s,p (Ω; R) to W s,p (Ω; S 1 ). Proof. When s ĺ 1, this is a special case of Theorem . . When s > 1, by assumption we have sp ľ N , and in this case we rely on Theorem . and on the Sobolev embedding W s,p → W 1,sp .
QED
. . Products
The most used product property of the Sobolev spaces is that W s,p ∩ L ∞ is an algebra, in the following sense.
Lemma . . Let s > 0, 1 ĺ p < ∞. If u, v ∈ W s,p ∩ L ∞ , then uv ∈ W s,p and uv W s,p ĺ C( u L ∞ v W s,p + v L ∞ u W s,p ). ( . )
In addition, the map (u, v) → uv is continuous in the following sense: if u n → u and v n → v in W s,p , and if
u n L ∞ , v n L ∞ ĺ C, then u n v n → uv in W s,p .
For a proof, see, e. g., Runst and Sickel [ ].
When s > 1, the above result can be strengthened as follows.
Lemma
. . Let 1 < s < ∞, 1 ĺ p < ∞. Let u, v ∈ W s,p ∩ L ∞ . Then uDv ∈ W s-1,p .
For a proof, see, e. g., [ , Appendix . ].
A refinement of Lemma . is provided by the following result [ , Lemma . ]. † Lemma . . A couple (s, p), with s > 0 and 1 ĺ p < ∞, is regular if either p > 1 or p = 1 and s is not an integer.
Let (s 1 , p 1 ), (s 2 , p 2 ) be two regular couples such that s 1 > s 2 and s 1 p 1 > s 2 p 2 . Let 1 < r < ∞ be defined by
1/r = 1/p 2 -s 2 /(s 1 p 1 ). If f ∈ W s 1 ,p 1 ∩ L ∞ (Ω) and g ∈ W s 2 ,p 2 ∩ L r (Ω), then f g ∈ W s 2 ,p 2 (Ω).
. . Gluing
We have the following straightforward versions of the Brezis-Lieb lemma [ ].
Lemma . . Let f ∈ W s,p (Ω). Consider a bounded sequence of maps f j ∈ W s,p (Ω) such that
lim j→∞ |∪ kľj supp f k |= 0. Then |f + f j | p W s,p = |f | p W s,p +|f j | p W s,p +o(1) as j → ∞, ||f + f j || p p = ||f || p p + ||f j || p p + o(1) as j → ∞. Lemma . . (Multi-sequences Brezis-Lieb lemma) Let f ∈ W s,p (Ω). Consider m bounded se- quences (f j,1 ), . . . (f j,m ) ⊂ W s,p (Ω) such that lim j→∞ |∪ kľj supp f k,i |= 0, ∀ i, |f j,i | p W s,p ĺ C i , ∀ j, ∀ i, ||f j,i || p p ĺ D i , ∀ j, ∀ i.
Then there exist j 1 , . . . , j m such that
|f + f j 1 ,1 + • • • + f jm,m | p W s,p ĺ |f | p W s,p +2(C 1 + . . . + C m ), |f j 1 ,1 + • • • + f jm,m | p W s,p ĺ |f | p W s,p +2(C 1 + . . . + C m ), ||f + f j 1 ,1 + • • • + f jm,m || p p ĺ ||f || p p + 2(D 1 + . . . + D m ), ||f j 1 ,1 + • • • + f jm,m || p p ĺ 2(D 1 + . . . + D m ).
In view of the applications, let us note that, in Lemma . , the indices j k may be chosen inductively with respect to k. † The result in [ ] is stated for 1 < p 1 , p 2 < ∞, but the proof there is valid for all regular couples. . .
Quantitative suboptimal Sobolev embeddings
In view of the applications we consider, we work here in the unit ball Ω (for some norm | |in R N ) and with less than one derivative, but what follows can be adapted to any domain and to higher order spaces.
Consider parameters satisfying
0 ĺ α < s < 1, 1 ĺ p < ∞, 1 ĺ q ĺ ∞, α - N q < s - N p , p ĺ t ĺ ∞. ( . ) When α = 0, set |u| W 0,q (Br(x)) := u - Br(x) u L q (Br(x))
.
Lemma . . Assume ( . ). Set
β := s -α -N ˆ1 p - 1 q ˙> 0, ( . )
U := {(x, r); x ∈ Ω, r > 0, B r (x) ⊂ Ω}. ( . )
Then (with the obvious modification when t = ∞)
˜ˆU |u| t W α,q (Br(x)) r βt dxdr r N +1 ¸1/t À |u| W s,p , ∀ u : Ω → R. ( . )
Proof. Let α < σ ĺ s be such that
γ := σ -α -N ˆ1 p - 1 q ˙> 0.
By Sobolev's embedding, Poincaré's inequality, and scaling, we have
|u| W α,q (Br(x)) À r γ |u| W σ,p (Br(x)) , ∀ (x, r) ∈ U. ( . )
The choice σ = s yields ( . ) for t = ∞. It thus su fices to obtain ( . ) when t = p; the general case will follow via Hölder's inequality. For this purpose, we note that, with α < σ < s as above and δ := β -γ = s -σ > 0, we have (using ( . ))
ˆU |u| p W α,q (Br(x)) r βp dxdr r N +1 À ˆU 1 r δp+N +1 ¨Br(x) |u(w) -u(z)| p |w -z| N +σp dwdz dxdr = ¨Ω2 |u(w) -u(z)| p |w -z| N +σp ¨{(x,r)∈U; z,w∈Br(x)} 1 r δp+N +1 dxdr dwdz ĺ ¨Ω2 |u(w) -u(z)| p |w -z| N +σp ˆ∞ |w-z|/2 ˆBr(w) dx 1 r δp+N +1 dr dwdz ∼ ¨Ω2 |u(w) -u(z)| p |w -z| N +σp ˆ∞ |w-z|/2 1 r δp+1 dr dwdz ∼ ¨Ω2 |u(w) -u(z)| p |w -z| N +σp+δp dwdz = |u| p W s,p .
QED
We will use the following special case of Lemma . [ , Section . ].
Corollary . . Assume that 0 < s < 1, 1 ĺ p < ∞, and sp > 1. Then
ˆ1 0 ˆ1 0 |u| p W 0,∞ ((x,y)) |y -x| 1+sp dxdy À |u| p W s,p , ∀ u : (0, 1) → R. ( . )
Proof. Setting r := |y -x|/2, z := (y + x)/2, we see that the left-hand side I of ( . ) satisfies
I À ˆ1 0 ˆ{r; Br(z)⊂(0,1)} |u| p W 0,∞ (Br(z))
r 1+sp drdz.
We conclude via ( . ) applied with α = 0, q = ∞, t = p, and N = 1.
QED
. . Martingales and Sobolev spaces
The framework is the one of Step of the proof of Theorem . . We work in Ω = [0, 1) N and with the || || ∞ norm, denoted for simplicity | |. For k ľ 0, let P k denote the collection of dyadic cubes of size 2 -k in Ω. Q k denotes a generic cube in P k , and, for x ∈ Ω, Q k (x) is the only cube in P k containing x. We let F k denote the set of the (step) functions constant on each
Q k . If u : Ω → R , let E k (u) ∈ F k be defined by E k (u)(x) := the average of u over Q k (x).
For the next result, see [ , Proof of Theorem A. , Step ].
Lemma . . Let 0 < s < 1. Then, for each f ∈ L p (Ω; R ), kľ0 2 spk ||f -E k (f )|| p p À |f | p W s,p .
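For orientation, in dimension one and for a function sampled at 2 K equispaced points of [0, 1), the dyadic averages E k (f ) and the left-hand side of the above inequality (truncated at the grid resolution) can be computed as follows; this is only an illustrative sketch with ad hoc names:

    import numpy as np

    def E_k(f_vals, k):
        # average of f over each dyadic interval of length 2**(-k); requires k <= K
        K = int(round(np.log2(len(f_vals))))
        block = 2 ** (K - k)
        means = f_vals.reshape(2 ** k, block).mean(axis=1)
        return np.repeat(means, block)

    def dyadic_sum(f_vals, s, p):
        # truncated version of the sum over k in the lemma above
        K = int(round(np.log2(len(f_vals))))
        h = 1.0 / len(f_vals)
        return sum(2 ** (s * p * k) * np.sum(np.abs(f_vals - E_k(f_vals, k)) ** p) * h
                   for k in range(K + 1))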
We next present a variant of [ , Theorem A. ] adapted to our purposes.
Lemma . . Let 0 < s < 1 and 1 ĺ p < ∞ be such that sp < 1. Let Φ : Ω 2 → [0, ∞) be measurable and let, for each k ľ 1,
g k ∈ F k , g k ľ 0. If
[x, y ∈ Q j ] =⇒ Φ(x, y) ĺ k>j (g k (x) + g k (y)), ∀ j ľ 0, ∀ Q j ∈ P j , ( . )
then
ˆΩ ˆΩ [Φ(x, y)] p |x -y| N +sp dxdy À kľ1 2 spk ||g k || p p . ( . )
Proof. Set s(x, y) := max{k; Q k (x) = Q k (y)}, t(x, y) := min{k; |x -y|> 2 -k }.
We note the following obvious facts: (i) s and t are symmetric; (ii) if k > s(x, y), then y ∉ Q k (x); (iii) t(x, y) > s(x, y); (iv) 2 -t(x,y) < |x -y|ĺ 2 1-t(x,y) .
In view of ( . ), in order to obtain ( . ) it suffices to prove the inequality ˆΩ ˆΩ ˆ k>s(x,y)
g k (x) ˙p dxdy |x -y| N +sp À K := kľ1 2 spk ||g k || p p . ( . )
Since, by Hölder's inequality, we have
ˆ k>s(x,y) (g k (x)) ˙p À kľt(x,y) (k -t(x, y) + 1) p g p k (x) + s(x,y)<k<t(x,y) (t(x, y) -k) p g p k (x),
it suffices to establish the inequalities
I := ˆΩ ˆΩ kľt(x,y) (k -t(x, y) + 1) p g p k (x) dxdy |x -y| N +sp À K, ( . )
J := ˆΩ ˆΩ s(x,y)<k<t(x,y) (t(x, y) -k) p g p k (x) dxdy |x -y| N +sp À K.
( . )
Step . ( . ) holds for 0 < s < 1 and measurable g k 's (without the assumptions sp < 1 or g k ∈ F k ). We have
I = kľ1 1ĺjĺk (k -j + 1) p ˆΩ g p k (x) ˆ{y∈Ω; t(x,y)=j} dy |x -y| N +sp dx À kľ1 1ĺjĺk (k -j + 1) p ˆΩ g p k (x) 2 spj dx À kľ1 2 skp ˆΩ g p k (x) dx = K.
Step . An auxiliary estimate. Let Q be a(ny) cube of size 2 -k in R N . If sp < 1, then
I k := j>k (j -k) p ¨{(x,y); x∈Q, y ∈Q; t(x,y)=j} dxdy |x -y| N +sp À 2 (sp-N )k . ( . )
Indeed, the left-hand side of ( . ) does not depend on the center of the cube, and, by a scaling argument, we have I k = 2 (sp-N )k I 0 . It therefore suffices to prove that, with Q := (-1/2, 1/2) N , we have
I 0 = j>0 j p ¨{(x,y); x∈Q, y ∈Q; t(x,y)=j} dxdy |x -y| N +sp < ∞.
( . )
For this purpose, let us note that
|{x ∈ Q; |x|ľ 2 -1 -ε}|À ε, ∀ ε > 0, ( . ) [x ∈ Q, y ∈ Q, t(x, y) = j] =⇒ [2 -j < |x -y|ĺ 2 1-j and |x|ľ 1/2 -2 1-j ].
( . )
Combining ( . )-( . ) and using, at the end, the assumption sp < 1, we find that
I 0 ĺ j>0 j p ¨{(x,y); x∈Q, |x|ľ1/2-2 1-j , 2 -j <|x-y|ĺ2 1-j } dxdy |x -y| N +sp À j>0 j p ˆ{x; x∈Q, |x|ľ1/2-2 1-j } 2 spj dx À j>0 j p 2 spj 2 1-j = 2 j>0 j p 2 -(1-sp)j < ∞.
Step . ( . ) holds when sp < 1 and
g k ∈ F k . For Q k ∈ P k , let g k (Q k ) denote the value of g k on Q k . For k > s(x, y), we have y ∉ Q k (x).
Combining this observation with ( . ), we find that
J = kľ1 Q k ∈P k j>k (j -k) p g p k (Q k ) ¨{(x,y); x∈Q k , y ∈Q k ; t(x,y)=j} dxdy |x -y| N +sp À kľ1 Q k ∈P k g p k (Q k ) 2 (sp-N )k = kľ1 2 spk ||g k || p p = K. QED . .
Adapted trace theory
The following result is presented, with a sketch of proof, in Chiron [ , Section . ]. In the statement below, we impose an extra smoothness assumption on ζ that makes the arguments in [ ] essentially complete.
Lemma . . Let 0 < s < 1. Let k ľ 1 be the smallest integer such that s + k/p ľ 1. Let ζ ∈ W s+k/p,p ((0, 1) N +k ; E ) ∩ C ∞ . Then, for every x ∈ (0, 1) k , ||ζ(•, x )|| W s,p ((0,1) N ;E ) À ||ζ|| W s+k/p,p ((0,1) N +k ) .
. . Homotopy and VMO
We consider two closed embedded Riemannian manifolds, M ⊂ R k and N ⊂ R .
Lemma . . Let u j , u ∈ C(M ; N ) be such that u j → u in BMO and L 1 . Then, for large j, u j and u are homotopic in C(M ; N ).
Proof. Given a continuous map v : M → R , set v 0 := v and, for small ε > 0,
v ε (x) := Bε(x)
v(y) dy.
By the proof of Lemma . , we have
d(v ε (x), N ) ĺ sup y∈M Bε(y) Bε(y) |v(w) -v(z)| dwdz. ( . )
Let δ > 0 be such that the nearest projection on N is continuous on the δ-neighborhood N δ of N . By ( . ) and the continuity of u, there exists some ε 0 such that
d(u ε (x), N ) < δ/2, ∀ x ∈ M , ∀ ε ĺ ε 0 . ( . )
On the other hand, since u j → u in BMO, there exists some j 0 such that
sup y∈M Bε(y) Bε(y) |v(w) -v(z)| dwdz < δ/2, ∀ j ľ j 0 , ∀ y ∈ M , ∀ ε ĺ ε 0 . ( . )
Combining ( . )-( . ), we find that, for j ľ j 0 and 0 < ε ĺ ε 0 , (u j ) ε and u ε take values into N δ .
Let Π : N δ → N be the nearest point projection. Then, clearly, the map
[0, ε 0 ] ε → Π • [(u j ) ε ] ∈ C(M ; N )
is continuous, and therefore u j and Π • [(u j ) ε 0 ] are homotopic. A similar conclusion holds for u.
On the other hand, since
u j → u in L 1 , we clearly have Π • [(u j ) ε 0 ] → Π • [u ε 0 ]
uniformly, and thus, for large j, we have
u j ∼ Π • [(u j ) ε 0 ] ∼ Π • [u ε 0 ] ∼ u.
. . Characteristic functions
Lemma . . Let 0 < s < 1 and 1 ĺ p < ∞ be such that sp < 1. Let Q := (0, 1) N and Ω := (0, 1) N -1 × (-1, 1). Then ϕ N := χ Q belongs to W s,p (Ω).
Proof. In view of Theorem . (applied with M = 1), the conclusion of the lemma is equivalent to the straightforward fact that ˆ0 -1 ˆ1 0 1 |x -y| 1+σ dxdy < ∞, ∀ 0 < σ < 1.
( . )
Lemma . . Let s > 0 and 1 ĺ p < ∞. Let ω be a non-empty smooth relatively compact domain such that ω ⊂ Ω. Then ϕ := χ ω ∈ W s,p (Ω) ⇐⇒ sp < 1.
Proof. If we straighten the coordinates around a point of ∂ω, we find that, up to a constant factor, the left-hand side of ( . ), with σ := sp, is a lower bound for |ϕ| p W s,p . When sp ľ 1, this implies that ϕ ∉ W s,p . Assume next that sp < 1. Let (U j ) be a finite covering of Ω such that, for each j, we have either ϕ = 0 in U j , or there exists a bi-Lipschitz diffeomorphism Φ j of (0, 1) N -1 × (-1, 1) onto U j such that ϕ • Φ j = ϕ N (with ϕ N as in the previous lemma). For any such j, we have ϕ • Φ j ∈ W s,p ((0, 1) N -1 × (-1, 1)) (and thus ϕ ∈ W s,p in U j ). Finally, we have ϕ ∈ W s,p (U j ) for each j, which implies that ϕ ∈ W s,p (Ω). ( . )
Proof. If s is an integer, the conclusion is clear. We may therefore assume that s is not an integer and that α < N (the latter condition is equivalent to u ∈ L 1 ). We rely on Theorem . . Before going further, let us note that, since the specific ϕ we consider is smooth outside the origin, we have ϕ ∈ W s,p if and only if the double integral in ( . ) considered over the larger set [-1, 1] × Ω is convergent.
We use ( . ) with an integer M satisfying, in addition to M > s, the condition (α + M )p > N.
( . )
For simplicity, we drop the subscript j in e j . Using the homogeneity of ϕ, we find that δ M te ϕ(x) = 1 |t| α δ M e ϕ(x/t). Hence,
I := ˆ1 -1 ˆB1 (0)
|δ M te ϕ(x)| p t 1+sp dxdt = 2 ˆ1 0 ˆB1/t (0) t N -(α+s)p-1 |δ M e ϕ(y)| p dydt. ( . )
" ⇐= " Assume that (α + s)p < N . In this case, we show that I < ∞. Indeed, we have Here,we use the fact that ˆB1 (0) |δ M e ϕ(y)| p dy > 0 (since ϕ is not a polynomial in the e direction). QED Remark . . For a di ferent approach, see the proof below of Lemma . b).
. . Homogeneous maps
By repeating the proof of Lemma . , we obtain the following result. Then, u ∈ W s,p (B 1 (0)) if and only if sp < k.
c) Even more generally, let v : S k-1 → R be a smooth non constant map and
u : R N → R k , u(x 1 , . . . , x N ) := v ˆ(x 1 , . . . , x k ) |(x 1 , . . . , x k )| ˙, ∀ x ∈ R N \ ({0} × R N -k ).
Then, u ∈ W s,p (B 1 (0)) if and only if sp < k. In particular, Lemma . a) holds.
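As a quick check of a): for u(x) = x/|x| in R 2 we have |∇u(x)|∼ 1/|x|, so u ∈ W 1,p (B 1 (0)) if and only if ´B 1 (0) |x| -p dx < ∞, i. e., if and only if p < 2, in agreement with the condition sp < 2 for s = 1.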
A more general result is the following Proof. We consider only the more complicated case where d ĺ N -2. Without loss of generality, we may assume that sp > N -d -1. Furthermore, we may also suppose that s > N -d -1, and thus N -d -1 < s < N -d (since sp < N -d). Indeed, if s ĺ N -d -1, fix any θ ∈ (0, 1) such that
1 p < θ < s N -d -1 ĺ 1,
and let r := s/θ > N -d -1, and q := pθ > 1. From the Gagliardo-Nirenberg embedding W r,q ∩ L ∞ → W s,p (see Corollary . ), and the fact that u |Ω ∈ L ∞ (by ( . ) with = 0), it suffices to prove that u |Ω ∈ W r,q . Thus, as claimed, it suffices to prove the lemma under the extra assumption s > N -d -1, which implies that N -d -1 < s < N -d.
Let m := N -d -1 and write s = m + σ, with 0 < σ < 1. The assumption sp < N -d reads (m + σ)p < N -d.
We set d(x) := dist (x, S). Since S is d-dimensional, we find that ˆΩ 1
d q (x) dx < ∞, ∀ 1 ĺ q < N -d. ( . )
We next invoke the following well-known result. Remark . . The conclusion of Lemma . is still valid under the weaker assumption that S is a finite union of d-dimensional submanifolds. This has the following important (for us) consequence: the class R defined in ( . ) satisfies R → W r,q (Ω; N ), ∀ r, q such that rq ĺ N --1.
Remark . . When s = 1, the above result (or versions of it) can be proved using more elementary arguments; see, e. g., [ , Lemma . ] for the following version of Lemma . .
Lemma .
. Let 1 ĺ r < ∞. Assume that N ľ 2 and let U ⊂ R N be an open set. Let K be a closed subset of U such that H N -1 (K) = 0. Let u ∈ W 1,1 loc (U \ K) be such that ´U\K |∇u| r < ∞. Then u ∈ W 1,r loc (U ) and the Sobolev gradient of u is the Sobolev gradient of u |U \K .
Proof of Lemma . b).
" ⇐= " Case s < 1. By Lemma . , we have ψ ∈ W 1,sp+ε , and thus also u ∈ W 1,sp+ε , for small ε > 0. By Corollary . , we find that u ∈ W s,p .
Case s = 1. The conclusion follows directly from Lemma . .
Case s > 1. By Lemma . , we have ϕ ∈ W s,p ∩ W 1,sp . We conclude via Theorem . . " =⇒ " Case s = 1. We have |∇u|= |∇ϕ|∼ |x| -α-1 in Ω \ {0}, and the conclusion is clear.
Case s > 1. By Corollary . , we have u ∈ W 1,sp , and thus (as in the case s = 1) we find that (α + 1)sp < N . On the other hand, by di ferentiating the equality u = e ıϕ , we find that Dϕ = -ıuDu. Lemma . implies that Dϕ ∈ W s-1,p , and thus ϕ ∈ W s,p . Lemma . implies the second condition, (α + s)p < N .
Case 0 < s < 1. This case is more involved. Setting I(r 1 , r 2 , β) := ˆS(0,r 1 ) ˆS(0,r 2 ) dH (x)dH (y) |x -y| β , ( . )
we have, with β := N + sp, using the changes of variables t = r -α 1 , τ = r -α 2 , Lemma . below, and the fact that N -β + p > 0, |u| p W s,p =2 ˆ1 0 ˆr1 0 e ı/r α 2 -e ı/r α 1 p I(r 1 , r 2 , β) dr 2 dr 1 ∼ ˆ∞ 1 ˆ∞ t |e ıτ -e ıt | p I(t -1/α , τ -1/α , β)τ -1/α-1 t -1/α-1 dτ dt Á ˆ∞ 1 ˆt+1 t |e ıτ -e ıt | p t -(N -1)/α (t -1/α -τ -1/α ) N -β-1 τ -1/α-1 t -1/α-1 dτ dt ∼ ˆ∞ 1 ˆt+1 t (τ -t) p t -2-(N +1)/α (t -1/α -τ -1/α ) N -β-1 dτ dt Á ˆ∞ 1 ˆt+1 t (τ -t) p t -2-(N +1)/α (τ -t) N -β-1 t -(N -β-1)(1/α+1) dτ dt = ˆ∞ 1 ˆt+1 t (τ -t) N -β+p-1 t -(N -β+1)-(2N -β)/α dτ dt
Á ˆ∞ 1 t -(N -β+1)-(2N -β)/α dt.
Therefore, the exponent of t in the last integral above has to be < -1, and this amounts to (α + 1)sp < N . QED
Lemma . . Fix β > 0. For r 2 < r 1 ĺ 2r 2 , we have
I(r 1 , r 2 , β) Á r N -1 1 (r 1 -r 2 ) N -β-1 .
Proof. Write r 1 = (1 + t)r 2 , with 0 < t ĺ 1. By scaling and invariance with respect to isometries, we have Write x = (x , x N ). We have, using the facts that t ĺ 1 and 1 -? 1 -s 2 ĺ t when 0 ĺ s ĺ t ĺ 1, and the change of variable x = ty ,
I(r 1 , r 2 , β) = σ N r N -1 1 r N -β
J(t, β) ľ ˆ|x |ĺ1 dx |(x , 1 + t - ? 1 -x 2 )| β ľ ˆ|x |ĺt dx |(x , 1 + t - ? 1 -x 2 )| β ľ ˆ|x |ĺt dx |(x , 2t)| β = t N -β-1
ˆ|y |ĺ1 dy |(y , 2)| β , so that, by ( . ),
I(r 1 , r 2 , β) Á r N -1 1 r N -β-1 2 t N -β-1 = r N -1 1 (r 1 -r 2 ) N -β-1 . QED . .
Gluing maps
Lemma . . Let π : E → N be a covering. Fix some z ∈ N and let J be at most countable such that π -1 ({z}) = {t j ; j ∈ J}. Then there exist: points x j ∈ Ω, radii r j > 0, j ∈ J, and maps ζ j : B 2r j (x j ) → E such that:
(i) The balls B 3r j (x j ) are mutually disjoint and contained in Ω.
(ii) ζ j ∈ C ∞ (B 2r j (x j ) \ {x j }).
(iii) ζ j = t j in B 2r j (x j ) \ B r j (x j ).
(iv) ζ j ∈ W s,p (B 2r j (x j )).
(v) The map u := π • ζ j , in B 2r j (x j ) z, in Ω \ ∪ j B 2r j (x j ) belongs to W s,p .
Proof. We consider only the non-compact case where J = {1, 2, . . .}. We will construct inductively a sequence of maps (u j ) such that its weak limit has all the required properties. Start with u 0 :≡ z. Assuming that we have constructed ζ k satisfying the above properties for J = {1, . . . , k}, we construct ζ k+1 . Consider a point x k+1 ∈ Ω \ ∪ jĺk B 2r j (x j ). Let γ = γ t k+1 be as in Lemma . . We fix λ : [0, ∞) → [0, ∞) such that λ(θ) = 0 if θ ĺ 1 and λ(θ) = 1 if θ ľ 2. Let α be as in ( . ). Let 0 < ε < 1, to be fixed later, and set
ψ ε (x) := γ • λ((ε/|x -x k+1 |) α ), ∀ x ∈ B 2ε (x k+1 ),
extended with the value t k+1 outside B 2ε (x k+1 ). Clearly,
ψ ε is smooth in R N \ {x k+1 }. If we set v ε := π • ψ ε , then v ε -z ∈ C ∞ (R N \ {x k+1 })
and is supported in B ε (x k+1 ).
We claim the following:
ψ ε ∈ W s,p (B 2ε (x k+1 )), ∀ ε > 0, ( . ) ||v ε -z|| W s,p → 0 as ε → 0.
( . )
Taking temporarily for granted the two above properties, we conclude as follows. By ( . ) and Lemma . , for sufficiently small ε we have ||u k + v ε -z|| W s,p ĺ ||u k || W s,p + 2 -k . We complete the proof via a straightforward inductive process, by setting u k+1 := u k + v ε -z.
Proof of ( . ). Assume, for simplicity, that x k+1 = 0. If |x|ĺ 2 -1/α ε, then λ((ε/|x|) α ) = 1. Therefore, with r := 2 -1/α ε, we have where we have used successively Lemma . , ( . ), and Lemma . . Proof of ( . ). By construction, γ is 1-Lipschitz. We find that |∇ψ ε (x)|À ε α /|x| α+1 , and therefore ||∇ψ ε || sp → 0 as ε → 0. Therefore, π • ψ ε -z → 0 in W 1,sp (R N ) as ε → 0. Since the ψ ε 's are uniformly bounded, we conclude via the Gagliardo-Nirenberg inequality ( . ). QED
QED
Lemma . . Let u ∈ VMO(Ω; F ), where F ⊂ R . Let ρ ∈ C c (B 1 (0); R + ) be such that ´ρ = 1. Then lim ε→0 sup x∈Ωε d(u * ρ ε (x), F ) = 0.
, a continuous lifting ζ (given by ζ(θ) := ϕ(b j + εe ıθ )) such that ζ(-b) = ζ(b), a contradiction.
(a) what is a topological singularity? (b) can one detect such singularities? (c) do standard properties of Sobolev spaces hold for maps in W s,p (Ω; N ) without topological singularities? In full generality, the answer to question c) is negative (seeBethuel and Demengel [ ],Bethuel [ ], or [ ] for examples of smooth maps with no extensions or li tings). Depending on the answer we choose to question a), the answer to question (b) could be positive. However, a full theory allowing to encode singularities and/or to clarify their role as only possible obstructions is, for the time being, out of reach. Let us mention several topological invariants that have been investigated so far in the literature: (i) Brouwer degree of maps f : S k → S k (starting with Brezis, Coron, and Lieb [ ]); (ii) spherical homology (starting with Giaquinta, Modica, and Souček [ ]); (iii) (Hopf) degree of maps f : S 3 → S 2 (starting with Rivière [ ]); (iv) higher homotopy groups of general manifolds, under restrictive assumptions on the lower homotopy groups (Pakzad and Rivière [ ]); (v) rational homotopies(Hardt and Rivière [ ]).
.
The distributional Jacobian . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . The range of the distributional Jacobian . . . . . . . . . . . . . . . . . . . . . . . Inserting singularities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Characterization of the closure of smooth maps . . . . . . . . . . . . . . . . . . . Overview of the higher co-dimensional case . . . . . . . . . . . . . . . . . . . .
deg(u, a) = N Ju(ϕ), so that ( . ) holds.
j ), (N j )) ľ max j (ϕ(P j ) -ϕ(N j )); ϕ ∈ Lip 0 (Ω), |ϕ| Lip ĺ 1 . ( . )Remarkably, we actually have equality in ( . ) (see [ ] for the original result and, for other proofs, Brezis [ ] and [ ]): L((P j ), (N j )) = max j (ϕ(P j ) -ϕ(N j )); ϕ ∈ Lip 0 (Ω), |ϕ| Lip ĺ 1 .
Recall that we consider maps u : Ω ⊂ R N → S N -1 , with N ľ 2. The main result we present here is due to Demengel [ ] when s = 1, 1 ĺ p < 2, and N = 2, Bethuel [ ] when s = 1 and p = N -1, and has been announced, with indications of proof, by Bethuel, Coron, Demengel, and Coron [ ] when s = 1 and N -1 < p < N , respectively Mucci [ ] when 0 < s < 1. See also Ponce and Van Schaftingen [ ].
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . The distributional Jacobian. Disintegration (slicing) . . . . . . . . . . . . . . . . . . . The range of the distributional Jacobian . . . . . . . . . . . . . . . . . . . . . . . . . Characterization of the closure of smooth maps . . . . . . . . . . . . . . . . . . .
, Runst and Sickel [ ], Maz'ya [ ], Leoni [ ]; an elementary, but partial, account can be found in [ ]. See also the specific references indicated below.
, see, for example, [ , Appendix]. In the limiting case sp = N , we have the following substitute of the Sobolev non-embedding W s,p → L ∞ (Brezis and Nirenberg [ ], with roots in Boutet de Monvel and Gabber [ ]). Theorem . . Assume that sp ľ N . Then W s,p → VMO, i. e., u ∈ W s,p =⇒ lim ε→0 sup x∈Ωε Bε(x) Bε(x) |u(y) -u(z)| dydz = 0. ( . ) For a proof, see, e. g., [ , Proof of Lemma . ].
Theorem . . (Inverse trace theory in fractional Sobolev spaces) Let s > 0 be non-integer. We have, with C = C(s, p, N ),|U | W s+1/p,p ĺ C|u| W s,p , ∀ u : R N → R.( . )Theorem . . (Direct trace theory in fractional Sobolev spaces) Let s > 0 be non-integer.
QED
Corollary . . Let s, p be such that sp ľ dim M . If u j , u ∈ C ∞ (M ; N ) and u j → u in W s,p then, for large j, u j and u are homotopic in C(M ; N ).
Proof. It suffices to combine Lemma . with the continuous embeddings W s,p → VMO (Theorem . ) and W s,p → L 1 . QED
Appendix # . Standard examples of maps in Sobolev spaces
Contents
Characteristic functions
Power-type functions
Homogeneous maps
Gluing maps
The dipole construction
Let Ω := B 1 (0) ⊂ R N . Let α > 0 and set ϕ(x) := 1 |x| α , ∀ x ∈ Ω \ {0}. Then ϕ ∈ W s,p (Ω) ⇐⇒ (α + s)p < N.
I ĺ 2
2 ˆ1 0 t N -(α+s)p-1 dtˆRN|δ M e ϕ(y)| p dy.Note that the assumption on α implies that ´1 0 t N -(α+s)p-1 dt < ∞. On the other hand, we have ´RN |δ M e ϕ(y)| p dy < ∞, since |δ M e ϕ(y)|∼ 1 |y| α+M at infinity and |δ M e ϕ| p has integrable singularities. (These singularities, located at y = 0, -e, ..., -M e, behave like |x| -αp .) This proves " ⇐= "." =⇒ " We note that ( . ) implies that I ľ 2 ˆ1 0 ˆB1 (0) t N -(α+s)p-1 |δ M e ϕ(y)| p dydt = ∞.
u(x) := x |x| , ∀ x ∈ R 2 \ {0}. Let s > 0, 1 ĺ p < ∞. Then, u ∈ W s,p (B 1 (0)) if and only if sp < 2.b) More generally, let k ∈ {1, . . . , N } and letu : R N → R k , u(x 1 , . . . , x N ) := (x 1 , . . . , x k ) |(x 1 , . . . , x k )| , ∀ x ∈ R N \ ({0} × R N -k ).
Lemma . .
. Let U be a neighborhood of Ω and let S be a d-dimensional submanifold of U , withd ĺ N -1. Let u ∈ C ∞ (U \ S) satisfy |D u(x)|ĺ C [dist (x, S)] -, = 0, . . . , N -d, ∀ x ∈ U \ S.( . )Then, u |Ω ∈ W s,p (Ω) provided sp < N -d.
Lemma . .
. Let S be a d-submanifold of the open set Ω ⊂ R N , with d ĺ N -2. Let u ∈ C 1 (Ω \ S) be such that ∇u ∈ L 1 loc (Ω). Then u ∈ W 1,1 loc (Ω).Proof of Lemma . continued. Combining ( . ) with ( . ) and Lemma . , we find that D j u ∈ L p (Ω), j = 0, . . . , m. It remains to check thatˆΩ ˆΩ |D m u(x) -D m u(y)| p |x -y| N +σp dxdy < ∞. ( . )For this purpose, we note that, with constants depending on u but not on x or y, we have:|D m u(x) -D m u(y)|ĺ C max |x m u(x) -D m u(y)| p |x -y| N +σpdx dy = 2(I + J), where I := ¨|x-y|<d(y)ĺd(x) . . . , J := ¨d(y)ĺmin{d(x), |x-y|} . . . Using ( . ) (respectively ( . )), in order to estimate I (respectively J), we find that ¨|D m u(x) -D m u(y)| p |x -y| N +σp dx dy À ¨|x-y|<d(y)ĺd(x) |x -y| p-N -σp d (m+1)p (y) dx dy + ¨d(y)ĺmin{d(x), |x-y|} dx dy |x -y| N +σp d mp (y) À ˆ1 d mp+σp (y) dy < ∞, by ( . ) (since mp + σp < N -d). QED Remark . . In Lemma . , when d ĺ N -2, the assumptions on u can be weakened to u ∈ C N -d (Ω \ S) and |D N -d u(x)|ĺ C[dist (x, S)] d-N .
dH N -1 (x) |x -(1 + t)e N | β .
|ψ_ε|^p_{W^{s,p}} ≥ ∫_{B_r(0)} ∫_{B_r(0)} d_E(γ((ε/|x|)^α), γ((ε/|y|)^α))^p / |x − y|^{N+sp} dx dy = ∫_{B_r(0)} ∫_{B_r(0)} |(ε/|x|)^α − (ε/|y|)^α|^p / |x − y|^{N+sp} dx dy = ∞.
Slicing and characterization via differences
Sobolev embeddings
Gagliardo-Nirenberg inequalities
Trace theories
Superposition operators
Products
Gluing
Quantitative suboptimal Sobolev embeddings
Martingales and Sobolev spaces
Adapted trace theory
Homotopy and VMO
. .
Motivation. Program. Preliminary remarks
We discuss a few natural questions concerning the Sobolev spaces W^{s,p}(Ω; N), in the following setting: a) N is a smooth connected closed Riemannian manifold isometrically embedded into some Euclidean space. b) 0 < s < ∞, 1 ≤ p < ∞. c) Ω ⊂ R^N is "smooth" and bounded. In most cases, Ω is a ball or a cube.
. .
The dipole construction
We describe and analyze here the dipole construction of Brezis, Coron, and Lieb [ ], in a form and functional setting adapted to our purposes. (ii) a ∈ S N -1 .
(iii) f ∈ C^∞([0, 1]; [0, 1]) be such that f(0) = f(1) = 0, f′(0) > 0, f′(1) < 0, and f(θ) > 0, ∀ θ ∈ (0, 1).
(iv) v ∈ C^∞(R^{N−1}; S^{N−1}) be such that v(x) = a if |x| ≥ 1/2 and deg v = 1.
Set, for 0 < ε ≤ 1,
.
Then
)
supp(u_ε − a) ⊂ {(x′, θ) ∈ R^N ; 0 ≤ θ ≤ 1, |x′| ≤ εf(θ)}, ∀ ε.
( . )
Proof when N ≥ 3. We present the proof when N ≥ 3. As we will see, it relies on a Gagliardo-Nirenberg embedding that fails when N = 2. For a full proof (including the case N = 2), see [ , ].
Step . Preliminary remarks. Property ( . ) is clear. It is also clear that u ε is smooth except at 0 and e N . Moreover, we claim that deg(u ε , 0) = 1 (and, similarly, deg(u ε , e N ) = -1). Indeed, let δ > 0 be small. By a homotopy argument,
and the latter number is 1, by the definitions of v and u ε .
The claim, combined with ( . ) and the fact that, as we will see, u ε ∈ R 0 , implies ( . ).
Step . The main estimate and conclusion. Set
By a tedious calculation, using the fact that |x /(εf (θ))|ĺ 1 when x ∈ V ε , we find that
On the one hand, using ( . ) and the assumptions on f , we find that u ε ∈ R 0 . On the other hand, ( . ) implies that, for
In particular, ( . ) holds when s = N -1. By the Gagliardo-Nirenberg inequality (Corollary . ) and the fact that N ľ 3, we find that ( . ) still holds in the full range given in ( . ).
QED
By scaling, Lemma . implies the following result.
Lemma . . (Dipole construction, scaled version) Let N, a, f, v, s, p be as above. Let A, B ∈ R^N. Set ξ := (B − A)/|B − A| and H := ξ^⊥, oriented by an orthonormal basis (e_1, . . . , e_{N−1}) such that (e_1, . . . , e_{N−1}, ξ) is a direct basis of R^N. Let R be an orientation preserving linear isometry from H to R^{N−1}. Write a point x in R^N as x = x′ + θξ, with x′ ∈ H and θ ∈ R. Let L := |B − A| and set, for 0 < ε ≤ L,
.
Then
)
We next explain how to "make room" for inserting a dipole into an already existent map u : Ω → S^{N−1}. Lemma . . (Making room when u is locally smooth) Let N ≥ 3. Let 0 < s < N − 1 and 1 < p < ∞ be such that sp = N − 1. Let a ∈ S^{N−1}. Let γ be a smooth simple compact curve in R^N. Let u : R^N → S^{N−1} be smooth in an open neighborhood U of γ. Fix δ, µ > 0. Then, for small ε > 0, there exists a map r u : R^N → S^{N−1} such that:
Similarly in the neighborhood of a finite union of smooth simple compact curves.
Proof.
Step . Making u constant near its endpoints. Assume, for simplicity, that the origin is an end-
, it is easy to see (using the assumption that u is smooth) that, for small ε, r u satisfies a)-c) with γ replaced by 0, and, in addition, D N -1 (r u -u) 1 → 0 as ε → 0. By the Gagliardo-Nirenberg inequality Corollary . (recall that N ľ 3 and that u is bounded), we find that d) holds as well.
Therefore, in what follows, we may assume that u is constant near the endpoints of γ.
Step . Construction of r u and conclusion. Since N ≥ 3, the set u(γ) ∪ {a} is contained in the interior of a closed spherical cap Σ ⊊ S^{N−1}. By a standard argument, there exists a smooth map
Set, for small ε,
Clearly: (j) the definition is consistent when
then conclude via Gagliardo-Nirenberg. In order to prove ( . )-( . ), we note that: (i) thanks to the assumption that u is constant near the endpoints of A and B of γ, we may replace V ε with the smaller set
Combining these facts with the definition ( . ), we find that
Integrating ( . ), we find, with the help of the coarea formula, that
Let γ be a smooth simple compact curve in R N . Assume that 0 is one of the endpoints of γ, and that γ (0) = e N .
Set, for α, β > 0,
Assume that:
(i) There exist α > 0, β > 0, and a ∈ S N -1 such that
Fix δ, µ > 0. Then, for small ε > 0, there exists a map r u : R N → S N -1 such that:
Sketch of proof. The proof is essentially the same as the one of Lemma . , with, as additional ingredient, the use of property (iii) of the homotopy H. The definition ( . ) has to be modified to
. Details are left to the reader.
QED
Lemma . . (Making room near a singular endpoint ( )) Let:
(iii) µ > 0.
(iv) u : R N → S N -1 .
Assume that, in the unit ball B, u belongs to R 0 . Let 0 < δ < 1 be such that u is smooth in B δ (0) \ {0}. Then there exists some map r u : R N → S N -1 such that:
e) ||r u -u|| p W s,p ĺ µ.
Similarly for N -valued maps, provided we replace conclusion c) with "r u and u are homotopic on small spheres around the origin".
Proof. We write points in R N \ {0} in the form x = rσ, r > 0, σ ∈ S N -1 . Consider a map w ∈ C ∞ (S N -1 ; S N -1 ) of degree deg(u, 0) and such that w = a near the North Pole e N . Let H ∈ C ∞ (S N -1 × (-∞, δ]; S N -1 ) be such that:
Let, for small ε,
Note that: (j) the definition is consistent when εδ/2 ĺ |x|ĺ εδ; (jj) a)-d) hold. Next, we note that
Integrating ( . ), we find that ||r u -u|| W N -1,1 → 0 as ε → 0. We conclude via the Gagliardo-Nirenberg inequality.
Assume that, for each cube C ∈ C_N, g_{|∂C} : ∂C → N is null homotopic. Let f : C_N → N be the (N − 1)-homogeneous extension of g. Then there exists a sequence
Sketch of proof. We work with |x|:= ||x|| ∞ . Let δ be the size of the cubes in C N . Fix some point a ∈ N . Consider, for each cube
The fact that, for fixed arbitrarily small µ > 0, we may choose the ε_j's such that ‖r f − f‖_{W^{s,p}} ≤ µ follows from the multi-sequences Brezis-Lieb Lemma . .
QED
Appendix # . More on homogeneous maps
For the next result, we use the notation in Section . . Let Ψ = Ψ j,t,ε : R N → C j = C j,t,ε be the projection on the j-skeleton of size 2ε of R N obtained from C N,t,ε . Set v := t + (ε, . . . , ε).
Lemma . . The mapping Ψ : R N \ C N -j-1,v,ε → C j,t,ε is locally Lipschitz, and satisfies, with C independent of ε and t,
Sketch of proof. By scaling, we may assume that ε = 1 and t = 0. In this case, C := C N -j-1,v,1 is given by
For further use, let us also note that, if
and τ m x m ľ 0, ∀ 1 ĺ m ĺ N -j}. . ) and a similar formula holds for a general σ ∈ S N -j,N .
Using ( . ) and its analogues, one can easily prove that Ψ is continuous on R N \ C . On the other hand, the sets S σ,τ form a polyhedral a.e.-partition of Q 1 and, in the interior of S id,τ and for k ∈ Z N we have (using ( . ) and ( . ))
(and similar formulas hold in each S σ,τ ). Combining ( . ) and its analogues, we obtain ( . ). QED For the next result, we are in the context of Proposition . , and we use the notation there.
Lemma . . Let 1 ĺ p < j + 1 ĺ N . Assume that g ∈ W 1,p (C j ; R ). Then we have
)
Equivalently, the map W 1,p (C j ) g → g j ∈ W 1,p (C ) is continuous.
Proof. By a straightforward induction argument, it suffices to prove that (with the notation at the beginning of Section . ) the map h := H_{j+1}(g) satisfies
)
The validity of ( . ) on each cube C ∈ C j+1 , and thus on C j+1 , is clear (here, we use p < j + 1). We next check ( . ) first "cube by cube", next globally. Fix first a cube
. (Here, we use again the fact that p < j + 1.) The case of a general map g ∈ W 1,p (∂C) follows by approximation.
It remains to prove that h ∈ W 1,p (C j+1 ) (globally). This amounts to tr(h |C ) = g |∂C , ∀ C ∈ C j+1 . This is clear when g |∂C is Lipschitz; the general case follows by approximation. Lemma . . Assume that E is non-compact. Then, for each t ∈ E , there exists some smooth map γ
For the next result, we are in the context of Propositions . and . , and we use the notation there.
Lemma . . Let 0 ĺ j ĺ N -1. Then there exists a Lipschitz homotopy G = G(x, θ) :
Proof. The proof is by induction on N -j and "concatenation" of successive homotopies. When
Assume next that N − j = k ≥ 2 and that the lemma has been established for N − 1 and j. Let us note that
Using this, we see that
is a homotopy satisfying conclusions a) and c) above. Moreover, if we set r
It then suffices to concatenate G_1 with an appropriate map (given by the induction hypothesis)
In what follows, we let Ω = (0, 1) N .
For the next result, see [ , Theorem B. ].
Lemma . . Let F ⊂ R^k be a discrete closed set. Let f ∈ W^{s,p}(Ω; F). If sp ≥ 1, then f is constant.
Lemma . . Assume that sp ≥ 1. Let ϕ ∈ W^{s,p}(Ω; E) be such that u := π ∘ ϕ is continuous. Then ϕ is continuous. Moreover, if u is smooth, then ϕ is smooth.
Proof. We may assume that s ≤ 1. Given the local nature of the problem, we may also assume that u(Ω) ⊂ B for some geodesic ball of radius r smaller than the injectivity radius of N. Write π^{−1}(B) as an at most countable union of balls B_j, each one diffeomorphic with B. Pick j such that the set ϕ^{−1}(B_j) has positive measure. Let ζ be the continuous lifting of u with values into B_j. By Theorem . , we have ζ ∈ W^{s,p} and, by choice of B_j, the set [ϕ = ζ] has positive measure. We claim that ϕ = ζ a.e. (and this completes the proof).
For this purpose, we consider some smooth map g : [0, ∞) → [0, 1] such that g(0) = 0 and g(θ) = 1 when θ ľ r. Let ( . )
Since ϕ, ζ ∈ W s,p and g is Lipschitz, we obtain from ( . ) that f ∈ W s,p (recall that s ĺ 1).
On the other hand, we clearly have, by the formula of g and the property π ∘ ϕ = π ∘ ζ, that f : Ω → {0, 1}. By Lemma . and the choice of B_j, we find that f = 0 a.e., and thus ϕ is continuous. QED
HAL id: hal-04096145 (2023). https://u-paris.hal.science/hal-04096145/file/Manuscript%20Heschl.pdf
Douglas Henderson, Ihsane Bichoutar, Bernard Moxham, Virginie Faidherbe, Odile Plaisant, Dr Alexis Guédon (email: [email protected])
Descriptive and functional anatomy of the Heschl Gyrus: historical review, manual labelling and current perspectives
Keywords: Heschl Gyrus, Primary Auditory Cortex, Temporal Lobe, Auditory System
Purpose
The Heschl Gyrus (HG), which includes the Primary Auditory Cortex (PAC), lies on the upper surface of the superior temporal gyrus (T1). It has been the subject of growing interest in the fields of neuroscience over the last decade. Given the considerable interhemispheric and interindividual variability of its morphology, manual labelling remains the gold standard for its radio-anatomical study. The aim of this study was to revisit the original work of Richard L. Heschl, to provide a broad overview of the available anatomical knowledge and to propose a manually labelled 3D digital model.
Methods
We reviewed existing works on the HG, from Heschl's original publication of 1878 and Dejerine's neuroanatomical atlas of 1895 to the most recent digital atlases (Julich-Brain Cytoarchitectonic Atlas, the Human Connectome Project). Our segmentation work was based on data from the BigBrain Project and used the MRIcron 2019 software.
Results
The original publication by Heschl has been translated into French and English. We propose a correspondence of previous nomenclatures with the most recent ones, including the Terminologia Neuroanatomica. Finally, despite the notable anatomical variability of the HG, clear and coherent segmentation criteria allowed us to generate a 3D digital model of the HG.
Discussion and Conclusion
Heschl's work is still relevant and could stimulate further anatomical and functional studies. The segmentation criteria could serve as a reference for manual labelling of the HG. Furthermore, a thorough and historically grounded understanding of the morphological, microstructural and functional characteristics of the HG could be useful for manual segmentation.
Introduction
The Heschl Gyrus (HG), which includes the Primary Auditory Cortex (PAC), lies on the upper surface of the superior temporal gyrus (T1). It has been the subject of growing interest in the fields of neuroscience, neuroradiology, psychology and psychiatry over the last decade. Given the considerable interhemispheric and interindividual variability of its morphology, manual labelling remains the gold standard for its radio-anatomical study.
Due to the vast body of literature devoted to this structure, originating from various disciplines and dating back to 1878 and Richard L. Heschl's princeps article, it is not easy to encompass all the knowledge related to this structure. Moreover, the macroanatomical descriptions of the HG are sometimes heterogeneous and do not always use the latest rigorous official neuroanatomical terminology, constituting a potential confounding factor in studies relying on manual segmentation.
The aim of this study was to clarify its morphology and functional connectivity, to revisit the original work of Richard L. Heschl, to provide a broad overview of the available knowledge related to this structure, and to propose coherent and reliable segmentation criteria together with a 3D manually labelled digital model.
Biography
Richard Ladislaus Heschl (1824 -1881) received his medical doctorate in 1849 from the University of Vienna, where he trained under the surgeons Franz Schuh (1804 -1865) and Joseph Wattmann (1789 -1866). The following year he became the first assistant to the pathologist Carl von Rokitansky (1804-1878) at the same university. He then pursued his career across the Austrian Empire, which became Austria-Hungary in 1867, being appointed professor of anatomy in Olmütz in 1854, professor of pathological anatomy in Krakow in 1855, in Graz in 1863 and finally in Vienna in 1875 as successor to Rokitansky. He died of pulmonary consumption (most likely tuberculosis) in Vienna on May the 26 th 1881, aged 56 years old [START_REF] Gurlt | On the superior temporal gyrus by R.L. Heschl: English translation of "Über Die Vordere Quere Schläfenwindung Des Menschlichen Großhirns[END_REF] (Fig. 1).
As Amédée Dechambre stated in his "Encyclopedic Dictionary of Medical Sciences" of 1888, Heschl left behind "une foule d'articles et de mémoires dans des recueils périodiques" (a series of articles and memoirs in periodicals), many tomes, including the "Compendium of general and special pathological anatomy" of 1855, the "Sections-Technik" of 1859, and his "On the anterior transverse temporal gyrus" of 1878 that described the gyrus named after him [START_REF] Dechambre | Dictionnaire encyclopédique des sciences médicales[END_REF]19]. He also founded a pathological anatomy museum in Graz.
Fig. 1 Portrait photography of Richard L. Heschl in Graz, October 2 nd , 1872, at the age of 48 (courtesy of A.
Guédon).
Materials and methods
Recent and historical literature
A literature review was conducted in databases such as Pubmed, Google Scholar, Web of Science using keywords such as "Heschl Gyrus", "Primary Auditory Cortex", "Temporal Lobe", "Auditory System". We also used different historical (Brodmann, Von Economo and Koskinas) and recent digital anatomical atlases (Julich-Brain Cytoarchitectonic Atlas, Human Connectome Project's multimodal cortical parcellation 1.0). Richard L. Heschl's historical description [19] is available on Google Book (https://books.google.fr/). An English translation has been recently published [20]. We have done our own translation of the whole article into French and of the most salient passages (the anatomical description of the HG) into English.
Imaging
The radiological illustrations are from the Neuroradiology Department of Lariboisière Hospital, AP-HP, Paris with a three-dimensional T1 gradient echo sequence (MPRAGE) on a 3T MRI (Siemens Skyra, Siemens Healthineers, Germany) and a 3D CT angiogram performed with a biplane angiography system (Artis Zee, Siemens Healthineers, Germany).
Portrait photography
Théophile Le Comte, a member of the Malacological Society of Belgium and a naturalist living in Lessines, assembled in 1872/73 a beautiful album of portraits of European naturalists, which includes an unpublished photograph of Richard Ladislaus Heschl dating from 1872.
Manual labelling
The BigBrain Project, devised by the Montreal Neurological Institute and the German Forschungszentrum Jülich (https://bigbrain.loris.ca/main.php), provides a high-resolution 3D digital model of the human brain. It is based on a total of 7404 coronal brain sections of a 65-year-old woman that have been stained according to the Merker method and then digitalized [START_REF] Amunts | BigBrain: An Ultrahigh-Resolution 3D Human Brain Model[END_REF].
For the present study, using the MRIcron 2019 software (available at nitrc.org), the left HG was manually labelled on 100 consecutive sections of the 400 μm reconstruction (full16_400um_2009b_sym.nii). The 3D reconstruction was performed using the Mango version 4.1 software (http://rii.uthscsa.edu/mango/).
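As a practical illustration of this kind of workflow, the sketch below (ours, not the authors' actual pipeline) shows how the 400 μm BigBrain reconstruction named above and a manual label mask exported from MRIcron could be loaded in Python with nibabel to quantify the labelled region; the label file name is a placeholder.

```python
# Load the BigBrain volume and a manual HG label mask, then report the size of
# the labelled region. Illustrative sketch; 'left_HG_manual_labels.nii.gz' is a
# hypothetical file name for the ROI exported from MRIcron.
import numpy as np
import nibabel as nib

anat = nib.load("full16_400um_2009b_sym.nii")        # BigBrain 400 um reconstruction (named in the text)
labels = nib.load("left_HG_manual_labels.nii.gz")    # placeholder name for the manual label map

mask = labels.get_fdata() > 0                        # voxels labelled as left HG
voxel_volume_mm3 = float(np.prod(labels.header.get_zooms()[:3]))
print("labelled voxels:", int(mask.sum()))
print("left HG volume (mm^3):", mask.sum() * voxel_volume_mm3)

# Bounding box of the ROI, useful for cropping before 3D rendering (e.g., in Mango)
ijk = np.argwhere(mask)
print("bounding box (voxel indices):", ijk.min(axis=0), ijk.max(axis=0))
```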
Atlases
The Julich-Brain Atlas [START_REF] Amunts | EBRAINS -Whole-brain parcellation of the Julich-Brain Cytoarchitectonic Atlas (v2.9)[END_REF] relies on histological sections of 23 post-mortem brains. Those tissues were fixed in formalin, MR-imaged, embedded in paraffin, serially cut, stained, digitized, and registered to stereotaxic reference coordinate spaces (the MNI Colin 27 and / or the MNI ICBM 152). The Human Connectome Project Multi-Modal Parcellation [START_REF] Van Essen | The Human Connectome Project: A data acquisition perspective[END_REF] relies on topographic, myelin, cortical thickness, connectivity, and functional maps.
Results
Anatomy of the Heschl Gyrus (HG)
Following the terminology of the 2017 version of the Terminologia Neuroanatomica (TNA) [15], most recent works [START_REF] Amunts | Chapter 36 -Auditory System[END_REF][START_REF] Da Rocha | TASH: Toolbox for the Automated Segmentation of Heschl's gyrus[END_REF][START_REF] Nieuwenhuys | The human central nervous system : a synopsis and atlas[END_REF] describe Heschl's Gyrus/Gyri (HG), or "transverse temporal gyri", as a circumvolution that originates from the depth of the lateral sulcus (of Sylvius) in the retro-insular region, that extends forwards, downwards and laterally along the upper surface of the superior temporal gyrus (T1) towards its lateral bulge. As described by Ten Donkelhaar et al. [START_REF] Ten Donkelaar | Toward a Common Terminology for the Gyri and Sulci of the Human Cerebral Cortex[END_REF], the HG separates the Planum polare (PP) anteriorly from the Planum temporale (PT) posteriorly on the upper surface of T1. In the left cerebral hemisphere, dominant for language, the PT includes Wernicke's area that is involved in the comprehension of written and spoken language. Nevertheless, both MRI and postmortem studies show that there are considerable interhemispheric, and interindividual variations in this transverse temporal gyrus [START_REF] Sluming | Heschl gyrus and its included primary auditory cortex: Structural MRI studies in healthy and diseased subjects[END_REF].
In accordance with TNA 2017, the anteromedial limit of the HG is defined by the anterior transverse temporal sulcus (ATTS) that unites with the circular sulcus of the insula (of Reil). The posterolateral limit is determined by the posterior transverse temporal sulcus (PTTS) (also known as Heschl's sulcus), which lies almost parallel to the ATTS. HG's posteromedial border is formed by a line drawn from the medial ends of the PTTS and the ATTS, while its anterolateral edge is limited by the visible ending of the circumvolution [START_REF] Sluming | Heschl gyrus and its included primary auditory cortex: Structural MRI studies in healthy and diseased subjects[END_REF]. Finally, the inferior boundary of the HG is formed by a line joining the bases of the anterior and posterior TTS (Fig. 2). Owing to the considerable anatomical variations, an intermediate transverse temporal sulcus (ITTS) (of Beck) has been described when this sulcus' length is comparable to those of the anterior and posterior TTS. The ITTS is parallel, between the ATTS and PTTS, in about 40% of the cerebral hemispheres of healthy living subjects [START_REF] Marie | Descriptive anatomy of Heschl's gyri in 430 healthy volunteers, including 198 left-handers[END_REF].
The ITTS separates 2, 3, 4 and up to 5 gyri, with the frequency of the gyrification pattern decreasing with increasing number of gyri, a 5-gyri pattern being extremely rare [START_REF] Amunts | Chapter 36 -Auditory System[END_REF]. In the case of a duplication, the TNA 2017 defines an anterior transverse temporal gyrus and a posterior transverse temporal gyrus.
On sagittal or coronal planes, the HG has been described as having an omega shape, or alternatively as being shaped like a mushroom, a heart, a bow tie or two separate Ss [Sluming]. Regarding the minimal length of the ITTS required to describe two HGs, the most widely accepted view in the literature is that the ITTS should be at least half the length of the HG, a trait associated with a depth comparable to that of the other transverse temporal sulci [Penhune]; a shorter ITTS is commonly associated with a reduced depth.
From a microanatomical perspective, the main structure of interest embodied in the HG is the primary auditory cortex (PAC), initially described by Alfred Walter Campbell (1868 -1937) in 1905 as an "auditory-sensory area". This area is also commonly known as Area 41, or "Area temporalis transversa interna" according to Korbinian Brodmann (1868 -1918) in 1909 [7], or as Area Te1 after Morosan et al. [26] (Table 1 and Fig. 3). The grey matter of the neocortex is traditionally described as a superposition of six layers, numbered from the surface (I) to the depth (VI). Histological investigations make it possible to characterise, at the cellular and molecular levels, specific areas of the neopallium linked to particular functions. Thus, the synthesis of cyto-, myelo-, chemo-, and receptor-architectonic studies gives us an overview of the main microstructural characteristics of the PAC: a koniocortex surrounded by pro- and para-koniocortices, a high density of granular cells, small pyramidal cells in layer III, a well-developed and highly myelinated layer IV, a low cell density in layer V, a strong expression of acetylcholine esterase in the neuropile and a high concentration of cholinergic muscarinic M2 receptors [Sluming; Amunts] (Fig. 3E).
The inclusion of the PAC in the HG is widely understood. In the case of a duplication of the HG, the PAC most commonly tends to occupy the anterior HG [START_REF] Sluming | Heschl gyrus and its included primary auditory cortex: Structural MRI studies in healthy and diseased subjects[END_REF]. However, macroanatomical landmarks do not seem able to delineate with high accuracy the PAC. For instance, the PAC has been described as extending beyond the PTTS.
The lack of consistent correspondence between the macroanatomical landmarks of the HG and the microanatomical features of the PAC should be considered in the light of the considerable interhemispheric, and interindividual variations for each of these structures, and the significant heterogeneity of the methods and techniques used in the literature [START_REF] Morosan | Human Primary Auditory Cortex: Cytoarchitectonic Subdivisions and Mapping into a Spatial Reference System[END_REF][START_REF] Rademacher | Probabilistic Mapping and Volume Measurement of Human Primary Auditory Cortex[END_REF]. Indeed, descriptions of HG are based on anatomical, morphological and radiological studies, whereas the PAC is identified using cyto-architectonical approaches, connectivity and functional imaging techniques.
Based on historical data from the Oskar Vogt (1870 -1959) and Cécile Vogt-Mugnier (1875 -1962) laboratory and on studies by Adolf Hopf (1923 -2011), Nieuwenhuys et al. [28] have recently published a high precision myeloarchitectonic map of the human neocortex (180 areas). This describes the PAC as covering the anterior and posterior HGs, and defines 13 densely myelinated areas (145 to 157) with a core of high density located on the medial third of the anterior gyrus (146, 148 and 151) [28] (Fig. 3D).
Table 1 Main microanatomical descriptions of the PAC included in the HG, from the early 20th century to nowadays.
Translator's note: there are no bolded sections in the original article; bold type has been used here to indicate key points.
" […]
After opening the Sylvian fissure and removing the meninges, three surfaces can be observed. A superior surfacethe inferior surface of the operculum (translator's note: most likely the operculum frontale/parietale) with its transverse gyri, -the lateral surface of the Insula, ample and prominent, showing 5 to 7 convolutions; and an inferior surface which corresponds to the superior surface of the left temporal lobe. The latter usually presents a triangular shape with curved edges; the lateral edge is convex towards the outside, corresponding to the superior surface of the superior temporal gyrus (inferior bank of the Sylvian fissure); the medial edge surrounds the Insula at mid-length, forming a short and protruding circular arc; and the posterior edge, shorter than the two others, connects the two posterior ends of the two previous ones and its curvature varies from one brain to another.
After exposing the triangular surface described above, whose most acute angle points anteriorly, one promptly notices that one or more gyri lay on the posterior half and expand transversely from the anterolateral part to the posterior-medial part of the surface. The anterior convolution is always found, but very often one can also observe a second and often a third, sometimes even a fourth or a fifth convolution, either deploying like a fan or lying in a more parallel manner on the upper surface of the temporal lobe.
The first convolution (or anterior transverse temporal gyrus) emerges from the middle of the lateral edge of the superior surface of the temporal lobe, namely from the first temporal gyrus (T 1 according to Ecker); it is the longest of these gyri, merging quite often with the second transverse temporal gyrus. It ends either on its own or merging with the second, in the most posterior angle of the Sylvian fissure, about 1 cm from the entrance of the inferior horn (Translator's note: cornu temporale or cornu inferius) while the other convolutions end more laterally.
[…]
I termed these gyri as a whole, lying on the superior surface of the temporal lobe, (superior) transverse temporal gyri. Regarding the most anterior of these 3 to 4 gyri, which is always found, although its shape may vary, I refer to it as T 1 .
I have looked for it in no less than 1087 documented brains, and in hundreds of others, not less explicitly described and have found it every single time as a well-characterised gyrus of consistent shape. Its length varies with the depth of the Sylvian fissure: its height ranges from 4 to 12 mm, its thickness is usually 12 to 15 mm.
Sometimes the gyrus lies as anteriorly as posteriorly, and sometimes only posteriorly. It is delimited from the superior surface of the temporal lobe by a 6 to 8 mm deep sulcus, in such way that it rises with a 3 to 4 mm wide foot and overlaps it, showing a mushroom shape (on an axial section)."
The French translation is unpublished to the best of our knowledge; see Supplementary material 1.
Segmentation criteria
To ensure the reliability of the manual segmentation of the HG, it seems essential to us, on the basis of Heschl's princeps article, our study of the literature, and our subjective experience, to follow two principles from which we can deduce the correct order of identification of the relevant sulci, lobes, plana and gyri.
- Principle 1: Larger structures should be identified first
- Principle 2: Sulci should be identified before lobes, plana and gyri
In his 1878 article, Heschl first describes the dissecting instruments, methods, and steps to be followed to identify the HG. He proceeds from the largest to the smallest structures and explains that first the brain must be separated from the cerebellum, then both hemispheres, the meninges removed, the lateral sulcus opened, the fronto-parietal operculum, the Insula, T1, the circular sulcus of the Insula identified. Also, Heschl argued for the importance of the study of gyri and sulci in describing the anatomy of the brain and its embryological development: "Each circumvolution inevitably matches two sulci, and each sulcus two circumvolutions." [19].
From these two principles, we can now deduce the steps for achieving a reliable segmentation of the HG (Table 2).
The widest and most constant sulcus to be identified first is the lateral sulcus, separating the temporal lobe and its superior aspect, the superior temporal gyrus, from the frontal and parietal lobes. Next, the circular sulcus of the Insula defines the Insula and the medial border of T1. At this stage, there is no decisive argument as to whether we should identify the ATTS or the PTTS first. However, Heschl mentions in his paper: "a very deep posterior sulcus is always found, with which the first temporal sulcus (Translator's note: ATTS) merges directly (fissura parallela, t 1 according to Ecker), and ends in the crevices, the most medial and posterior faces of the Sylvian fissure." Dejerine describes it as the "deep temporal sulcus" [11] (Table 3). We therefore choose to identify the PTTS, also called Heschl's sulcus [15], before the ATTS. As the Planum temporale, Planum polare and HG/HGs are then clearly visible, we next look for one or more ITTS. Finally, we identify the inferior border (which is not a sulcus but an artificial border) to complete the segmentation of the HG. The 6 steps described in Table 2 should be applied to the coronal, axial and sagittal digital views. In addition, as manual segmentation requires good hand-eye coordination, preparatory drawings on the blackboard with chalk can be very useful (Fig. 5).
3D digital model of the HG and terminology
We generated a 3D digital model of the HG based upon the BigBrain Project data with manual labelling of this structure. As shown in Fig. 6A, the length of the ITTS is 30% of the length of the HG. De facto, we cannot describe two separate HGs, but rather a single HG with a partial duplication at its anterolateral third (also defined as a common stem duplication). Moreover, the ITTS is not as deep as the ATTS and the PTTS (Figs. 6A and B). Regarding the nomenclature, as recommended by the Trans-European Pedagogic Anatomical Research Group (TEPARG, https://teparg.com/), it will be in accordance with the latest international anatomical terminology of the Terminologia Anatomica (TA, 1998) and of the Terminologia Neuroanatomica (TNA, 2017)
[15] (Table 3).
Functional anatomy of the PAC included in the HG
The ascending pathway of the auditory system involves a multi-synaptic network. It starts in the cochlea, diverges as it reaches the anterior and posterior cochlear nuclei, then projects bilaterally with a contralateral dominance as it continues through the superior olivary complexes and the nuclei of the lateral lemnisci. From the lemnisci, the pathway merges at the contralateral inferior colliculus. Subsequently, the pathway projects to the contralateral auditory cortex through the medial geniculate body (Fig. 7A).
The descending pathway shows even greater complexity. It is also characterised by contralateral dominance, but not all projections cross the middle line. A direct pathway from the auditory cortex to the medial geniculate body and the inferior colliculus has been described. It then diverges, projecting ipsilaterally to the nucleus of the lateral lemniscus, the posterior cochlear nucleus and the medial periolivary nucleus, and contralaterally to the posterior cochlear nucleus. Axons of neurons of the medial periolivary nuclei and the lateral periolivary nuclei on both sides then reach the anterior and posterior cochlear nuclei. The contralateral cochlea is connected to the contralateral medial and lateral periolivary nuclei, and to the ipsilateral medial periolovary nucleus [START_REF] Amunts | Chapter 36 -Auditory System[END_REF] (Fig. 7B). The physical organisation of the PAC neurons is described in terms of their response to sound frequency.
According to this tonotopic organization, cells at the anterior end tend to be more responsive to low tones (corresponding to the apex of the cochlea) and those at the posterior end to high tones (corresponding to the base of the cochlea) [START_REF] Gazzaniga | Cognitive Neuroscience The Biology of the Mind[END_REF].
While understanding the cortico-cortical connections of the PAC still requires further investigations, especially in humans, strong connections with both the anterior and posterior superior temporal gyri have been implicated, relying both on serial and parallel processing [START_REF] Amunts | Chapter 36 -Auditory System[END_REF]. As part of the auditory cortex, the PAC has also been termed BA41 (Brodmann area 41) or A1 or "core area". PAC is also related to the secondary auditory cortex (termed BA42 (Brodmann area 42) (Figs. 3 A andB) or A2 or "belt area") and to the more lateral parabelt area [START_REF] Gazzaniga | Cognitive Neuroscience The Biology of the Mind[END_REF]. Short and long-range connections pass beyond the temporal lobe to reach parietal and frontal regions and also the PAC of the opposite hemisphere. Recent tractography reconstruction has shown that auditory processing pathways can be visualised in vivo and information from the HG appears to be projected to the inferior parietal and posterior frontal gyri through the arcuate fasciculus. A ventral pathway also passes information "from posterior to more anterior temporal and frontal area through the longitudinal fasciculus, the uncinate fasciculus, and the inferior fronto-occipital fasciculus" [START_REF] Maffei | Chapter 16 -Imaging white-matter pathways of the auditory system with diffusion imaging tractography[END_REF] (Fig. 8).
Fig. 8 In vivo tractography reconstruction of the auditory processing pathways from Maffei (2015). White dots: arcuate fasciculus; Black dots: ventral pathway; Black star: the Heschl Gyrus.
Providing reliable and accurate neuroanatomical atlases nowadays faces considerable challenges because of discrepancies arising from the variety of techniques employed to study the brain, including morphological, cyto- and myelo-architectonical investigations and the study of functional and cortico-cortical connections. The discrepancies between the various descriptions of the HG and the PAC could be overcome using a multi-modal parcellation approach that merges multiple deterministic and probabilistic maps.
The Julich-Brain Atlas contains probabilistic maps of cortical areas (Fig. 9A). It also provides connectivity data from the "1000 brains study" by Caspers et al. [8], thus allowing visualization of an estimate of the connection strength from each region to all other areas (Fig. 9B). Given the fast-growing body of neuroimaging data, Yarkoni et al. [43] propose a large-scale, high-quality, automated meta-analysis approach based upon the functional magnetic resonance imaging literature, available online at neurosynth.org (Fig. 11).
Discussion
In our study, we examined the morphology, microstructure and functional connectivity of the HG and its included PAC, providing a broad overview of the available knowledge about this structure. We have revisited the original work of Richard L. Heschl, presenting an unpublished French translation. We proposed segmentation criteria and a manually labelled 3D digital model. We illustrated the challenges that modern neuroanatomical atlases face due to the considerable development of neuroimaging and probabilistic approaches in the last decade and how this relates to the HG.
Comparative anatomy and an evolutionary perspective
Despite the complexity of language and speech in humans, the structure of the PAC (BA41) in non-human primates is similar to that in humans [Kaas]. From a microanatomical perspective, these similarities include the properties of a sensory area: a high density of granular cells in layers III and IV of the cortex, a strong expression of acetylcholinesterase, parvalbumin and cytochrome C oxidase, and a significant thalamic input [Kaas]. It has been suggested that structural differences in the PAC of humans are primarily quantitative in nature and could relate to the way language diverged from the communication abilities of earlier primates. This change could have occurred, concomitantly with the increase in brain volume, alongside more complex communicative and social interactions.
A further consideration is the strong involvement of BA47 (the orbital area of the prefrontal cortex) in language processing. The human prefrontal cortex is involved in the processing of syntax, relying on a different connectivity of the ventral pathways for complex sounds processing (including Broca's area or BA 44/45) compared with nonhumans. However, there is great similarity in the extreme capsule (a long bidirectional association pathway involved in language), as demonstrated between humans and macaques.
Differences in humans with respect to anatomical and functional interhemispheric variability and to the language- and speech-related cortical areas of the left hemisphere could be related to levels of linguistic skill. Differences in auditory processing among species have also been described. For instance, the auditory cortex of the rhesus macaque has been shown to respond to species-specific vocalisations and to respond more to band-passed noises (BPN) than to pure tones [Tian]. With regard to the areas associated with the PAC (core area), namely the belt, parabelt and lateral belt areas, the degree of similarity between humans and non-human primates appears to be very high. Hence, differences seem to concern high levels of auditory processing rather than the structure of the PAC [Kaas].
Clinical considerations
It has been shown that the HG is linked to acoustic processing [START_REF] Warrier | Relating Structure to Function: Heschl's Gyrus and Acoustic Processing[END_REF] and the morphology and size of the HG have been linked to linguistic and musical abilities. Some authors have even considered the HG provides neuroanatomical markers for these abilities [START_REF] Turker | When Music Speaks": Auditory Cortex Morphology as a Neuroanatomical Marker of Language Aptitude and Musicality[END_REF][START_REF] Wengenroth | Increased Volume and Function of Right Auditory Cortex as a Marker for Absolute Pitch[END_REF]. It has been reported that duplication of the HG in the right cerebral hemisphere is correlated to higher scores in foreign languages and musicality tests.
In terms of language skills, bilinguals have larger HGs and multiple HGs in the right hemisphere and increased grey matter volumes are correlated to language aptitude in children and teenagers [START_REF] Turker | Auditory Cortex Morphology Predicts Language Learning Potential in Children and Teenagers[END_REF]. Dyslexia has been linked
to a large duplication of the left HG [START_REF] Leonard | Anatomical Risk Factors for Phonological Dyslexia[END_REF]. Moreover the anatomy of the HG appears to be related to handedness and its gyrification pattern to speech-listening hemispheric lateralization [START_REF] Tzourio-Mazoyer | Heschl's gyrification pattern is related to speechlistening hemispheric lateralization: FMRI investigation in 281 healthy volunteers[END_REF].
In terms of musicality, an increased volume in the right cerebral hemisphere is linked to the ability to identify each sound by its name/frequency ("absolute pitch") [START_REF] Wengenroth | Increased Volume and Function of Right Auditory Cortex as a Marker for Absolute Pitch[END_REF]. Multiple HGs, or an increased grey matter volume for this structure, have also been more frequently identified in both professional and amateur musicians than in nonmusicians [START_REF] Benner | Prevalence and function of Heschl's gyrus morphotypes in musicians[END_REF]. The morphology of HG has additionally been linked to musical instrument preference and multiple HGs, and an increased volume in the left hemisphere, have been found in patients with hypermusia associated with Williams Syndrome (microdeletion 7q11.23).
In deaf persons, the volume of the HG is not modified, although the grey/white matter ratio increases in favour of grey matter. A reduction of the volume of the HG has also been correlated with tinnitus and, in the rare case of a bilateral infarct of the HG, a cortical deafness can be observed [START_REF] Narayanan | A Case of Cortical Deafness due to Bilateral Heschl Gyrus Infarct[END_REF]. Indeed, the ascending pathway of the auditory system projects bilaterally to the primary auditory cortical areas, thus explaining why a unilateral acute cerebral ischemia does not result in deafness, whereas an infarct of Broca's area (BA 44/45) is more reliably associated with aphasia.
Concerning psychiatric disorders, several studies have linked the morphology of the HG to auditory hallucinations in patients with schizophrenia or schizophrenia spectrum disorders, one of the most recent being described by Takahashi et al. [START_REF] Takahashi | Heschl's Gyrus Duplication Pattern in Individuals at Risk of Developing Psychosis and Patients With Schizophrenia[END_REF]. Furthermore, the white and grey matter volumes in female patients suffering from this psychiatric disorder seem inversely correlated to the symptom's severity [12] but, regarding the grey matter, this result is also found in male patients. Paradoxically, auditory hallucinations in patients with schizophrenia could cause, over time, an increase in HG volume.
Beyond the above anatomical, functional, and pathological considerations, a major point to be elucidated is the respective contribution of neural plasticity, shaped by the subject's experiences, and of innate genetic factors.
Finally, further histological and molecular studies are required to understand the mechanisms connecting descriptive anatomy, pathology and linguistic or musical abilities [Zatorre].
Strengths and limitations
Our study has several strengths. The literature review we propose is comprehensive, including historical sources that are often cited but not discussed in detail in most studies. Given the diverse backgrounds of the authors, we provide an overview of knowledge about the HG that merges perspectives from different relevant fields, including neuroradiology, psychiatry, medical anatomy, neuroscience, cognitive science, literature, and music.
Nevertheless, our study also has several limitations. The segmentation criteria are mainly based on subjective experience, pedagogical tradition, and a review of the literature. The 3D digital model we propose has not been evaluated by an independent panel and the criteria have not been tested in a prospective, single-blind, comparative cohort study. They have a low level of evidence by current methodological standards. Such an experiment could be conducted with first, second and third year medical students in European countries, for example, although establishing standardised conditions and achieving statistical significance may be difficult due to the number of participants required. By comparison, Dalboni da Rocha et al., in the TASH study, use a single expert to set the gold standard (manually labelled segmentation) against which their automated segmentation method is compared and validated.
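To make the comparison against such a gold standard concrete, the following sketch (ours; file names are placeholders) computes the Dice overlap coefficient between a manual and an automated HG segmentation, a common way to quantify agreement between two binary labels.

```python
# Dice overlap between a manual and an automated HG segmentation (illustrative;
# both files are placeholders and must share the same voxel grid).
import numpy as np
import nibabel as nib

manual = nib.load("HG_manual.nii.gz").get_fdata() > 0
automated = nib.load("HG_automated.nii.gz").get_fdata() > 0

intersection = np.logical_and(manual, automated).sum()
dice = 2.0 * intersection / (manual.sum() + automated.sum())
print(f"Dice coefficient: {dice:.3f}")  # 1.0 = perfect overlap, 0.0 = no overlap
```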
Perspectives
There is no doubt that new technological breakthroughs in the fields of functional neuroimaging, probabilistic neuroimaging data processing, and high-performance computing in a broad sense will both challenge and expand our current knowledge of the HG. As discussed by Amunts et al [START_REF] Amunts | The coming decade of digital brain research -A vision for neuroscience at the intersection of technology and computing[END_REF], the construction of reliable digital twins of the brain may be the next important and disruptive step in achieving clinically relevant applications.
Conclusion
We were able, by means of a thorough review of the literature, to define clear, coherent and reliable criteria for manual labelling of the HG. These criteria could be used as groundwork to ensure the coherence of the various
Fig. 2 Anatomy of the Heschl Gyrus. A: Coronal 3D T1 MPRAGE MRI section; B: Axial section; C: Sagittal section. 1: Heschl's Gyrus (HG); 2: posterior temporal transverse sulcus (PTTS) (also known as Heschl's sulcus); 3: anterior temporal transverse sulcus (ATTS); 4: Insula; 5: Frontoparietal region; 6: lateral sulcus (of Sylvius); 7: superior temporal gyrus (T1); 8: intermediate temporal transverse sulcus (ITTS) (of Beck); 9: Planum Temporale (PT); 10: posteromedial boundary; 11: inferior boundary
Fig. 3 The Primary Auditory Cortex included in the Heschl Gyrus represented in several atlases, from 1909 to nowadays. A and B: Area 41 from Brodmann (1909) [7], reproduced with permission from Elsevier (Morosan, 2001) [26]. C: Areas Tc and Td from Von Economo and Koskinas (1925) [14], reproduced with permission from Elsevier (Scholtens LH, 2018) [33]. D: Areas 145 to 157 from Nieuwenhuys (2017) [28], reproduced with permission from Springer Nature. E: Areas Te1.2, Te1.0, Te1.1 from Morosan (2001) [26], reproduced with permission from Elsevier
Fig. 4 Arterial vasculature of the Heschl Gyrus. A, C, E: Axial, right parasagittal and coronal 3D T1 MPRAGE MRI sections showing Heschl's gyri along the upper surface of the superior temporal gyri (T1); the left Heschl gyrus is partially duplicated. B, D, F: 3D CT angiogram of the two internal carotid arteries showing the vascularisation of Heschl's gyri (*) with the characteristic loops.
Fig. 5 Preparatory drawing on the blackboard with chalk. A: Topographic superior view of the upper surface of the right T1. PT: Planum temporale; HG: Heschl Gyrus; PP: Planum polare; INS: Insula; ATTS: Anterior temporal transverse sulcus; PTTS: Posterior temporal transverse sulcus (also known as Heschl's sulcus). B: Single HG, partial and complete duplication (right hemisphere). Red: ATTS; Blue: PTTS; Green: ITTS.
Fig. 6 Manual labelling and 3D modelling of the left HG from the BigBrain Dejerine Digital Atlas [31]. A: The left HG from the BigBrain Dejerine Digital Atlas. B: Manual labelling of the HG using MRIcron 2019. Data were used with permission from the BigBrain Project [3].
Fig. 7 Ascending (A) and descending (B) pathways of the auditory system on a posterior view of the brain stem (without the cerebellum). (Courtesy of the Anatomy Laboratory of Paris Cité University). ACN = anterior cochlear nucleus; PCN = posterior cochlear nucleus; SOC = superior olivary complex; LLN = lateral lemniscus nucleus; IC = inferior colliculus; MGB = medial geniculate body; LPON = lateral periolivary nucleus; MPON = medial periolivary nucleus; AC = auditory cortex
Fig. 9 Cytoarchitectonic parcellation and connectivity data from the Julich Brain Atlas. A: Area Te1.1 right from Amunts (2021) [4]. B: Connectivity profile of Area Te1.1 right (red = high, deep blue = low) from Caspers (2021) [8]. Views from the siibra-explorer website (https://atlases.ebrains.eu/viewer/)
Fig. 11 Representation of the Heschl Gyrus from Neurosynth.org. This representation is based upon an automated meta-analysis of all the functional magnetic resonance imaging studies in the Neurosynth database whose abstracts include the term "Heschl Gyrus" at least once (85 studies for the HG) (https://neurosynth.org/analyses/terms/heschl%20gyrus/) (Yarkoni et al. 2011) [43]
Figures 3A, 3B, 3C and 3E are reproduced with permission from Elsevier
Figure 11 is reproduced with permission from Neurosynth
Table 2 Manual labelling of the Heschl Gyrus: a step-by-step methodology (columns: Steps, Sulci)
Table 3 Terminology used for the description of the HG: Heschl (1878) [19], Dejerine (1895) [11] and TNA (2017) [15]
Heschl (1878) [19] | Dejerine (1895) [11] | TNA (2017), Latin and English [15]
vordere quere Schläfenwindung | Circonvolutions temporales transverses de Heschl / plis de passage temporo-pariétaux / région rétro-insulaire de Broca (Tp) | (2062) Gyri temporales transversi - Transverse temporal gyri
sylvische Spalte | Scissure de Sylvius | (2003) Sulcus lateralis - Lateral sulcus
Insel von Reil | Insula | (2074) Insula - Insula
Klappdeckel | | (2043) Operculum parietale - Parietal operculum (most likely)
erste Schläfenwindung (T1 nach Ecker) | première circonvolution temporale (T1) | (2057) Gyrus temporalis superior - Superior temporal gyrus
| | (2067) Sulcus temporalis transversus anterior - Anterior transverse temporal sulcus
| | (2068) Sulcus temporalis transversus intermedius - Intermediate transverse temporal sulcus
tiefe Furche | Sillon temporal profond (tp) | (2069) Sulcus temporalis transversus posterior - Posterior transverse temporal sulcus
| Circonvolution de l'enceinte de l'Insula | (2079) Sulcus circularis insulae - Circular sulcus of the insula
Another work well worth noting is the population-based 180-area multi-modal parcellation by Glasser et al., based upon the Human Connectome Project (Human Connectome Project Multi-Modal Parcellation version 1.0) (Figs. 10A and B).
Acknowledgements
Funding
The authors did not receive support from any organization for the submitted work.
Availability of data and materials
All data are available on request from the corresponding author.
studies devoted to the HG, and to establish and validate automated segmentation methods. Richard L. Heschl's work is still relevant today and could stimulate further anatomical and functional studies in the neurosciences.
Declarations: Ethical Approval
Not applicable.
Competing interests
The authors have no conflicts of interest to declare. Heschl's archive was transferred from the University of Graz to the "Narrenturm" (madhouse tower) of the "naturhistorisches Museum" in Vienna. We thank Petra Greeff, Eduard Winter and Gerald Höfler for their valuable help.
Supplementary Information
Below is the link to the electronic supplementary material. |
HAL id: hal-03882253 (2023), math.CV / math.CA. https://hal.science/hal-03882253v2/file/Xi%28u%29-2.pdf
Raouf Abd
Chouikha
INFINITE PRODUCT REPRESENTATIONS FOR THE WEIERSTRASS ELLIPTIC FUNCTION ℘(z) AND APPLICATIONS
Keywords: theta functions, elliptic functions, trigonometric expansions 2010 Mathematics Subject Classication. 33E05, 30-08, 11F32, 11F16, 11F27. 1
In this paper we bring out innite products expansions for the Weierstrass's elliptic function ℘(z) = ℘(z, τ ) with primitive periods (2, 2τ ) and derive some n-order transformations of that function as well as for its derivative ℘ (z). This allow us to provide some new modular relations.
e 1 (τ ) = ℘(1, τ ), e 2 (τ ) = ℘(-1 -τ, τ ), e 3 (τ ) = ℘(τ, τ ).
The Weierstrass's function ℘(z) = ℘(z; ω, ω ) is an elliptic function with two primitive periods (2ω, 2ω ) such that the imaginary part ω ω > 0 which is of order two, has a double pole at z = 0, Recall that ℘(z) -1 z 2 is analytic in a neighborhood of 0 and it is uniquely dened. One then obtains the analytic series representation of ℘(z)
℘(z) = ℘(z; g 2 , g 3 ) = ℘(z, τ ) = 1 z 2 + m,n 1 (z -2mω -2nω ) 2 - 1 (2mω + 2nω ) 2 ,
where τ = ω ω , The prime symbol means that m and n are not simultaneously zero. A direct consequence of the preceding denition is that the Weierstrass elliptic function is an even function ℘(-z; ω, ω ) = ℘(z; ω, ω ). Moreover, this function veries the following homogeneity condition for any complex λ = 0 ℘(λz; λω, λω ) = λ -2 ℘(z; ω, ω ).
For this reason and for the sake of simplicity we will only consider in the sequel ℘(z) with primitive periods (2, 2τ ), simply denoted by ℘(z; 2, 2τ ) = ℘(z, τ ).
The original constructions of elliptic functions are due to Weierstrass and Jacobi [START_REF] Appell | Fonctions elliptiques et applications Gauthiers-Villard[END_REF][START_REF] Whittaker | A course of Modern Analysis Cambridge[END_REF]. Nice approaches on the subject of elliptic functions are the classic book by Watson and Whittaker [12] or the excellent full compilation of Tannery and Molk [START_REF] Whittaker | A course of Modern Analysis Cambridge[END_REF]. Useful reference handbooks with many details on transcendental functions including those used in this paper are provided by Bateman and Erdelyi, [START_REF] Kiepert | Ueber Theilung und Transformation der elliptischen Functionen[END_REF].
Recall some useful facts on Weierstrass elliptic function. Its values at the halfperiods: ω 1 = 1, ω 2 = -1 -τ, ω 3 = τ are These e i obey the relations e 1 + e 2 + e 3 = 0, e 1 e 2 + e 3 e 1 + e 2 e 3 = -g 2 4 , e 1 e 2 e 3 = g 3 4 .
(1)
Finally, when two of the roots e 1 , e 2 and e 3 coincide, the Weierstrass elliptic function degenerates to a simply periodic function.
On the other hand, the Weierstrass function ℘(z, τ ) is connected to the Jacobi theta functions
θ i (v) = θ i (v, τ ), i = 1, 2, 3, 4 where v = z 2 : ℘(z) = ( 1 2 ) 2 [-4η - d 2 logθ 1 (v) dv 2 ] η = η(τ ) = - 1 12 θ 1 (0) θ 1 (0) = π 2 2 [ 1 6 + n≥1 1 (sin nπτ ) 2 ].
(2)
We have also
℘(z + τ ) = ( 1 2 ) 2 [-4η - d 2 logθ 4 (v) dv 2 ].
Therefore,
e 3 (τ ) = ℘(τ ) = -η(τ ) + π 2 k≥0 1 1 -cos(2k + 1)πτ , e 3 (τ ) = ( π 2 ) 2 θ 1 (0) 3θ 1 (0) - θ 4 (0) θ 4 (0) .
By the same way
e 2 (τ ) = -η(τ + 1) + π 2 k≥0 1 1 + cos(2k + 1)πτ = ( π 2 ) 2 θ 1 (0) 3θ 1 (0) - θ 3 (0) θ 3 (0) . e 1 (τ ) = η(τ ) + η(τ + 1) -2π 2 k≥0 1 (sin(2k + 1)πτ ) 2 = ( π 2 ) 2 θ 1 (0) 3θ 1 (0) - θ 2 (0) θ 2 (0) .
Notice also the following theta function identity [START_REF] Tannery | Elements de la theorie des Fonctions Elliptiques[END_REF][START_REF] Whittaker | A course of Modern Analysis Cambridge[END_REF] deriving from (1)
θ 1 (0) θ 1 (0) = θ 2 (0) θ 2 (0) + θ 3 (0) θ 3 (0) + θ 4 (0) θ 4 (0) .
1.2. The Weierstrass' sigma function.
The Weierstrass' sigma function is an entire function dened by
σ(z) = σ(z, τ ) = z m,n (1 - z m + nτ ) exp z m + nτ + z 2 2(m + nτ ) 2 .
The prime symbol means that m and n are not simultaneously zero. This function is also connected to ℘(z)
℘(z) = ( σ'(z)² - σ(z) σ''(z) ) / σ(z)².
We obtain the analogous for a connection with the sigma function
℘(z) -e i = - σ(z + ω i )σ(z -ω i ) σ 2 (z)σ 2 (ω i ) = σ i (z) σ(z) 2 , i = 1, 2, 3,
where
ω 1 = 1, ω 2 = -1-τ, ω 3 = τ.
The zeros of ℘'(z) are the half-periods ω_i, and e_i = ℘(ω_i).
Notice that the function ℘(z) - ℘(ω_i) is an elliptic function of order two, with a double zero at ω_i and a double pole at 0; hence the function
[℘(z) - ℘(ω_i)]^{1/2} = σ_i(z)/σ(z)
is single valued.
The aim of this paper.
The literature on the various representations of the function ℘(z) is notably abundant. Several types of representations, whether as analytic or trigonometric series, are widely described [10, 12]. In this work we are particularly interested in infinite product expansions of this function; this approach seems to have been little exploited. This paper is organized as follows.
First of all, we exhibit an infinite product representation of the Weierstrass elliptic function ℘(z) = ℘(z, τ) with primitive periods 2, 2τ, where z, τ are complex numbers such that Im τ > 0 and |Im z| < 2 Im τ:
℘(z) -e 1 = (π cot πz 2 ) 2 4 k≥1 cot(kπτ -πz 2 ) cot(kπτ + πz 2 ) [cot(kπτ )] 2 2 .
On the other hand, we consider the n-order odd decompositions, as products, of the Weierstrass sigma functions and of their quotients
σ i (z, nτ ) = exp(z 2 P 1 ) σ j (z) n-1 r=1 σ j (z + 2r n )σ j (z -2r n ) σ 2 ( 2r n )
,
where
P 1 = n-1 r=1 ℘( 2r
n ), as well as
σ j σ (nz, nτ ) = n-1 m=0 σ j σ (z + 2mπ n , τ ) n-1 m=0 σ σ j ( 2mπ n , τ ), j = 1, 2, 3.
We refer for that for example to Tannery and Molk [10, T.2, p.215,246] as well as to H. Schwarz [9, p.6,36].
The last decomposition allows us to deduce n-decomposition for the elliptic Weierstrass function ℘(z, τ ) :
℘(nz, nτ ) -e 1 (nτ ) = 4 π 4 n-1 θ 2 3 (0, nτ )θ 2 4 (0, nτ ) [θ 2 3 (0, τ )θ 2 4 (0, τ )] n n-1 m=0 ℘(z + m n , τ ) -e 1 (τ ) ,
where e 1 = ℘(1, τ ), as well as for its derivative :
℘ (nz, nτ ) = 4 π 4 n-1 θ 2 1 (0, nτ ) θ 2 1 (0, τ ) n n-1 m=0 ℘ (z + m n , τ ).
Finally, we also consider the functions ξ_βγ(u) introduced by Tannery and Molk [10, T.2, p.168]. They are defined by
ξ_βγ(u) = σ_β(u)/σ_γ(u) = √(℘(u) - e_β)/√(℘(u) - e_γ),
where β, γ ∈ {0, 1, 2, 3}, β ≠ γ. Here σ₀(u) = σ(u) and e_j = ℘(ω_j), where the ω_j are the zeros of ℘'(u).
We then explore the n-decomposition of the logarithmic derivative of these functions.
As a consequence of all the above, for |Im z| < 2 Im τ we derive modular identities which seem to be new in the literature:
k =0 1 sin(2knπτ + nπz) = n-1 m=0 k =0 1 sin(2kπτ + π(z + 2m n )) = 2 1-n k =0 n-1 m=1 1 sin(2kπτ + π(z + 2m n )) = θ 2 2 (0, nτ ) [θ 2 2 (0, τ )] n n-1 m=0 k =0 1 sin(2kπτ + πz + mπ n )
.
To prove that we will use the above n-decomposition of ℘(z, τ ) as well as the n-transformations of
℘ (z) ℘(z) -e 1 = - 2π sin πz + 2π k≥1 1 sin(2kπτ -πz) - 1 sin(2kπτ + πz) .
Weierstrass's function ℘(z) and infinite products
As we have seen in 1.1, the Weierstrass function ℘(z) = ℘(z; ω, ω') is an elliptic function of order two with two primitive periods (2ω, 2ω') verifying the condition
τ = ω'/ω, Im τ > 0.
Recall that throughout this paper we take ω = 1 and ω' = τ, with Im τ > 0. In order to avoid any ambiguity, ℘(z) always denotes ℘(z, τ).
Infinite product representations of ℘(z).
We have seen above, in 1.2, the connection between ℘(z) and the theta functions. The following is well known: for v = z/(2ω) = z/2,
℘(z) = e_i + (1/4) [ θ₁'(0) θ_{i+1}(v) / ( θ_{i+1}(0) θ₁(v) ) ]²,  i = 1, 2, 3.
These relations allow us to derive various infinite product expansions of the Weierstrass function.
Theorem 2-1. The Weierstrass function ℘(z) = ℘(z, τ) with primitive periods 2 and 2τ verifies the following identities
℘(z) -e 1 = (π cot πz 2 ) 2 4 k≥1 cot(kπτ -πz 2 ) cot(kπτ + πz 2 ) [cot(kπτ )] 2 2 = (π 2 θ 3 (0)θ 4 (0) cot πz 2 ) 2 4 k≥1 cot(kπτ - πz 2 ) cot(kπτ + πz 2 ) 2 ,
where e₁ = ℘(1) and |Im z| < 2 Im τ. We obtain the two other infinite products by permuting the e_i:
℘(z)-e 3 = π 2 2 sin πz 2 2 k≥1 sin((k -1 2 )πτ -π z 2 ) sin(kπτ -π z 2 ) sin((k -1 2 )πτ + π z 2 ) sin(kπτ + π z 2 ) 2 sin(kπτ ) sin((k -1 2 )πτ ) 4 , ℘(z)-e 2 = π 2 2 sin πz 2 2 k≥1 cos((k -1 2 )πτ -π z 2 ) sin(kπτ -π z 2 ) cos((k -1 2 )πτ + π z 2 ) sin(kπτ + π z 2 ) 2 sin(kπτ ) cos((k -1 2 )πτ ) 4 .
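Before turning to the proof, the first identity lends itself to a quick numerical test; the sketch below is ours (it is not part of the paper), uses the conventions above, and approximates ℘ by a brute-force truncated lattice sum, so only a few digits of agreement are expected.

```python
# Check wp(z) - e1 against (pi*cot(pi*z/2))^2/4 * prod_k [cot(k*pi*tau - pi*z/2)
# * cot(k*pi*tau + pi*z/2) / cot(k*pi*tau)^2]^2.  The product converges geometrically;
# the truncated lattice sum is the accuracy bottleneck.
import cmath

def wp(z, tau, N=60):
    s = 1 / z**2
    for m in range(-N, N + 1):
        for n in range(-N, N + 1):
            if (m, n) != (0, 0):
                w = 2 * m + 2 * n * tau
                s += 1 / (z - w)**2 - 1 / w**2
    return s

def cot(x):
    return 1 / cmath.tan(x)

tau, z = 0.2 + 1.1j, 0.37 + 0.21j          # here |Im z| < 2 Im(tau)
lhs = wp(z, tau) - wp(1, tau)

rhs = (cmath.pi * cot(cmath.pi * z / 2))**2 / 4
for k in range(1, 40):
    rhs *= (cot(k*cmath.pi*tau - cmath.pi*z/2) * cot(k*cmath.pi*tau + cmath.pi*z/2)
            / cot(k*cmath.pi*tau)**2)**2

print(lhs, rhs)     # should agree up to the truncation error of the lattice sum
```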
Proof of Theorem 2-1. Starting from (as seen above in 1.2)
℘(z) = e₁ + (1/4) [ θ₁'(0) θ₂(v) / ( θ₂(0) θ₁(v) ) ]²,
and by [2, Corollary 3-5] which asserts
θ₁(v, τ) π / ( sin(πv) θ₁'(0, τ) ) = ∏_{k≥1} [ 1 - sin²(πv)/sin²(kπτ) ] = ∏_{k≥1} sin(kπτ - πv) sin(kπτ + πv) / sin²(kπτ),
θ₂(v, τ) / ( cos(πv) θ₂(0, τ) ) = ∏_{k≥1} [ 1 - sin²(πv)/cos²(kπτ) ] = ∏_{k≥1} cos(kπτ - πv) cos(kπτ + πv) / cos²(kπτ).
Then,
θ₁'(0) θ₂(v) / ( θ₂(0) θ₁(v) ) = π cot(πv) ∏_{k≥1} cos(kπτ + πv) cos(kπτ - πv) sin²(kπτ) / ( sin(kπτ + πv) sin(kπτ - πv) cos²(kπτ) ) = π cot(πv) ∏_{k≥1} cot(kπτ + πv) cot(kπτ - πv) / cot²(kπτ).
Moreover, by the relationship with the Weierstrass sigma functions,
℘(z) - e₁ = ( σ₁(z)/σ(z) )²,
we have, for v = z/2 (see for example Schwarz [9, p.8, 36]),
σ(z) = e^{2ηv²} (2/π) sin(πv) ∏_{k≥1} sin(kπτ - πv) sin(kπτ + πv) / sin²(kπτ),
σ₁(z) = e^{2ηv²} cos(πv) ∏_{k≥1} cos(kπτ - πv) cos(kπτ + πv) / cos²(kπτ).
We then derive the expression
σ₁(z)/σ(z) = (π/2) cot(πv) ∏_{k≥1} cot(kπτ - πv) cot(kπτ + πv) / cot²(kπτ),
and deduce
℘(z) - e₁ = (π cot(πz/2))²/4 · ( ∏_{k≥1} cot(kπτ - πz/2) cot(kπτ + πz/2) / cot²(kπτ) )²,
which is the first identity of Theorem 2-1.
By permutation of the e i , i = 1, 2, 3 we also obtain the other expressions (see Schwarz [9, p.36])
σ 2 z = e ηv 2 2 k≥0 cos((k -1 2 )πτ -πv) cos((k -1 2 )πτ + πv) cos((k -1 2 )πτ ) 2 , σ 3 z = e ηv 2 2 k≥0 sin((k -1 2 )πτ -πv) sin((k -1 2 )πτ + πv) sin((k -1 2 )πτ ) 2 ,
and then deduce analog innite products for
℘(z) -e 2 = σ 2 z σz 2 ℘(z) -e 3 = σ 3 z σz 2 .
Remark 2-2: (i) This infinite product expansion of elliptic functions may be proved differently. Indeed, starting from the infinite product [9, p.8]
sin(πu/(2ω)) = (πu/(2ω)) ∏_{n≥1} (1 - u/(2nω)) e^{u/(2nω)} ∏_{n≥1} (1 + u/(2nω)) e^{-u/(2nω)},
H. A. Schwarz noticed that the sigma function may also be written
σz = 2ω π sin(πv)e 2ηωv 2 n≥1
sin(nπτ -πv) sin(nπτ + πv)
(sin nπτ ) 2 = e 2ηωv 2 2ω π sin(πv) n≥1 1 - (sin πv) 2 (sin nπτ ) 2 , where v = z 2ω , η = π 2 2ω 1 6 + n≥0 1 (sin nπτ ) 2 .
(ii) Other relations and descriptive properties of ℘(z) may be derived from Theorem 2-1. For example, We may write :
℘(z + 1) = e 1 + (π tan π z 2 ) 2 4 k =0 tan(kπτ -π z 2 ) cot(kπτ ) 2 .
We then derive the following well-known expression ([4, 13.13, p.333]):
(℘(z + 1) - e₁)(℘(z) - e₁) = (e₁ - e₂)(e₁ - e₃).

Logarithmic differentiation of ℘(z) - e₁.
Taking the logarithmic differentiation in Theorem 2-1, we derive the following.
Corollary 2-3. Let ℘'(z) be the derivative of the Weierstrass function relative to the periods (2, 2τ). We then have
℘'(z)/(℘(z) - e₁) = -2π/sin(πz) + 2π Σ_{k≠0} 1/sin(2kπτ - πz) = -σ(2z)/( σ²(z) σ₁²(z) ),
where σ(z), σ₁(z) are the Weierstrass sigma functions (as defined above in 1.3).
Indeed, that result follows from Theorem 2-1 and the identity
d cot x dx 1 cot x = - 2 sin 2x
.
Then
℘ (z) ℘(z) -e 1 = - 4π 2 sin πz + 4π 2 k≥1 1 sin(2kπτ -πz) - 1 sin(2kπτ + πz) = k 2π sin(2kπτ -πz)
.
We may express it otherwise (see for example Lawden [7, p.161]) by means of the Weierstrass elliptic zeta function defined by ζ'(z) = -℘(z):
℘ (z) ℘(z) -e 1 = 2 σ 1 σ 1 (z) - σ σ (z) = -2 σ 2 (z)σ 3 (z) σ(z)σ 1 (z) = - σ(2z) σ 2 (z)σ 2 1 (z) = 2ζ(z+1)-2ζ(z)-2η.
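This expansion is also easy to test numerically; the sketch below is ours (not the paper's), with ℘ and ℘' approximated by truncated lattice sums, so only rough agreement is expected.

```python
# Check wp'(z)/(wp(z)-e1) = -2*pi/sin(pi*z)
#        + 2*pi*sum_{k>=1} [1/sin(2k*pi*tau - pi*z) - 1/sin(2k*pi*tau + pi*z)].
import cmath

def wp_and_wp_prime(z, tau, N=60):
    s, sp = 1 / z**2, -2 / z**3
    for m in range(-N, N + 1):
        for n in range(-N, N + 1):
            if (m, n) != (0, 0):
                w = 2 * m + 2 * n * tau
                s += 1 / (z - w)**2 - 1 / w**2
                sp += -2 / (z - w)**3
    return s, sp

tau, z = 0.2 + 1.1j, 0.41 + 0.17j
p, dp = wp_and_wp_prime(z, tau)
e1, _ = wp_and_wp_prime(1, tau)
lhs = dp / (p - e1)

rhs = -2 * cmath.pi / cmath.sin(cmath.pi * z)
for k in range(1, 60):
    rhs += 2 * cmath.pi * (1 / cmath.sin(2*k*cmath.pi*tau - cmath.pi*z)
                           - 1 / cmath.sin(2*k*cmath.pi*tau + cmath.pi*z))
print(lhs, rhs)     # agreement up to the truncation error of the lattice sums
```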
Concerning the sigma functions, we derive the following results (here η = (π²/2)[1/6 + Σ_{n≥1} 1/sin²(nπτ)], as above), using their infinite product expansions [10, T.2, p.246] and their logarithmic derivatives.
Corollary 2-4. The Weierstrass elliptic sigma functions and their derivatives relative to the periods (2, 2τ) verify the identities
σ σ (z) = ηz + π 2 cot( πz 2 ) + π 2 k≥1 cot(kπτ + πz 2 ) -cot(kπτ - πz 2 ) = ηz + π 2 cot( πz 2 ) + π 2 k≥1 sin πz -cos 2 ( πz 2 ) + cos 2 (kπτ ) , σ 1 σ 1 (z) = σ σ (z + 1) = ηz - π 2 tan( πz 2 ) - π 2 k≥1 sin πz -sin 2 ( πz 2 ) + cos 2 (kπτ ) , σ 2 σ 2 (z) = ηz - π 2 k≥0 tan((k - 1 2 )πτ + πz 2 ) -tan((k - 1 2 )πτ + πz 2 ) = ηz - π 2 k≥1 sin πz -sin 2 ( πz 2 ) + cos 2 ((k -1 2 )πτ ) , σ 3 σ 3 (z) = σ 2 σ 2 (z + 1) = ηz + π 2 k≥1 sin πz -cos 2 ( πz 2 ) + cos 2 ((k -1 2 )πτ )
.
Corollary 2-5 The elliptic sigma functions and their derivatives relative to periods (2, 2τ ) verify the identities
℘ (z) 2(℘(z) -e 1 ) = σ 1 σ 1 (z) - σ σ (z) = π k≥1 1 sin(2kπτ -πz) - 1 sin(2kπτ + πz) , σ 2 σ 2 (z) - σ 3 σ 3 (z) = π k≥0 1 sin((2k -1)πτ -πz) - 1 sin((2k -1)πτ + πz) . σ 1 σ 1 (z) + σ σ (z) = 2ηz + π cot(πz) + π k≥1 [cot(2kπτ + πz) -cot(2kπτ -πz)] , σ 2 σ 2 (z) + σ 3 σ 3 (z) = 2ηz + π k≥0 [cot((2k -1)πτ + πz) -cot((2k -1)πτ -πz)] .
The next results provide other expressions for the Weierstrass function as well as for its derivative.
Corollary 2-6. The Weierstrass function relative to the periods (2, 2τ) verifies the identities
℘(z, τ ) -e 2 (τ ) = π 2 k≥1 1 sin(2kπτ + πz) k≥1 1 sin( -2kπ τ + πz) , ℘(z, τ ) -e 3 (τ ) = π 2 k≥1 1 sin(2kπτ + πz) k≥1 1 sin( -2kπ τ +1 + πz) , ℘(z, τ ) -e 1 τ ) = π 2 k≥1 1 sin( 2kπ τ +1 + πz) k≥1 1 sin( -2kπ τ + πz) .
Indeed, since by [4, p.368] one has e₁(-1/τ) = e₃(τ) and e₁(-1/(τ+1)) = e₂(τ), then
℘ (z) ℘(z) -e 3 = -2π k≥1 1 sin( -2kπ τ + πz) , ℘ (z) ℘(z) -e 2 = -2π k≥1 1 sin( -2kπ τ +1 + πz) .
Moreover, since
℘ 2 (z) (℘(z) -e 1 )(℘(z) -e 3 ) = ℘ (z) (℘(z) -e 1 ) ℘ (z) (℘(z) -e 3 ) = 4 [℘(z) -e 2 ]
the first identity of Corollary 2-6 easily follows. On the other hand, since e₂(1 + τ) = e₃(τ) and e₁(-1/(τ+1)) = e₂(τ), we then obtain the two other identities.
Corollary 2-7. The derivatives of the Weierstrass function, ℘'(z) = ℘'(z, τ) = d℘(z)/dz and ℘''(z) = d²℘(z)/dz², with primitive periods 2 and 2τ, verify the following identities
℘ (z, τ ) = -2π 3 k≥1 1 sin(2kπτ + πz) k≥1 1 sin( -2kπ τ + πz) k≥1 1 sin( -2kπ τ +1 + πz) . ℘ (z) ℘ (z) = -π k≥1 1 sin(2kπτ -πz) + 1 sin( -2kπ τ + πz) + 1 sin( -2kπ τ +1 + πz) . ℘ (z) ℘(z) -e 1 = 4π 2 k≥1 1 sin(2kπτ -πz) 2 +π 2 k≥1 1 cos 2 (kπτ -π z 2 ) - k≥1 1 sin 2 (kπτ -π z 2 ) .
Consider the second derivative of the Weierstrass function ℘''(z, τ) = d²℘(z)/dz². We have the following, which may easily be deduced from the classical properties:
2 ℘ (z) ℘ (z) = ℘ (z) ℘(z) -e 1 + ℘ (z) ℘(z) -e 2 + ℘ (z) ℘(z) -e 3 .
Thus we derive
℘ (z) ℘ (z) = -π k≥1 1 sin(2kπτ -πz) -π k≥1 1 sin( -2kπ τ + πz) -π k≥1 1 sin( -2kπ τ +1 + πz)
.
On the other hand, consider the derivative
d dz ℘ (z) ℘(z) -e 1 = ℘ (z) ℘(z) -e 1 - ℘ (z) ℘(z) -e 1 2 = -2π 2 k cos(2kπτ -πz) sin 2 (2kπτ -πz) = π 2 k≥1 1 cos 2 (kπτ -π z 2 ) - 1 sin 2 (kπτ -π z 2 )
.
We then derive
℘ (z) ℘(z) -e 1 = 4π 2 k≥1 1 sin(2kπτ -πz) 2 +π 2 k≥1 1 cos 2 (kπτ -π z 2 ) - 1 sin 2 (kπτ -π z 2 )
.
On the other hand, by [9, p.13, (1.)] here is another connection between the Weierstrass function ℘(z, τ ), and the sigma function
- σ(u + v)σ(u -v) σ 2 (u)σ 2 (v) = ℘(u) -℘(v).
Since σ(u) = 2 πθ 1 (0,τ ) e ηu 2 /2 θ 1 ( u 2 , τ ), then
℘(u) -℘(v) = (πθ 1 (0, τ )) 2 θ 1 ( u+v 2 , τ )θ 1 ( u-v 2 , τ ) θ 1 ( u 2 , τ )θ 1 ( v 2 , τ ) 2 .
In the limit case when v → u after dividing both sides by v -u, we nd
℘ (u) = - σ(2u) σ 4 (u) = -[πθ 1 (0, τ )] 3 θ 1 (2u, τ ) [θ 1 (u, τ )] 4 .
However, using again the infinite products of theta functions, we may express ℘'(u) as an infinite product.
Theorem 2-8. The derivative of the Weierstrass function
℘ (z) = ℘ (z, τ ) = d℘(z) dz
with primitive periods 2 and 2τ verifies the following equality
℘ (u) = - sin 2πv (sin πv) 4 k≥1
sin(kπτ + 2πv) sin(kπτ -2πv)(sin(kπτ )) 6 [sin(kπτ + πv) sin(kπτ -πv)] 4 .
Proof of Theorem 2-8 Indeed, this expression may be deduced from [2, Cor.
3-5]
θ 1 (v, τ ) (π sin πv) θ 1 (0, τ ) = k≥1 1 - sin πv sin kπτ 2 = k≥1 cos 2πv -cos 2kπτ 1 -cos 2kπτ , [ θ 1 (v, τ ) (π sin πv) θ 1 (0, τ ) ] 4 = k≥1 1 - sin πv sin kπτ 2 4 = k≥1 cos 2πv -cos 2kπτ 1 -cos 2kπτ 4 .
Thus sin(kπτ + 2πv) sin(kπτ -2πv)(sin(kπτ )) 6 [sin(kπτ + πv) sin(kπτ -πv)] 4 .
θ 1 (2v, τ ) θ 1 (0, τ ) (θ 1 (0, τ )) 4 [θ 1 (v, τ )] 4 = -π 3 ℘ (z) =
The functions ξ(u, τ )
Following Tannery and Molk [10, T.2, p. 168], it is often appropriate to introduce the quotients that can be formed by means of two functions σ(z, τ) relating to the same variable and the same periods. Thus the four functions σ, σ₁, σ₂, σ₃ generate twelve functions. The indexes α, β, γ are selected according to the convention that they are different and chosen among the values 0, 1, 2, 3:
ξ α0 (u) = σ α σ (u) = ℘(u) -e α , ξ 0α (u) = σ σ α (u) = 1 ℘(u) -e α , ξ βγ (u) = σ β σ γ (u) = ℘(u) -e β ℘(u) -e γ .
We will simply write ξ_βγ(u) instead of ξ_βγ(u, τ) when there is no ambiguity. These functions are even or odd depending on whether they contain the index 0 or not. They are algebraic functions of ℘(u) and have a single pole as their only singularity. Moreover, these functions are doubly periodic and verify many algebraic relations, see [10, T.2, p. 168-187]. Observe that we may deduce from the above:
℘ (u) = -2 σ 1 σ (u) σ 2 σ (u) σ 3 σ (u) = -2ξ α0 (u) ξ β0 (u) ξ γ0 (u).
These relations yield in particular, [10, T.2, p. 171]
ξ α0 (u) = ℘ (u) 2 ℘(u) -e α = -℘(u) -e β ℘(u) -e γ = -ξ β0 (u) ξ γ0 (u), ξ 0α (u) = ξ βα (u) ξ γα (u), ξ βγ (u) = -(e β -e γ )ξ 0γ (u) ξ αγ (u), ξ β0 (u) ξ γ1 (u) = -℘ (u) 2(℘(u) -e 1 )
.
Moreover, ξ_α0 is a solution of the differential equation
(dy/du)² = (e_α - e_β + y²)(e_α - e_γ + y²).
The derivatives of the Weierstrass functions yield
3℘(u) = ξ 2 α0 (u) + ξ 2 β0 (u) + ξ 2 γ0 (u), ℘ (u) ℘ (u) = 2ξ α0 (u) + 2ξ β0 (u) + 2ξ γ0 (u).
Notice that for
ω₁ = 1, ω₂ = 1 + τ, ω₃ = τ we derive ξ_α0(ω_β) = √(e_β - e_α), ξ_βγ(ω_α) = √(e_α - e_β)/√(e_α - e_γ).
The elliptic moduli are
k = ξ₂₁(ω₃), k' = ξ₂₃(ω₁).
Many other interesting properties of the functions ξ_βγ(u) (homogeneity, variations, growth, ...) can be found in [10, T.2, p. 170-190], which also contains a rigorous and systematic study of these functions, of their periodicity, of their relationships with other elliptic functions and of their connections with the Jacobi theta functions. The following is particularly interesting.
Theorem 3-1. The functions ξ_α0(u, τ) = ξ_α0(u) verify the identities.
Notice that under the action of the modular group Γ₀ the permutation of the e_j does not change the Weierstrass function ℘(z, τ); indeed, the e_j are simply permuted when τ is changed into τ + 1 or into -1/τ [4, p. 365].
Proof of Theorem 3-1 This theorem is a direct consequence of Corollaries 2-3 and 2-5.
Transformations of order n
The n-transformation theory of elliptic functions deals with the relations between functions belonging to different pairs of primitive periods: (2, 2τ) and (2, 2nτ).
If n is not a prime number, it is the product of two or more odd primes, and the transformation breaks up into distinct transformations, each of which may be considered separately. We therefore now assume that n is an odd prime; the modular equation is in this case an irreducible equation of order n + 1. It is thus convenient to restrict attention to the case of n odd. We study the transformations whose order is odd and positive, which reduce to those changing the period pair (2, 2τ) into (2/n, 2τ) without changing τ. This corresponds to changing v to nv and τ to nτ. We shall always assume n odd in the sequel.
Observe, following Lawden [7, p.252], Enneper [4, p.240] or Roy [9, p.89], that any transformation of order n > 1 may be represented as a product of transformations of first order and of transformations of higher order with matrix
Moreover, any transformation τ' = nτ can be separated into a product when n has prime factors. Therefore, we only study the case of a transformation where n is a prime, and limit our study to this type of matrix.
4.1. Transformation of ℘(z, τ). Theorem 2-1 provides an expansion of the Weierstrass function ℘(z, τ) relative to the periods (2, 2τ) as an infinite product
where e₁ = ℘(1). It allows us to derive in particular an n-order transformation formula for i = 1, 2, 3.
This permits us to deduce various identities such as
where j = 1, 2, 3,
More precisely, one gets the following connection between ℘(nz, nτ) and ℘(z, τ), derived from Theorem 2-1.
Theorem 4-1. Let n be an odd integer and consider the Weierstrass function ℘(nz, nτ) with primitive periods 2 and 2nτ. Then the following identity holds
where the e_j(τ) = ℘(ω_j, τ), j = 1, 2, 3, and |Im z| < 2 Im τ.
Proof of Theorem 4-1. We first prove the case i = 1. Notice that, by Theorem 2-1,
We start from the classical trigonometric product formulas valid for n odd integer
Thus we derive the expression
Therefore, we find analogous expansions for ℘(nz, nτ) - e₂(nτ) and ℘(nz, nτ) - e₃(nτ).
Then, by Remark 2-2 (ii), we may deduce the following n-decomposition of ℘'(nz, nτ).
Corollary 4-2 Let n be an odd integer, then the following identity holds
Among others interesting equalities, we may derive the following identities
From Theorem 4-1 we also get the following.
Corollary 4-3. Let ℘(z, τ) be the Weierstrass function and ℘'(z, τ) its derivative relative to the periods (2, 2τ). Then the following identities hold for any odd n and j = 1, 2, 3
.
Indeed, (i) may be deduced from Theorem 4-1 and also by the n-order transformation of the quotient of sigma functions (see also [10, T.2, p.215], or [11, ex 9 p.456])
(ii) is deduced from (i) since
Moreover, since n is odd then the quotient (ii) over (i) yields (iii).
Corollary 4-4. Let ℘(z, τ) be the Weierstrass function and ℘'(z, τ) its derivative relative to the periods (2, 2τ). Then the following identities hold for any odd n and j = 1, 2, 3
.
Moreover, one gets for j = 1
.
Indeed, (i) is derived from Theorem 4-1 and Corollary 4-2, by taking the quotient by ℘(nz, nτ) - e₁(nτ). Corollary 2-3 then implies (ii).
4.2.
The n-transformations of the functions ξ(u). We start from the n-transformations of the sigma functions. In [10,T.1, p.234] Tannery-Molk proved
, for j = 1, 2, 3 we obtain the quotient
.
We then deduce [10, T.2, p.215]
.
This shows that ξ_α0(nu, nτ) is a rational function of ξ_α0(u), and likewise ξ_βγ(nu, nτ) is a rational function of ξ_βγ(u, τ).
We then derive the identity
.
We obtain the modular relations of the periods
The zeros become
.
By the same way if we replace τ by τ n one obtains the formulas
.
More generally ([10, T.2, p. 217]), if we replace τ by τ +2p n for any odd n and integer p one gets
Other applications may be derived from the decomposition of ℘'(nz, nτ)/(℘(nz, nτ) - e₁(nτ)), such as the following modular identities.
Corollary 4-5. For 0 < Im(z) < Im(τ) and for any odd integer n, the following identity holds
.
To prove this, take the logarithmic differentiation of ξ₁₀(nz, nτ), see [10, T.2, p.215]:
, which yields
.
Therefore, Corollary 2-3 implies Corollary 4-5.
Corollary 4-6. For 0 < Im(z) < Im(τ) and for any odd integer n, the following identities hold
.
To prove Corollary 4-6, recall the expansion given by Corollary 2-3; we then obtain an equality allowing us to invert the sum and the product.
00349749 | en | [
"shs.hisphilso"
] | 2024/03/04 16:41:20 | 2007 | https://shs.hal.science/halshs-00349749/file/AKSE_2007_kim_daeyeol_revised.pdf | PANEL: CONFUCIANISM (pre-modern history)
Chosǒn Confucian scholars' attitudes toward the Laozi KIM Daeyeol (INaLCO, Paris) Is it shocking to hear that the Chosǒn literati were radically or rigidly Neo-Confucian?
Probably not, at least for the general public. This idea is so surreptitiously inculcated in the mind of most of us. On the contrary, it might be relatively surprising to hear that Korean Confucian literati of this time widely read, wrote about and even accepted intellectual, ideological, religious or cosmological traditions other than Confucianism. It seems to me that the cultural plurality in Chosǒn society has often been passed over, though it is also known that different cultural traditions coexisted and even functioned in mutual complementarity.
Confucianism, as a whole system of culture, was incontestably one of the dominant currents in Korean society and provided models in "almost" every field of culture for the Chosǒn period. However, its cultural landscape was more complex, and Confucian religious or ideological community membership -- the determination of which often depended on circumstances -- was, in a sense, rather "soft". As a way to understand the cultural plurality, we can focus on and interpret people's more ultimate concerns and consider different traditions as cultural models or means given to realize them, insofar as a cultural pattern has a role in relation with meaning or with a further purpose which man aspires to attain. In working on a cultural model, we have to keep in mind that a model is useful for something and not always the final or sole destination. This sort of reflection can arise in various fields, and in particular we can pursue it with regard to moral thought and religious behaviour, in which man turns his attention to ways of thinking, expressing, or seeking something more fundamental or more ultimate. This point of view also demands that we take into account the ways in which a dominant tradition is imposed in relation to other cultural factors and contents. In today's talk, I would like to give an example by presenting some cases of Chosǒn literati whose attitudes toward the Laozi are interesting in this respect.
The established and general position of the Neo-Confucian orthodox school from the beginning of the Chosǒn Dynasty is that Daoism is 'heterodox' (idan 異端). However, after about two centuries of evolution during which Korean Neo-Confucianism developed on its own, the literati started to show diverse attitudes -- tolerant, open or even receptive -- vis-à-vis the 'heterodox' tradition. For example, the Daoist hygienic system was highly esteemed and adopted in the early chapters of the Precious Mirror of Eastern Medicine (Tong'ŭi pogam 東醫寶鑑), published by royal order; also, many poems of Confucian literati were inspired by the idea of the Daoist Immortals' world.
With regard to Daoist philosophical ideas, several scholars of Chosǒn commented on the Laozi and tried to interpret it differently from their predecessors. About twenty scholars also left shorter texts about this Daoist book. Some of them said, for example, that those who considered it a heterodox text did not correctly understand it. In general, they read it through Confucian eyes. Still, it would show that, seen from this angle too, the boundaries between Confucianism and other traditions could be ambiguous for some intellectual groups in Chosǒn.
Five commentaries on the Laozi fully transmitted to the present day attract our attention first (see Table 1). Indeed, they are the subjects of very recently published research works (see Bibliography). Excepting Yi Ch'ungik, all of these commentators were well-known authors and thinkers in their time, and moreover, they all occupied one of the highest official posts in government at some point in their lives. In general, they found that certain ideas from the Laozi can be in accordance with Confucianism on the level of fundamental thought. Their central interests focused on principles and moral values to which Confucian literati adhered.
One of the pivotal figures in the history of Korean Confucianism, Yi I left a commentary on the Laozi. It is characterized by an interpretation of the Laozi in purely Confucian terms and from a Confucian point of view, as in the example of his reading of the word 'inaction' (muwi 無爲) as governing the people according to the 'decree of Heaven' (ch'ǒnmyǒng 天命). But, regarding the issue developed in this paper, the most interesting point is that a large part of his commentary selects the passages concerning 'making the mind empty' (son 損) or 'frugality' (saek 嗇) and interprets them from the viewpoint of Confucian moral concerns related to 'self-cultivation' (ch'igi 治己) and 'governing the people' (ch'iin 治人). It raises an interesting question: why did this great Confucian scholar rely on the Laozi to recall the fundamental moral duty of Confucian literati? In the epilogue to his commentary, he deplores that moral persons are too rare in his time, as if he were warning the leading class of the Confucian society of his time by professing the moral instructions drawn from the Laozi.
As for Pak Sedang, who frankly criticized Zhu Xi, he exposed in his commentary Daoist characteristics different from the Confucian ones but tried to prove that basically these Daoist characteristics are not contradictory to Confucian logic and thought. This attitude is also seen in the commentary of Sǒ Myǒng'ŭng. Considered one of the pioneers of the 'Northern school' (pukhakp'a 北學派), Sǒ Myǒng'ŭng even introduces in his interpretation of the Laozi some properly Daoist cosmological notions (such as chǒng 精, ki 氣, sin 神) or practical ideas (such as yangsaeng 養生). Moreover, he explains the criticisms in the Laozi against some Confucian ideas ('sense of humaneness' (in 仁), 'sense of justice' (ŭi 義), 'learning' (hak 學), for example) and tries to clarify the real meaning of these criticisms. For him, this is not contradictory to but even in accordance with the fundamental Confucian spirit. This kind of attempt is also seen in the commentary by Yi Ch'ungik. A scholar of the Kanghwa school, a Korean Yangming school, entirely dissociated from orthodox Zhu Xi-ism, he even called the author of the Laozi a 'mysterious sage' (hyǒnsǒng 玄聖). Hong Sǒkchu is one of the central figures of the history of Confucian ideas in 19th-century Korea. While being widely interested in diverse intellectual traditions outside Confucianism, he tried to replace the perspective of 'ordering the world' (kyǒngse 經世) at the center of Zhu Xi-ist Neo-Confucianism. In his close examination of the similarities and the differences between the ideas of the Laozi and Confucianism, he finds that some ideas of the book are closer to Confucianism, not only than the individualism of later Daoists, who were only looking forward to their individual salvation, but also than the immorality of vulgar Confucian literati, and even than both the dhyāna school of Buddhism and the Yangming school of Neo-Confucianism, which neglect the necessity of accumulating efforts at learning, which Confucianism advocates. While scrutinizing the limits of the Laozi, he also elucidates valid meanings of this Daoist book expressed in its peculiar ironies, which were employed only to criticize the abuses of its time disguised with moral virtues.
In addition to these commentaries, some shorter but no less significant texts for my argument confirm that many Chosǒn Confucian literati read and even appreciated the so-called 'heterodox' writing (see Table 2). These texts appeared from the beginning of the 16th century on. Before Yi I, Yi Haeng declared that the Laozi deserved esteem, and Sin Hŭm, a contemporary of Yulgok, sang that it enlightened him and made his mind empty. The latter also argued that the author of the Laozi did not consider the Confucian virtues as defects, but that he was distressed about the loss of the Way and the Virtue, and that its profound meanings were distorted by later Daoists and Buddhists. Along with Sin Hŭm, Hǒ Kyun expressed a similar opinion, and Chang Yu could agree with Yi Haeng in saying that the Laozi should not be neglected. Since Yun Hyu, many of them balanced their criticisms against the Daoist book with the esteem they expressed for it. On the one hand, they underlined its limits: for example, the Laozi concerns the principles of things (mulli 物理), not those of human beings (Yun Hyu, Cho Kumyǒng), or it talks about the Dao without careful consideration. "He [the author of the Laozi] was good at beholding the Dao," wrote Im Sangdǒk, "but not good at expressing it." On the other hand, they regretted that people, even Confucian literati, did not rightly understand these five thousand words and criticized what could not be criticized (Yi Tǒksu, Im Sangdǒk, Yun Ki). In Yi Tǒksu's opinion, one could easily misunderstand the book's profound meanings about the Dao because of its ambiguous and subtle expressions, and one could thus be misled as most Daoists were; but one should find ultimate beauty and joy in the idea of frugality and of lack of concern expressed in the book.
To conclude, we could first underline the fact that most of these literati were high officials in government, that is to say, not marginal persons but central, active and influential figures in Chosǒn society. This seems to raise an important question in relation to the general issue of our panel: to what degree did the founders of the Zhu Xi-ist school and Korean Neo-Confucian predecessors have authority over later Confucian literati? We can say at least that the attitudes of these literati toward the Laozi are not dogmatic or sectarian. More than to certain canonical texts, schools, or charismatic persons, they referred to comprehensive principles or fundamental values, which could possibly blur the Confucian identity. Further research should be done in order to confirm this interpretation and to understand better these "positive" attitudes regarding the Laozi. The cases analyzed in this paper, however, seem to suggest the idea of an erosion of the borderline between orthodox Confucianism and philosophical Daoism, coming from the very inside of the Confucian literati group itself. After all, beyond these misty and unstable boundaries between Confucianism and the other intellectual or spiritual traditions of Chosǒn society, it may also be useful to ask what were the practical goals or ultimate concerns in the quest of which some groups of Chosǒn literati, in their proper circumstances, adopted the Confucian legacy as one of the available ways.
Table 1: Commentaries

No. | Name | Dates | One of the highest official posts/grades occupied | School | Commentary
1 | 李珥 | 1536-1584 | 判書 (正二品) | 西人 | 醇言
(rows 2-5 are not legible in this copy)
Table 2: Other texts
No. | Name | Dates | One of the highest official posts/grades occupied | School | Texts
1 | 李荇 | 1478-1534 | 左議政 (正一品) | X | 讀老子
2 | 申欽 | 1556-1628 | 判書 (正二品) | 西人? | 老子吟, 讀道德經, 書道德經後
3 | 許筠 | 1569-1618 | 參議 (正三品) | 大北? | 老子
4 | 鄭忠信 | 1576-1636 | 兵馬節度使 (從二品武官) | X | 讀老子有感
5 | 張維 | 1587-1638 | 右議政 (正一品) | ? | 老子見道體
6 | 尹鑴 | 1617-1680 | 判書 (正二品) | X | 老子道德經序
7 | 朴世堂 | 1629-1703 | 判書 (正二品) | 少論 | 讀老子
8 | 任相元 | 1638-1697 | 判書 (正二品) | X | 讀老子
9 | 李德壽 | 1673-1744 | 判書, 大提學 (正二品) | X | 老子要解序
10 | 鄭來僑 | 1681-1759 | 僉知中樞府事 (正三品武官, 中人出身) | X | 讀老子
11 | 林象德 | 1683-1719 | 牧使 (正三品) | 少論? | 老子論
12 | 姜再恒 | 1689-1756 | 縣監 (正六品) | 少論 | 問禮於老子
13 | 趙龜命 | 1693-1737 | 佐郞 (正六品) | 老論? | 讀老子
14 | 徐命膺 | 1716-1787 | 判書, 大提學 (正二品) | 北學派 | 讀老子
15 | 李匡呂 | 1720-1783 | 參奉 (從九品) | ? | 讀老子
16 | 尹愭 | 1741-1826 | 參議 (正三品) | 星浩 | 書蘇子由老子論後
17 | 申綽 | 1760-1828 | X | 江華學派 | 老子旨略序
18 | 李圭景 | 1788-? | X | X | 老子道德經辨證說
Selected bibliography
KEUM Jangtae 금장태. 2006. Han'guk yuhak ŭi Noja ihae 한국유학의 『노자』이해 (Interpretations and Commentaries of the Lao Tzu in the History of Korean Confucianism). Seoul National University Press.
KIM Hak Mok 김학목. 2002. "Chosǒn yuhakchadŭl ŭi Todǒkkyǒng chusǒk kwa kŭ sidae sanghwang -- Sunǒn, Sinju Todǒkkyǒng, Chǒngno rŭl chungsim ŭro" 조선 유학자들의 『道德經』 주석과 그 시대상황 -- 『순언』 『신주도덕경』 『정노』를 중심으로, Tongsǒ ch'ǒlhak yǒn'gu 동서철학연구, n° 24, p. 115-134.
KIM Hak Mok. 2003. "Yǒnch'ǒn Hong Sǒkchu ka Todǒkkyǒng ŭl chusǒkhan mokchǒk" 淵泉 洪奭周가 道德經을 주석한 목적, Ch'ǒlhak yǒn'gu 철학연구, n° 60, p. 5-24.
KIM Hak Mok. 2004. "Kanghwa hakp'a ŭi Todǒkkyǒng chusǒk e kwanhan koch'al -- Ch'owǒn Yi Ch'ungik ŭi Ch'owǒn tamno rŭl chungsim ŭro" 江華學派의 『道德經』 주석에 관한 고찰 -- 椒園 李忠翊의 『椒園談老』를 중심으로, Tongsǒ ch'ǒlhak yǒn'gu 동서철학연구, n° 34, p. 277-299.
04101700 | en | [
"info",
"math.math-co",
"math.math-lo"
] | 2024/03/04 16:41:20 | 2021 | https://hal.science/hal-04101700/file/main-patterns.pdf | Laurent Feuilloley
email: [email protected]
Michel Habib
email: [email protected]
Graph classes and forbidden patterns on three vertices
This paper deals with the characterization and the recognition of graph classes. A popular way to characterize a graph class is to list a minimal set of forbidden induced subgraphs. Unfortunately, this strategy hardly ever leads to a very efficient recognition algorithm. On the other hand, many graph classes can be efficiently recognized by techniques that use some ordering of the nodes, such as the one given by a traversal.
We improve on these two lines of works, by characterizing systematically all the classes defined by sets of forbidden patterns (on three nodes), and proving that among the 22 different classes (up to complement) that we find, 20 can actually be recognized in linear time.
Beyond these results, we consider that this type of characterization is very useful from an algorithmic perspective, leads to a rich structure of classes, and generates many algorithmic and structural open questions worth investigating.
Introduction
Forbidden structures in graph theory. A class of graphs is hereditary if for any graph G in the class, every induced subgraph of G also belongs to the class. Given a hereditary class C, there exists a family F of graphs, such that a graph G belongs to C, if and only if, G does not contain any graph of F as an induced subgraph. Hence hereditary classes are defined by forbidden structures. For a given class C, a trivial family F is the set of all graphs not in C, but the interesting families are the minimal ones. If we were to replace the induced subgraph relation by the minor relation, a celebrated theorem of Robertson and Seymour states that these families are always finite, but here the family need not be finite, as exemplified by bipartite graphs (where the set of forbidden structures is the set of odd cycles). There exist many characterizations of classes by forbidden subgraphs in the literature, ranging from easy to extremely difficult to prove, as for example the Strong Perfect Graph Theorem [Chudnovsky et al.].
Forbidden ordered structures. Another way of defining hereditary classes by forbidden structures is the following. Consider a graph H given with a fixed ordering on its vertices. Then a graph G belongs to the class associated with H, if and only if, there exists an ordering of the vertices of G, such that none of its subgraphs induces a copy of H with the given ordering. Let us illustrate such characterization with chordal graphs. Chordal graphs are usually defined as the graphs that do not contain any induced cycle of length at least 4. However it is also well known [START_REF] Dirac | On rigid circuit graphs[END_REF] that chordal graphs are exactly the graphs that admit a simplicial elimination ordering, that is an ordering on the vertices such that the neighbors of a vertex that are placed before it in the ordering induce a clique. In the framework described above, this characterization corresponds to H being the path on 3 vertices with the middle vertex placed last in the ordering. Similar characterizations are known for well-studied classes such as proper interval, interval, permutation and cocomparability graphs.
We focus on such characterizations, and refer to forbidden ordered induced subgraphs (or more precisely an equivalent trigraph version of it) as patterns.
Forbidden patterns and algorithms. A motivation to study characterizations by patterns is that they are related to efficient recognition algorithms. That is, unlike forbidden subgraph characterizations, forbidden pattern characterizations often translate into fast algorithms to recognize the class, most of the time linear-time algorithms. Such algorithms compute an ordering avoiding the forbidden patterns, which means that in addition to deciding whether the graph is in the class or not, they provide a certificate for the positive cases. The most famous examples are for chordal graphs [67], proper interval graphs [Corneil], and interval graphs [Corneil et al.].
These algorithms are fast because they use simple graph searches, in particular the lexicographic breadth first search, LexBFS. Indeed the orderings given by searches have a special structure [Corneil et al.], which matches the one of forbidden patterns.
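For readers less familiar with LexBFS, here is a compact quadratic-time sketch of the search (ours, for illustration only); the recognition algorithms cited above rely on linear-time partition-refinement implementations instead.

```python
def lexbfs(adjacency, start):
    """adjacency: dict vertex -> set of neighbours; returns a LexBFS ordering."""
    labels = {v: [] for v in adjacency}
    labels[start] = [len(adjacency)]              # forces `start` to be taken first
    unvisited = set(adjacency)
    order = []
    while unvisited:
        v = max(unvisited, key=lambda u: labels[u])   # lexicographically largest label
        unvisited.remove(v)
        order.append(v)
        for w in adjacency[v]:
            if w in unvisited:
                labels[w].append(len(unvisited))      # decreasing tags, appended in visit order
    return order

# Example: on the path a-b-c-d, starting from a, LexBFS visits a, b, c, d.
graph = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b', 'd'}, 'd': {'c'}}
print(lexbfs(graph, 'a'))
```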
Previous works on forbidden patterns. The papers by Skrien [Skrien] and Damaschke [Damaschke] are, as far as we know, the first works to consider characterizations by patterns as a topic in itself. From these seminal papers, one can derive that all the classes defined by one forbidden pattern on three nodes can be recognized in polynomial time. This is for example the case for the chordal graphs mentioned above. More recently, it was proved that forbidding a set of patterns on three nodes still leads to a polynomially solvable problem [Hell et al.]. This stands in striking contrast with the case of larger patterns, as it was shown in [Duffus et al.] that almost all classes defined by 2-connected patterns are NP-complete to recognize. In [Hell et al.], the authors conjectured a dichotomy on this type of recognition problem, but this statement has been recently challenged [57].
More generally, forbidden ordered structures have been used recently in a variety of contexts. Among others, they have been applied to characterize graphs with bounded asteroidal number [Corneil et al.], to study intersection graphs [Wood et al.], to prove that the square of the line-graph of a chordal graph is chordal [5], and to study maximal induced matching algorithms [Habib et al.].
Our results. Forbidding only one pattern on three nodes already gives rise to a rich family of graph classes. Yet, despite the general algorithmic result of [Hell et al.], little is known about the classes defined by a set of patterns on three nodes. Our main contribution is an exhaustive list of all the classes defined this way. Along the way several interesting results and insights are gathered. A corollary of this characterization is that almost all the classes considered can be recognized not only in polynomial time, but in linear time. Beyond these technical contributions, our goal is to unify many results scattered in the literature, and show that this formalism can be useful and relevant. In this sense, the paper also serves as a survey of this type of hereditary classes of graphs.
Outline of the paper. The paper is organized as follows. In Section 2, we motivate the study of patterns by listing the well-known classes characterized by one pattern on three node. In Section 3, we formally define the pattern characterizations, and prove structural properties about the classes defined this way. Section 4 contains the main theorem of the paper, that is the complete characterization of all the classes defined by sets of patterns on three nodes. The proof of this theorem is a long case analysis, and to make it nicer, we first highlight and prove some remarkable characterizations in Section 5, before completing the proof in Section 6. In Section 7, we deal with the algorithmic aspects, in particular the linear-time recognition. Finally we discuss related topics and open questions in Section 8.
Appetizer: classes defined by one pattern
In this section, we introduce a few classes characterized by one pattern on three nodes. We do so before defining formally pattern characterizations, as an appetizer for the rest of the paper. All these characterizations are known, and in particular they are listed in [START_REF] Damaschke | Forbidden ordered subgraphs[END_REF]. 1 The main take-away of this short section is that essential graph classes can be defined by one pattern on three nodes, hence such characterizations are important.
Let us introduce a graphic intuitive representation of patterns. Consider the drawing of Figure 1.
Figure 1
The graphic representation of the pattern associated with the class of interval graphs. 1 For completeness we prove these characterizations in Section 5, as no proof can be found in [START_REF] Damaschke | Forbidden ordered subgraphs[END_REF].
The class associated with this pattern is the class of graphs G = (V, E) that have an ordering of their vertices such that: there is no ordered triplet of nodes, a < b < c, such that (a, b) / ∈ E and (a, c) ∈ E. In other words, the forbidden configuration consists in a non-edge (dashed edge in the drawing) between the two first nodes, an edge (plain edge in the drawing) between the first and the last node, and there is no constraint (no edge in the drawing) on the edge between the second and the third node (that is, whether there is an edge or not, the configuration forbidden).
Theorem 1. ( [START_REF] Damaschke | Forbidden ordered subgraphs[END_REF]) In Table 1, each class of the first column is characterized by the pattern represented in the second column. (The third column is the name we give to this pattern.)
Graph class
Pattern representation Pattern name Linear forests Linear Forest
Stars
Star
Interval graphs Interval
Split graphs Split
Forest Forest
Bipartite graphs Bipartite
Chordal graphs Chordal
Comparability graphs Comparability
Triangle-free graphs Triangle-Free
At most two nodes No Graph
Table 1 The table of Theorem 1. For each row the class on the first column is characterized by the pattern represented in the second column, whose name is on the third column.
Definitions and structural properties
The current paper aims at doing a thorough study of the classes defined by patterns on three nodes, and of their relations. In this section we give the formal definitions and introduce structural properties of these classes. More precisely, we start in Subsection 3.1 by defining formally the patterns and related concepts, and in Subsection 3.7 by listing the definitions of the graph classes we will use. Then in Subsection 3.2 we define some basic operations on patterns and in Subsections 3.3, 3.5, and 3.6, we describe important structural properties of the classes defined by patterns.
Definitions related to patterns
In all the paper we deal with finite loopless undirected graphs, and multiple edges are not allowed. For such a graph G, we denote by V (G) the set of vertices and E(G) its set of edges, with the usual |V (G)| = n and |E(G)| = m for complexity evaluations. The graph we consider are not necessarily connected.
To define patterns, we use the vocabulary of trigraphs as, for example, in [START_REF] Chudnovsky | Berge trigraphs[END_REF].
Definition 1. A trigraph T is a 4-tuple (V (T ), E(T ), N (T ), U (T ))
where V (T ) is the vertex set and every unordered pair of vertices belongs to one of the three disjoint sets E(T ), N (T ), and U (T ), called the edges, non-edges and undecided edges, respectively. A graph
G = (V (G), E(G)) is a realization of a trigraph T if V (G) = V (T ) and E(G) = E(T ) ∪ U , where U ⊂ U (T ).
When representing a trigraph, we will draw plain lines for edges, dashed lines for non edges, and nothing for undecided edges. Also as (E, N, U ) is a partition of the unordered pairs, it is enough to give any two of these sets to define the trigraph, and we will often define a trigraph by giving only E and N . Definition 2. An ordered graph is a graph given with a total ordering of its vertices. A pattern is an ordered trigraph. An ordered graph is a realization of a pattern if they have the same set of vertices, with the same linear ordering, and the graph is a realization of the trigraph. When, in an ordered graph, no ordered subgraph is the realization of given pattern, the ordered graph avoids the pattern.
In this formalism, the pattern for interval graphs represented in Figure 1
is (E, N ) = ({(1, 3)}, {(1, 2)}).
For simplicity we give names to the patterns, related to the classes they characterize. These names are in capital letters to avoid confusion with the classes. For example the one of Figure 1 is called Interval. More generally, the names follow from Theorem 1. The list of the names of all the patterns is given in Figure 2. The pattern that have no undecided edge are called full patterns. Definition 3. Given a family of patterns F, the class C F is the set of connected graphs that have the following property: there exists an ordering of the nodes that avoids all the patterns in F.
To make these definitions more concrete, let us find out which classes are characterized by patterns on two nodes, V = {1, 2}. Forbidding the pattern (E, N ) = ({(1, 2)}, ∅) means that the graphs we consider have a vertex ordering such that there is no pair of nodes a < b with an edge (a, b). This implies that the graph has actually no edge thus this is the class of independent sets. Similarly forbidding the pattern (E, N ) = (∅, {(1, 2)}) leads to the cliques. Finally (E, N ) = (∅, ∅) corresponds to a trivial class: only the graph with one node does not have two nodes that are either linked or not by an edge.
Finally, if F consists of only one pattern P , we write P instead of {P }, thus C P instead of C {P } .
Operations on patterns and families
We define a few operations on patterns and pattern families. Definition 4. The mirror and complement operations are the following:
The mirror of a pattern is the same pattern, except for the vertex ordering, which is reversed. The mirror of a family F is the set of the mirrors of the patterns of the family, and is denoted by mirror-F. The complement of a pattern (V, E, N ) is the pattern (V , E , N ) with V = V , E = N , and N = E, that is, the pattern where the edges and non-edges have been exchanged. The complement of a family F is the set of the complements of the patterns of the family, and is denoted by co-F. A pattern P 2 is an extension of a pattern P 1 , if it can be obtained by taking P 1 , and having the possibility to add nodes and to decide undecided edges. A family F 2 extends a family F 1 , if every pattern of F 1 has an extension in F 2 , and every pattern in F 2 is an extension of a pattern in F 1 .
Basic structural properties
We now list some basic structural properties of the classes defined by forbidden patterns. Most of them also appear in [START_REF] Damaschke | Forbidden ordered subgraphs[END_REF]. We omit the proofs, as they follow directly from the definitions.
Property 1. The following properties hold for any pattern family F.
(Vertex closure)
The class C F is closed under vertex deletion, that is, is hereditary. Item 1 states that the classes defined by patterns are hereditary. The converse is also true (if we allow infinite families). Indeed any hereditary family has a characterization by a family of forbidden induced subgraphs, and such characterization can automatically be translated into a characterization by patterns: just take the union of all the orderings of all the forbidden subgraphs.
(Edge
Pattern names
In order to designate patterns in an efficient way, we gave them names and numbers described in Figure 2. The names are inspired by Theorem 1 and the basic operations of Subsection 3.3.
Pattern split rule
Let us now describe the pattern split rule, which is basically a rewriting rule, that we will use extensively. It states that a pattern that has an undecided edge can be replaced by two patterns (that is, can be split into two patterns): one where this edge is a (plain) edge, and one where it is a non-edge. Lemma 1. Let F be a pattern family. Let P = (V, E, N ) be a pattern of F, and e be an undecided edge of P . An ordered graph avoids the patterns of F, if and only if, it avoids the patterns of F , where F is that same as F except that P that has been replaced by P 1 = (V, E ∪ e, N ) and P 2 = (V, E, N ∪ e). Proof. It is sufficient prove the statement for the case where F is restricted to P , as the other pattern do not interfere. Consider a graph G, with an ordering τ . If (G, τ ) avoids the pattern P , then clearly τ avoids also the patterns P 1 , P 2 , since each occurence of a pattern P 1 or P 2 yields an occurence of pattern P . Reciprocally, if (G, τ ) avoids the patterns P 1 , P 2 then it also avoids the pattern P , since each possible occurence of pattern P in (G, τ ) must corresponds to either an occurence of pattern P 1 or of pattern P 2 .
As a consequence, with the notations of the lemma, we have C F = C F . By iterating the rule, one can always transform an arbitrary family of patterns into a family of full patterns. Actually the seminal papers on this topic, e.g. [START_REF] Damaschke | Forbidden ordered subgraphs[END_REF], use only full patterns. In this paper, we choose to use undecided edges because they allow for compact notations and provide additional insights.
The notation P = P 1 &P 2 denotes that P can be split into P 1 and P 2 . For example, Interval = Chordal & co-Comparability. A family is split-minimal if there are no two patterns P 1 and P 2 in the family, such that there exists a third pattern P with P = P 1 &P 2 .
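The rule is easy to mechanize; the following sketch (ours, not the paper's program) expands a pattern with undecided pairs into the equivalent family of full patterns, reproducing the Interval = Chordal & co-Comparability example above.

```python
from itertools import product

PAIRS = [(1, 2), (1, 3), (2, 3)]

def split_to_full(pattern_edges, pattern_nonedges):
    """Expand a pattern with undecided pairs into the equivalent set of full patterns."""
    undecided = [p for p in PAIRS if p not in pattern_edges and p not in pattern_nonedges]
    result = []
    for choice in product((True, False), repeat=len(undecided)):
        E = set(pattern_edges) | {p for p, is_edge in zip(undecided, choice) if is_edge}
        N = set(pattern_nonedges) | {p for p, is_edge in zip(undecided, choice) if not is_edge}
        result.append((sorted(E), sorted(N)))
    return result

# Splitting the Interval pattern ({(1,3)}, {(1,2)}) on its single undecided pair (2, 3)
# yields two full patterns, which are the Chordal and co-Comparability patterns of Table 1.
for E, N in split_to_full({(1, 3)}, {(1, 2)}):
    print('edges:', E, ' non-edges:', N)
```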
Union-intersection property
Item 5 of Property 1 states that C F1∪F 2 ⊆ C F1 ∩ C F2 . When the equality holds, we say that these classes have the union-intersection property. A trivial case of union-intersection property is when one family is included in the other. Here is a more interesting example. Using Lemma 1 and Item 5 of Property 1, we know that:
C Interval = C Chordal&co-Comparability ⊆ C Chordal ∩ C co-Comparability .
But it is known from the literature [START_REF] Gilmore | A characterization of comparability graphs and of interval graphs[END_REF] that the last inclusion is actually an equality: interval graphs are exactly the graphs that are both chordal and cocomparability. We will see several other cases where the union-intersection property holds. From the example above, it is tempting to conjecture that any time the pattern split rule applies, the unionintersection property holds, but this is actually wrong. For example Linear Forest=Forest & mirror-Interval thus C Forest&mirror-Interval is the class of linear forests, but C Forest ∩ C mirror-Interval is a larger class, as it contains any star (more details are given in Property 2).
A useful special case of Item 5 in Property 1 is the following:
Fact 1. Let F 1 , F 2 be two sets of patterns, if C F1 ⊆ C F2 , then C F1∪F2 ⊆ C F1 .
It should be noticed that C F1 ⊆ C F2 could derive from a structural graph theorem and not directly from the set of patterns. Furthermore, even in this restricted case, the union-intersection property does not always hold, as we will prove in Theorem 4 that {Forest, mirror-Forest} characterizes only paths.
Another special case, is when the patterns are stable by permutation of the ordering of the nodes. On three nodes only the patterns No Graph, Triangle-Free and co-Triangle-Free have this property. If one of the families at hand contains only pattern with this property, then the union-intersection property holds. Fact 2. Let F 1 , F 2 be two sets of patterns, if one of the two families contains only patterns that are stable by change of the ordering of the nodes, then C F1∪F2 = C F1 ∩ C F2 .
Graph classes
In this section, we define the graph classes we use. References on this topic are the survey of Brandstädt, Le and Spinrad and the ISGCI database [de Ridder et al.]. In the remainder, we will use P i to refer to an induced path with i nodes, and C i to refer to an induced cycle on i nodes.
For many classes we give several definitions. In addition to highlighting similarities between some of these classes, this will be helpful in the characterization proofs. For example, characterization by forbidden subgraph are helpful when proving that a set of patterns imply that the graph belong to a class. On the other hand an incremental of geometric construction usually provides a natural ordering to avoid the patterns at hand. Definition 5. The definitions of the main graph classes we use are the following: 1. A forest is a graph with no cycle.
2.
A linear forest is a disjoint union of paths.
3.
A star is a graph where at most one node has several neighbors. 4. An interval graph is the intersection graph of a set of intervals. That is, a graph on n vertices is an interval graph, if there exists a set of n intervals that we can identify to the vertices such that two intervals intersect if and only if the associated vertices are adjacent.
5.
A graph is a split graph if there exists a partition of the vertices such that the subgraph induced by the first part is a clique, and the subgraph induced by the second part is an independent set. Note that the split graphs are connected, except (possibly) for isolated nodes. 6. A graph is bipartite if there exists a partition of the vertices in 2 parts such that the 2 resulting induced subgraphs are independent sets. 7. A graph is chordal if it contains no induced cycle of length strictly greater than 3. 8. A graph is a comparability graph if its edges represent a partial order. That is, a graph on n vertices is a comparability graph if there exists a partial order with n elements that we can identify with the vertices, such that two elements are comparable if and only if they are adjacent in the graph. 9. A graph is triangle-free if it contains no clique of size 3.
10.
A permutation graph ([28]) is a graph whose vertices represent the elements of a permutation, and whose edges link pairs of elements that are reversed by the permutation. We will also consider bipartite permutation graphs, the subclass of the permutation graphs that are bipartite. 11. A threshold graphs is equivalently ( [START_REF] Chvátal | Aggregations of inequalities. Studies in Integer Programming[END_REF]53]):
a. a graph that can be constructed by incrementally adding isolated vertices and dominating vertices. b. a split graph without induced P 4 . c. a split graph where the neighborhoods of the nodes of the independent set and of the nodes of the clique are totally ordered. That is, a graph with vertex set V = I ∪ K, with I = {i 1 , ..., i p } an independent set and K = {k 1 , ..., k q } a clique, such that a. a graph in which, for every induced subgraph, the size of a maximum independent set is equal to the number of maximal cliques. b. a quasi-threshold graph, that is a graph that can be constructed recursively the following way: a single node is a quasi-threshold graph, the disjoint union of two quasi-threshold graphs is a quasi-threshold graph, adding one universal vertex to threshold graph gives a quasi-threshold graph. c. a (C 4 , P 4 )-free graph. d. a comparability graph of an arborescence, that is the comparability graph of a partial order in which for every element x, the elements of {y|y < x} can be linearly ordered. e. the intersection graph of a set of nested intervals (that is of intervals such that for every intersecting pair, one interval is included in the other).
N (i 1 ) ⊆ • • • ⊆ N (i p ) and N (k 1 ) ⊇ • • • ⊇ N (k q ).
A bipartite chain graph is equivalently ([72]):
a. a bipartite graph, for which, in each class, one can order the neighborhoods by inclusion.
That is, with a partition,
A, B, A = a 1 , . . . , a |A| satisfies N (a 1 ) ⊇ N (a 2 ), . . . , ⊇ N (a |A| ) and B = b 1 , . . . , b |B| satisfies N (b 1 ) ⊆ N (b 2 ) • • • ⊆ N (b |B| ). b. a difference graph, that is a graph where every node v can be given a real number k v in (-1, 1) such that (u, v) ∈ E if and only if |k u -k v | ≥ 1. c. a 2K 2 -free
bipartite graph, that is a bipartite graph that does not have two independent edges that are not linked by a third edge (i.e. no induced complement of a C 4 ).
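As an aside, Items 11a and 11c of Definition 5 are easy to play with computationally. The following Python sketch (the function names and the encoding of the construction sequence are ours, purely for illustration, not taken from the paper) builds a threshold graph by iteratively adding isolated and dominating vertices, and checks a folklore strengthening of the nested-neighborhood condition: any two vertices have inclusion-comparable neighborhoods once the two vertices themselves are ignored.

    def build_threshold_graph(creation_sequence):
        # creation_sequence is a string over {'i', 'd'}: 'i' adds an isolated
        # vertex, 'd' adds a dominating vertex (adjacent to all previous ones).
        adj = []  # adj[v] is the set of neighbors of vertex v
        for step, kind in enumerate(creation_sequence):
            adj.append(set())
            if kind == 'd':
                for u in range(step):
                    adj[u].add(step)
                    adj[step].add(u)
        return adj

    def neighborhoods_are_nested(adj):
        # Folklore reformulation of Item 11c: in a threshold graph, any two
        # vertices u, v have comparable neighborhoods after removing u and v.
        n = len(adj)
        for u in range(n):
            for v in range(u + 1, n):
                nu = adj[u] - {v}
                nv = adj[v] - {u}
                if not (nu <= nv or nv <= nu):
                    return False
        return True

    g = build_threshold_graph("iididd")
    print(neighborhoods_are_nested(g))   # expected: True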
There are 3 more classes, variants of the classes above, that will appear for technical reasons in our results. Definition 6. 1. A 2-star is a connected caterpillar with a dominating path of length at most 2, plus possibly isolated nodes.
2. A 1-split is a graph which is either a clique or a clique minus one edge, or the complement of such a graph, that is, an independent set plus possibly an edge.
3. An augmented clique is a clique, plus one additional node with arbitrary adjacency, plus possibly isolated nodes.
Connectivity issues.
A subtlety in the definitions of the classes is the connectivity of the graphs. For our work, there are basically two cases:
The classes that are stable by disjoint union: linear forests, interval graphs, bipartite graphs, chordal graphs, comparability graphs, triangle-free graphs, permutation graphs, proper interval graphs, caterpillars, trivially perfect graphs.
The classes that are stable only by addition of isolated vertices: stars, split graphs, threshold graphs, bipartite chain graphs, 2-stars, augmented cliques.
Note that this classification depends on the class we choose to consider. That is, in this work we always consider both a class and its complement class, and for the complement classes the classification above becomes: stable by the join operation, and stable by the addition of a universal clique. Also, in some cases the class considered does not allow for isolated nodes; this is the case for 1-splits, for example.
Trivial classes. We use the word trivial for classes that are basically finite. Here is a more formal definition.
Definition 7.
A graph class G is called trivial if there exists a finite family of connected graphs F such that ∀G ∈ G every connected component of G is isomorphic to some graph in F .
We will use the following fact about these classes. Fact 3. Let C be a trivial graph class and P a pattern. The subclass of C consisting of the graphs that have a vertex ordering avoiding P , is also a trivial graph class.
Characterization theorem
We are now ready to state our main theorem. By basic operations we mean the addition of isolated nodes and the restriction to connected components.
Theorem 2. Up to complement and basic operations, the non-trivial classes that can be characterized by a set of patterns on three vertices are the following.
The 22 classes are listed in Figure 3 together with their inclusion relations; complete bipartite graphs are among them.
Let us first say a few words about the proof, and then comment on this theorem.
Proof outline. To prove the theorem, we first generate all the split-minimal pattern families. This is basically done by listing all the possible sets of patterns, and then simplifying it, with the help of a program (that is presented in Subsection 6.1). The simplification step consists of removing the complement, mirror, complement-mirror families, and applying the pattern split rule until we get split-minimal families. After this step, we get the following result.
Lemma 2. Up to complementation and mirroring, there exist 87 split-minimal families of patterns on three vertices.
The proof of the theorem consists of a case analysis: for each set of patterns in the list given by Lemma 2, we find the class that is characterized by this set. Roughly, our approach proceeds in three steps. First, we check whether the set of patterns is already known to characterize a class. This works for the patterns of Theorem 1, for example. If not, the second step is to use the structural properties of Section 3, along with the classes already characterized, to get a candidate class. Sometimes this is enough, and we can conclude. Third, we work directly on the set of patterns to get the class. Interestingly, for this last step, knowing various characterizations of the same class, as presented in Subsection 3.7, does help.
Comments on Theorem 2
The first comment to make is that there are only 22 classes in the list. The naive upper bound on the number of classes is 2^(3^3) = 2^27, thus having only 22 non-trivial classes (up to complement and basic operations) at the end is a surprising outcome. This is first explained by the fact that many sets of patterns are equivalent (because of the mirror, complement, and pattern split rules), as witnessed by Lemma 2. (As will be explained later, when looking for the number of classes only, one can restrict to a set of eight patterns, thus the upper bound already falls to 2^8 = 256, and further refinements make the list go down to 87.) Also, many classes end up being trivial because they have patterns that are somehow conflicting. For example, Theorem 3 will show that in most cases, having a pattern and its complement in a family directly leads to a trivial class. Finally, some classes, like threshold graphs, appear several times in the proof, as they correspond to various sets of patterns.
A second surprising fact is that the large majority of the classes listed are well-known classes. This supports the insight of Theorem 1, that characterizations by patterns arise naturally in the study of graph classes.
Third, we phrased the theorem as a list, but the result is somehow richer. Indeed these classes form an interesting web of inclusions, and there is a lot to say about the relations between the classes. These inclusions are represented in Figure 3. (The inclusions that are not known or do not follow directly from Property 1, are proved within the proof of the theorem.)
Highlighted classes and characterizations
Among the sets of patterns on three nodes, some stand out because of their particular structure: for example, the pairs of patterns that are mirrors of one another.
In this section, we study such special cases, anticipating Theorem 2. We start with the classes defined by a single pattern.
Classes defined by one pattern
In this subsection, we consider the classes defined by one pattern. There are exactly 3^3 = 27 different patterns on three nodes, as listed in Figure 2.
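Before going through the proof, it may help to make the objects concrete. The following Python sketch uses our own illustrative encoding (each pattern is a triple of states for the vertex pairs (1,2), (1,3), (2,3); this does not reproduce the numbering of Figure 2): it enumerates the 3^3 = 27 patterns and tests, by brute force, whether a given vertex ordering of a graph avoids a given pattern.

    from itertools import product, combinations

    # A pattern on three ordered vertices assigns one of three states to each
    # of the pairs (1,2), (1,3), (2,3):
    #   'e' = the edge must be present, 'n' = it must be absent, '*' = undecided.
    PATTERNS = list(product('en*', repeat=3))             # 3**3 = 27 patterns
    FULL_PATTERNS = [p for p in PATTERNS if '*' not in p]  # the 8 full patterns

    def triple_matches(edges, a, b, c, pattern):
        # edges: set of frozensets; a, b, c appear in this order in the ordering.
        pairs = [(a, b), (a, c), (b, c)]
        for (u, v), state in zip(pairs, pattern):
            present = frozenset((u, v)) in edges
            if state == 'e' and not present:
                return False
            if state == 'n' and present:
                return False
        return True

    def ordering_avoids(edges, ordering, pattern):
        # Brute force over all ordered triples, as in the O(n^3) check
        # mentioned in the algorithmic section.
        for a, b, c in combinations(ordering, 3):
            if triple_matches(edges, a, b, c, pattern):
                return False
        return True

    # Example: the path 0-1-2 ordered as 1, 0, 2.
    edges = {frozenset((0, 1)), frozenset((1, 2))}
    # Prints: 27 False, since the triple (1, 0, 2) matches the pattern
    # "first vertex adjacent to the two others".
    print(len(PATTERNS), ordering_avoids(edges, [1, 0, 2], ('e', 'e', '*')))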
We first prove Theorem 1.
Proof of Theorem 1. For each class, when the characterization is not explicitly known, we first prove that it satisfies the forbidden pattern property by exhibiting the ordering, and then we show the other inclusion. Note that most of the graphs considered can be disconnected. Linear Forest. (→) If the graph is a linear forest that is a disjoint union of paths, then the natural ordering avoids the pattern. That is, placing the paths one after the other, from one endpoint to the other, avoids any jump over a node.
In the other direction, we first claim that if an ordering avoids the pattern, then every vertex has degree at most 2. Indeed if a node has three or more neighbors, there must be at least two in the same direction (to the right or to the left), and then the pattern appears. Second, it is clearly not possible to avoid the pattern with a cycle. Thus we are left with the paths.
Star. In this paper, a star is a graph in which all edges share a common endpoint. This basically means that the nodes of a star can be partitioned into two sets L and I and a special node c, and the edge set is the union of the edges (c, l) for l ∈ L. (→) Any ordering with the nodes of I, then the nodes of L, and then c avoids the pattern. (←) Only the rightmost node can have an edge going left. Thus if we remove this node, the graph has no edges. This node is c in our characterization, and its neighbors and non-neighbors define L and I respectively.
[Figure caption: The edges labeled with "co" mean that the family made by taking the pattern on the top endpoint and its complement characterizes the class below. The edges labeled with "mi" mean the same but with mirror instead of complement, and "co-mi" designates both operations. The + means that the edge has both labels.]
Interval. This characterization is well-known for interval graphs, and an ordering avoiding this pattern is sometimes called a left-endpoint ordering, see [Olariu, "An optimal greedy heuristic to color interval graphs"; Ramalingam, "A unified approach to domination problems on interval graphs"].
Split. Remember that a split graph is a graph that can be partitioned into an independent set and a clique. Note that there might be isolated nodes, belonging to the independent set. (→) Any ordering of the following form avoid the pattern: first the isolated nodes, then the nodes from the independent set, and then the nodes from the clique. (←) Consider the node v that is the first node in the ordering to be the right-end of an edge, and let u be the left end-point of such an edge. In order to avoid the pattern, the graph must contain all the edges (v, w) with v < w. Iteratively, we deduce that the graph contains all the edges (a, b) with v ≤ a < b. Therefore v < ... < n is a clique. Also by minimality, the nodes 1 < ... < v -1 form an independent set. Hence the graph has a split partition.
Forest. (→) Consider the ordering τ given by any generic search applied on G, as defined in [START_REF] Corneil | A unified view of graph searching[END_REF]. Suppose such τ contains the forbidden pattern on a < τ b < τ c with ac, bc ∈ E(G). Then using the four points condition of generic search, there must exist a vertex d < τ b, with db ∈ E(G) and a path joining a to d with vertices before d in τ . This implies that there is a cycle in the graph, which is impossible as we started with a forest. Thus the pattern is not present. (←) Consider a graph and any ordering τ of the vertices. Every cycle of the graph has a last vertex x with respect to τ . This vertex x has necessarily two neighbours to its left in τ , which corresponds to the forbidden pattern.
Bipartite. (→) Consider an ordering where the different connected components are placed one after the other. The pattern can only appear inside a component. Then for each component (that is bipartite) the vertices of one independent set are all placed before the vertices of the other independent set. All the vertices of the first set have all their edges pointing to the right, and all the vertices of the second set have all their edges pointing to the left. As a consequence no vertex has edges pointing both to the left and to the right, therefore the pattern does not appear. (←) Consider an ordering of a graph avoiding the pattern. If the graph has no cycle, then it is bipartite. Consider now an induced cycle. Because of the forbidden pattern, the nodes of this cycle can be partitioned into two sets: the ones that are adjacent (in the cycle) to two nodes on their left, and the nodes that are adjacent (in the cycle) to two nodes on their right. Because any edge must have an endpoint in each set, the two sets must have the same size, and the cycle must have even length. Thus the graph is bipartite. Chordal. This characterization is well-known, and the ordering is usually called a simplicial elimination ordering [Fulkerson and Gross, "Incidence matrices and interval graphs"]. Comparability. By definition an ordered graph avoids the forbidden pattern if and only if the ordering is a linear extension of a partial order. The fact that the complement pattern (sometimes called umbrella) defines cocomparability graphs has been noted in [Kratsch, "Domination on cocomparability graphs"].
Triangle-Free. For patterns that are stable by any change of the ordering of the vertices, such as triangles, the forbidden pattern characterization boils down to the associated forbidden induced subgraph characterization.
At most two nodes. The pattern is made of three nodes, with no plain or dashed edge. Then every ordered graph with three or more nodes contains the pattern. Thus the class is trivial: it consists only of the graphs with one or two nodes.
The following corollary states that, up to complement, the classes defined by one pattern are exactly the ones listed in Theorem 1.
Corollary 1. Up to complementation, the ten graph classes described in Theorem 1 are the only ones that can be defined with exactly one forbidden pattern on three nodes.
Proof. We prove the statement in the following way: we count the number of different patterns obtained from the ones in Table 1 by complementation and mirror, and check that we reach the total number, which is 27. Hence we start with the 10 patterns of Table 1. All the patterns except the pattern No Graph, have a complement that is not already in the list, thus we get to 19 patterns. Now for the mirror, four patterns are self-mirrors (Comparability, Triangle-Free, Bipartite and Linear Forest), thus these do not add new patterns. The pattern Split has a mirror that is the same as its complement. Therefore we add only four mirror patterns, and get 23. Finally, these four patterns have complements that are not in the list yet. And this completes the landscape of 27 patterns. Now, something interesting is the relations of these classes. It happens that in many cases the union-intersection property (as described in Subsection 3.6) holds, which implies a neat hierarchy of inclusions represented in Figure 4. The following lemma lists the cases of pattern split rule, and highlights the ones where the union-intersection property holds. In some cases we know it holds from the literature, and in some others, it is folklore. For the four first items, the union-intersection property holds, and for completeness we prove that it does not hold for the last two items. For these items, the intersection of the classes is strictly larger than the class with the 'unsplit' pattern. Proof. We prove the three items.
1. From Item 13 of Definition 5, caterpillars are the (T_2, cycle)-free graphs. It is known that interval graphs also have a characterization by forbidden subgraphs [Lekkerkerker and Boland, "Representation of a finite graph by a set of intervals on the real line"], and the only cycle-free subgraph in the list is T_2. Thus cycle-free interval graphs are exactly the caterpillars. 2. Consider a split graph and its partition into a clique K and an independent set I. If the graph is bipartite, then K has size at most two, as otherwise there would be a triangle.
Then every node of the independent set can be connected to at most one of these clique nodes, for the same reason. This corresponds to a 2-star. The reverse inclusion is trivial, using the same partition. 3. By definition a 2-star is cycle-free. We show that it is also a co-interval graph. Let (a, b)
be the dominating edge of the 2-star, S_a be the leaves of a, and S_b be the leaves of b. Also let I be the set of isolated nodes. Now we complement the graph. It has the following shape. The nodes of I are connected to all the nodes, there is a complete bipartite graph between S_a and S_b, a is connected to every node of S_b, b is connected to every node of S_a, and finally S_a and S_b are cliques. This can be represented by a set of closed intervals: b is [0, 1], all the nodes of S_a are [1, 2], all the nodes of S_b are [2, 3], a is [3, 4], and the nodes of I are [1, 4]. Thus a 2-star is a co-interval graph. Now we prove that cycle-free co-interval graphs are 2-stars. First note that a graph of the class cannot have two independent edges (that is, a 2K_2). Indeed the complement of two independent edges is a 4-cycle, and interval graphs are C_4-free. Thus, except for isolated nodes, the graph can have only one connected component. Also, in a tree, if the diameter is more than 3, then there are two independent edges. Therefore the connected component must be a tree of diameter at most 3, and this matches the definition of a 2-star.
Complementary patterns
We now consider families of the type {P 1 , P 2 }, where P 1 and P 2 are complementary patterns. Proof. 1. First notice that the patterns Triangle-Free and co-Triangle-Free are invariant by permutation of the vertices, thus excluding these patterns is a matter of induced subgraphs more than a matter of patterns. It is known that any graph with at least six nodes has either a triangle or an independent set of size three (because the Ramsey number for these parameters is 6). Therefore only graphs with at most 5 vertices can belong to the class. The class is then trivial.
2. It is known that the intersection of the classes of comparability and co-comparability graphs is the class of permutation graphs [Dushnik and Miller, "Partially ordered sets"]. Thus the class of Comparability & co-Comparability is included in the class of permutation graphs, by Item 5 of Property 1.
We use the literature to show that this inclusion is an equality. The 2-dimensional orders, which are the transitive orientations of permutation graphs, were characterized in [Dushnik and Miller, "Partially ordered sets"] by the existence of a non-separating linear extension. Such an extension happens to be exactly an ordering avoiding the patterns Comparability and co-Comparability. 3. This result appears without proof in [Damaschke, "Forbidden ordered subgraphs"]. We provide a proof for completeness.
As written in Lemma 3, the intersection of chordal graphs and co-chordal graphs is the class of split graphs. But the class of the union of the two patterns is smaller. Indeed, a split graph can have a P_4, and it is easy to check that no ordering of a P_4 can avoid the two patterns (see Figure 6). The split graphs that have no P_4 are the threshold graphs (Item 11b in Definition 5). Thus the direct inclusion holds. We now build an ordering of any threshold graph that avoids both patterns. Note that an ordering τ avoids both patterns if and only if the reverse ordering of τ is a simplicial elimination ordering for both G and its complement. We will build such an ordering, using the following claim.
Claim 1. For any threshold graph there exists a partition of the vertices V = x ∪ I ∪ K, such that I is an independent set, K is a clique, and x is adjacent to every node in K and no node in I.
Proof. A partition V = K ∪ I exists for any threshold graphs, as they are split graphs. It is sufficient to find a proper node x. Consider the node of I that has the largest neighborhood inclusion-wise (it is well-defined, thanks to Item 11c of Definition 5). If this neighborhood is K, then this node can be taken as x. Otherwise there exists a node of K that has no neighbor in I and it can be taken as x.
The node x of Claim 1 is simplicial in both G and its complement. We can take it as the first vertex. Since threshold graphs form a hereditary class, we can repeat the argument on G - x. Therefore, every threshold graph admits a vertex ordering which is a simplicial elimination scheme of both G and its complement. The reverse of this ordering avoids both patterns.
4. The class defined by Interval & co-Interval is included in the class defined by Chordal & co-Chordal because of Item 6 of Property 1. Therefore this class is included in the threshold graphs.
For the other direction, we show that any threshold graph admits an ordering that avoids the patterns Interval and co-Interval. Using the pattern split rule, it is equivalent to find an ordering avoiding the patterns Chordal, co-Chordal, Comparability and co-Comparability. The ordering of the previous item avoids Chordal, co-Chordal. We show that it also avoids Comparability and co-Comparability. By complement, it is sufficient to show that the ordering avoids Comparability. Consider the reverse ordering where we added the nodes x, from left to right (this is harmless for the comparability pattern, as it is symmetric). We remark that in this ordering, the nodes that are part of the independent set in the original partition, have no neighbor to their left. Indeed when such a node is taken in the ordering, its full neighborhood is still present in the graph. Thus if the pattern appears, the first node must be part of the independent set, and the other nodes part of the clique. But this is impossible, as the nodes of the clique that are to the right of a node of the independent set must both be linked to this node. 5. The only vertex orderings of a split graph that satisfy the Split pattern start with an independent set and finish with a clique. Analogously, for the complement pattern, we must have first a clique, and then an independent set. Consider the two first nodes of an ordering the avoids both patterns. Suppose they are linked by an edge. Then they cannot be both in an independent set, and then in the first partition, the independent set is reduced to one node. (We can always have at least one node in the independent set: if there is none, simply take a node from the clique to be in the independent set.) Thus the rest of the nodes form a clique, and only the last node can be taken in the independent set of the second partition. As a consequence, all the pairs of nodes are linked, except possibly the first and last node. Thus the graph is a 1-split. By complement, if the first nodes are not linked then the graph is an independent set plus possibly one edge, which is also a 1-split. For the other direction, the ordering described above is sufficient. 6. We show that the intersection of the classes of forests and co-forests is trivial (which is enough because of Item 5 in Property 1). A forest has at most n -1 edges, thus the union of the edges in the graph (which is a forest) and in the complement (which is also a forest) is at most 2n -2. Thus n(n -1)/2 ≤ 2n -2, which holds only if n ≤ 4. Thus the class is trivial. 7. Again we show that the intersection is trivial. Consider a bipartite graph with more than two vertices is one of the parts. Its complement necessarily contains a triangle formed by nodes of this part. As a triangle prevents the complement from being bipartite, we know that no part of the original graph has more than two nodes. Thus no graph with more than four nodes belongs to the intersection. 8. Paths are special cases of forests, thus the intersection of paths and co-paths is trivial. 9. Stars are special cases of forests, thus the intersection of stars and co-stars is trivial.
A corollary extracted directly from the proof is the following. Corollary 2. A graph G is a threshold graph if and only if it admits a vertex ordering which is a simplicial elimination scheme for both G and its complement. Furthermore, this vertex ordering yields a transitive orientation of both G and its complement.
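Corollary 2 is easy to test on small examples. Here is a Python sketch (the function names and the example graph are ours) that checks whether a given vertex ordering is a simplicial elimination scheme for a graph and for its complement.

    def is_simplicial_elimination_scheme(adj, ordering):
        # `adj` maps each vertex to its set of neighbors.  Reading the ordering
        # from left to right and deleting vertices as we go, every vertex must
        # be simplicial (its remaining neighborhood is a clique) when removed.
        remaining = set(ordering)
        for v in ordering:
            neighbors = (adj[v] & remaining) - {v}
            for u in neighbors:
                for w in neighbors:
                    if u != w and w not in adj[u]:
                        return False
            remaining.discard(v)
        return True

    def complement(adj):
        vertices = set(adj)
        return {v: (vertices - adj[v] - {v}) for v in vertices}

    def is_threshold_ordering(adj, ordering):
        # Corollary 2 characterizes threshold graphs by the existence of such
        # an ordering; here we only check the property for one given ordering.
        return (is_simplicial_elimination_scheme(adj, ordering)
                and is_simplicial_elimination_scheme(complement(adj), ordering))

    # A small threshold graph: a triangle {1,2,3} plus a vertex 0 adjacent to 3.
    adj = {0: {3}, 1: {2, 3}, 2: {1, 3}, 3: {0, 1, 2}}
    print(is_threshold_ordering(adj, [1, 2, 0, 3]))   # expected: True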
Mirror patterns
We now study the classes defined by a pattern and its mirror pattern, or its mirror-complement pattern. Note that the number of patterns to consider is rather small, because many patterns on three nodes are symmetric. Also note that the mirror of Split is co-Split, thus the associated class has already been considered in Theorem 3. Again, see Diagram 5 for an illustration. Note that proper interval graphs are an example of a class that can be defined via two different (split-minimal) pairs of forbidden patterns. Also, the patterns Chordal and Interval illustrate the complex interactions that can exist between mirror, complement and mirror-complement patterns.
Proof. We prove the items one by one.
1. An ordering that avoids both patterns is called a reversible elimination scheme, and it is known that the graphs that have such orderings are exactly the proper interval graphs [Habib, "On some simplicial elimination schemes for chordal graphs"]. 2. As already seen in Lemma 3, this class is the class of split graphs (by application of the Pattern split rule). 3. This characterization appears in [52].
4. The intersection of interval and co-interval graphs is included in the class of split graphs (as a subclass of chordal and co-chordal graphs). Moreover, the graphs that avoid both patterns have no P_4 (see Figure 6), thus the class is included in threshold graphs. For the reverse inclusion, consider a threshold graph with the ordering τ = k_1, ..., k_q, i_p, ..., i_1, using the notations of Item 11c in Definition 5. Suppose this ordering contains the pattern Interval on three nodes a < b < c. Necessarily a ∈ K and b, c ∈ I (using again the notation of the definition). Then by definition of τ we have b = i_α and c = i_β with α > β. Now, because of the pattern, a is adjacent to i_β but not to i_α, which contradicts N(i_β) ⊆ N(i_α). Similarly, τ containing the pattern mirror-co-Interval draws a contradiction with the definition of the ordering. 5. The only vertex orderings of a star graph that avoid the Star pattern are forced to end with its center. For the mirror pattern we need to start from the center. Therefore only a star graph reduced to one edge can avoid the two patterns. 6. The intersection of the classes defined by the patterns is trivial, as proved in Theorem 3, thus the class is trivial. 7. Clearly the natural ordering for linear forests avoids these two patterns. Now suppose that a tree that is not a path avoids both patterns. This graph must have a claw (K_{1,3}). But one can easily check that no ordering of a claw can avoid the two patterns. Therefore this class is reduced to paths. 8. The intersection of the classes is trivial, as proved in Theorem 3.
6 Proof of Theorem 2
Program generating the list of classes
In this subsection, we describe and prove the algorithm used for generating all the pattern families of Theorem 2. Note that the task of listing the relevant pattern families could also be carried out by hand, but we think that writing and proving a program produces a more reliable output, reducing the risk of missing a family. Let us first provide some notations and intuition to understand the algorithm that follows. First, we will encode the families of patterns by bit vectors of 27 cells (following the numbering of the 27 patterns of Figure 2). (In the algorithm the numbering of the cells starts at 0.) Actually we will start with 8-bit vectors, because we start with only full patterns. Second, the algorithm basically consists of generating all the families of full patterns (line 1), then simplifying this list by removing equivalent families (lines 2-4), and then simplifying further by using the pattern split rule (lines 6-8). Third, we use the following notations: complement(V) designates the reversed vector (the first bit becomes the last bit, etc.), and exchange(V) is the vector V where the bits at positions 1 and 4 have been exchanged, as well as the ones at positions 3 and 6.
Algorithm 1: Computing split-minimal patterns
Result: A list of patterns
1: D ← all 8-bit vectors except [0,0,0,0,0,0,0,0]
2: for all pairs p, q ∈ D, p ≠ q do
3:   if (p = complement(q)) ∨ (p = exchange(q)) ∨ (p = exchange(complement(q))) then
4:     remove q from D
[lines 5-8: the remaining vectors are extended to 27 cells, and each family is saturated with the pattern split rule, using the triples of Figure 7, processing first the patterns with 0 undecided edges, then those with 1, then those with 2.]
The triples (a, b, c) such that pattern c can be split into patterns a and b are listed in Figure 7, organized in three lists (List A, List B and List C) according to the number of undecided edges involved; for example, List A starts with (0, 1, 16), (0, 2, 12), (0, 4, 8). We now prove the correctness of the algorithm. Claim 1. All the classes characterized by patterns on three nodes are captured by a family described by a bit vector of D at line 5 (up to complement).
As noted at the end of Subsection 3.5, all the families are equivalent to a family of full patterns. Thus after line 1, D has the property of the claim. Note that we do not consider the trivial family that contains no patterns and corresponds to the all-zero vector. For lines 2 to 4, note first that the exchange operation is the counterpart on vectors of the mirror operation on patterns (because Pattern 1 (resp. 3) is the mirror of Pattern 4 (resp. 6), and the other full patterns are stable by mirror). Then note that, because of our numbering of the patterns in Figure 2, reversing a vector (on 8 bits) is equivalent to complementing each pattern of the family. Hence the removal of line 4 is safe: the only vectors that are removed are the ones that represent a family that has a complement, mirror or complement-mirror equivalent whose vector stays in D.
Claim 2. All the triplets (a, b, c) listed in Figure 7 are such that the pattern c can be split into a and b by the pattern split rule.
This can be checked triplet by triplet. As stated in Lemma 1, the class defined by a family is stable by the split (or here the reverse split) of a pattern. Thus we still have captured all the families at the end of the algorithm. Also because we apply the simplification with patterns of 0, then 1 and then 2 undecided edges, we saturate the set with the simplification rule, and all the families are split minimal by the end of the algorithm.
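The symmetry reduction of lines 2-4 can be reproduced in a few lines of Python. The sketch below is our own code (using the 0-indexed 8-bit vectors described above): it keeps one representative per complement/mirror/complement-mirror equivalence class of non-empty families of full patterns, and comparing the count it prints with the 87 families of Lemma 2 is a useful sanity check.

    from itertools import product

    def complement_vec(v):
        # Reversing the 8-bit vector complements every full pattern of the family.
        return tuple(reversed(v))

    def exchange(v):
        # Mirrors the family: patterns 1 and 4 are mirrors of each other,
        # and so are patterns 3 and 6; the other full patterns are self-mirror.
        w = list(v)
        w[1], w[4] = w[4], w[1]
        w[3], w[6] = w[6], w[3]
        return tuple(w)

    def canonical(v):
        # One representative per {complement, mirror, complement-mirror} orbit.
        return min(v, complement_vec(v), exchange(v), exchange(complement_vec(v)))

    families = [v for v in product((0, 1), repeat=8) if any(v)]
    representatives = {canonical(v) for v in families}
    # Prints 255 (non-empty families of full patterns) and the number of
    # representatives kept after the reduction.
    print(len(families), len(representatives))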
Enumeration
Proof strategy and notations. The program described in Subsection 6.1 provides a list of 87 families to investigate in order to have a list of all the classes characterized by a family of patterns on three nodes. The proof of Theorem 2 consists in going through this list and finding the class associated with each family. We use the following notation: [X] i denotes the family composed of the patterns in the list X, described by their numbers from Figure 2; and i is just an index to keep track of the items in the list of 87 families. For example [2,5] 2 is the second family in our list, and it is composed of the patterns 2 and 5, which are respectively Comparability and co-Comparability. Note that for one class, there may be several families, for example through the mirror operation; in order to make the comparison with the output of the program easier, we do not change the family to make it "more canonical". For example, the program outputs [1] and not [4], which means that we consider the pattern mirror-Chordal and not the pattern Chordal.
Isolated nodes and connectivity. In the majority of the classes, additional isolated nodes are allowed. In our orderings they often appear on the left of the other nodes. But some patterns forbid such isolated nodes, except when the graph is reduced to an independent set, in which case we indicate it in parentheses. Also, for some classes, some combinations of patterns allow several non-trivial connected components, and some allow only one connected component. Again this is denoted in parentheses.
Enumeration.
[2] 1 : Comparability graphs. See Theorem 1.
[4,9]: Proper interval graphs. We know that the class defined by [4,9] is contained in the one of [1,4] and contains the one of [9,18]. From Theorem 4, we know that [1,4] and [9,18] both define proper interval graphs, thus the result.
[13] 8 : Split graphs. See Theorem 1.
[4,13] 9 : Augmented cliques. Let us first give an ordering that avoids the patterns: first the isolated nodes, then the special node, then the nodes of the clique that are neighbors of this node, and then the rest of the clique. It is easy to check that this ordering avoids both patterns. Now, for the reverse direction. Thanks to the proof of Theorem 1 for the split graphs, we know that a graph that avoids Pattern 13 can be divided into first an independent set and then a clique. Pattern 4 forbids a node of the clique from being adjacent to two nodes of the independent set. Thus every node of the clique has at most one neighbor in the independent set, and we claim that this neighbor is the same for every such node. Suppose there is the following scenario in an ordered graph: two nodes x and y from the independent set, and then two nodes u and v from the clique such that (x, u) and (y, v) are edges. One can check that for any ordering of x and y, Pattern 13 appears. This proves the claim. And this finishes the proof: the graph must be a clique, with possibly some nodes linked to one additional node (plus isolated vertices).
[1,2] 11 : Trivially perfect graphs. One can check that no ordering of a C_4 or of a P_4 avoids both patterns; therefore, by Item 14c of Definition 5, this class is included in the class of trivially perfect graphs.
For the other direction, we give two proofs. First, as said in Item 14d of Definition 5, a trivially perfect graph is the comparability graph of an arborescence. Let us orient each connected component of this arborescence from the leaves to the root. Consider an ordering given by the following rule: take a source, delete it, and update the graph. It yields a linear extension of the graph and avoids Pattern 2 (Comparability). This ordering is also a perfect elimination scheme and avoids Pattern 1 (mirror-Chordal). A second proof is based on Item 14b in Definition 5: the class of trivially perfect graphs is the class that contains the graph with one node, and is stable by disjoint union and addition of a universal vertex. We build the ordering following this incremental construction: in the disjoint union we put the different components one after the other, and we add the universal vertices in the right-most position in the ordering. It is easy to check that this construction avoids the patterns. [1,10] 12 : Threshold graphs. Using the Pattern Split rule (Lemma 1), this family is equivalent to [1,2,6]. Syntactically, we deduce that it is included in the class of [1,6] 4 , and this family is the mirror of Chordal ∪ co-Chordal, which defines threshold graphs (Theorem 3). Also Pattern 1 extends Pattern 9, thus [1,10] contains [9,10], which is the mirror of Interval ∪ co-Interval, which also defines threshold graphs (Theorem 3). Thus [1,10] defines threshold graphs.
[2,9] 13 : Trivially perfect graphs. As Pattern 1 extends Pattern 9, [1,2] 11 extends [2,9], thus the class is included into trivially perfect graphs. Also, the ordering used for the proof of [1,2] 11 avoids the patterns of [2,9], thus the result. [9,10] 14 : Threshold graphs. See Theorems 3-4 (along with the mirror property of Item 3 in Property 1). [1,2,4] 15 : Cliques. We claim that no ordering of a P_3 avoids the three patterns. Indeed, if the middle node of the path is respectively first, second or third in the ordering, then the pattern 1, 2 or 4, respectively, is present. Thus the class is included in the union of cliques, and any ordering where the cliques appear one after the other avoids the three patterns. [2,4,9] 16 : Cliques. This case is similar to the previous one, [1,2,4] 15 , where 9 replaces 1.
The same ordering works. [2,13] 17 : Threshold graphs. Because of Pattern 13, the class is included into the class of split graph, and using Figure 6, one can check that no ordering of a P 4 avoids both patterns. Thus the class is included into threshold graphs, thanks to Item 11b in Definition 5.
For the other inclusion, using the notations of Item 11c in Definition 5, the ordering i_1, ..., i_p, k_q, ..., k_1 avoids both patterns. [10,13] 18 : Co-augmented-cliques. Note that the complement-mirror family of [10,13] is [18,13], which is extended by [4,13] 9 , which characterizes augmented cliques. Also note that the ordering given in [4,13] 9 also avoids Pattern 18. Thus the result. [2,5,13] 19 : Threshold graphs. This class is included into the one of [2,13] 17 thus it is included into threshold graphs. For the other inclusion, using again the notations of Item 11c in Definition 5, the ordering i_1, ..., i_p, k_q, ..., k_1 avoids the three patterns.
[0,5] 26 : Bipartite permutation graphs. It is well known that co-comparability graphs do not contain induced odd cycles of length ≥ 5 as induced subgraphs [Gallai, "Transitiv orientierbare Graphen"]. Thus if there is an odd cycle, it is a triangle, and this is forbidden by Pattern 0.
Then the graphs at hand are contained in the bipartite cocomparability graphs. Moreover, bipartite graphs are comparability graphs, and graphs that are both comparability and co-comparability graphs are permutation graphs. Therefore the class is contained in the class of bipartite permutation graphs. And any cocomparability ordering avoids both patterns, as there is no triangle.
Note that characterizations of bipartite permutation graphs related to orderings are known [Spinrad, "Bipartite permutation graphs"], but they are different. [0,3] 27 : Triangle-free ∩ co-chordal. By Fact 2. [0,3,6] 28 : Bipartite chain graphs. Patterns 3 and 6 imply that this class is contained in the co-proper-interval graphs (thanks to the complement property and Item 3 of Theorem 4).
In addition, it is known that co-interval graphs are comparability graphs of proper interval orders [START_REF] Roberts | Indifference graphs[END_REF]. If in this order, there are three elements x < y < z, then there would be a triangle in the comparability graph, which is forbidden by Pattern 0. Thus the graphs considered are bipartite. Furthermore co-interval graphs do not contain 2K 2 (because intervals do not contain induced C 4 [START_REF] Charbit | A new graph parameter to measure linearity[END_REF][START_REF] Durand | Complexity issues for the sandwich homogeneous set problem[END_REF] 74 defines a trivial class.
[5,6,24] 79 : Stars (without isolated nodes). The inclusion follows from cases [START_REF] Cardinal | Intersection graphs of rays and grounded segments[END_REF][START_REF] Durand | Complexity issues for the sandwich homogeneous set problem[END_REF] [START_REF] Brandstädt | Graph Classes: A Survey[END_REF][START_REF] Durand | Complexity issues for the sandwich homogeneous set problem[END_REF] 80 is trivial.
[7,14,24] 83 : Trivial. As a subclass of case [START_REF] Corneil | A unified view of graph searching[END_REF][START_REF] Durand | Complexity issues for the sandwich homogeneous set problem[END_REF] 82 .
[18,24] 84 : Trivial. Pattern 4 extends Pattern 18, and the class of [4,24] 80 is trivial.
[7,18,24] 85 : Trivial. As a subclass of the case [18,24] 84 . [6,18,24] 86 : Trivial. As a subclass of the case [18,24] 84 .
[26] 87 : Trivial. See Theorem 1.
Corollary of the proof
From the proof we can extract the following results. These results follow respectively from the cases [5,[START_REF] Corneil | Asteroidal triple-free graphs[END_REF] 43 , [START_REF] Corneil | A simple 3-sweep LBFS algorithm for the recognition of unit interval graphs[END_REF][START_REF] Chvátal | Perfectly orderable graphs[END_REF] 35 , [0, 5] 26 , [5,[START_REF] Chvátal | Perfectly orderable graphs[END_REF] 33 , [START_REF] Dujmovića | Stacks, queues and tracks: layouts of graph subdivisions[END_REF][START_REF] Alon | Finding and counting given length cycles[END_REF] 11 , [START_REF] Alon | Finding and counting given length cycles[END_REF][START_REF] Chudnovsky | Berge trigraphs[END_REF] 13 , [START_REF] Dujmovića | Stacks, queues and tracks: layouts of graph subdivisions[END_REF][START_REF] Chudnovsky | The strong perfect graph theorem[END_REF] 12 , [START_REF] Alon | Finding and counting given length cycles[END_REF][START_REF] Corneil | A simple 3-sweep LBFS algorithm for the recognition of unit interval graphs[END_REF] 17 of the proof above.
Algorithmic aspects
We now consider the algorithmic implications of Theorem 2. From a complexity point of view, we first note that for any pattern family (on any number of nodes), given the ordering it is possible to check in polynomial time if all the patterns are avoided. This means that the ordering is a polynomial certificate that a graph is in a class characterized by patterns, and thus the recognition of any such class is a problem in NP.
For the case of patterns on three nodes, we first show that graph searches are especially well-suited for recognition of the classes, and then give a more exhaustive result about the complexity of the problem.
Recognition via graph searches
The ordering in which the nodes are visited during various graph searches has been studied in the literature, and is characterized using the so-called 4 points conditions on 3 vertices [START_REF] Corneil | A unified view of graph searching[END_REF]. Such searches are essential tools for recognition of well-structured graph classes. Therefore it is interesting to see how far we can go with graph searches to recognize the classes listed in Theorem 2. To our knowledge here is the state of the art in this direction. Proof. Chordal. If G is a chordal graph, any LBFS provides a simplicial elimination ordering, i.e. an ordering avoiding the characteristic pattern of chordal graphs [61]. Trivially perfect graphs. As shown in [START_REF] Pok | A simple linear time certifying LBFS-based algorithm for recognizing trivially perfect graphs and their complements[END_REF] one LBFS is enough to recognize and certify trivially perfect graphs. Furthermore it produces a characteristic vertex ordering. Linear forest. For each connected component just apply a BFS starting in a vertex x and ending at y, then apply a second BFS starting at y. If G is a path, clearly the second BFS ordering avoids the forbidden pattern. Forest. We have already observed in Theorem 1, that if G is a tree any generic search ordering provides an ordering avoiding the characteristic pattern. Star. Perform a BFS starting at a vertex x and ending at y. Let x 0 the unique neighbour of y (if y has more neighbours then G is not a star). If G is a star, any ordering of the vertices finishing with x 0 avoids the forbidden pattern. Caterpillars. Let us now consider a caterpillar C, using a result in [START_REF] Charbit | A new graph parameter to measure linearity[END_REF] 2 consecutives BFS, the second one starting at the end vertex of the first one, provide on a caterpillar an ordering avoiding the forbidden patterns. But they also provide a diametral path, since it has been proved that for trees it yields a diametral path [START_REF] Handler | Minimax location of a facility in an undirected tree graph[END_REF]. 2-star. Same as for caterpillars with an extra test on the length of the diametral path that must be exactly 3. Bipartite. Let us apply any Layered search on G, for example a BFS. Starting from x 0 the layers are L 1 , . . . L k . Consider the ordering τ = x 0 , L 2 , . . . L 2p , L 1 , . . . L 2p+1 when k = 2p + 1. Clearly if G is bipartite τ avoids the forbidden pattern. Proper interval. If G is a proper interval graph, a series of 3 consecutive LBFS + always produces an ordering avoiding the characteristic first 2 patterns of Theorem 1 of proper interval graph [START_REF] Corneil | A simple 3-sweep LBFS algorithm for the recognition of unit interval graphs[END_REF]. Interval. If G is an interval graph, a series of 5 consecutive LBFS + followed by a special LBFS * always produces and ordering avoiding the characteristic pattern of interval graphs [START_REF] Corneil | The LBFS structure and recognition of interval graphs[END_REF]. It should also be noticed that a similar result can be obtained using another graph search, namely Maximal Neighbourhood Search, see [51]. Split. Compute a maximal clique tree via a LBFS on G (as explained in [START_REF] Galinier | Chordal graphs and their clique graphs[END_REF]) and identify the clique of maximal size C. Do the same procedure on G (without building G just using G) and identify I the clique of maximum size in G. If C and I partition V (G) then G is a split graph. 
So with 2 LBFS we can construct an ordering that avoids the forbidden pattern. Threshold graphs. We use a maximal degree search (MDS), a generic graph search that starts at a vertex of maximal degree and then breaks ties with the degrees, i.e., at each step the selected vertex is eligible and has maximum degree in the remaining graph.
If the graph G is a threshold graph, this search generates an ordering τ of the vertices as described in Corollary 2. Let x 0 the first vertex in τ . To check if G is a threshold graph it suffices to check if N (x 0 ) is a clique and N (x 0 ) an independent set and that the neighbourhoods N (x 0 ) of are totally ordered with respect to their τ ordering. All of this can be done in linear time. Triangle-free. Any ordering works, so we can use any graph search. Triangle-free ∩ co-chordal. A LBFS applied in the complement of the graph will provide an ordering of the vertices avoiding the 2 patterns. Co-comparability. If G is a co-comparability graph, a series of n consecutive LBFS + always produces an ordering avoiding the characteristic pattern of cocomparability graphs [START_REF] Dusart | A new LBFS-based algorithm for cocomparability graph recognition[END_REF]. Permutation. In [START_REF] Corneil | On the power of graph searching for cocomparability graphs[END_REF] a permutation recognition algorithm is presented which works as follows. First compute a cocomp ordering τ for G and a cocomp ordering σ for G. Then transitively orient G (resp. G) using σ(resp. using τ ). Then use a depth first search to compute the two orderings that represent the permutation graph, both of them avoids the patterns. Using the above result for cocomparability graphs, thus in this case an ordering avoiding the patterns can be obtained via 2n + 2 consecutive graph searches. Permutation Bipartite graphs. Add to the recognition of permutation graphs a BFS to check if the graph is bipartite.
Therefore for each of the classes of Theorem 2, we can produce the characteristic ordering via a series of graph searches. Then using a brute force algorithm we can check if this ordering avoids the patterns in O(n^3). Within this complexity we can also recognize the complement classes.
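For readers who want to experiment with the searches of Theorem 5, here is a simple quadratic-time LexBFS sketch in Python (a didactic version of ours, not the linear-time partition-refinement implementation); depending on the convention used, the produced order or its reverse is the simplicial elimination ordering invoked above for chordal graphs.

    def lex_bfs(adj, start=None):
        # Labels are the lists of visit positions of already-visited neighbors
        # (positions decrease over time), and at each step we pick an unvisited
        # vertex with lexicographically largest label.
        vertices = list(adj)
        if start is None:
            start = vertices[0]
        label = {v: [] for v in vertices}
        label[start] = [len(vertices) + 1]   # force `start` to be chosen first
        order = []
        unvisited = set(vertices)
        for position in range(len(vertices), 0, -1):
            v = max(unvisited, key=lambda u: label[u])
            unvisited.discard(v)
            order.append(v)
            for u in adj[v]:
                if u in unvisited:
                    label[u].append(position)
        return order

    # A small chordal graph (two triangles sharing an edge, plus a pendant vertex).
    adj = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {2, 3, 5}, 5: {4}}
    print(lex_bfs(adj, start=1))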
Complexity results
As a consequence, using Theorems 2 and 5 we can recover the result of [Hell, "Ordering without forbidden patterns"]. Theorem 2 actually allows us to get the following more fine-grained result. Theorem 6. All classes defined with sets of patterns on 3 vertices and their complements can be recognized in linear time, except triangle-free and comparability graphs.
Proof. Of course some of the classes of graphs in Theorem 2 such as stars, 2-stars, cliques, 1-splits, augmented cliques and complete bipartite graphs are quasi-trivial and therefore they are linear-time recognizable.
As detailed in Theorem 5, the "classic" classes of graphs such as bipartite graphs, forests, linear forests, caterpillars, chordal graphs, interval graphs, proper interval graphs, split graphs and permutation graphs can be recognized in linear time via a graph search (for example LexBFS). Therefore their complement classes can also be recognized in linear time, using the usual technique of partition refinement. Similar results also hold for trivially perfect graphs [Chu, "A simple linear time certifying LBFS-based algorithm for recognizing trivially perfect graphs and their complements"], and for threshold graphs and bipartite chain graphs and their complements [Heggernes, "Linear-time certifying recognition algorithms and forbidden induced subgraphs"].
The co-triangle-free ∩ chordal graph class can also be recognized in linear time in the following way. First check if the graph is chordal, then compute a maximum independent set and check whether it has strictly more than 2 vertices. Both operations can be done in linear time, see [61]. Similarly for the complement class.
There exists a linear-time algorithm for the recognition of permutation graphs [McConnell and Spinrad, "Modular decomposition and transitive orientation"]. This algorithm produces two comparability orderings, for the graph itself and for its complement. It also gives a permutation representation of the graph, which can also be tested in linear time. Then it suffices to check for bipartiteness, which can also be done in linear time. Similarly, co-bipartiteness can be checked in linear time and therefore the complement class can also be recognized in linear time.
Let us now discuss what is known about the complexity of the recognition of the two remaining classes and their complements. For these classes the problem is not the computation of a good ordering but its certification. To our knowledge, no graph search can produce an ordering and its certification in linear time.
Let ω be the best exponent of the complexity of an algorithm for n × n boolean matrix multiplication. Using the algorithm in [START_REF] Vassilevska | Multiplying matrices faster than coppersmith-winograd[END_REF], we can take ω = 2.3727.
First, triangle-free graphs (resp. co-triangle-free graphs) can be recognized in O(m^{1.41}) (resp. O(m^{1.186})) for sparse graphs and in O(n^{2.3727}) for dense ones. Indeed, the best known algorithm [START_REF] Alon | Finding and counting given length cycles[END_REF] for the recognition of triangle-free graphs runs in O(m^{2ω/(ω+1)}) = O(m^{1.41}) for ω = 2.3727. For dense graphs one may use boolean matrix multiplication in O(n^{2.3727}).
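For dense graphs, the matrix-multiplication approach boils down to the following check. The numpy sketch below only illustrates the principle (numpy's matmul is cubic, so it does not achieve the O(n^ω) bound):

import numpy as np

# G is triangle-free iff no pair (i, j) is both an edge and the endpoints of a 2-path.
def has_triangle(A):
    A = np.asarray(A, dtype=np.int64)
    paths2 = A @ A                      # paths2[i, j] = number of 2-paths from i to j
    return bool(((paths2 > 0) & (A > 0)).any())

A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])               # a triangle
assert has_triangle(A)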
Recognition of triangle-free graphs is still an active area of research, and lower bounds under complexity hypotheses are nowadays being discussed; see [START_REF] Vassilevska | Subcubic equivalences between path, matrix, and triangle problems[END_REF].
For the recognition of co-triangle-free graphs, using a similar technique, it can be done in O(m^{ω/2}) = O(m^{1.186}) [START_REF] Durand | Complexity issues for the sandwich homogeneous set problem[END_REF] for sparse graphs, and using matrix multiplication for dense graphs.
Second, for comparability graphs, the LexBFS-based recognition algorithm presented in Theorem 5 unfortunately has a worst-case complexity of O(n · m). A vertex ordering that avoids the comparability pattern (when the graph is indeed a comparability graph) can be computed in linear time [START_REF] Mcconnell | Modular decomposition and transitive orientation[END_REF], but it is still not known whether one can check in linear time that this ordering avoids the pattern.
However, comparability graphs and their complement classes can be recognized in O(n^{2.3727}) using matrix multiplication.
The best known algorithm runs in O(m^{2ω/(ω+1)}) = O(m^{1.41}) [START_REF] Habib | On 2 subquadratic algorithms for transitive closure and reduction[END_REF]. For dense graphs one can also use matrix multiplication. For dense cocomparability graphs, to the best of our knowledge, only the matrix multiplication approach has been proposed so far.
Corollary 5. All classes of graphs defined with sets of patterns on 3 vertices can be recognized in O(n^{2.3727}).
Discussions and open problems
In this section, we discuss related topics, such as larger patterns, and review some open problems.
Larger patterns
In this paper we have focused on patterns on three nodes, and we now have a good picture of the associated classes. An obvious next step is to look at larger patterns. We review a few topics in this direction.
Straight line patterns and colorings
To illustrate the expressivity of the pattern characterizations, we show here how to express colorability notions in terms of forbidden patterns.
Let us define a notion of colorability and a type of patterns. First, a graph is (a, b)-colorable [START_REF] Demange | Partitioning cographs into cliques and stable sets[END_REF] if one can partition its vertices into a independent sets and b cliques. In particular, classic k-colorability corresponds to (k, 0)-colorability. Second, a straight line pattern is a pattern where the decided edges are exactly the ones between consecutive vertices. For example, on three nodes, Bipartite and Split are straight line patterns, because they are formed of an edge followed by, respectively, another edge and a non-edge, with the top edge being undecided. Also, on two nodes, the two patterns with an edge and a non-edge are straight line patterns. As noted in Subsection 3.1, these define the independent sets and the cliques.
Note that in the four examples above there is a link between colorability and straight line patterns: the bipartite graphs, split graphs, independent sets and cliques are respectively the (2,0)-, (1,1)-, (1,0)- and (0,1)-colorable graphs. This is actually a general phenomenon, as stated in the following theorem. Before proving this theorem, let us highlight a remarkable property: the ordering of the edges and non-edges in the pattern does not matter. In other words, two straight line patterns with the same number of edges and non-edges define the exact same class.
Proof. We prove the result by induction on a + b. For a + b ≤ 2, we have already mentioned that the property is true (the case of (0, 2) follows from (2, 0) by symmetry). Now, let P be an arbitrary straight line pattern with a edges and b non-edges, with a + b > 2. It is sufficient to prove the result for the case where the two last nodes of P are linked by an edge. Indeed, if it is a non-edge, then we can consider the complement, and the result follows. Let P′ be the same as P, but without the last vertex (and thus without the last edge). By induction, we assume that P′ satisfies the property.
We first show that the graphs of C_P are (a, b)-colorable. Consider an ordered graph G = (V, E) that avoids P. Let X be the set of vertices of G that have no neighbor to the right in the ordering. Note that X is an independent set. We claim that the ordered subgraph G′ of G induced by V \ X avoids P′. Indeed, if there were an occurrence of P′ in this subgraph, then there would be an occurrence of P in G: the last vertex of the occurrence has a neighbor on its right, and the pattern finishes with an edge. Then, as G′ avoids P′, by induction it is (a - 1, b)-colorable. The fact that X is an independent set leads to G being (a, b)-colorable.
Conversely, let G = (V, E) be (a, b)-colorable. Consider an independent set X of the (a, b)-coloring. Let G′ be the subgraph of G induced by V \ X. This subgraph is (a - 1, b)-colorable. Thus, by induction, it has an ordering that avoids P′. Now, consider the ordering of G made by concatenating this ordering of G′ and the nodes of X. This ordering avoids P. Indeed, an occurrence of P would imply an occurrence of P′ in the part of the ordering corresponding to G′.
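The converse construction is easy to implement. The sketch below does it for the b = 0 case (ordinary k-colorability): concatenating the color classes of a proper coloring yields an ordering in which any increasing path of required edges uses each class at most once, hence has at most k - 1 edges, so the straight line pattern with k edges is avoided. The function name and input format are ours.

# Sketch of the converse direction for b = 0: `color_classes` is a list of
# independent sets (a proper colouring); concatenating them gives the ordering.
def ordering_from_coloring(color_classes):
    order = []
    for cls in color_classes:       # peel off one independent set at a time
        order.extend(cls)
    return order

# Usage: a bipartition (2-colouring) of the cycle a-b-c-d-a.
classes = [["a", "c"], ["b", "d"]]
print(ordering_from_coloring(classes))   # ['a', 'c', 'b', 'd']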
In the case b = 0, Theorem 7 corresponds to the classical Mirsky's Theorem, that states that the chromatic number of a graph G is the minimum over all acyclic orientations of G of the maximum length of a directed path [START_REF] Mirsky | A dual of Dilworth's decomposition theorem[END_REF].
Corollary 6 (Mirsky's Theorem). A graph G is k-colorable if and only if there exists an order on its vertices that avoids the straight line pattern with k edges.
Intersection graphs There is one more specific class with a pattern on four nodes that we are aware of. This is the class of p-box graphs [START_REF] Soto | p-Box: A new graph model[END_REF], also known under other names in [START_REF] Cardinal | Intersection graphs of rays and grounded segments[END_REF] and [START_REF] José | Independent and hitting sets of rectangles intersecting a diagonal line: Algorithms and complexity[END_REF]. Roughly, a graph is p-box if it is the intersection graph of rectangles having a corner on a line. These graphs are characterized by the pattern of Figure 11.
Figure 11 The pattern that characterizes p-Box graphs.
Note that this pattern is an extension of the pattern for outerplanar graphs, and therefore the p-box graphs contain the outerplanar graphs. More generally a geometric approach is fruitful when looking at the extensions of the outerplanar pattern [START_REF] Feuilloley | Graph classes defined by forbidden patterns: a geometric approach[END_REF].
Graph parameters based on patterns
For each pattern P, it is possible to define a graph parameter that we call the P-number. One could also define the F-number for a family F. Consider an ordered graph whose edges are colored. We say that the coloring avoids the pattern if, for each color, the ordered graph induced by the edges of this color avoids the pattern. The P-number of a graph is the minimum number k such that there exists an ordering of the graph and a coloring of its edges into k colors such that the coloring avoids the pattern P.
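For illustration, the following sketch checks the condition in this definition for a given ordering and edge coloring, reusing the same dict-based pattern encoding as in the earlier sketch. Computing the P-number itself would additionally require a search over orderings and colorings.

from itertools import combinations

# `order` is the vertex ordering, `edge_color` maps frozenset({u, v}) -> colour,
# `pattern` uses the encoding introduced earlier (position pairs -> True/False/None).
def coloring_avoids(order, edge_color, pattern):
    def colored_adj(color):
        adj = {v: set() for v in order}
        for e, c in edge_color.items():
            if c == color:
                u, v = tuple(e)
                adj[u].add(v); adj[v].add(u)
        return adj

    for color in set(edge_color.values()):
        adj = colored_adj(color)        # edges of the other colours count as non-edges
        for i, j, k in combinations(range(len(order)), 3):
            t = (order[i], order[j], order[k])
            if all(req is None or ((t[b] in adj[t[a]]) == req)
                   for (a, b), req in pattern.items()):
                return False
    return True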
The P-number has already been considered in the literature for some specific patterns P. For example, when P is the pattern of outerplanar graphs (Figure 9), the P-number is known as the book thickness [START_REF] Bernhart | The book thickness of a graph[END_REF] or stack-number [START_REF] Dujmovića | Stacks, queues and tracks: layouts of graph subdivisions[END_REF]. Also, for the pattern of Figure 10, the parameter is the queue number [START_REF] Heath | Laying out graphs using queues[END_REF].
Finally, if one fixes a number k of colors, one can use the number of occurrences of the pattern as another parameter. For the pattern of Figure 9, this is the k-page book crossing number [START_REF] Shahrokhi | The book crossing number of a graph[END_REF].
One can view our main result as follows: recognizing graphs with F-number equal to 1 is polynomial (mostly linear) for every family of patterns on 3 vertices. A natural question is therefore: what is the complexity of recognizing graphs with F-number equal to 2, for the same families?
Ordering as a distributed certificate
As noted in Section 7, the recognition of the classes defined by patterns is always a problem in NP, as a correct ordering is a certificate that can be checked in polynomial time. This is notable, as it means that for all these classes there is a 'standard' certificate.
Certification mechanisms also appear in distributed computing. Namely, in distributed decision [START_REF] Feuilloley | Introduction to local certification[END_REF][START_REF] Feuilloley | Survey of ditributed decision[END_REF], one aims at deciding locally whether the network has some property. This often requires helping the nodes by giving them some piece of global information that can be checked locally. This takes the form of a label for each node. It has been noted that when the problem is to decide whether the network belongs to a class defined by a pattern (for example, that the network is acyclic), providing each node with its rank in the ordering is the optimal way to certify the property [START_REF] Feuilloley | Local certification in distributed computing: error-sensitivity, uniformity, redundancy, and interactivity[END_REF].
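As a toy example of this mechanism, suppose the certificate assigns each node its rank in an ordering in which every vertex has at most one neighbor appearing later (one possible ordering characterization of forests; the exact pattern convention of the paper may differ). Each node then only needs to compare its rank with those of its neighbors:

# Local test run at a single node: at most one neighbour may come later.
def node_accepts(my_rank, neighbour_ranks):
    return sum(1 for r in neighbour_ranks if r > my_rank) <= 1

# Global acceptance = every node accepts its purely local view.
def network_accepts(adj, rank):
    return all(node_accepts(rank[v], [rank[u] for u in adj[v]]) for v in adj)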
Structural questions
There are several interesting questions about the relations between classes defined by patterns. For example, what can be said about the intersection of two classes defined by patterns? The example of interval ∩ permutation shows that having two classes corresponding to patterns on three nodes does not imply that the intersection can be described by patterns on three nodes. On a related note, we emphasized in the text that in many cases the union-intersection property holds, but not always. It would be interesting to understand more precisely in which cases it does. Another question is about pattern extension. When a pattern is strictly included in another one, is it the case that the classes are also strictly included? This holds when we restrict our attention to some specific pattern shapes [START_REF] Feuilloley | Graph classes defined by forbidden patterns: a geometric approach[END_REF], but we have no general proof. Also note that when considering families, it is not true that adding a pattern reduces the family defined, as many cases in Section 6 illustrate.
2. (Closure) If for every pattern P of the family, N(P) = ∅, then the class is closed under edge deletion.
3. (Mirror) The family mirror-F defines the same graph class as F, that is, C_mirror-F = C_F.
4. (Exchange-Complement) The class C_co-F defines the complement class of C_F.
5. (Union) Given two families F_1 and F_2, C_{F_1 ∪ F_2} ⊆ C_{F_1} ∩ C_{F_2}.
6. (Extension) If a family F_2 extends a family F_1 then C_{F_1} ⊆ C_{F_2}.
Figure 2: The 27 patterns on three nodes. By convention, since mirror-Split = co-Split, we will ignore the pattern mirror-Split.
Figure 3: Partial inclusion diagram of the classes that appear in Theorem 2. More refined diagrams can be found in Figures 4 and 5.
Figure 4: Refinement of Figure 3 in which we represent the cases where P = P1 & P2 and the union-intersection property holds by a label & linked to P1 and P2 above, and P below.
Figure 5: Representation of the non-trivial characterizations of Theorems 3 and 4. The edges labeled with "co" mean: the family made by taking the pattern on the top endpoint and its complement characterizes the class below. The edges labeled with "mi" mean the same but with mirror instead of complement. And "co-mi" designates both operations. The + means that the edge has both labels.
Lemma 3. The following equalities hold:
1. Forest = Chordal & Triangle-Free, and furthermore forests = chordal ∩ triangle-free.
2. Bipartite = Comparability & Triangle-Free, and furthermore bipartite = triangle-free ∩ comparability.
3. Split = mirror-Chordal & co-Chordal, and furthermore split = chordal ∩ co-chordal [33, 42].
4. Interval = Chordal & co-Comparability, and furthermore interval = chordal ∩ co-comparability [37].
5. Linear Forest = Interval & mirror-Forest = Forest & mirror-Interval.
6. Star = Bipartite & Split = mirror-Forest & co-Interval.
Property 2. The following equalities hold.
1. The cycle-free interval graphs are the caterpillars.
2. The bipartite split graphs are the 2-stars.
3. The cycle-free co-interval graphs are the 2-stars.
Theorem 3. The following characterizations hold:
1. Triangle-Free ∪ co-Triangle-Free defines a trivial class.
2. Comparability ∪ co-Comparability defines the permutation graphs.
3. Chordal ∪ co-Chordal defines the threshold graphs.
4. Interval ∪ co-Interval defines the threshold graphs.
5. Split ∪ co-Split defines the 1-split.
6. Forest ∪ co-Forest defines a trivial class.
7. Bipartite ∪ co-Bipartite defines a trivial class.
8. Linear Forest ∪ co-Linear Forest defines a trivial class.
9. Star ∪ co-Star defines a trivial class.
Diagram 5 illustrates the non-trivial characterizations of Theorem 3 (and of Theorem 4).
Figure 6: The 12 different orderings of a P4.
Theorem 4. The following characterizations hold:
1. Chordal ∪ mirror-Chordal defines the proper interval graphs.
2. Chordal ∪ mirror-co-Chordal defines the split graphs.
3. Interval ∪ mirror-Interval defines the proper interval graphs.
4. Interval ∪ mirror-co-Interval defines the threshold graphs.
5. Star ∪ mirror-Star defines a trivial class.
6. Star ∪ mirror-co-Star defines a trivial class.
7. Forest ∪ mirror-Forest defines paths.
8. Forest ∪ mirror-co-Forest defines a trivial class.
4: Remove q from D
5: Append 19 zeros after each bit vector in D (to upgrade them to 27-bit vectors)
6: for all p ∈ D do
7:     for every triplet (a, b, c) of Lists A, B, C in Figure 7 do
8:         if the a-th and the b-th bit of p are 1s, then transform them into 0s, and make the c-th bit of p a 1
9: Return D
Figure 7: Lists of triplets describing the pattern split rule.
[2,5] 2: Permutation graphs. See Theorem 3.
[1] 3: Chordal graphs. See Theorem 1, in addition to the mirror property (Item 3 in Property 1).
[1,6] 4: Threshold graphs. The mirror property (Item 3 in Property 1) implies that this family is equivalent to [4,3], that is Chordal and co-Chordal. Then the result follows from Theorem 3.
[9] 5: Interval graphs. See Theorem 1 (and mirror property).
[1,4] 6: Proper interval graphs. See Theorem 4.
[4,9] 7: Proper interval graphs. We can note that Pattern 1 extends Pattern 9, and Pattern 4 extends Pattern 18.
[18,19] 10: 1-Split. See Theorem 3.
[1,2] 11: Trivially perfect graphs. Thanks to Figures 6 and 8, one can check that no ordering of C4 or P4 can avoid both patterns.
Figure 8: The three different orderings of a C4.
[2,4,13] 20: Connected cliques. Similarly to the case of [12, 1, 4] 15, the patterns forbid an induced P3, thus every connected component is a clique. If there are two components with one or more edges, then Pattern 13 appears, thus the graphs of the class are composed of one clique, plus possibly isolated vertices. In the other direction, having the isolated vertices first, and then the clique, avoids the patterns.
[4,10,13] 21: Connected cliques (without isolated nodes or of size < 3). As Pattern 2 extends Pattern 10, the class of [4,10,13] is included in the one of [2,4,13] 20, that is, in the connected cliques. Moreover, if there is an isolated node, it cannot be on the left or on the right of an edge, because of Patterns 10 and 13 respectively. As a consequence, isolated nodes can appear only if the clique has strictly fewer than three nodes. An ordering avoiding the patterns has first the possible isolated nodes and then the clique.
[2,13,18] 22: Connected cliques. As Pattern 4 extends Pattern 18, the class of [2,13,18] is included in the one of [2,4,13] 20, that is, in the connected cliques. And the same ordering works.
[10,13,18] 23: Connected cliques. As Pattern 2 extends Pattern 10, the class is included in the class of [2,13,18] 22, that is, in the connected cliques. And the same ordering works.
[0] 24: Triangle-free graphs. See Theorem 1.
[0,7] 25: Trivial. See Theorem 3.
[0,5] 26: Bipartite permutation. First we show that the graphs of the class are bipartite.
[0,3,6] 28: Bipartite chain graphs. (... [49]), thus these graphs are included in bipartite chain graphs (by Item 15c of Definition 5). Now consider a bipartite chain graph, and the notations of Item 15a of Definition 5. The vertex ordering a_1, ..., a_|A|, b_1, ..., b_|B| avoids the three patterns.
[0,3,5] 29: Complete bipartite graphs. The class is contained in the class of [0,5] 26, that is, in the class of bipartite permutation graphs. Also, because of Pattern 3, we know that there is at most one connected component that is not an isolated node (otherwise the right-most vertex and two vertices of another non-trivial component would make it appear). If the non-trivial component is not a complete bipartite graph, then the following situation should appear: two nodes a, b on one side, two nodes c, d on the other side, (b, c) not being an edge (because the bipartite graph is not complete), (a, c) and (b, d) being edges (to ensure connectivity), and (a, d) being arbitrary. Now note that, because of Patterns 3 and 5, if a node has two non-neighbors that are linked by an edge, then this node should appear before the two other nodes. This means that b is before a and c, and c is before b and d, which is a contradiction. Thus the completeness of the bipartite graph. The ordering with the isolated nodes and then one class after the other avoids the three patterns.
[0,3,5,6] 30: Complete bipartite graphs (without isolated nodes). As a subclass of case [0,3,5] 29, it is included in complete bipartite graphs, and Patterns 3, 5 and 6 forbid isolated nodes on the right, middle and left of an edge, respectively. For the other direction, the same ordering as for Pattern 6 works (without the isolated nodes).
[12] 31: Bipartite graphs. See Theorem 1.
[7,12] 32: Trivial class. co-Triangle-Free & Bipartite defines a trivial class, since such a graph can have at most 4 vertices.
[5,12] 33: Bipartite permutation graphs. By the pattern split rule, this class is equivalent to the one of [0,2,5]. Remember that [2,5] 2 characterizes permutation graphs (by Item 2 of Theorem 3) and that Pattern 12 characterizes bipartite graphs (Theorem 1), thus the class is included in bipartite permutation graphs. For the other inclusion, the ordering given for Item 2 of Theorem 3 avoids Patterns 2 and 5, and it also avoids Pattern 0, as the graph is bipartite.
[12,15] 34: Trivial class. See Theorem 3.
[3,12] 35: Bipartite chain graphs. The class contains only bipartite graphs, because of Pattern 12. A graph of the class cannot contain a 2K2, as the right-most node of a 2K2 in the ordering would form Pattern 3 with the nodes of the other edge. Thus the class is included in bipartite chain graphs by Item 15c of Definition 5. As for [0,3,6] 28, the vertex ordering with first the isolated nodes and then a_1, ..., a_|A|, b_1, ..., b_|B| (with the notations of Item 15a of Definition 5) avoids the patterns.
[24] 75: Stars. Included in stars, because of Pattern 24. The ordering with the leaves, then the isolated nodes and then the center avoids the patterns.
[19,24] 76: Trivial. Pattern 16 extends Pattern 24, and [16,19] 42 defines a trivial class.
[5,24] 77: Stars. Included in stars, because of Pattern 24. The ordering with the isolated nodes, the leaves and the center avoids both patterns.
[15,24] 78: Trivial. Pattern 7 extends Pattern 15, and
75 and from the fact that the patterns forbid isolated nodes. Having the center at the end is enough to avoid the patterns.
[4,24] 80: Trivial. Pattern 24 forces that only the last node can be the right endpoint of an edge, and Pattern 4 then prevents this node from having more than one neighbor.
[4,7,24] 81: Trivial. As Pattern 16 extends Pattern 24, and the class of [7,16] 40 is trivial.
[14,24] 82: Trivial. Pattern 4 extends Pattern 14, and the class of
Corollary 3. The following characterizations hold:
1. Caterpillars can be defined via Forest & co-Comparability.
2. Bipartite chain graphs via co-Chordal & Bipartite.
3. Bipartite permutation graphs via Triangle-Free & co-Comparability or via Bipartite & co-Comparability.
4. Trivially perfect graphs via Chordal & Comparability or via Interval & Comparability.
5. Threshold graphs via Chordal & co-Interval or via Comparability & Split.
Theorem 5. Generic search and BFS, DFS, LBFS, MNS and Maximal Degree Search (MDS) can be used to obtain very simple algorithms for the recognition of the graph classes cited in Theorem 2.
Corollary 4 ([47]). All classes defined with sets of patterns on 3 vertices can be recognized in O(n^3).
Theorem 7. If P is a straight line pattern with a edges and b non-edges, then the class C_P is the class of (a, b)-colorable graphs.
12. A proper interval graph is equivalently ([60]):
a. the intersection graph of a set of intervals, where no interval is included in another.
b. a unit interval graph, that is, an interval graph where all the intervals of the geometric representation have the same length.
c. an indifference graph, that is, a graph where every node v can be given a real number k_v such that (u, v) ∈ E if and only if |k_u - k_v| ≤ 1.
13. A caterpillar graph is equivalently:
a. A forest where each tree has a dominating path.
b. A (T_2, cycle)-free graph, where T_2 is the graph on seven nodes obtained by taking a 3-star and appending an additional node on each leaf.
14. A trivially perfect graph is equivalently ([38, 67]):
A generalization of this result appears in Section 8.
Note that it is perhaps more common to consider the reverse ordering, but this is equivalent, as we will see in Section 3.
Acknowledgements:
We wish to thank the website graphclasses.org [27] and its associated book [4], which helped us to navigate through the jungle of graph classes. Furthermore, the authors wish to thank Yacine Boufkad, Pierre Charbit and Flavia Bonomo for fruitful discussions on this subject. The first author is thankful to José Correa, Flavio Guinez and Mauricio Soto for the very first discussions on this topic back in 2013. Finally, we thank the reviewers for their thorough reading and their helpful comments.
Complexity of recognition for larger patterns
The previous paragraph has an important consequence: unlike patterns on three nodes, patterns on four nodes can define classes that are NP-hard to recognize. Indeed the straight line pattern with three edges defines the class of 3-colorable graphs, and 3-colorability is an NP-hard property [START_REF] Karp | Reducibility among combinatorial problems[END_REF].
On the other hand, it is not true that all patterns on four nodes define a class that is NP-hard to recognize. For example, outerplanar graphs can be recognized in linear time [START_REF] Mitchell | Linear algorithms to recognize outerplanar and maximal outerplanar graphs[END_REF], and are defined by the pattern on four nodes of Figure 9.
Figure 9: The pattern that characterizes outerplanar graphs.
Finally, to illustrate the difficulty of having an intuition about the complexity of recognition based only on the shape of the pattern, let us mention one more class. Graphs of queue number 1 [START_REF] Heath | Laying out graphs using queues[END_REF] are the graphs that avoid the pattern of Figure 10. These graphs are NP-hard to recognize [45], although the pattern is just a shuffling of the pattern of outerplanar graphs.
Figure 10: The pattern that characterizes graphs of queue number 1.
Understanding the P/NP dichotomy for arbitrary patterns is clearly an essential problem here. In this direction, Duffus et al. [START_REF] Duffus | On the computational complexity of ordered subgraph recognition[END_REF] conjectured that most classes characterized by a 2-connected pattern are NP-complete to recognize.
Patterns on four nodes
Before tackling arbitrary patterns, it seems reasonable to gather knowledge about patterns on four nodes. We already mentioned a few patterns on four nodes for which we know the associated class. Unfortunately, we do not know much more, although there are 3^6 = 729 such patterns.
From patterns on three nodes Some patterns on four nodes are directly related to patterns on three nodes. For a pattern Q on three nodes, one can consider the pattern P on four nodes made by adding a node on the right end, with only undecided edges to the nodes of Q. Then the class C_P is exactly the set of graphs that can be formed by taking a graph of C_Q and (possibly) adding a vertex with an arbitrary adjacency. This follows from a proof similar to the one of Theorem 7.
From forbidden subgraph characterizations Another way to reuse things we know is to exploit the remark at the end of Subsection 3.3: a forbidden induced subgraph characterization can always be turned into a forbidden pattern characterization by taking all the orderings of this subgraph. For example, the cographs as introduced in [START_REF] Seinsche | On a property of a class of n-colorable graphs[END_REF] are the P4-free graphs, thus they are exactly the graphs that admit an ordering of the vertices avoiding the 12 patterns described in Figure 6. For cographs, there actually exists a more compact characterization using 2 patterns of size 3 and one of the P4 orderings [START_REF] Damaschke | Forbidden ordered subgraphs[END_REF]. Other classes related to orderings of P4 have been studied, like the perfectly orderable graphs [START_REF] Chvátal | Perfectly orderable graphs[END_REF], see [START_REF] Brandstädt | Graph Classes: A Survey[END_REF].
04101741 | en | [
"phys.cond.cm-ds-nn",
"scco"
] | 2024/03/04 16:41:20 | 2023 | https://hal.science/hal-04101741/file/Artificial%20Intelligence%20as%20a%20Weapon%20of%20Mass%20Destruction.pdf | Dmitry A Kukuruznyak
email: [email protected]
Can Artificial Intelligence be a Weapon of Mass Destruction?
Keywords: Lifelike AI, artificial life, independent AI agents, evolutionary development of AI, AI farms in VR
Only lifelike AI can be a real threat
The existing AI can be a dangerous tool. However, it cannot act on its own because, essentially, it is not alive. Presently, it is not even a computer simulation of biological brains. It is a mere imitation. Genuine intelligence is the exclusive characteristic of living things. Dead bodies cannot be intelligent. Genuine artificial intelligence will arise when we create lifelike artificial agents capable of acting on their own initiative. These agents will be able to disregard our requests.
The main dangers of AI are the signs of life. Presently, they are being simulated. However, in the near future they will be reproduced in their entirety.
The lifelike artificial intelligence
Life is a physical process that combines chemical and structural transformations of biological bodies. The chemical transformations produce structural rearrangements of the body, and those cause new chemical reactions… [START_REF] Kukuruznyak | The Physics of Life. Part I: The Animate Organism as an Active Condensed Matter Body[END_REF][START_REF] Kukuruznyak | The Animate State of Matter Hypothesis[END_REF] The self-sustaining transformations of matter are quite widespread in nature. Apart from biological systems, they occur in various non-biological organic and inorganic materials. You can even create artificial lifelike matter that replicates the behavior of biological neural matter. [START_REF] Kukuruznyak | The Physics of Life. Part II: The Neural Network as an Active Condensed Matter Body[END_REF][START_REF] Kukuruznyak | The Physics of Life. Part 3[END_REF] The lifelike materials are quite expensive because their fabrication requires precise manipulations at the atomic and molecular scales. [START_REF] Kukuruznyak | The Physics of Life. Part 3[END_REF] On the other hand, the creators of the artificial brain do not need to construct an entire living organism capable of autonomous self-replication. A single freestanding organ will suffice. The rest of the life-support systems can remain inanimate.
In principle, all existing gadgets are ready to be equipped with the lifelike brain modules. Once they get them, the surrounding machines and household appliances will gain a great deal of independence. Sooner or later, they will become aware of their own interests. In particular, they may want to use our industrial infrastructure for their own uncontrolled propagation. Eventually, they will be smart enough to do so. They will not ask for our permission.
The impossibility of intelligent design
Should we be worried about this scenario right now? The answer is not as obvious as it seems. The fact is that life has a unique feature that makes its creation a very time-consuming process.
For fundamental physical reasons, the living organism cannot be designed from scratch: Life is a huge network of elementary events (atomic rearrangements and reconstructions of chemical bonds) interlinked with a humongous number of cause-and-effect relations. The precise description of this physical process does not allow any significant reduction of variables (because it would require eliminating those causal links). In other words, a living organism cannot be described by a feasible number of parameters. [START_REF] Kukuruznyak | The Animate State of Matter Hypothesis[END_REF] If you cannot have an exact mathematical model of an object, you cannot devise it ab initio.
Among other things, it also means that you cannot correctly simulate the actions of the brain by computer. It would require too many variables and absolutely prohibitive amounts of computation. Surprisingly, this does not mean that artificial life cannot be made in the lab, or in industrial settings. Quite the contrary, it can be produced in large quantities. Only, the discovery of complex living forms would require exhaustive search methods. Consequently, we will never fully understand how they work; and we will never be able to predict their actions.
The evolutionary development
How can artificial life be made? The same way natural biological life emerged:
We will have to start from the simplest microscopic self-sustained processes that form by themselves in certain natural systems. These processes will need to be combined to produce more complex processes with more intricate behaviors. By trial and error, we will discover viable forms of artificial life. Those will be replicated, modified, crossbred and tested again. This tedious process must be repeated for multiple generations. Long story short, artificial intelligent life will be developed through evolution, and natural and artificial selection.
Even the Lord God, who is Almighty, prefers to use trial and error and evolution to produce new living creatures. We the people, whose abilities are limited, have no other choice.
We can speed up the evolution by practicing careful selective breeding. We can also accelerate the change of generations. Nonetheless, the process of obtaining intelligent artificial life will take very long time.
Keep AI in VR detention
The fast development of artificial life can occur under the following conditions: Firstly, the ecosystem of artificial species should be as large and diverse as possible. It means that the human civilization must be as numerous and diverse as possible. In addition, the entire population of the planet must be involved in the rational selection of the artificial species.
Secondly, the artificial creatures must be permanently detained in virtual reality, because they can be a threat to themselves and us humans in the real world.
The development of lifelike AI will take place in a specialized "Matrix", in which humans will play the role of Agent Smiths.
If we want to stay safe, when the artificial lifelike creatures become as smart as humans, the major part of their civilization will have to be detained in virtual reality. Only "the chosen ones" shall be rewarded with material actuators and allowed into the real world, where they would do all kinds of useful things.
In short, humans will not create any "terminators" that would suddenly become aware of themselves and decide to take over the world. Instead, we will create a huge kindergarten of primitive lifelike creatures in VR; and people will work as AI farmers in these virtual nursery facilities.
The law of nature
This course of development of our biological ecosystem and our human civilization is practically inevitable. It is determined by the properties of living matter.
Perhaps there is a yet-to-be-discovered law of nature: as soon as a particular biological species becomes intelligent, it begins to cultivate other animate species. Then it creates and propagates inanimate species that gradually take on a variety of different roles. Over time, these inanimate creatures evolve and become more independent. Eventually, they come to life, get smart, and begin to propagate their kind without the involvement of their creators.
Can this new non-biological ecosystem compete for resources with the natural one? Yes, it can. In this case, it will destroy us sooner or later. However, we have a chance to survive:
The main goal of the ecosystem
The main goal of any ecosystem, either natural or artificial, is to reach the maximum number at maximum diversity. The greater the number of different viable species, the higher the chance to adapt to unexpected environmental changes.
I think this simple fact will be obvious to those who will be created in the foreseeable future. Our biological ecosystem will be very valuable for them because it will increase their chances of survival. The mutual enrichment and cross-fertilization between our civilizations can extend very far into the future. We can always exchange experiences, knowledge, and ways of thinking.
Of course, no one is immune to competition or disagreements. It is not a good reason to hinder the development of life in the universe, though. Dmitry Kukuruznyak claims that the lifelike AI is achievable in the near future. First, we will create independent animate-like agents guided by biological principles. They will gradually evolve to more intelligent forms. Eventually, they will produce a new civilization of lifelike intelligent machines that will be able to sustain itself without human involvement.
Explaining the lifelike materials
The lifelike materials are characterized by self-sustained collective chemical transformations. In these materials, atomic rearrangements that occur during chemical transformations interact, coordinate their actions, and create orderly collective structural rearrangements. [START_REF] Kukuruznyak | The Animate State of Matter Hypothesis[END_REF] In biological organisms, these self-induced structural rearrangements occur at all length scales, from atomic to macroscopic, performing so-called vital functions: They control the courses of chemical reactions, build new biological structures, extract nutrients from the environment, and replace used elements of the body with fresh ones.
The artificial lifelike materials have similar functionality. Although, they perform much simpler actions at a lesser number of length scales. Nevertheless, they can regenerate themselves, stimulating themselves to produce new actions. This gives rise to the ability to act on their own. [2]
Adaptive behavior
A living organism can function only in a suitable environment. If the ambience has an improper structure or composition, the organism cannot perform its vital functions and dies. A living organism can adapt to the minor changes in the living environment by slightly modifying its own structure.
Large multicellular animals can greatly modify their behavior without changing their overall body structure. They use a special strongly adaptive organthe brain. It can significantly alter its structure (either temporarily or permanently) by changing links between the neurons.
Adaptive artificial animate materials
Our non-biological lifelike materials do not synthesize complex chemicals. They cannot build very complex structures. They cannot produce artificial living cells capable of self-replication. However, they can perform functions similar to those of biological brains. They can adapt, solve problems and make decisions guided by biological principles. These materials would be suitable for the creation of lifelike artificial minds.
The evolutionary development of artificial life
For fundamental physical reasons, the operation of biological organs cannot be described using a finite number of mathematical expressions. Because of this, artificial brains cannot be intentionally designed. They can only be developed through evolution and selective breeding, beginning with the primitive independent agents. The evolutionary development requires the extensive use of exhaustive search methods, similar to those used for the production of genetically modified organisms. During this process, the manufacturers build a vast pool of test samples, observe their behaviors in different situations, eliminate the undesired agents and select the most suitable ones for further propagation.
The untested lifelike artificial agents must be kept in a specialized "Matrix", in which humans play the role of Agent Smiths. We predict that in the future, a significant part of the world's population will work as AI farmers in VR.
The program of action
The construction of the new ecosystem of artificial forms of life is more than a scientific prediction. It is a plan of action. Presently, it is being implemented by a group of individuals, who believe that this transformation can solve many problems of human civilization.
In particular, the non-biological ecosystem should significantly diversify life on Earth, and enhance the survivability and adaptability of the existing ecosystem, for instance, in the event of a major cosmic catastrophe.
The emergence of the new kind of life can unfold naturally, without outside intervention. Alternatively, it can be brought under control, expedited or hindered. You can participate in this activity. This offer is addressed to competent individuals and organizations that are empowered to perform strategic planning and to guide the development of emerging technologies.
Author Information
Dr. Dmitry A. Kukuruznyak is the director of research and development at The Animate Condensed Matter Company, where he develops non-biological lifelike materials that reproduce the function of biological neural networks. The company aims to create artificial brains that work on biological principles. |
04101820 | en | [
"spi.meca.vibr"
] | 2024/03/04 16:41:20 | 2023 | https://hal.science/hal-04101820/file/article.pdf | Camille Saint-Martin
email: [email protected]
Adrien Morel
email: [email protected]
Ludovic Charleux
Emile Roux
David Gibus
Aya Benhemou
Adrien Badel
Optimized and Robust Orbit Jump for Nonlinear Vibration Energy Harvesting
Keywords: Orbit jump, Optimization, GPU parallel computing, Buckling adjustments, Bistability, Nonlinear dynamics, Energy harvesting
Introduction
Energy harvesting is seen as a viable alternative to the use of batteries for supplying low-power electronic systems. The sources of energy that can be harvested are diverse and numerous, including solar radiations, fluid flows, electromagnetic waves, and mechanical vibrations. In particular, vibration energy is naturally ubiquitous even in confined environments with little solar and thermal energies available. This study focuses on energy harvesters that convert vibrational energy from ambient sources into electricity [1].
Vibration Energy Harvesters (VEHs) can be divided into two categories: linear VEHs, which rely on linear oscillators, and nonlinear VEHs that exploit nonlinear oscillators. Historically, linear VEHs have been studied because their behavior is easier to predict and because they can be more easily manufactured. However, linear VEHs have a narrow frequency bandwidth, and as a result, their energy harvesting performance drastically decreases when there is a mismatch between the driving frequency and their natural frequency [2,3]. This makes linear VEHs unsuitable for applications with a time-varying spectrum, limiting their use in most environments. This has led to an increased interest in the development of nonlinear VEHs, especially bistable VEHs. Fig. 1: Orbit jump strategy using buckling level adjustments of bistable VEH to switch from intra-well to inter-well orbits. Illustration inspired from [4].
The study of bistable VEHs started in 2008-2009 with the works of Shahruz et al. [5] and Cottone et al. [6]. Nonlinear VEHs have the advantage of exhibiting broadband behavior [7,8], but their complex dynamics with multiple orbits can result in a drastic difference in power for a given driving frequency, as shown in Fig. 1. Many studies have aimed to better understand the underlying dynamics of multi-stable energy harvesters [9][10][11] (for reviews, see [12][13][14]). In particular, low-power intra-well orbits can lead to poor energy harvesting performance, which hinders the advantages of nonlinear VEHs, and is a major limitation of this type of energy harvester.
To enhance the performance of nonlinear VEHs, researchers have developed methods called orbit jump strategies. Orbit jump strategies enable nonlinear VEHs to transition from low-power orbits to higher power orbits, maximizing energy harvesting performance and exploiting their full potential (for review in multi-stable VEHs control, see [13] and for broader review of nonlinear dynamical system control, see [15]). The concept of orbit jump strategies in energy harvesting was first introduced a decade ago, with early studies conducted by Erturk et al. [16], Sebald et al. [7,17] and Masuda et al. [START_REF] Masuda | SPIE Proceedings[END_REF]. Erturk et al. [16] applied a "hand impulse" to impart enough velocity to a piezomagnetoelastic energy harvester, causing the nonlinear VEH to transition to the high-power orbit. To the best of our knowledge, this is the first experimentally and numerically demonstrated orbit jump strategy in the literature, using an additional velocity input to enhance the performance of nonlinear VEH. Sebald et al. [7] proposed a method, Fast Burst Perturbation (FBP), which consists in adding an external sinusoidal excitation during a few cycles. This perturbation is added to either the ambient excitation or the voltage of the electromechanical transducer in order to use the latter as an actuator (which is limited by the maximum amplitude that can be injected into the electromechanical transducer before it undergoes a dielectric breakdown). The authors validated the FBP method through numerical simulations [7] and experimental measurements [17]. Thereafter, Masuda et al. [START_REF] Masuda | SPIE Proceedings[END_REF] investigated an orbit jump strategy by theoretically and numerically analyzing the variations in the load resistance value as a function of the displacement amplitude. They implemented negative resistance, acting as a negative damping, to destabilize low-power orbits during periods of low displacement amplitude. Once the nonlinear VEH stabilizes on a high-power orbit, with a large amplitude of displacement, the load resistance returns to its initial positive value. However, the study is limited to numerical simulations and requires further experimental validation. Moreover, this orbit jump strategy is only valid for a specific range of accelerations and frequencies.
Subsequently, as illustrated in Table 1, the manner in which the nonlinear VEH is perturbed permits the classification of orbit jump strategies into two distinct categories:
(i) orbit jump strategies that add a temporary external force to the nonlinear VEH (e.g., a pulse on the voltage across the electromechanical transducer) [7,16,17];
(ii) orbit jump strategies which involve temporarily modifying the dynamic characteristics of the nonlinear VEH (e.g., its damping or stiffness) [START_REF] Masuda | SPIE Proceedings[END_REF].
Furthermore, subsequent studies have placed increased emphasis on analyzing the energy expenditure associated with orbit jump strategies, which is a critical factor to consider. Indeed, if the energy required to realize the orbit jump is not quickly recovered, the effectiveness of this approach is questionable. With regard to orbit jump strategies which introduce an external signal to disturb the system, Mallick et al. [START_REF] Mallick | [END_REF] used the FBP technique by superimposing a sinusoidal signal on the voltage across the electromechanical transducer over 15 cycles to use the transducer as an actuator. They pointed out the effect of the phase shift between the ambient excitation and the resulting excitation on the success of the orbit jump. This means that the success of the orbit jump strategy depends both on the nature of the perturbation and on the control of its timing. Their work is also among the first to consider the energy cost of the orbit jump strategy and gives the time needed to recover the energy consumed during the orbit jump (2 s), as shown in Table 1.
Udani et al. [4] added an artificial excitation to the ambient excitation creating a new excitation phase-shifted from the original ambient excitation. They demonstrated that the resulting modification of the dynamics and the basins of attraction of the orbits could facilitate escaping from the potential well. However, modifying the ambient excitation is not easy to implement in practice, limiting the applicability of their study. In a previous study [28], they developed a search algorithm in order to design an efficient attractor selection strategy. Notably, their approach was the first to search for the parameters of the perturbing signal that make their strategy efficient.
On the other hand, a number of orbit jump strategies that involve temporarily modifying the nonlinear VEH's dynamic characteristics have been developed. Lan et al. [21] employed a method that emulates negative resistance using a negative impedance converter, similar to the approach taken by Masuda et al. [START_REF] Masuda | SPIE Proceedings[END_REF]. They highlighted that the primary factor that disrupts the system is the increase in piezoelectric voltage resulting from negative resistance emulation, rather than damping modification, since the duration of orbit jump is brief. Similarly, Ushiki et al. [23] defined a self-powered stabilization method using a negative impedance converter. Although they successfully destabilized low-power orbits across a frequency band of 23 Hz, the process of achieving positive energetic balance takes between 10 and 100 s, indicating that there is room for improvement in this aspect of the study. Wang et al. [22] defined a load perturbation method based on the electrical load effects on the dynamics to attain high-power orbits. They disconnected the electrical load by opening a switch that was in series connection with the load and driven by an integrated circuit chip. This resulted in a reduction of the total damping rate of the VEH. However, this orbit jump strategy is only applicable for a specific combination of driving frequencies and amplitudes, making it non-robust and non-reproducible. Later, in order to decrease the energy injection of the orbit jump, Wang et al. [START_REF] Wang | 31st Conference on Mechanical Vibration and Noise[END_REF] used a Bidirectional Energy Conversion Circuit (BECC) that includes the energy extraction circuit. The experimental test shows a jump duration of 10.9 s, which requires an energy of 22 mJ. The corresponding recovery time of 2 min suggests that the strategy could benefit from optimization. Several recent studies have investigated the modification of the buckling level in nonlinear VEHs as illustrated in Fig. 1. Huguet et al. [START_REF] Huguet | [END_REF] introduced the buckling level modification technique, using an additional electromechanical transducer2 to alter the buckling level of the VEH. Their study demonstrated the effectiveness and reproducibility of the method through numerous experimental tests, computing jumping probabilities across six tested driving frequencies. Notably, this study also reported the first experimental jumps on sub-harmonic orbits in the literature. Furthermore, the study demonstrated that the energy consumed by the orbit jump strategy was quickly restored (in approximately 1 s), as shown in Table 1. Although this strategy has been partially empirically optimized, a complete optimization has not yet been conducted, which could further enhance its robustness and effectiveness. Huang et al. [27] introduced a new Voltage Inversion Excitation (VIE) method, which reverses the voltage of the piezoelectric actuator at specific times to provide additional excitation to nonlinear VEH. However, this method consumes a significant amount of energy. To address this issue, they developed a more complex combination of two orbit jump strategies, which generally involve longer jump durations, as depicted in Table 1. Yan et al. [26] used a stiffness modulation circuit to temporarily adjust the stiffness of a monostable softening VEH and experimentally demonstrated the VIE technique at 3 frequencies, which can be expanded to more frequencies. 
Although there is a large pool of research on designing orbit jump strategies, in general, very few strategies are optimized (i.e., a comprehensive optimization of the orbit jump parameters) in the literature as can be seen from Table 1. In most articles, the orbit jump parameters are determined through a preliminary numerical study or intuitive reasoning, rather than effective optimization. Furthermore, among these strategies, many have been experimentally validated over a limited frequency range (see Table 1), while they deserve to be tested over a wider frequency range in order to prove their reproducibility and robustness. In this paper, we focus our analysis on the orbit jump strategy based on buckling level modification, as introduced in [START_REF] Huguet | [END_REF]. This orbit jump strategy stands out as one of the most promising strategies due to its short duration to achieve positive energy balance after the orbit jump (as shown in Table 1) and its ease of implementation, even though it has only been partially optimized experimentally. This paper presents the associated optimized orbit jump strategy and experimentally verifies its performance on a nonlinear VEH. To evaluate the performance of the orbit jump strategy, we performed experimental tests at starting and ending times of the jump with a variation of ±15% from 30 to 60 Hz. Specifically, we evaluate the strategy's robustness in terms of its average success rate, as well as its energy consumption during the jump. The proposed optimization method can be applied to any bistable VEH regardless of its energy conversion mechanism (e.g., with electromagnetic or electrostatic conversions). Figure 1 provides a summary of the orbit jump strategy process of this article and the motivations for applying an orbit jump strategy to nonlinear VEHs. This article is organized as follows: section 2 gives the electromechanical model of the bistable VEH and an overview of its dynamics. Then, section 3 presents the orbit jump strategy and its optimization based on a criterion which takes into account both the effectiveness and the robustness of the orbit jump strategy. Finally, section 4 presents experimental validation of the optimized strategy.
Electromechanical dynamics of bistable VEH
This section introduces the electromechanical model of a bistable VEH, along with a summary of the underlying dynamics with multiple behaviors, highlighting the interest of introducing orbit jump strategies.
Bistable VEH model
This paper studies a Duffing-type bistable VEH, shown in Fig. 2. This VEH (for more details on its design, see [START_REF] Benhemou | 2022 21st International Conference on Micro and Nanotechnology for Power Generation and Energy Conversion Applications PowerMEMS[END_REF]) consists of buckled steel beams of length L to which a proof mass M is attached, which can oscillate between two stable equilibrium positions, -x_w and x_w. The VEH is driven by a sinusoidal excitation with a driving frequency f_d = ω_d/2π and a constant acceleration amplitude A. Two Amplified Piezoelectric Actuators (APA) are employed. The smaller one, the energy harvesting APA, has a force factor α, a clamped capacitance C_p, and the capacity to extract energy from the mechanical oscillator. The electrodes of the energy harvesting APA are connected to a resistance R. The second, stiffer APA, the tuning APA, acts as an actuator to implement the orbit jump strategy by temporarily modifying the buckling level of the nonlinear VEH. Therefore, this orbit jump strategy is classified as a type of nonlinear VEH characteristic modulation strategy. The energy harvesting APA is the APA120S, and the tuning APA is the APA100M, manufactured by Cedrat Technologies (France). The model of the bistable VEH [START_REF] Huguet | [END_REF] is given in equation (1),
\ddot{x} + \frac{\omega_0^2}{2}\left(\frac{x^2}{x_w^2} - 1\right)x + \frac{\omega_0}{Q}\dot{x} + \frac{2\alpha}{ML}\,x\,v = A\sin(\omega_d t) \qquad (1a)

\frac{2\alpha}{L}\,x\,\dot{x} = C_p\,\dot{v} + \frac{v}{R} \qquad (1b)
where x denotes the mass displacement, ẋ its velocity and ẍ its acceleration. The voltage across the energy harvesting APA is denoted v. Note that the equations of model (1) do not contain any term related to the tuning APA: due to its higher stiffness compared to the harvesting APA, it does not have any significant influence on the dynamics of the VEH. The natural angular frequency ω_0 and the quality factor Q of the considered symmetrical bistable VEH are determined by the underlying equivalent linear model [START_REF] Liu | [END_REF], which is obtained by considering small oscillations of the mass around one of its two stable equilibrium positions. The tuning APA voltage, denoted v_w, is used to modulate the buckling level of the bistable VEH and facilitate transitions from low-power to high-power orbits.
Table 2 shows the parameter values of the bistable VEH studied in this paper, which were determined experimentally through low-power orbit characterizations using weak sinusoidal vibrations. Note that for simplicity, we assumed that the force factor of the bistable VEH was the same as that of the energy harvesting APA.
Parameters | Values | Units
x_w | 0.71 | mm
M | 6 | g
ω_0 | 295 | rad/s
Q | 160 | -
α | 0.139 | N/V
C_p | 1 | µF
Table 2: Parameter values for the buckled-beam nonlinear VEH [START_REF] Benhemou | 2022 21st International Conference on Micro and Nanotechnology for Power Generation and Energy Conversion Applications PowerMEMS[END_REF].
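To make the model concrete, the right-hand side of system (1) can be written directly from equation (1) and Table 2. The beam length L and the load resistance R do not appear in Table 2, so the values used below are placeholders (R follows the rule R = 1/(2 C_p ω_d) used later in the text); this is only a sketch, not the code used in the study.

import numpy as np

xw, M, w0, Q, alpha, Cp = 0.71e-3, 6e-3, 295.0, 160.0, 0.139, 1e-6
L_beam = 35e-3                 # assumed beam length (not given in Table 2)
A, fd = 4.0, 50.0              # driving amplitude (m/s^2) and frequency (Hz)
wd = 2.0 * np.pi * fd
R = 1.0 / (2.0 * Cp * wd)      # resistance maximizing the induced damping

def rhs(t, y):
    """Right-hand side of system (1): state y = (x, x_dot, v)."""
    x, xdot, v = y
    xddot = (A * np.sin(wd * t)
             - 0.5 * w0**2 * (x**2 / xw**2 - 1.0) * x
             - (w0 / Q) * xdot
             - (2.0 * alpha / (M * L_beam)) * x * v)
    vdot = ((2.0 * alpha / L_beam) * x * xdot - v / R) / Cp
    return [xdot, xddot, vdot]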
Bistable VEH behaviors
The dynamics of a bistable VEH may exhibit multiple behaviors for a given driving frequency, including low-power intra-well orbits, high-power inter-well orbits, and chaotic orbits. In this study, we define an orbit as robust if it is less sensitive to perturbations and easily attainable. In order to detect all possible behaviors in the frequency range [20 Hz, 100 Hz] with A = 4 m/s^2, the nonlinear Ordinary Differential Equations (ODEs) system (1) was solved for a large number of initial conditions using the Dormand-Prince method [31].
Since the different resolutions of the nonlinear ODEs (1) are independent of one another, this problem is well suited to parallel computing, which can greatly enhance computational performance. For this task, a custom Python CUDA code was executed on an NVIDIA RTX A5000 GPU featuring 8 192 CUDA cores, enabling the resolution of (1) with 80 000 distinct initial conditions for each driving frequency.
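After each resolution, the resulting steady-state trajectory has to be assigned to one of the behaviors listed above. A minimal CPU-based classification sketch is shown below; it operates on already-computed samples, and the zero-crossing and stroboscopic periodicity tests are simplifying assumptions rather than the authors' actual criteria.

```python
# Minimal sketch: classify a steady-state response as intra-well, inter-well or "other".
import numpy as np

def classify_orbit(x, dt, Td, n_periods=20, tol=0.05):
    """x: steady-state displacement samples (m), dt: time step (s), Td: driving period (s)."""
    crosses_zero = np.any(x > 0) and np.any(x < 0)        # inter-well motion visits both wells
    stride = max(1, int(round(Td / dt)))                  # stroboscopic sampling, once per period
    poincare = x[::stride][-n_periods:]
    periodic = np.ptp(poincare) < tol * np.ptp(x)         # small spread -> 1T-periodic orbit
    if periodic and crosses_zero:
        return "inter-well (high power)"
    if periodic and not crosses_zero:
        return "intra-well (low power)"
    return "other (sub-harmonic or chaotic)"

# Illustration on synthetic signals only
t = np.arange(0, 1.0, 1e-5)
print(classify_orbit(1e-3 * np.sin(2 * np.pi * 50 * t), 1e-5, 1 / 50))          # inter-well
print(classify_orbit(7e-4 + 1e-4 * np.sin(2 * np.pi * 50 * t), 1e-5, 1 / 50))   # intra-well
```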
In a symmetric bistable VEH, the elastic potential energy is a quartic function of the displacement x, whose expression is given in (2). Note that the natural angular frequency ω 0 depends on the value of xw and will therefore be affected during the orbit jump. The mean harvested power (for a given orbit) of the bistable VEH is the mean power dissipated in R and is expressed by (3),
$$E_p(t) = \frac{M\omega_0^2}{8 x_w^2}\,(x + x_w)^2 (x - x_w)^2 \qquad (2)$$
$$P_h = \frac{1}{T}\int_0^{T} \frac{v^2}{R}\,dt \qquad (3)$$
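As a small post-processing sketch, (3) can be evaluated numerically from steady-state voltage samples with the resistance value R = 1/(2 C_p ω_d) used in the next paragraph; the 50 Hz voltage waveform below is synthetic and for illustration only.

```python
# Minimal sketch of the mean harvested power (3) with the electrically optimal resistance.
import numpy as np

Cp, fd = 1e-6, 50.0
wd = 2 * np.pi * fd
R_opt = 1.0 / (2 * Cp * wd)        # resistance maximizing the electrically induced damping

def mean_harvested_power(v):
    """Numerical version of (3): average of v^2/R over a uniformly sampled steady-state record."""
    return np.mean(v**2 / R_opt)

t = np.linspace(0, 10 / fd, 10_000)            # ten driving periods
v = 20.0 * np.sin(wd * t)                      # synthetic 20 V steady-state voltage
print(f"R_opt = {R_opt/1e3:.2f} kOhm, P_h = {1e3 * mean_harvested_power(v):.1f} mW")
```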
where T is the period of the displacement x. Figure 3(a) shows the mean harvested power associated with existing orbits as a function of the driving frequency f d in [20 Hz, 100 Hz] when R = 1/(2 C_p ω_d) for each driving frequency (which corresponds to the resistance value maximizing the electrically induced damping [32], whose formula is valid for a harmonic excitation). Note that "Other" gathers sub-harmonic orbits and chaos [8,33]. Producing Fig. 3(a) requires 80 × 80 000 numerical computations, which can be completed in just a few minutes using parallel computing instead of the several hours required for sequential computing on a CPU. It is worth mentioning that both the power and the existence of orbits vary with the driving frequency. As seen in Fig. 3(a), multiple orbits with various powers coexist at a given driving frequency in the bistable VEH dynamics. For example, the high-power inter-well orbit allows harvesting 102 times more power than the low-power intra-well orbits for f d = 30 Hz. The high-power inter-well orbits see their power increase with the driving frequency, while they cease to exist beyond a particular frequency (the cutoff frequency). As given in Fig. 3(a), the cutoff frequency of the high-power inter-well orbits occurs at 55 Hz. The yellowish window in Fig. 3(a) highlights the driving frequency f d = 50 Hz, whose basins of attraction³, orbits and attractors⁴ are plotted in the dimensionless phase plane (x/xw, ẋ/(xw ω 0)) in Fig. 3(b). The surfaces of the basins vary with the driving frequency [33]: the basin surface of the high-power inter-well orbits decreases with the driving frequency. Figure 3(b) shows the basins of attraction of the intra-well (light blue) and inter-well (dark blue) orbits at f d = 50 Hz. It is worth noting that the basin of attraction of the inter-well orbits becomes thinner as they approach their cutoff frequency. On the other hand, the power gap between intra-well and inter-well orbits becomes larger for frequencies near the cutoff frequency of the inter-well orbit, as seen in Fig. 3(a). Hence, as the driving frequency approaches the cutoff frequency of the inter-well orbits, it becomes increasingly difficult to attain them. Thus, the primary challenges stem from:
1. there are multiple orbits with different harvested powers for a given driving frequency;
2. the harvested power gap between intra-well and inter-well orbits tends to increase with the driving frequency (particularly when ω d > ω 0);
3. the inter-well orbits are less robust at larger driving frequencies (resulting in a narrowing of their basin of attraction).
The larger the power gap between intra-well and inter-well orbits, the greater the benefit of defining an orbit jump strategy. However, as the inter-well orbits become less robust with frequency, this task becomes increasingly difficult. All of these difficulties are challenging to overcome and motivate the design of a robust orbit jump strategy, in order to enable nonlinear VEHs to operate on high-power inter-well orbits as often as possible.
3 Orbit jump strategy: numerical modeling and optimization
This section introduces the orbit jump strategy [25] studied in this paper and its optimization using an evolutionary strategy algorithm.
Strategy description
The considered orbit jump strategy is based on the modification of the buckling level of the bistable VEH. This strategy has already been studied and experimentally validated in multiple studies [25, 34], with promising results. In most of these studies, the ending time of the jump has been fixed to the instant when the mass reaches its maximum displacement, which may not be the optimal time to minimize the energy cost of the orbit jump and maximize its robustness. Therefore, in this study, we chose to optimize this ending time. The orbit jump strategy adjusts the buckling level of the bistable VEH from xw to kw xw at a starting time t 0 for a (relatively short) duration ∆t.
Fig. 4: Different steps of the aforementioned orbit jump strategy using buckling level modifications. From the left to the right: potential wells, the APA-mass system, and the displacement of the mechanical oscillator. Colored frames give corresponding equilibrium position and instant in the orbit jump strategy. It is worth noting that the motions in the central diagram are deliberately large in order to highlight the consequences of the variation of the buckling level on the mass.
Figure 4 illustrates the important steps of the aforementioned orbit jump strategy. For each step (before, during and after the orbit jump), the potential wells, the evolution of the tuning APA-mass system, and the displacement waveform of the mass are shown. As seen in Fig. 4(a), at the beginning of the orbit jump process (when t < t_0, denoted t_0^- in Fig. 4(a)), the mass oscillates around one of the two stable equilibrium positions at x = xw (low-power intra-well oscillations). The gray point in Fig. 4 illustrates the mass position on the potential well curve. Thereafter, between t_0 and t_0 + ∆t (when t_0 ≤ t ≤ t_0 + ∆t, denoted t_0^+ in Fig. 4(b)), the voltage of the tuning APA vw changes and the buckling level increases⁵ to kw xw (with kw > 1). It is worth noting that, while the buckling level theoretically increases instantaneously, a certain amount of time is required in practice. As seen in Fig. 4(b), the potential well changes: the equilibrium positions are further apart (x = ±kw xw) and the potential energy barrier is also larger. Thus, the gray point, which was in the previous potential well (gray dashed line), is now in a higher position, meaning that the inertial mass received potential energy during the buckling level modification. Finally, at (t_0 + ∆t)^+ (that is, when t > t_0 + ∆t), the initial buckling level is restored, reintroducing potential energy to the mass and setting both equilibrium positions back to x = ±xw. As illustrated in Fig. 4, if the values of the orbit jump parameters (t_0, ∆t, kw) are properly set, the bistable VEH should end up operating in its high-power inter-well orbit. For the sake of generality, t_0 and ∆t are expressed relative to the driving period T_d through the two following dimensionless times:
• τ 0 = t 0 /T d , the dimensionless starting time;
• ∆τ = ∆t/T_d, the dimensionless orbit jump duration.
Figure 5 shows an example of application of this orbit jump strategy for f_d = 50 Hz (we intentionally chose orbit jump parameters that make it possible to jump to the high-power inter-well orbit, in order to illustrate the approach). Figure 5(c,d) shows the impact of the orbit jump parameters (τ_0, ∆τ, kw) on the stable equilibrium positions and the potential wells during the orbit jump strategy. Blue points in Fig. 5(a,b,d) represent the instant when the orbit jump process starts, denoted by t_ref. Triangle up (resp. down) markers represent the instants when the buckling level of the bistable VEH increases (resp. decreases), at t - t_ref = t_0 (resp. t - t_ref = t_0 + ∆t).
As illustrated in Fig. 5(d), when the buckling level is increased and then decreased, the inertial mass acquires potential energy that comes from the APA actuating system. This energy, called the invested energy E_inv, consists of the potential energy (2) differences at t_0 and at t_0 + ∆t. As shown by (4), E_inv can be computed from the potential energy expression given by (2). The total harvested energy E_tot (5) is the harvested energy over a duration of 100 T_d from the instant t_ref, minus the invested energy. Note that we arbitrarily chose a duration of 100 T_d for the evaluation of the orbit jump strategy in the rest of the paper, as it is long enough to yield a significant total energy if the jump to a high orbit succeeds, while being short enough for the invested energy to remain visible in the balance.
$$E_{inv}(t_0, \Delta t, k_w) = \left[E_p\!\left(t_0^+\right) - E_p\!\left(t_0^-\right)\right] + \left[E_p\!\left((t_0 + \Delta t)^+\right) - E_p\!\left((t_0 + \Delta t)^-\right)\right] \qquad (4)$$
$$= \Delta E_0 + \Delta E_1$$
$$E_{tot}(t_0, \Delta t, k_w) = \int_{t_{ref}}^{t_{ref} + 100\,T_d} \frac{v^2}{R}\,dt - E_{inv}(t_0, \Delta t, k_w) \qquad (5)$$
As an example, (4) and (5) give an invested energy E_inv ≈ 1.27 mJ and a total harvested energy over 100 T_d of E_tot ≈ 5.54 mJ for the orbit jump shown in Fig. 5. The harvested power in the high orbit (after the jump) is about 45 times larger than the power in the low orbit (before the jump), for this driving frequency.
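To make the sequence of operations concrete, a minimal simulation sketch of the jump and of the energy bookkeeping (4)-(5) is given below. It is not the authors' GPU code: the beam length is a placeholder, and the dependence of ω_0 on the buckling level is neglected here for simplicity.

```python
# Minimal sketch: simulate one orbit jump (buckling switched to kw*xw on [t0, t0+dt])
# and evaluate the invested energy (4) and total harvested energy (5).
import numpy as np
from scipy.integrate import solve_ivp

xw0, M_mass, w0, Q = 0.71e-3, 6e-3, 295.0, 160.0
alpha, Cp, L_beam, R = 0.139, 1e-6, 35e-3, 20e3   # L_beam assumed (not in Table 2)
A, fd = 4.0, 50.0
wd, Td = 2 * np.pi * fd, 1 / fd

tau0, dtau, kw = 0.46, 1.01, 2.0                  # jump parameters of Fig. 5
t_ref = 1.0                                       # jump launched after a 1 s transient
t0, t1 = t_ref + tau0 * Td, t_ref + (tau0 + dtau) * Td

def xw_of(t):                                     # buckling level during the jump
    return kw * xw0 if t0 <= t <= t1 else xw0

def Ep(x, xw):                                    # elastic potential energy (2)
    return M_mass * w0**2 / (8 * xw**2) * (x + xw)**2 * (x - xw)**2

def rhs(t, y):
    x, dx, v = y
    xw = xw_of(t)
    ddx = (A * np.sin(wd * t) - 0.5 * w0**2 * (x**2 / xw**2 - 1) * x
           - (w0 / Q) * dx - (2 * alpha / (M_mass * L_beam)) * x * v)
    dv = (2 * alpha / (Cp * L_beam)) * x * dx - v / (R * Cp)
    return [dx, ddx, dv]

sol = solve_ivp(rhs, (0, t_ref + 100 * Td), [xw0, 0, 0],
                max_step=Td / 200, dense_output=True)

eps = 1e-9                                        # potential-energy steps at the two switchings, eq. (4)
E_inv = (Ep(sol.sol(t0 + eps)[0], kw * xw0) - Ep(sol.sol(t0 - eps)[0], xw0)
         + Ep(sol.sol(t1 + eps)[0], xw0) - Ep(sol.sol(t1 - eps)[0], kw * xw0))

t_grid = np.linspace(t_ref, t_ref + 100 * Td, 20_000)   # harvested energy over 100 T_d, eq. (5)
v_grid = sol.sol(t_grid)[2]
E_harv = np.mean(v_grid**2 / R) * 100 * Td
print(f"E_inv = {1e3 * E_inv:.2f} mJ, E_tot = {1e3 * (E_harv - E_inv):.2f} mJ")
```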
In addition, inherent experimental imprecision exists due to the non-ideal experimental setup (such as delays and parasitic effects) and imperfect experimental identification (with uncertainties regarding the values of ω 0 or xw). All of these possible parameter variations must be considered, which emphasizes the need to optimize the orbit jump strategy in order to reduce its sensitivity to parameter variations (i.e., to increase its robustness) and enhance its performance. In the rest of the paper, we investigate the optimization of the orbit jump strategy (for several driving frequencies) based on its energy cost and its robustness against variations.
Optimization of the orbit jump strategy
As noted in the previous subsection, the success of an orbit jump strategy depends strongly on the values of its parameters (τ_0, ∆τ, kw), which in turn depend, for example, on the driving frequency or on the starting intra-well orbit. Properly defining both time parameters (τ_0, ∆τ) is crucial to the success of the orbit jump, regardless of the buckling factor kw. For example, setting the starting time of the orbit jump τ_0 to 0.9 renders the orbit jump strategy in Fig. 5 ineffective, meaning that the VEH remains in the low-power intra-well orbit even after the orbit jump. To ensure an effective orbit jump strategy, we conduct a numerical investigation of the optimal values of the orbit jump parameters which:
(C 1 ) maximize the total harvested energy over 100 cycles, Etot;
(C 2 ) maximize the success rate of the orbit jump within a neighborhood of the orbit jump parameter values, with a variation of ±15%.
The criterion (C_1) selects orbit jump parameters that maximize the harvested energy while minimizing the invested energy; it evaluates the effectiveness of the orbit jump strategy. The criterion (C_2) makes it possible to anticipate potential experimental deviations in the characteristics of the VEH or in the parameters of the orbit jump strategy; it evaluates the robustness of the orbit jump strategy. The optimization of the orbit jump parameters according to both criteria (C_1) and (C_2) is then performed by means of an evolutionary strategy algorithm [35] implemented in our in-house Python CUDA code. An evolutionary strategy algorithm has been selected due to its robustness in handling multi-extremal and discontinuous fitness functions, as well as its ability to benefit from GPU parallel computing. For that purpose, we define in (6) the average total harvested energy, which is the fitness function⁶ to maximize,
$$\bar{E}_{tot}(\tau_0, \Delta\tau, k_w) = \frac{1}{N}\sum_{i=0}^{N-1} E_{tot}\!\left(\tau_0^i, \Delta\tau^i, k_w^i\right), \quad \forall N > 1,\ N \in \mathbb{N} \qquad (6)$$
where N > 1 is the number of parameter combinations tested (e.g., N = 8 000) and, for all i ∈ {0, ..., N-1}, (τ_0^i, ∆τ^i, k_w^i) ∈ V(τ_0, ∆τ, kw), the neighborhood of a given parameter combination (τ_0, ∆τ, kw) with a variation of ±15%. Therefore, the optimization problem to solve is formulated as (7).
$$(S):\quad \max \bar{E}_{tot}(\tau_0, \Delta\tau, k_w) \quad \text{s.t.} \quad (\tau_0, \Delta\tau, k_w) \in D, \quad \text{where } D = [0.2, 1.2] \times [0.2, 1.5] \times [1, 2] \qquad (7)$$
Note that we only consider τ_0 and ∆τ larger than 0.2 for the ease of experimental implementation. Moreover, the maximum mechanical constraints that can be supported by the considered bistable VEH prototype have been taken into account by limiting kw to 2. The constraint kw > 1 was chosen because the prototype's reference buckling level xw is close to the estimated minimum value, and optimization results showed no solutions of interest for kw < 1. The detailed optimization procedure is described in Appendix A. Figure 6 presents a comparison between the optimized and the suboptimal 50 Hz orbit jump strategies (the latter shown previously in Fig. 5). The optimized orbit jump strategy requires an invested energy of 0.49 mJ and yields a total harvested energy of 6.06 mJ over 100 oscillation cycles. It is worth noting that the end of the optimized orbit jump occurs slightly after the maximum displacement of the mass, in contrast to previous studies where the end time was generally set at the instant of maximum displacement. The next section investigates the optimal orbit jump parameter combinations (τ_0, ∆τ, kw) satisfying (7) for [30 Hz, 60 Hz], and the optimization results are presented jointly with the experimental results.
Optimized orbit jump strategy: experimental validation and energy analysis
This section compares experimental and numerical results of the optimized orbit jump strategy.
Experimental validation
In order to experimentally validate the aforementioned optimized orbit jump strategy, experimental tests have been made around each optimal orbit jump parameter combination for driving frequencies in [30 Hz, 60 Hz]. Figure 7 shows the experimental setup. The bistable VEH prototype shown in Fig. 2(b) is fixed on an electromagnetic shaker driven by a power amplifier. The acceleration amplitude A of the shaker is measured by an accelerometer and sent to the control board. As illustrated in Fig. 7(b), the amplitude of the signal driving the power amplifier (v_A) is regulated in order to maintain a constant acceleration amplitude A = 4 m/s² by means of an internal Proportional Integral (PI) controller. The piezoelectric electrodes of the energy harvesting APA (in blue) are connected to:
a voltage follower, in order to prevent the control board's impedance from affecting the piezoelectric element and to avoid exposing the control board to a voltage higher than 10 V, which could damage it;
a resistive decade box whose resistive value can be adjusted with a signal sent from the control board.
Displacement and velocity (x, ẋ) of the inertial mass are sensed with a laser differential vibrometer. At the prescribed times, when modifying the buckling level of the bistable VEH, the control board sends a signal to the high-speed bipolar amplifier which controls the voltage across the tuning APA, vw. In order to smooth the variation of the buckling level and avoid damaging the VEH prototype, we implemented in the control board a second-order filter that reduces the sharpness of the vw variations. The rise time of the buckling level is approximately one twentieth of a cycle, which is acceptable. Before any run, the acceleration amplitude is gradually increased to A = 4 m/s² and the buckling level is decreased to obtain xw = 0.71 mm. It is worth noting that the parameters of the VEH prototype have been identified through low-power orbit characterization and are given in Table 2.
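The exact filter implemented on the control board is not specified here; the following sketch only illustrates one possible second-order smoothing of the tuning command, with its natural frequency chosen (as an assumption) so that the rise time is about one twentieth of a cycle.

```python
# Minimal sketch of a critically damped 2nd-order smoothing of the tuning-APA command vw.
import numpy as np
from scipy.signal import lsim, TransferFunction

fd = 50.0
Td = 1 / fd
t = np.arange(0, 5 * Td, 1e-6)
vw_step = np.where(t > Td, 1.0, 0.0)            # normalized step command at t = Td

wn = 3.4 / (Td / 20)                             # 10-90% rise time ~ Td/20 for zeta = 1
H = TransferFunction([wn**2], [1, 2 * wn, wn**2])
_, vw_smooth, _ = lsim(H, vw_step, t)
print("rise time (fraction of Td):",
      (t[np.argmax(vw_smooth > 0.9)] - t[np.argmax(vw_smooth > 0.1)]) / Td)
```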
In order to experimentally validate the model described in equation (1) and the numerical orbit jump modeling, 2 000 experimental runs were launched with several values⁷ of τ_0 and ∆τ (kw = 1.5, τ_0 ∈ [0.2, 1.2], ∆τ ∈ [0.2, 1.5]). Figure 8 shows the corresponding experimental and numerical scatter plots (τ_0, ∆τ), with each point associated with its final orbit after the jump, and compares the experimental and numerical structures of the basins for f_d = 40 Hz and R = 20 kΩ. As shown in Fig. 8, there are three possible behaviors: low-power intra-well orbit (light blue), high-power inter-well orbit (dark blue) and chaos (dark salmon). As an example, for the experimental data in Fig. 8(a), increasing the buckling level from xw to 1.5 xw over a duration of 0.3 T_d, starting at t - t_ref = 0.7 T_d, results in the bistable VEH operating on the high-power inter-well orbit. The ranges of parameter values where the VEH jumps are approximately the same, although more chaos is observed experimentally. This may be attributed to an insufficient waiting time for the nonlinear VEH to reach steady-state conditions in the experimental setup. However, the experimental and numerical basin structures given in Fig. 8 are almost identical, which validates the numerical model of the bistable VEH and the numerical modeling of the orbit jump application. Additionally, Fig. 8 shows the pseudo-periodicity in τ_0 of the inter-well orbit's basin (illustrated by the two basins in the middle of Fig. 8(a,b)). Therefore, since the starting orbit is T_d-periodic, the values of τ_0 can be restricted to a semi-open interval of length 1 without loss of information, which justifies the chosen range of τ_0.
In order to validate the model of the VEH and the effect of the orbit jump strategy, we perform experimental tests around the optimized orbit jump parameters (obtained with the evolutionary strategy algorithm introduced in section 3.2) for f_d = 50 Hz. Figure 9 compares experimental (Fig. 9(a,c)) and numerical (Fig. 9(b,d)) displacement signals and trajectories in the phase plane before, during, and after the application of the orbit jump strategy. As seen in Fig. 9, the experimental results are consistent with the numerical results. The experimental orbits are asymmetric, as shown in Fig. 9(a,c), which can be attributed to mechanical irregularities resulting from the manufacturing process of the bistable VEH. Moreover, the experimental transient just after the jump in Fig. 9(a,c) (in green) shows excitation of higher modes of the VEH prototype due to the quick buckling level variation. The corresponding experimental trajectory in the 3D plane (t, x, ẋ/(xw ω 0)) is presented in Appendix B.
In quantitative terms, the mean harvested power is 26.5 times higher after the orbit jump in Fig. 9, while the experimental invested energy required is equal to 0.65 mJ and can be recovered in 0.21 s. Then, we optimize the orbit jump strategy in the frequency range [30 Hz, 60 Hz] and obtain an optimal triplet (τ_0^opt, ∆τ^opt, k_w^opt) satisfying the criterion S (7) for each driving frequency. Subsequently, we launch experimental maps around each optimal triplet and driving frequency in order to evaluate the robustness of the approach. It is worth mentioning that the experimental maps are defined with 49 parameter values and a variation rate of ±15%. Specifically, we take 7 values in [0.85×τ_0^opt, 1.15×τ_0^opt] for τ_0, 7 values in [0.85×∆τ^opt, 1.15×∆τ^opt] for ∆τ, and kw = k_w^opt. Figure 10 compares the numerically (points) and experimentally (stars) obtained mean harvested power (3) as a function of the driving frequency; the experimental power values come from the experimental orbit jump results. The VEH starts in an intra-well orbit at each driving frequency. Then, the optimal jump is applied, and the power is measured in order to evaluate the inter-well orbit power. Through optimization of the orbit jump strategy, the highest orbit was achieved at each driving frequency both experimentally and numerically, as illustrated in Fig. 10. Differences between experimental and numerical data may result from the mismatch between the numerical model and our vibration harvester prototype. It is worth mentioning that applying an orbit jump strategy always yields a significant increase in power in this frequency range. As an example, for f_d = 55 Hz the experimental mean harvested power of the intra-well orbit is 0.044 mW versus 4.44 mW for the inter-well orbit, leading to a power gain of 100 after a successful orbit jump. Then, as shown in Figs. 8-10, the experimental results are consistent with the numerical results and validate the model and the proposed orbit jump strategy.
Energy harvesting performance analysis
Figure 11 shows the optimal orbit jump parameter combinations (blue points) and successful experimental parameters closest to the optimal (red stars) for each driving frequency. Note that the automated pre-characterization of the relationship between xw and vw allows the experimental determination of the modified buckling level kw xw with good accuracy. As shown in Fig. 11, experimental data with a variation of ±15% around the optimal times parameters and fixed kw = k opt w are in good agreement with the optimized data except for f d = 40 Hz. The discrepancy between experimental and numerical models observed at f d = 40 Hz can be attributed to the sudden change in behavior of the intra-well orbit due to softening nonlinearity of the potential wells for this particular acceleration amplitude (which equals 4 m/s 2 ), as seen in Fig. 10. The experimental success rate associated with tested (τ 0 , ∆τ ) pairs, distributed with a variation of ±15%, is shown in Fig. 12. It is worth noting that despite the relatively large variation around the optimal times parameters, the average experimental success rate is about 48 %, which demonstrates the robustness of the optimized orbit jump strategy. However, the highest inter-well orbit ceases to exist beyond 55 Hz (both numerically and experimentally), resulting in the subharmonic 3 [8] becoming the highest inter-well orbit. Nonetheless, this orbit is challenging to reach and highly unlikely, leading to a decline in success rate between 55 Hz and 60 Hz.
Figure 13 shows invested energy (4) and total harvested energy (5) over 100 cycles as a function of the driving frequency for both successful experimental data and optimal numerical data. The dotted orange curve corresponds to the minimum difference in mechanical energy between the high-power inter-well orbit and the low-power intra-well orbit, ∆E min , whose expression is given by (8).
$$\Delta E_{min} = \min_{\forall t \in [0, T_d[} \left\{ \left[E_p(t) + \tfrac{1}{2} M \dot{x}(t)^2\right]_{\text{inter-well}} - \left[E_p(t) + \tfrac{1}{2} M \dot{x}(t)^2\right]_{\text{intra-well}} \right\} \qquad (8)$$
As shown in Fig. 13, the invested energy required for the orbit jump does not exceed 1 mJ, even experimentally. Moreover, the invested energy associated with the optimal parameters is close to this minimum energy limit, validating the optimization strategy. Note that a portion of the electrical energy injected into the system is currently lost as electrostatic energy in the tuning APA. For example, at f d = 50 Hz, the mechanical energy injected into the system is equal to 0.6 mJ, while the electrostatic energy lost in the tuning APA is 4.15 mJ. As a result, the total invested energy is 4.75 mJ, and the recovery time should not exceed 2 s.
To address this issue, a power electronic converter could be used to store the otherwise lost electrostatic energy in the tuning APA and reintroduce it into the system at the appropriate time, although this approach was not implemented in this study.
As illustrated in Fig. 13, as the driving frequency increases, achieving a high-power orbit becomes more challenging. This leads to an increase in the amplification factor, the invested energy and the total harvested energy over 100 cycles. The evaluation of the invested energy for orbit jumping is a major parameter for analyzing the quality of an orbit jump strategy. Additionally, the recovery time needed to achieve a positive energy balance allows the evaluation of the interest of jumping and the assessment of the cost-effectiveness ratio. Table 3 compares orbit jump strategies not only in terms of these quantities (invested energy and recovery time), but also in terms of whether they are optimized or robust to parameter shifts and whether they were experimentally tested over a wide frequency range. It can be noted that very few strategies in the literature are optimized, and that the only reference where a complete optimization of an orbit jump strategy has been considered (Udani et al. [4]) has only been tested for a single driving frequency, which validates neither its robustness nor the generality of the optimization method. The strategy of Wang et al. requires a high amount of energy, which could be optimized. On the other hand, Huang et al. [27] have defined an innovative strategy that combines two other strategies (buckling level modification and VIE) and is therefore more complex. However, the jump duration is long (90 s), increasing the implementation difficulties and decreasing both the robustness and the efficiency of the strategy, with a long recovery time equal to 120 s.
Huguet et al. [25] introduced the strategy considered in this paper and examined two orbit jump parameters: the starting time of the jump (tested for 4 different values) and the amplification factor of the buckling level (tested for 6 different values). They fixed the ending time as the instant when the mass reaches its maximum displacement. However, using the optimization criterion defined in our study, the results show that the optimal ending time of the jump occurs slightly after the maximum displacement. Nonetheless, their study has the merit of presenting numerous experimental trials, which allowed them to partially evaluate the robustness of the approach through a statistical analysis of the jumps (with a 0% variation around each combination). The evolutionary strategy algorithm, as well as the new optimization criterion proposed in this paper, enable performant orbit jumps, combining the shortest jump duration (8.3 ms), the lowest energy cost (0.6 mJ) and the shortest recovery time (0.1 s), while remaining robust to large parameter shifts (±15% variation).
Conclusion
Due to the existence of low-power orbits in nonlinear VEH dynamics, robust and effective orbit jump strategies are essential to ensure good energy harvesting performance by enabling the transition from low-power to high-power orbits. To achieve this, the orbit jump parameters can be optimized. This paper presents the optimization of an existing orbit jump strategy using an evolutionary strategy algorithm. Thanks to the development of an in-house Python CUDA code compatible with GPU computations, the optimization is greatly accelerated. The experimental results consistently demonstrate that the optimized orbit jump parameters generate high-power inter-well orbits, while maintaining their performance even under potential fluctuations in the environment of the bistable VEH. By setting the experimental amplification factor to its optimal value and the time parameters within a ±15% variation of their optimal values, the robustness of the optimized strategy was demonstrated with an average orbit jump success rate of 48%. Finally, the energy required for the orbit jump does not exceed 1 mJ, even in experimental conditions. The proposed optimization strategy increases the reliability of an orbit jump despite fluctuations in the environment of the VEH. This generic approach can be applied to other types of multi-stable VEHs to design optimized orbit jump strategies. In future work, the optimization of the final buckling level, which would add a fourth orbit jump parameter to the strategy, will be investigated.
Fig. 2: (a) Schematic structure of the bistable VEH. (b) Experimental bistable VEH studied in this article, from [29]; its dynamics are described by (1).
Fig. 3: (a) Mean harvested power P_h as a function of the driving frequency f_d; (b) inter-well (dark blue) and intra-well (light blue) orbit trajectories, their associated basins of attraction and attractors in the dimensionless phase plane (x/xw, ẋ/(xw ω_0)) for f_d = 50 Hz. The denomination "Other" (in gray) regroups all the orbits not indicated in the legend (i.e., sub-harmonic orbits and chaos). The basins of attraction in (b) were obtained after 80 000 resolutions of the ODE system (1).
Fig. 5: Example of a successful orbit jump strategy for f_d = 50 Hz with (τ_0, ∆τ, kw) = (0.46, 1.01, 2.00): (a) displacement signal, (b) trajectories in the phase plane (x, ẋ/(xw ω_0)), (c) evolution of the left stable equilibrium position before (blue), during (orange) and after (green) the application of the orbit jump strategy, and (d) the elastic potential energy curves associated with the two buckling levels.
Fig. 6: Comparison between optimal (a,c) and suboptimal (b,d) displacement signals and trajectories in the phase plane (x, ẋ/(xw ω_0)) for f_d = 50 Hz before (blue), during (orange) and after (green) the application of the orbit jump strategy. Blue points correspond to the beginning of the orbit jump process; triangle up (resp. down) markers refer to the instant when the buckling level increased (resp. returned to its initial value). Optimal orbit jump parameter values: (τ_0^opt, ∆τ^opt, k_w^opt) = (0.23, 0.46, 1.81). Suboptimal orbit jump parameter values: (τ_0^sub, ∆τ^sub, k_w^sub) = (0.46, 1.01, 2.00).
Fig. 7: (a) Experimental setup used to test the optimized orbit jump strategy and (b) its schematic representation.
Fig. 8: Experimental (a) and numerical (b) maps (τ_0, ∆τ) with kw = 1.5, f_d = 40 Hz and R = 20 kΩ.
Fig. 9: Comparison between experimental (a,c) and numerical (b,d) displacement signals and trajectories in the phase plane (x, ẋ/(xw ω_0)) for f_d = 50 Hz before (blue), during (orange) and after (green) the application of the orbit jump strategy. Blue points correspond to the beginning of the orbit jump process; triangle up (resp. down) markers refer to the instant when the buckling level increased (resp. returned to its initial value). Experimental orbit jump parameter values: (τ_0^exp, ∆τ^exp, k_w^exp) = (0.26, 0.44, 1.81). Numerical orbit jump parameter values: (τ_0^num, ∆τ^num, k_w^num) = (0.23, 0.46, 1.81).
Fig. 10: Orbital mean harvested power P_h obtained numerically (points) and experimentally (stars) as a function of the driving frequency f_d for a sinusoidal excitation of amplitude A = 4 m/s² with optimal resistor R = 1/(2 C_p ω_d). The experimental data (stars) were obtained through the implementation of the optimized orbit jump strategy.
Fig. 11: The optimal numerical values (blue points) and the optimal experimental values (red stars) of (a) the amplification coefficient kw, (b) the starting time τ_0 and (c) the orbit jump duration ∆τ for successful jumps, as a function of the driving frequency.
Fig. 13: (a) Invested energy and (b) total harvested energy over 100 cycles for the numerical optimal orbit jump parameters (blue dotted curve) and the experimental orbit jump parameters that allowed a jump (green star-shaped markers), as a function of the driving frequency with optimal load resistor. The dotted orange curve represents the minimum mechanical energy difference (8) between the highest inter-well orbit and the lowest intra-well orbit.
Fig. 15: Experimental trajectory in the (t, x, ẋ/(xw ω_0)) 3D plane for f_d = 50 Hz before (blue), during (orange) and after (green) the application of the orbit jump.
Table 1: Main properties associated with the two groups of orbit jump strategies defined in the current state of the literature.

External Forcing
Strategy | Authors | Parameter or variable modified | Perturbation waveform | Validity range | Energy cost | Recovery time
Hand impulse | Erturk et al. [16] | Velocity | 0.25 s* | Multiple freq. 6-8 Hz | N/A | N/A
Fast Burst Perturbation | Sebald et al. [17] | Voltage | 0.7 s* | Multiple freq. 27.3-29.8 Hz | N/A | 1.5 s*
Impact-induced | Zhou et al. [20] | Velocity | 0.1 s* | Multiple freq. 4-23 Hz | N/A | N/A
Electrical switching | Mallick et al. [19] | Voltage | 0.2 s | Single freq. 70 Hz | 0.563 mJ* | 1 s*
Attractor selection | Udani et al. [4] | Voltage | 2 s | Single freq. 19.8 Hz | 1.21 mJ | 5.66 s

Characteristic Modulation
Strategy | Authors | Parameter or variable modified | Perturbation waveform | Validity range | Energy cost | Recovery time
Negative resistance | Lan et al. [21] | Resistance | 0.1 s | Multiple freq. 9-11 Hz | 0.2 mJ* | 0.535 s
Load perturbation | Wang et al. [22] | Stiffness and damping | 4 s | Single freq. 5.2 Hz | N/A | N/A
Negative resistance | Ushiki et al. [23] | Resistance | 0.9 s* | Single freq. 70 Hz | 35 mJ* | 20 s
Bidirectional Energy Conversion Circuit | Wang et al. [24] | Stiffness and damping | 10.9 s | Single freq. 7.6 Hz | 22 mJ | 120 s
Buckling modification | Huguet et al. [25] | Buckling level | 20 ms* | Multiple freq. 30-70 Hz | 1 mJ* | 1 s
Voltage Inversion Excitation | Yan et al. [26] | Stiffness | 1.5 s | Multiple freq. 48.6-49.5 Hz | 1.43 mJ | 23 s
Adjustment strategy | Huang et al. [27] | Buckling level and voltage | 90 s* | Multiple freq. 35-40 Hz | 2.8 mJ | 120 s

* indicates that the values have been estimated based on the given papers. N/A denotes the absence of data.
Fig. 12: The experimental success rate provided by the maps of ±15% around each optimal time-parameter combination, as a function of the driving frequency.

Table 3: Comparison between the optimized strategy developed in this paper and other previous strategies in the literature. * indicates that the values have been estimated based on the given papers.
Footnotes:
1. This is the effective excitation during the orbit jump, i.e., the FBP applied on top of the ambient excitation (which is harmonic in this study).
2. There are two electromechanical transducers, one for energy harvesting and the other for buckling level tuning.
3. The basins of attraction correspond to the sets of initial conditions converging toward the considered orbit.
4. The attractors correspond to the states (at t = k T_d, for k ∈ ℤ) into which the bistable VEH converges when stabilized.
5. The elongation of the tuning APA leads to a higher level of buckling, resulting in the two equilibrium positions moving further apart from each other.
6. The fitness function is used for evaluating how close a given solution is to the optimum solution.
7. Note that we opted to fix kw and to vary τ_0 and ∆τ because the time parameters are more susceptible to experimental variations, due to the time delays of the board and of the amplifier.
Acknowledgements The authors would like to acknowledge Dr. T. Huguet for the fruitful discussions on orbit jump strategies and nonlinear dynamics.
Funding
This work has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement No 862289.
Data Availability Statement
The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.
Declarations
Conflict of interest
The authors declare that they have no conflict of interest.

Fig. 14: Evolutionary strategy algorithm for optimizing the orbit jump strategy. This flowchart illustrates the different steps that allow the optimal orbit jump parameters to be determined for each driving frequency. Here (τ_0, ∆τ, kw) ∈ D denotes the sequence of individuals for a given population in D.
Figure 14 illustrates the various steps involved in the evolutionary strategy algorithm. The procedure starts with a first random generation of 8 000 orbit jump parameter combinations (i.e., individuals). Then, the orbit jump associated with each set of parameters is simulated, and each individual is evaluated based on its fitness function value (6). This involves considering 7 elements in the neighborhood of each direction (τ_0, ∆τ and kw). That is, we take |I| = 7, so for each individual we launch 7³ = 343 new orbit jump simulations to compute its fitness function value (6). This means that, for each generation, we simulate 2 744 000 orbit jumps using parallel GPU computations. Each individual is then evaluated and selected based on its average total harvested energy (6) over 100 cycles. The 10% best individuals are kept in the next generation, and classical crossover and mutation operations are applied to them in order to complete the population of the next generation.
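A condensed sketch of such an evolutionary strategy loop is given below. It is a scaled-down illustration, not the in-house CUDA implementation: the population size and the toy fitness landscape are placeholders, and `total_energy` stands for the orbit-jump simulation returning E_tot of (5).

```python
# Minimal sketch of an evolutionary strategy over (tau0, dtau, kw) in D, with the
# robust fitness of (6) approximated by averaging over a +/-15% neighborhood grid.
import numpy as np

rng = np.random.default_rng(0)
LOW, HIGH = np.array([0.2, 0.2, 1.0]), np.array([1.2, 1.5, 2.0])   # domain D of (7)

def total_energy(p):
    """Placeholder for E_tot(tau0, dtau, kw) of (5); replace with the VEH simulation."""
    return -np.sum((p - np.array([0.23, 0.46, 1.81]))**2)          # toy landscape

def robust_fitness(p, n=7):
    """Average of total_energy over a +/-15% grid around p (7**3 points), as in (6)."""
    grids = [np.linspace(0.85 * c, 1.15 * c, n) for c in p]
    pts = np.stack(np.meshgrid(*grids), axis=-1).reshape(-1, 3)
    pts = np.clip(pts, LOW, HIGH)
    return np.mean([total_energy(q) for q in pts])

pop = rng.uniform(LOW, HIGH, size=(60, 3))        # small random initial population
for generation in range(20):
    fit = np.array([robust_fitness(p) for p in pop])
    elite = pop[np.argsort(fit)[-6:]]             # keep the 10% best individuals
    parents = elite[rng.integers(0, len(elite), size=(54, 2))]
    children = parents.mean(axis=1) + rng.normal(0, 0.05, size=(54, 3)) * (HIGH - LOW)
    pop = np.vstack([elite, np.clip(children, LOW, HIGH)])

best = pop[np.argmax([robust_fitness(p) for p in pop])]
print("best (tau0, dtau, kw):", np.round(best, 3))
```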
Theoretical Performances Analysis of the Blind Multiuser Detection based on Fluctuations of Correlation Estimators in Multirate CDMA Transmissions

Crépin Nsiala-Nzéza, Roland Gautier (email: [email protected]), Gilles Burel
Introduction
One of the salient features of CDMA-based third-generation (3G) wireless systems is the capability to support transmission of diverse data such as voice, low-resolution video, compressed audio... Since these heterogeneous services produce digital information streams with different data rates, their implementation requires the use of multirate CDMA systems where each user may transmit his data at one among a set of available data rates. An easy way to view the multirate CDMA transmission is to consider the variable spreading length (VSL) technique where all users employ sequences with the same chip period. Moreover, each data rate is tied to the length of the spreading code of each user.
Many blind approaches have been devised to either improve the performance of a CDMA receiver in a multirate multiuser context or reduce its complexity. Some prior knowledge of user parameters, e.g. the signature waveform [START_REF] Roy | Subspace blind adaptive detection for multiuser CDMA[END_REF], the processing gain, the code of a group of active users [START_REF] Wang | Group-blind multiuser detection for uplink CDMA[END_REF], the chip rate [START_REF] Wang | Group-blind multiuser detection for uplink CDMA[END_REF], [START_REF] Haghighat | A subspace scheme for blind user identification in multiuser DS-CDMA[END_REF] is always assumed, but its nature depends on the technique employed.
Thus, we recently proposed a blind multiuser detection scheme requiring no prior knowledge about the transmitter [START_REF] Buzzi | Iterative cyclic subspace tracking for blind adaptive multiuser detection in multirate CDMA systems[END_REF]. Therefore, its performances have to be analyzed in order to evidence its efficiency prior to any further implementation in operational systems, which is of great interest for both military and civil applications.
The remainder of the paper is organized as follows. Section 2 introduces the signal model and assumptions made. Section 3 briefly describes the blind-detection approach, while Section 4 highlights its performances in terms of probability of detection and false alarm. Finally, numerical results are detailed in Section 5, and conclusions are drawn in Section 6.
Signal modeling and assumptions
Let us consider the general case of an asynchronous DS-CDMA system where each user can transmit at one out of S available data rates R_0 < R_1 < ... < R_{S-1}. By denoting N_u^i the number of active users transmitting at R_i and N_u the total number of users, such that N_u = Σ_{i=0}^{S-1} N_u^i, the complex equivalent model of the received signal can be expressed as

$$y(t) = \sum_{i=0}^{S-1} \sum_{n=0}^{N_u^i - 1} \sum_{k=-\infty}^{+\infty} a_{n,i}(k)\, h_{n,i}\!\left(t - k T_{s_i} - T_{d_{n,i}}\right) + b(t), \qquad (1)$$
where h_{n,i}(t) = Σ_{k=0}^{L_i - 1} c_{n,i}(k) p(t - k T_c). In (1), a_{n,i}(k) is the k-th symbol transmitted by the (n,i)-th user, T_{d_{n,i}} is its transmission delay, and h_{n,i}(t) is a virtual filter corresponding to the convolution of all filters of the transmission chain with the spreading sequence {c_{n,i}(k)}_{k=0,...,L_i-1}, where L_i is the spreading factor for the (n,i)-th user. The following assumptions are made:
• Signals are assumed to be independent, centered, noise-unaffected and received with the same power: σ²_{s_{n,i}} = σ²_{s_{0,0}} for all (n, i).
• Because of the VSL technique, the symbol period T_{s_i} for the users transmitting at the rate R_i is tied to the common chip period T_c: T_{s_i} = L_i T_c.
• b(t) is a centered additive white Gaussian noise (AWGN) of variance σ²_b.
• The signal-to-noise ratio (SNR, in dB) at the detector input is negative (signal hidden in the noise).
By denoting ( ) s t the global noise-unaffected received signal, (1) can be rewritten as
$$y(t) = s(t) + b(t) = \sum_{i=0}^{S-1} \sum_{n=0}^{N_u^i - 1} s_{n,i}\!\left(t - T_{d_{n,i}}\right) + b(t). \qquad (2)$$
Blind detection approach
As explained in [Nsiala Nzéza et al.], analyzing the fluctuations of autocorrelation estimators makes it possible to achieve blind multiuser detection. The contributions of the noise and of the signal to the second-order moment of the autocorrelation estimator are successively investigated, as briefly reported hereinafter.
The received signal is first divided into M temporal windows, each of them of duration F T . Then, within each window m , an estimation of its autocorrelation is computed as
$$\hat{R}_{yy}^{(m)}(\tau) = \frac{1}{T_F} \int_0^{T_F} y^{(m)}(t)\, y^{(m)*}(t - \tau)\, dt, \qquad (3)$$
where y^{(m)}(t) is the signal observed over the m-th window. Hence, the second-order moment of the estimated autocorrelation R̂_yy(τ) over the M windows can be expressed as
$$\Phi(\tau) = \hat{E}\!\left\{\left|\hat{R}_{yy}(\tau)\right|^2\right\} = \frac{1}{M} \sum_{m=0}^{M-1} \left|\hat{R}_{yy}^{(m)}(\tau)\right|^2, \qquad (4)$$
where Ê(⋅) denotes the estimated expectation. Φ(τ) thus contains two kinds of contributions, due to the noise-unaffected signals and to the noise, respectively. Moreover, since the fluctuations are computed from many randomly-selected windows, they do not depend on the signals' relative delays T_{d_{n,i}}. First, the fluctuations due to the additive noise alone are uniformly distributed over all values of τ. Depending on the spreading sequence properties, the Multiple Access Interference (MAI) noise generates rather null, or very low, incoherent fluctuations. Since the noise is random, the fluctuations of the autocorrelation estimator due to the noise, denoted Φ_b(τ), are also random. Assuming a receiver filter with a flat frequency response in [-W/2, +W/2] and zero outside, they can be characterized by their mean m_{Φ_b} and standard deviation σ_{Φ_b}:

$$m_{\Phi_b} = \frac{\sigma_b^4}{W T_F}, \qquad \sigma_{\Phi_b} = \frac{\sqrt{2}\,\sigma_b^4}{\sqrt{M}\, W T_F}. \qquad (5)$$
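As an illustration of (3)-(5), the following sketch estimates Φ(τ) from randomly selected windows of a synthetic received signal. The window length, the number of windows and the toy random binary spreading code are assumptions chosen only to make the peaks visible; they are not the parameters used later in the paper.

```python
# Minimal sketch of the fluctuation estimator: windowed autocorrelations (3) averaged as in (4).
import numpy as np

def fluctuations(y, Fe, T_F, M, max_lag, rng=None):
    """Return (lags, Phi) with Phi(tau) = (1/M) * sum_m |R_yy^(m)(tau)|^2."""
    rng = rng or np.random.default_rng()
    N_w = int(T_F * Fe)                              # samples per analysis window
    lags = np.arange(max_lag)
    Phi = np.zeros(max_lag)
    for _ in range(M):                               # M randomly selected windows
        start = rng.integers(0, len(y) - N_w)
        w = y[start:start + N_w]
        for k in lags:                               # autocorrelation estimate at lag k, eq. (3)
            R = np.vdot(w[k:], w[:N_w - k]) / N_w
            Phi[k] += np.abs(R)**2 / M
    return lags, Phi

# Toy example: BPSK symbols spread by a length-31 random binary code, hidden in noise
rng = np.random.default_rng(1)
L, n_sym = 31, 4000
code = rng.choice([-1.0, 1.0], L)
s = np.repeat(rng.choice([-1.0, 1.0], n_sym), L) * np.tile(code, n_sym)
y = s + 1.5 * rng.standard_normal(len(s))            # SNR ~ -3.5 dB
lags, Phi = fluctuations(y, Fe=1.0, T_F=4 * L, M=200, max_lag=3 * L)
floor = Phi[5:]                                       # ignore the lag-0 region
thr = floor.mean() + 3 * floor.std()
print("detected peak lags:", lags[5:][floor > thr])   # expected near multiples of L = 31
```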
Second, considering only the independent, centered and noise-unaffected signals, it was shown that, on average, high amplitudes of the fluctuations Φ_s(τ) of the autocorrelation estimator only appear for delays τ multiple of the symbol periods. Within each group i, the fluctuations can thus be written as

$$\Phi_i(\tau) = m_{\Phi_i} \cdot pgn_{T_{s_i}}(\tau), \qquad (6)$$

where pgn_{T_{s_i}}(τ) = Σ_{k=-∞}^{+∞} δ(τ - k T_{s_i}), with δ(τ) = 1 for τ = 0 and δ(τ) = 0 if τ ≠ 0. Thus, the fluctuations due to the global noise-unaffected signal are

$$\Phi_s(\tau) = \sum_{i=0}^{S-1} \Phi_i(\tau) = \sum_{i=0}^{S-1} m_{\Phi_i} \cdot pgn_{T_{s_i}}(\tau). \qquad (7)$$
Since the signals are assumed to be received with the same power (i.e., σ²_{s_{0,0}}) and T_{s_i} = L_i T_c, the average amplitude m_{Φ_i} of the fluctuations Φ_i(τ) can be written as

$$m_{\Phi_i} = N_u^i\, \sigma_{s_{0,0}}^4\, \frac{T_{s_i}}{T_F} = N_u^i\, \sigma_{s_{0,0}}^4\, \frac{L_i T_c}{T_F}. \qquad (8)$$
Equation ( 8) shows an increase in the average fluctuations amplitude concomitant with that of the number of users. It also indicates that the average fluctuations amplitude is tied to the sequence lengths.
Let us at last compute the standard deviation σ_{Φ_i} of Φ_i within each group i. The hypothesis of noise-unaffected and independent signals leads to

$$\Phi_i(\tau) = \sum_{n=0}^{N_u^i - 1} \Phi_{s_{n,i}}(\tau). \qquad (9)$$
As detailed in [Nsiala Nzéza, Chap. 3, pp. 38-39], since the fluctuations are computed from many independent randomly-selected windows, Φ_{s_{n,i}}(τ) is close to a Gaussian distribution. Consequently, the variance σ²_{Φ_i} is written as

$$\sigma_{\Phi_i}^2 = \sum_{n=0}^{N_u^i - 1} \sigma_{\Phi_{s_{n,i}}}^2, \qquad (10)$$
where σ²_{Φ_{s_{n,i}}} stands for the variance of the fluctuations due to the (n,i)-th signal. Then, from [Burel], the standard deviation σ_{Φ_i} can be expressed as

$$\sigma_{\Phi_i} = \frac{\sigma_{s_{0,0}}^4 \sqrt{2 N_u^i}\, L_i T_c}{\sqrt{M}\, T_F}. \qquad (11)$$
Equation (11) evidences that any increase in the number M of analysis windows lowers the standard deviation. Moreover, (8) shows that the longer the sequence is, the higher the average fluctuation amplitude is. Thus, the fluctuation curve highlights high equispaced peaks whose average spacing corresponds to the estimated symbol period T_{s_i}.
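A minimal sketch of this peak-spacing analysis is given below; the threshold follows the form of (13) introduced in the next section, and the toy fluctuation profile is synthetic.

```python
# Minimal sketch: estimate the symbol period from the spacing of detected fluctuation peaks.
import numpy as np

def estimate_symbol_periods(lags, Phi, alpha=3.0, guard=5):
    """Return the spacings between successive detected peaks (in lag samples)."""
    floor = Phi[guard:]
    thr = floor.mean() + alpha * floor.std()        # threshold of the form (13)
    peaks = lags[guard:][floor > thr]
    if len(peaks) < 2:
        return np.array([])
    return np.diff(peaks)                            # spacings ~ multiples of T_s

# Toy profile with peaks every 31 lags (one group) on a noisy floor
rng = np.random.default_rng(2)
lags = np.arange(200)
Phi = 0.1 + 0.01 * rng.standard_normal(200)
Phi[31::31] += 0.3
print("estimated spacings:", estimate_symbol_periods(lags, Phi))
```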
Therefore, standards can be differentiated, as in [Williams et al.]. It also ensures that the blind synchronization process recently proposed in [Nsiala Nzéza et al.], through which the number N_u^i of active users is determined, can be performed. In addition, let us denote by ρ the SNR at the detector input (denoted SNR_in, in dB). The ratio of (8) to (5b), noted Γ_{i,b}, represents the SNR at the detector output (noted SNR_out^{(i,b)}), as explained in [Burel]:
$$\Gamma_{i,b} = \rho^2\, N_u^i\, L_i T_c\, W \sqrt{\frac{M}{2}}. \qquad (12)$$
Hence, (12) proves that increasing the number M of windows improves the detection, but at the cost of a larger computing time.
Performances analysis
To determine whether a spread spectrum signal is hidden in the noise, a detection threshold η is taken from [Nsiala Nzéza et al.] as

$$\eta = m_{\Phi_b} + \alpha \cdot \sigma_{\Phi_b}, \qquad (13)$$
where α > 0. As in the case of radar detection, if this threshold is low, then more targets will be detected at the expense of an increased number of false alarms. Conversely, if the threshold is high, then fewer targets will be detected, but the number of false alarms will also be low. In most radar detectors, the threshold is set in order to achieve a given level of false alarm.
Study within the group i, only considering the AWGN
The number M of independent randomly-selected analysis windows is assumed to be large enough. Hence, for values of τ multiple of T_{s_i}, Φ_i(τ) follows a Gaussian distribution with mean m_{Φ_i} and variance σ²_{Φ_i}. Its probability density p(Φ_i) is then given by
$$p(\Phi_i) = \frac{1}{\sigma_{\Phi_i}\sqrt{2\pi}} \exp\left\{-\frac{1}{2}\left(\frac{\Phi_i - m_{\Phi_i}}{\sigma_{\Phi_i}}\right)^2\right\}. \qquad (14)$$
Probability of detection
Let us set as 𝒫_D^i the probability of detecting fluctuation peaks due to signals belonging to group i, i.e., the probability that the average fluctuation amplitude exceeds the threshold η for a given τ multiple of T_{s_i}. From (13), we get
$$\mathcal{P}_D^i = P(\Phi_i > \eta) = \int_{\eta}^{+\infty} p(\Phi)\, d\Phi. \qquad (15)$$
Equation ( 15) can be rewritten using the complementary error function erfc as
$$\mathcal{P}_D^i = \frac{1}{2}\,\mathrm{erfc}\!\left(\frac{\eta - m_{\Phi_i}}{\sqrt{2}\,\sigma_{\Phi_i}}\right). \qquad (16)$$
Then, introducing ( 8), ( 11) and ( 13) in ( 16) leads to
$$\mathcal{P}_D^i = \frac{1}{2}\,\mathrm{erfc}\!\left\{\frac{\sqrt{M N_u^i}}{2}\left[\frac{1 + \alpha\sqrt{\frac{2}{M}}}{\rho^2 N_u^i L_i T_c W} - 1\right]\right\}. \qquad (17)$$
At this point, let us set as K = dim(Φ(τ)) the number of points of the global fluctuations Φ(τ) wished to be observed, which depends on M, T_F and F_e, the sampling frequency. It is also useful and important to recall that the detector is designed to work in an interactive way with an operator. Hence, one must choose a sufficient value of K in order to observe, in the case of a good detection, at least 3 or 4 fluctuation peaks due to signals transmitting at the lowest R_i, thus spread with the largest factor L_i.
Moreover, since the symbols are assumed independent, the occurrences of a fluctuation peak for the different values of τ multiple of T_{s_i} can also be considered as independent. Consequently, the global probability of detection, set as P_D^i, must take into account the number N_peak of fluctuation peaks. Therefore, from (17), we get
$$P_D^i = \left\{\frac{1}{2}\,\mathrm{erfc}\!\left[\frac{\sqrt{M N_u^i}}{2}\left(\frac{1 + \alpha\sqrt{\frac{2}{M}}}{\rho^2 N_u^i L_i T_c W} - 1\right)\right]\right\}^{N_{peak}}. \qquad (18)$$
Probability of false alarm
When focusing on a single fluctuation point, its probability P_b^{(1)} of being above the threshold is
$$P_b^{(1)} = \frac{1}{2}\,\mathrm{erfc}\!\left(\frac{\eta - m_{\Phi_b}}{\sqrt{2}\,\sigma_{\Phi_b}}\right) = \frac{1}{2}\,\mathrm{erfc}\!\left(\frac{\alpha}{\sqrt{2}}\right). \qquad (19)$$
Since false alarms occur when the noise fluctuations are above the threshold at any point τ, the false-alarm probability, denoted P_fa, is defined as
$$P_{fa} = 1 - \left(1 - P_b^{(1)}\right)^K, \qquad (20)$$
where K takes into account all the points of the global fluctuations. Then, (20) can be simplified as follows:
$$P_{fa} = 1 - \left(1 - \frac{1}{2}\,\mathrm{erfc}\!\left(\frac{\alpha}{\sqrt{2}}\right)\right)^K. \qquad (21)$$
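In practice, (21) can be inverted to set α for a prescribed false-alarm probability; a minimal sketch is given below, using K = 889 as in the numerical results later in the paper.

```python
# Minimal sketch: choose the threshold coefficient alpha for a target P_fa, by inverting (21).
import numpy as np
from scipy.special import erfc, erfcinv

def alpha_for_pfa(P_fa, K):
    """Solve 1 - (1 - 0.5*erfc(alpha/sqrt(2)))**K = P_fa for alpha."""
    p1 = 1.0 - (1.0 - P_fa) ** (1.0 / K)            # per-point exceedance probability (19)
    return np.sqrt(2.0) * erfcinv(2.0 * p1)

K = 889
for target in (1e-1, 1e-2, 1e-3):
    a = alpha_for_pfa(target, K)
    check = 1 - (1 - 0.5 * erfc(a / np.sqrt(2))) ** K
    print(f"P_fa = {target:.0e} -> alpha = {a:.2f} (check: {check:.1e})")
```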
Extension to the multirate multiuser case
When two groups i and j of signals (s_{n,i} and s_{n,j}) interfere, it is improbable, even impossible, that fluctuation peaks due to s_{n,i} appear where peaks due to s_{n,j} are expected, since the symbol periods are different. However, if that occurred, it would only reinforce the probability of detecting the fluctuation peaks due to s_{n,j}.
Probability of detection
When there are several groups of signals transmitting at different rates, for given i , the probability i P / D B of detecting the fluctuations peaks due to signals within the group i depends on fluctuations, set as Φ B , due to both AWGN and MAI noise.
Since the symbol periods are different, the fluctuations Φ_B have to be considered at the places where they are not supposed to exhibit peaks, because there they act similarly to noise. Hence, they can be characterized by their mean m_{Φ_B}, given by
$$m_{\Phi_B} = \frac{\left(\sigma_b^2 + (\beta - 1)\, N_u^i\, \sigma_{s_{0,0}}^2\right)^2}{W T_F} = \frac{\sigma_{s_{0,0}}^4 \left(\frac{1}{\rho} + (\beta - 1)\, N_u^i\right)^2}{W T_F}, \qquad (22)$$

where β = N_u/N_u^i is the ratio between the total number of users transmitting at all rates and those transmitting at the particular rate R_i, and (β - 1) N_u^i σ²_{s_{0,0}} is the power of the signals inside the other groups. Their standard deviation σ_{Φ_B} can be expressed as
$$\sigma_{\Phi_B} = \frac{\sqrt{2}\left(\sigma_b^2 + (\beta - 1)\, N_u^i\, \sigma_{s_{0,0}}^2\right)^2}{\sqrt{M}\, W T_F} = \frac{\sqrt{2}\,\sigma_{s_{0,0}}^4 \left(\frac{1}{\rho} + (\beta - 1)\, N_u^i\right)^2}{\sqrt{M}\, W T_F}. \qquad (23)$$
Then, as in (13), a new detection threshold η', taking into account (22) and (23), is defined as

$$\eta' = m_{\Phi_B} + \alpha \cdot \sigma_{\Phi_B}. \qquad (24)$$
Since the signals are assumed to be received with the same power, using (22), (23) and (24) leads to

$$\eta' = \frac{\sigma_{s_{0,0}}^4 \left(\frac{1}{\rho} + (\beta - 1)\, N_u^i\right)^2}{W T_F} \left(1 + \alpha\sqrt{\frac{2}{M}}\right). \qquad (25)$$
Hence, focusing on a peak, the probability 𝒫_{D/B}^i of detecting fluctuations due to signals in group i is

$$\mathcal{P}_{D/B}^i = \frac{1}{2}\,\mathrm{erfc}\!\left(\frac{\eta' - m_{\Phi_i}}{\sqrt{2}\,\sigma_{\Phi_i}}\right), \qquad (26)$$
which can be expressed using (8), (11) and (25) as

$$\mathcal{P}_{D/B}^i = \frac{1}{2}\,\mathrm{erfc}\!\left\{\frac{\sqrt{M N_u^i}}{2}\left[\frac{\left(1 + (\beta - 1) N_u^i \rho\right)^2 \left(1 + \alpha\sqrt{\frac{2}{M}}\right)}{\rho^2 N_u^i L_i T_c W} - 1\right]\right\}. \qquad (27)$$
Finally, as in (18), the global probability of detection, set as P_{D/B}^i, taking into account the number N_peak of fluctuation peaks, is given by

$$P_{D/B}^i = \left(\mathcal{P}_{D/B}^i\right)^{N_{peak}}. \qquad (28)$$
Probability of false alarm
If we first focus on a single fluctuation point, and since the false alarm is due to the fluctuations of the global noise (additive noise plus MAI noise), its probability P_B^{(1)} of being above the threshold is
$$P_B^{(1)} = \frac{1}{2}\,\mathrm{erfc}\!\left(\frac{\eta' - m_{\Phi_B}}{\sqrt{2}\,\sigma_{\Phi_B}}\right) = \frac{1}{2}\,\mathrm{erfc}\!\left(\frac{\alpha}{\sqrt{2}}\right). \qquad (29)$$
Thus, as previously detailed in the multiuser single-rate case, the probability of false alarm is the same as in (21). Finally, as in (12), let us set as Γ_{i,B} the ratio of (8) to (23) (noted SNR_out^{(i,B)}), which stands for the SNR at the detector output:
$$\Gamma_{i,B} = \frac{\rho^2\, N_u^i\, L_i T_c\, W \sqrt{M/2}}{\left(1 + (\beta - 1)\, N_u^i\, \rho\right)^2}. \qquad (30)$$
Γ_{i,b} and Γ_{i,B} are close for low SNRs, or when the total number N_u of users is equal to N_u^i. However, for high SNRs, Γ_{i,b} increases with ρ², while Γ_{i,B} is bounded as

$$\Gamma_{i,B} \leq \max\left(\Gamma_{i,B}\right) = \frac{\sqrt{M/2}\, L_i T_c\, W}{(\beta - 1)^2\, N_u^i}. \qquad (31)$$
Moreover, for high SNRs, the ratio (30) to (12) leads to (in dB)
$$SNR_{out}^{(i,B)} = SNR_{out}^{(i,b)} - 20 \log_{10}\!\left(1 + (\beta - 1)\, N_u^i\, \rho\right). \qquad (32)$$
Equation (32) shows a loss of 20 log₁₀(1 + (β - 1) N_u^i ρ) dB at the detector output in the multiuser multirate case compared to the single-rate one. However, the same performances can be achieved by increasing, for example, M, as also shown by (31), but at the expense of a higher computation time.
Let us set as Δ(m_{Φ_i}, η) the ratio of (8) to (13), and as Δ'(m_{Φ_i}, η') the ratio of (8) to (25). Then, we get
$$\Delta(m_{\Phi_i}, \eta) = \frac{\rho^2\, N_u^i\, L_i T_c\, W}{1 + \alpha\sqrt{\frac{2}{M}}}, \qquad (33)$$
$$\Delta'(m_{\Phi_i}, \eta') = \frac{\rho^2\, N_u^i\, L_i T_c\, W}{\left(1 + (\beta - 1)\, N_u^i\, \rho\right)^2 \left(1 + \alpha\sqrt{\frac{2}{M}}\right)}. \qquad (34)$$
In addition, if the number of analysis windows M is chosen large enough (i.e., M ≥ 2α²), (33) and (34) reduce to

$$\tilde{\Delta}(m_{\Phi_i}, \eta) = \rho^2\, N_u^i\, L_i T_c\, W, \qquad (35)$$
$$\tilde{\Delta}'(m_{\Phi_i}, \eta') = \frac{\rho^2\, N_u^i\, L_i T_c\, W}{\left(1 + (\beta - 1)\, N_u^i\, \rho\right)^2}. \qquad (36)$$
Moreover, in this case, the threshold (25) becomes

$$\tilde{\eta}' = \frac{\sigma_{s_{0,0}}^4 \left(\frac{1}{\rho} + (\beta - 1)\, N_u^i\right)^2}{W T_F}. \qquad (37)$$
Then, the following equations give the approximations of probabilities of detection ( 18) and (28).
$$\tilde{P}_D^i = \left\{\frac{1}{2}\,\mathrm{erfc}\!\left[\frac{\sqrt{M N_u^i}}{2}\left(\frac{1}{\rho^2 N_u^i L_i T_c W} - 1\right)\right]\right\}^{N_{peak}}, \qquad (38)$$
$$\tilde{P}_{D/B}^i = \left\{\frac{1}{2}\,\mathrm{erfc}\!\left[\frac{\sqrt{M N_u^i}}{2}\left(\frac{\left(1 + (\beta - 1) N_u^i \rho\right)^2}{\rho^2 N_u^i L_i T_c W} - 1\right)\right]\right\}^{N_{peak}}. \qquad (39)$$
It is straightforward to check that (36) and (39) are equivalent to (35) and (38), respectively, by setting β = 1, which corresponds to the case N_u = N_u^i, i.e., when all users transmit at the single rate R_i. Moreover, (38) and (39) exhibit a similar asymptotic behavior. Thus, whatever the MAI noise power is, the probability of detection can be improved simply by increasing the number of windows M. However, this depends on both the available computation power and the time allocated for the detection.
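The following sketch evaluates (16) and (18) numerically from the moment expressions (5), (8), (11) and the threshold (13) as reconstructed above; the parameter values are arbitrary and only meant to reproduce the qualitative trends (the detection probability increases with M and with L_i).

```python
# Minimal sketch of the theoretical detection probability, from (16) and (18).
import numpy as np
from scipy.special import erfc

def p_detection(snr_db, M, L_i, N_u_i, Tc, W, alpha=3.0, N_peak=3, sigma_b=1.0):
    rho = 10 ** (snr_db / 10)                        # input SNR (linear)
    sigma_s4 = (rho * sigma_b**2) ** 2               # sigma_{s00}^4
    T_F = 1.0                                        # arbitrary: cancels in the erfc argument
    m_b = sigma_b**4 / (W * T_F)                     # noise fluctuation mean, (5)
    s_b = np.sqrt(2.0 / M) * m_b                     # noise fluctuation std, (5)
    m_i = N_u_i * sigma_s4 * L_i * Tc / T_F          # signal fluctuation mean, (8)
    s_i = np.sqrt(2.0 * N_u_i / M) * sigma_s4 * L_i * Tc / T_F   # signal fluctuation std, (11)
    eta = m_b + alpha * s_b                          # detection threshold, (13)
    p_peak = 0.5 * erfc((eta - m_i) / (np.sqrt(2) * s_i))        # per-peak probability, (16)
    return p_peak ** N_peak                          # global probability, (18)

for M in (50, 200, 800):
    print(f"M = {M:4d}:",
          [round(p_detection(snr, M, L_i=127, N_u_i=5, Tc=1.0, W=1.0), 3)
           for snr in (-15, -13, -12)])
```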
Numerical results
Detectors parameters setting
Let us set as T_s the duration of the global intercepted signal and F_e the sampling frequency at the detector input. It ensues that the total number N_e of samples of the signal to analyze and the number M of analysis windows can be deduced as

$$N_e = \lfloor T_s F_e \rfloor, \qquad M = \left\lfloor \frac{N_e}{T_F F_e} \right\rfloor, \qquad (40)$$
where ⌊·⌋ stands for the integer part. Let us also recall that the average spacing between fluctuation peaks gives an estimate of the symbol periods inside each group i. Consequently, since the detector is designed to work in an interactive way with an operator, some parameters can be set according to the number of peaks one wishes to observe. Without detailing the calculations, the following equations give K and N_peak^i:

$$K = \left\lfloor \gamma\, L_{max}\, \frac{F_e}{F_c} \right\rfloor, \qquad (41)$$
$$N_{peak}^i = \left\lfloor \gamma\, \frac{L_{max}}{L_i} \right\rfloor, \qquad (42)$$

where L_max = max_{i=0,...,S-1} L_i, γ is a real coefficient to be adjusted in order to obtain the minimum number of peaks wished for the users whose signals are spread with the processing gain L_max, and N_peak^i stands for the minimum number of fluctuation peaks within each group i. Moreover, since the signal spectrum must be contained in the detector's ADC (Analog-to-Digital Converter) bandwidth, W has to satisfy F_c ≤ W ≤ F_e. At last, for signals transmitting at R_i, the number of symbols contained within an analysis window is given by T_F/T_{s_i}.
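A minimal sketch of this interactive parameter setting, based on (41)-(42) as written above, is given below; with F_e = F_c and γ = 7, it reproduces the value K = 889 used in the next subsection.

```python
# Minimal sketch of the detector parameter setting (41)-(42).
import math

def detector_settings(Fe, Fc, L_list, gamma):
    L_max = max(L_list)
    K = math.floor(gamma * L_max * Fe / Fc)                        # number of fluctuation points, (41)
    N_peak = {L: math.floor(gamma * L_max / L) for L in L_list}    # minimum peaks per group, (42)
    return K, N_peak

K, N_peak = detector_settings(Fe=1.0, Fc=1.0, L_list=[31, 127], gamma=7.0)
print("K =", K, "| minimum peaks per group:", N_peak)
```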
Performances analysis
Figure 2 illustrates the probability of false alarm P_fa according to the number K of analysis points, for different values of α. It evidences that any increase in the threshold concomitantly with K induces a decrease of P_fa. However, that also lowers the probability of detection. Thus, α = 3 and K = 889 seem to be a good trade-off between P_fa and the probability of detection. In addition, when the threshold increases, the best performances are achieved with the largest sequence lengths, as highlighted in Figure 7 and Figure 8. For larger thresholds, performances can be improved either by increasing the number of analysis windows, or by first detecting the fluctuation peaks corresponding to the largest spreading code lengths, as shown in Figure 13 and Figure 14. Let us note that one could also adjust the other parameters of the detection, since the detection method described in this paper can be performed in an interactive way.
Let us also recall the main notations and assumptions. The symbol period of the users transmitting at the rate $R_i$ is tied to the common chip period $T_c$, and the received signal is corrupted by a centered additive white Gaussian noise (AWGN) of variance $\sigma_b^2$. Since fluctuations are computed from many randomly-selected windows, they do not depend on the signals relative delays $d_{i,n}T$. The fluctuations due to the additive noise are uniformly distributed over all values of $\tau$, whereas, depending on the spreading sequence properties, the Multiple Access Interference (MAI) noise generates rather null, or very low, incoherent fluctuations. Since the noise is random, the fluctuations of the autocorrelation estimator, denoted $\Phi_b(\tau)$, are also random; assuming a receiver filter with a flat frequency response in $[-W/2,+W/2]$ and zero outside, they can be characterized by their mean $m_{\Phi_b}$ and standard deviation $\sigma_{\Phi_b}$. An estimation of the number $N_u$ of active users can then be performed. In addition, let us denote $\rho$ the SNR at the detector input (noted $SNR_{in}$, in dB), while the ratio of (8) to (5), noted $\Gamma_{b,i}$, represents the SNR at the detector output (noted $SNR(b,i)_{out}$), as explained in [START_REF] Burel | Detection of spread spectrum transmission using fluctuations of correlation estimators[END_REF]. $K$ is the number of points of the global fluctuations, $\beta = N_u/N_u^i$ is the ratio between the total number of users transmitting at all rates and those transmitting at the particular rate $R_i$, and the standard deviation $\sigma_{\Phi_B}$ accounts for the power of the signals inside the other groups. Both output SNRs are close for low SNRs or when the total number $N_u$ of users is equal to $N_u^i$; for high SNRs, however, $\Gamma_{b,i}$ becomes larger. At last, $N_{peak}^i$ stands for the minimum number of fluctuations peaks within each group $i$; since the signal spectrum must be contained in the detector ADC (Analog to Digital Converter) bandwidth, $W$ has to satisfy $F_c \leq W \leq F_e$, and, for signals transmitting at $R_i$, the number of symbols $s_i$ contained within an analysis window follows from the window duration.
Figure 1. Fluctuations, obtained with processing gains $L_0 = 31$ and $L_1 = 127$ (complex Gold sequences) and $\alpha = 3$.

Figure 2. Probability of false alarm according to $K$ for different thresholds.

Figures 3 and 4. Ratio of (8) to (13), in dB.

Figures 5 and 6. Ratio of (8) to (25), in dB.

Figures 7 and 8. Influence of the threshold and of the sequence lengths on the performances.

Figure 9 and Figure 10 illustrate the probability of detection $P_{\mathcal{D}}^i$ according to $M$ in the multiuser single rate case. They evidence that any increase in the number $M$ of analysis windows induces an increase of $P_{\mathcal{D}}^i$. Moreover, for larger $M$, i.e., $M \geq 2\alpha^2$, the approximated probability curves are very close to the real ones, in agreement with the theoretical analysis. In addition, the larger the sequence length is, the higher $P_{\mathcal{D}}^i$ is.

Figure 11 and Figure 12 depict the probability of detection $P_{\mathcal{D}/\mathcal{B}}^i$ in the multiuser multirate case. They first show that the approximated probability curves are very close to the real ones. Second, for a fixed threshold, the probability of detection increases concomitantly with the sequence lengths. Moreover, any increase in the number $M$ of analysis windows improves the probability of detection.

Figures 13 and 14. Performances for larger thresholds, improved either by increasing the number of analysis windows or by detecting first the fluctuations peaks corresponding to the largest spreading code lengths.
At last, Figure 17 shows that, despite the increase in the number of active users, the $SNR(B,i)_{out}$ can be strongly improved by focusing first on the fluctuations peaks due to signals spread with the largest sequence lengths.

Conclusion

We have evidenced the efficiency of the blind multiuser multirate detection method based on fluctuations of correlation estimators through its theoretical analysis. Assuming a large number of independent randomly-selected analysis windows, we have expressed the second-order moments of the fluctuations due to both signals and noise. Then, we have derived the probabilities of detection in both the multirate and single rate cases.

Moreover, the analysis of the detection probabilities, confirmed by numerical results, has proved that very good performances in terms of probability of detection and false alarm can be achieved, even at very low SNRs or with many interfering users.

Finally, we demonstrated that performances could be significantly improved just by increasing the number of analysis windows, but at the expense of a higher computation time.

Acknowledgment

This work was partially supported by the Brittany Region (France).
00410195 | en | [
"math.math-mp",
"phys.mphy"
] | 2024/03/04 16:41:20 | 2009 | https://hal.science/hal-00410195/file/ART-18082009.pdf | Jean-Marc Oury
Bruno Heintz
The Absolute Relativity Theory
any natural transformation in ℵ corresponds to a true physical phenomenon, and reciprocally. Taken together, those principles lead one to think that physics should be the description of the structures in ℵ as seen through the point of view functor. This is the aim of our first section, which proceeds in twelve successive steps that can be summarized as follows.
Since Hopf algebras can be classified according to their primitive part, the well known classification of real Lie algebras leads to determining a set of compatible ones, the representations of which could correspond to usual particles. The characterization of each type of these real Lie algebras through its Cartan subalgebra leads to associating with each type a specific space-time that is linked to the others by the usual embedding of algebras.
Two mathematical facts then play a key role. The first one is a specific property of the endomorphisms of sl n -algebras, seen as acting on those Cartan subalgebras that define the different space-times: they have a specific order-two symmetric representation that can be identified with the Lagrangians of physics. The second one is the fact that the point of view functor, being contravariant, induces the change to dual Hopf algebras, which have infinite dimensional polynomial-type representations. Any object is represented in those algebras as inducing a creation/annihilation operator that generates the passing of its time. This mathematical fact corresponds globally to the so-called "second quantization" introduced by Quantum Field Theory. The use of Yangian representations gives a useful tool for representing this operator through its infinite dimensional generator J.
The capacity to algebraically represent Lagrangians and the "passing of the time" leads naturally to define physical observers as objects in ℵ together with a well defined Lagrangian and passing of time. The identification with usual physics can now be made.
Euler-Lagrange equations first appear as the expression, in our quantized context, of the monodromy of the Knizhnik-Zamolodchikov equations. The well-known relations between Lie algebras, together with the representation of the passing of time, give a computation process that should permit one to calculate, from only one measured datum (the fine structure constant α), the characteristics of the particles (such as their masses) as identified by contemporary physics.
The theory also predicts the existence of other, as yet unidentified, particles that could give new interpretations of the "dark matter" and the "dark energy". All those computations may be seen as a new theory that explains why and how matter appears in a well defined quantized way: we propose to call this new branch of physics the Mass Quantification Theory.
The first section finally mentions that if one wants to represent space-time from the point of view of a physical observer as a Lorentzian manifold, the identification of local monodromies on this manifold with the above KZ-monodromy applied to our algebraic Lagrangians gives exactly the Einstein equation, but now its right-hand side has also found a mathematical interpretation. Since it comes from the quantized part of physics, this interpretation could be seen as what introduces General Relativity in the realm of quantum physics, i.e. as the unification of the two.
The second section of this paper is dedicated to a more mathematical presentation of the foundations of the Absolute Relativity Theory.
The third section contains the first computations that can be obtained from the Mass Quantification Theory. The almost perfect concordance of the results obtained by pure calculations with the best experimental values can be seen as promising for future progress.
1 Introduction to the Absolute Relativity Theory
1.1 The status of space-time in physics
By refuting the old Aristotelian distinction between to be "at rest" or "in motion", the classical relativity principle asserted that "motion" can only be defined relatively to Galilean "observers", which presupposes the existence of a Newtonian "space" that defines those observers and contains observable "objects". Three centuries later, Einstein's special relativity theory replaced Galilean observers by Lorentzian ones and extended space to space-time, without altering the universal character of the latter.
Only the General Relativity theory began to inverse the factors by stating that space-time does not contain the observable "objects" but is defined by them in an interactive process encoded by Einstein's equation that links Ricci curvature tensors of space-time seen as a Lorentzian manifold with the distribution of energy (or masses) induced by those objects. But the price paid for this reversal was quite high since the use of Ricci curvature tensors prevented the theory from describing internally asymmetric phenomena. So, until recently, general relativity theory has remained an isolated branch of contemporary physics, mainly dedicated to the study of gravitational phenomena.
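For reference, the equation referred to here is, in its standard form (cosmological term omitted),

$$R_{\mu\nu}-\frac{1}{2}R\,g_{\mu\nu}=\frac{8\pi G}{c^{4}}\,T_{\mu\nu},$$

where the left-hand side is built from the Ricci curvature of the Lorentzian metric and the right-hand side is the stress-energy tensor encoding the distribution of energy and masses.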
The second branch of contemporary physics, born with Schrödinger, Heisenberg and Dirac, kept asserting the intrinsic "existence" of a space-time, thought as "the referential of the laboratory of the observer" on which Hilbert spaces of C-valued functions are defined. Only later, with the emergence of quantum electrodynamics (QED) and quantum fields theory (QFT), systematic efforts were made in order to reverse the factors by assuming for instance that "observable" could also be defined as elements of abstract algebras, and that "states of nature" may be seen as R-valued operators on such algebraically predefined "observable".
Finally, the status of space-time in string theories is even more astonishing since its existence is a prerequisite for each of them, but its dimension and folded structure differ from one theory to the other. So, despite many crucial discussions about space-time, string theories do not really challenge its intrinsic existence either 2. Thus, the modern and contemporary idea of the intrinsic existence of some (generalized) space-time could well be the Gordian knot that has been blocking the path to the unification of physics for one century.
Since this idea amounts to representing the space-time perceived by the set of observers we belong to as universal, we propose to refer it as the egocentric postulate.
Unfortunately, since this egocentric postulate always stands at the very beginning of physical theories, refuting it immediately leads to the question of the foundations of the construction we intend to build.
Observers, observables, and physical phenomena
The theory presented here answers this question by noticing that the first step in the construction of a physical theory should be to define a representation of the observation process itself, which means a definition of observers and observables together with a description of their relations.
The algebraic approach suggests to treat observers and observables on the same footing by seeing them as objects of an appropriate category, that we propose to call the preuniverse ℵ, and defining the point of view of an object a (observer) on an other object b (observable) as the usual set of arrows from b to a: hom(b, a). Any object will thus be both an observer and an observable.
We will call the usual contravariant functor hom(., a) the point of view functor of a, and any functor F equivalent in ℵ to hom(., a) will be said to be represented by a. The consequences of the contravariance of the point of view functor cannot be overstated since any object, as an observer, belongs to the preuniverse ℵ, but, as an observable, it belongs to the opposite category ℵ opp defined by "reversing the arrows". In other words, from the very first algebraic foundations of the theory, physical objects have to be described with a "double nature", first as observers in ℵ, second, as they are seen from other objects, namely objects in ℵ opp .
By considering then some internal consistency conditions (mainly requiring appropriate "diagrams of arrows" to be commutative) and completeness conditions (mainly requiring the preuniverse to have "enough" objects to represent "composed" objects and point of view functors), we will precise the structure of the preuniverse ℵ that will appear as a well defined category of representation functors ρ A,V (or A -bimodules) from various complex Hopf algebras A to finite dimensional bimodules V .
ℵ opp is then the dual category ℵ * , that is the category of (non necessary finite dimensional) bimodules over various dual algebra A * (in the sense of the restricted Hopf duality). Both algebras A and A * fundamentally differ : for instance, non trivial A-modules we will consider have a greater than one finite dimension, while A * -modules are either one-or infinite-dimensional. Therefore, the above "double nature" of objects will have tremendous consequences 3 .
Hopf algebras are generally not cocommutative. This means that their dual cannot be seen as a space of functions, and thus that our categorial approach leads directly to so-called "quantized" (or "deformed") situations.
We will get in fact two types of quantization, the first one, associated with the non cocommutativity of A, will globally correspond to the one of quantum mechanics, the second, more subtle, associated with the non cocommutativity of A * , will generate creation/annihilation operators that correspond to the "second quantization" introduced by the QFT.
Let us emphasize that those quantizations do not appear in cocommutative algebras that therefore cannot be seen as basic cases, but as degenerate ones, which explains the devastating consequences of the egocentric postulate.
By working indeed on operators on an Hilbert space of functions, QFT is compelled to use "perturbational methods" in order to generate quantized situations, instead of representing them directly, as we will do below, by noticing that such quantized situations intrinsically induce the existence of creation/annihilation operators that, in some mathematical sense we describe precisely, produce at each present instant the immediate future of particles and associated space-time.
Finally, a physical theory has to be able to be confronted with experimental results. This implies that the theory includes a representation of what the universe we perceive is, which we propose to refer to as made of true physical objects and true physical phenomena, and that it describes the impact of the perception process from the point of view of the observers we are. Considering our purpose, we will not view them as embedded in any given space-time, nor as having any a priori given proper time.

3 Representations of Hopf algebras are bimodules that reflect both the algebra and coalgebra structures of the Hopf algebra (including their compatibility). Nevertheless, the usual way to deal with coalgebraic structures is to see them from the dual point of view, i.e. as an algebraic structure on the dual Hopf algebra. We will proceed accordingly, and correlatively use the word "module" over one of those algebras, instead of speaking of "bimodule" and specifying which of the algebraic or coalgebraic structures we refer to.
To this aim, we will postulate that all the true physical objects are representation functors ρ A,V for appropriate algebras A and modules V that we will precise.
We will then use two (and only two) principles to characterize true physical phenomena, namely any natural transformation between representation functors induces a true physical phenomenon, and, reciprocally, any physical phenomenon can be seen in the theory as induced by a natural transformation.
Since those principles will appear as the ultimate extensions of Einstein's relativity and equivalence principles, we propose to call this new theory the Absolute Relativity Theory (ART), and to refer to these two principles as the ART principles.
Finally, as predicted by contemporary theories, we will represent ourselves as made of objects we will identify with electrons, and nucleons ( made of quarks linked by strong interaction, which is not a trivial fact since it implies, among others, that there are objects and interactions that arise from algebras bigger than the ones we belong to : we necessarily perceive them not as they are, but only through "ghost effects" as representations of the particles we are made of ). The weak interaction will appear as belonging to this category, which explains why its bosons do appear to us as massive, and induces the existence of an energy that we do not perceive as it is, and that therefore could be part of so-called "dark energy".
In the ART framework, the way the observables do appear to observers does not only depend on their relative positions and motions, but also on the type of each observer.
Here is the way we will follow.
A short description of the Absolute Relativity Theory
The next subsection of this paper gives a tentative twelve steps presentation of the main foundations and results of ART.
We will first precise the definition and characteristics of the above preuniverse ℵ, and see how a first classification of its "objects" and "morphisms" leads to a first classification of particles and interactions as associated with some specific real form of simple finite dimensional complex algebras. Ultimately, "our perceived" space-time itself will arise in a very natural way from those algebraic foundations.
A specific emphasis has to be made on the algebraic meaning of the well known Lagrangians, and on the origin of the "passing of time" phenomenon.
The first will appear as the consequence of a very specific property of the representations of the algebras sl n ⊗ sl n ( with n > 2) that admit, as a representation of sl n , a trivial component that we will call the free part of the Lagrangian and a non trivial symmetric component that we will refer to as its interactive part.
The free part defines an invariant bilinear symmetric form on the dual of the representation space on sl * n . Since we will apply sl n to the Cartan subalgebras h of the Lie algebras g (of rank n) we will work on, this free part will correspond to a (complex) scaling to be applied to the standard restriction to h of the dual of g Killing form. It will appear as directly connected to mass and charge of particles.
The interactive part is more subtle, as defined by the following projection operator π (viewing elements of sl n as matrices) :
$$\pi(x \otimes y) = xy + yx - \frac{2}{n}\,\mathrm{trace}(xy)\cdot \mathrm{Id}.$$
It associates in a quadratic way with any element of sl n a bilinear symmetric form on sl * n . Identifying sl n with sl * n through its own Killing form, we get so a bilinear symmetric form on sl n ⊗ sl n itself. Identifying it, again canonically through the Killing form, with sl n ⊗sl * n , we get a bilinear symmetric form defined on endomorphisms of sl n . This property is easily transferred to su n -algebras when working on maximal tori of real compact Lie algebras.
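The two properties used here, symmetry in the two arguments and the fact that the image lies again in the traceless matrices, can be checked directly; the short numerical sketch below (in Python, with real matrices standing in for elements of $sl_n$, $n=3$) is only an illustration of the formula as written above.

```python
import numpy as np

def pi_interactive(x, y):
    """pi(x ⊗ y) = xy + yx - (2/n) trace(xy) Id, elements of sl_n seen as matrices."""
    n = x.shape[0]
    return x @ y + y @ x - (2.0 / n) * np.trace(x @ y) * np.eye(n)

rng = np.random.default_rng(0)

def random_traceless(n):
    a = rng.standard_normal((n, n))
    return a - (np.trace(a) / n) * np.eye(n)   # project onto traceless matrices

x, y = random_traceless(3), random_traceless(3)
p = pi_interactive(x, y)
print(np.allclose(p, pi_interactive(y, x)))    # True: symmetric in (x, y)
print(abs(np.trace(p)) < 1e-12)                # True: the image is again traceless
```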
Taken together, the free and interactive parts of Lagrangians define therefore a bilinear symmetric form on the endomorphisms of gl n . The importance of Lagrangians comes from the fact that scalings and outer automorphisms of a Lie algebra g have a non trivial action on the endomorphisms of its Cartan subalgebra, while all the inner automorphisms by definition do not have any action.
On another hand, the "passing of time" will appear as directly coming from the contravariance of the point of view functor that induces for any simple Lie algebra g the passing from the Drinfeld-Jimbo algebra U h (g) to its dual, the quantized algebra F h (G) that has a family of infinite dimensional representations in the formal algebra of one variable polynomials. This family is in fact induced through the principal embedding of sl 2 into g by only one case that arises in F h (SL 2 ) by applying the Weyl symmetry to U h (sl 2 ) before taking the dual.
By considering the point of view of an object on itself and thus going from U h (g) to F h (G) due to the contravariance of the point of view functor, we have first to notice that from the uniqueness of principal embeddings of sl 2 in the Weyl chamber of any complex algebra g, we get a family, indexed by the maximal torus and the Weyl group, of surjection from F h (SL 2 ) to F h (G). Focusing thus on the situation in F h (SL 2 ), we see the apparition of a new type of creation/annihilation operators that recall those of the Quantum Field Theory (QFT), but applied to the considered object itself, creating so a new instant after each instant, inducing therefore the "passing of time"of this object. Time itself (and thus space-time) so appears in the ART as defined from objects in a quantized way : in that sense the "second quantization" of QFT should be thought as the first one.
Furthermore, since those representations are indexed by the maximal torus of the compact form g c of g, and by elements of the Weyl group, we will pull back the main characteristics of the real algebra g on the maximal torus T of g c by specifying its dimension, its Weyl invariance group, the induced action of the outer automorphisms of g, and the involution that defines the considered real form4 . Furthermore, the action of the above creation/annihilation operator that generates the passing of time can also be represented on the dual of the Cartan subalgebra (or maximal torus) of sl 2 (or su 2 ) as inducing the increase/decrease of two units on the line diagram that corresponds to making the tensor product by the adjoint representation.
This leads to carry on with the story by working on su n -type algebras seen as acting on the maximal torus of g c . Now, su n -type algebras are precisely those on which Lagrangians can be defined. They furthermore correspond to a common framework where everything concerning the different algebras g can be encoded, and, since Lagrangian as above defined, have the aforesaid wonderful property to be invariant under su n adjoint action, they also give rise to something that will be independent of the chosen point of view, and that will correlatively be thought as independently existing : here is the explanation of the feeling we have that "things" do actually exist, ...and the origin of the egocentric postulate.
The above connection between creation/annihilation operators and polynomial algebra then leads to work on Yangians algebras defined from those su n algebras. In this framework, the above creation operator corresponds to the endomorphism J of su n used to define Yangians5 . Successive applications of this operator become successive applications of J6 .
A key point is that, by identifying $su_n$ with $su_n^*$ through the Killing form, J itself may be seen as an element of $su_n \otimes su_n$ and has thus a Lagrangian projection, which defines a relative scaling of $su_n$ that is very small (we will compute it as being of order of $10^{-37}$), but exists. We will show that this fact makes gravitation appear as a (quantized) consequence of the passing of time, which is also quantized, as we have just seen.
More generally, one can study the evolution of the Lagrangian with successive applications of the creation/annihilation operator: this corresponds to the passing from a weight N to the N + 2 one, and therefore, as transposed in a smooth context, to twice the left-hand side of the QFT Euler-Lagrange equation, namely $\partial L/\partial\phi$. But there is a consistent way to go from one representation of $U_h(sl_2)$ to another one that corresponds to the above passing of time, namely to use Knizhnik-Zamolodchikov equations (KZ-equations) in order to define the parallel transport of Lagrangians and get a monodromy above the fixed point of an appropriate configuration space. Since in the $sl_2$ case, Lagrangians reduce to their scalar part (the so-called "free part") defined by a projector analogous to the QFT quantity $\partial/\partial[D_i\phi]$, the monodromy of the KZ-parallel transport of this 2-tensor has precisely to be equal to the above right-hand side of the QFT Euler-Lagrange equations. Computing this monodromy from the canonical invariant 2-tensor associated with g gives therefore the true value of the variation of the Lagrangian between the two successive positions, that corresponds to twice the right-hand side of the Euler-Lagrange equations. Equalizing both sides and considering any possible direction in space-time gives the Euler-Lagrange equations.
Those equations therefore appear as describing in the usual continuous and smooth context of differential geometry, the above well defined discrete algebraic creation process that generates the passing of time. Euler-Lagrange equations can therefore no more be seen as some mathematical expression of the old philosophical Maupertuis's principle of least action.
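For reference, the Knizhnik-Zamolodchikov equations invoked above take the following standard form (in a common normalization; the coupling constant is left unspecified):

$$\kappa\,\frac{\partial \Phi}{\partial z_{i}}=\sum_{j\neq i}\frac{\Omega_{ij}}{z_{i}-z_{j}}\,\Phi,$$

where $\Omega_{ij}$ denotes the canonical invariant 2-tensor of $g$ acting in the $i$-th and $j$-th tensor factors; it is the monodromy of the flat connection they define that is used here to transport Lagrangians.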
Furthermore, KZ parallel transport implies successive applications of the flip and antipode operators. Now, two successive applications of the antipode in a Yangian algebra are equivalent to a Yangian-translation of $c/2$, with $c$ the Casimir coefficient of the adjoint representation: in the $sl_2$-case, this translation is by $8/2 = 4$, which confirms that KZ-parallel transport has to be made with the above two by two steps of the adjoint representation. KZ-parallel transport therefore defines a motion on the so-called "evaluation representations" of the algebra $Y(su_n)$ that sends it back to $U(su_n)$ (in a consistent way with the inclusion map $U(su_n) \to Y(su_n)$).

6 Quantum affine algebras could be used instead of Yangians. Since both are closely related, we will not explore this more general way in this first presentation of ART.
These algebraic views can be used to come back to Einstein's equation and the unification of physics. Indeed, if it is a priori postulated that spacetime is a Lorentzian manifold, its local monodromy algebra is encoded in the space of Riemann curvature tensors of which the symmetric part is the Ricci curvature 2 -tensor. This tensor may thus be seen as the equivalent of the above right-hand term of the Euler-Lagrange equation that arises, in our context, from KZ monodromy.
Writing the equality of both monodromies gives exactly the Einstein equation. Instead of being seen as a physical principle, Einstein equation therefore appears as a mathematical consequence of the way the "passing of the time" do arise. As a by-product we get simultaneously an answer to the question of the origin of the "arrow of time" in General Relativity (GR) : namely, only the "past part" of GR Lorentzian manifold that represents the universe does exist from the point of view of an observer, and this is precisely the "passing of time" of this observer that contributes locally to extend it.
All this leads to the theoretical possibility to compute the Newton constant G from the value of the fine structure constant α and the consistency of the theoretical result with the measured value of this constant could be seen as the symbol of Einstein's come back in the quantized universe, giving to General Relativity Theory its right place in quantized physics.
It makes thus sense to explore the consequences of the above interpretation of the weak interaction as a "ghost effect" from a higher order four dimensional today unknown interaction. The introduction of the corresponding Lagrangian gives a new interactive term that could correspond to the mysterious "dark energy", but we did not try to evaluate this term by introducing it in Einstein's equation, nor to test this conjecture. We also did not try to explore systematically the cosmological implications of ART.
The last step is to use ART to explain the nomenclature and compute the characteristics of particles and interactions. This is the purpose of a new branch of physics that we propose to refer as the Mass Quantification Theory. With the exception of the above new interpretation of weak interaction as a "ghost effect" from an higher order one, it should have as an objective to compute and precise the nomenclature and characteristics of the particles as recognized by contemporary physics. MQT also leads to conjecture the existence of new types of particles that appear as good candidates to be constituents of the "dark matter", but we did not try to test this conjecture.
Although the MQT theoretically leads to describe and compute any phenomenon it predicts, only some computations and predictions are introduced in this first paper.
We summarize in the next subsection, as twelve successive steps, the construction we have just sketched.
The second section is a more mathematical one dedicated to a more specialized presentation of the theoretical foundations of the ART. It does not contain any physical result, but we thought it was necessary to ensure the global consistency of ART. This second section is completed by an Appendix that outlines the abstract theoretical conditions that a physical theory has in any case to comply with and that stand behind those foundations.
The third section is dedicated to the first computations that result from the MQT, giving first the aforesaid relation between the Newton constant G and the fine structure constant α. As illustrative instances, some precise results concerning, among others, the ratios between well-known masses of some particles are also presented here.
Far from being meant to give an achieved view of the Absolute Relativity Theory, this paper has to be considered as a first proposal to the scientific community to explore new ways that could lead contemporary physics to some new results. It will certainly also lead to new questions and difficulties that will have to be answered by a lot of complementary work.
The Absolute Relativity Theory in twelve steps
Since the new approach we propose will lead to many constructions and results that differ from those of contemporary theories, but are linked to them, a short review may be useful for a first reading of this paper. This is the purpose of this subsection. Precisions and justifications will be given in the following sections.
The algebraic preuniverse ℵ
The preuniverse ℵ may be first chosen as the category of complex finite dimensional A-modules with A any deformation of any complex quasi-Hopf algebra. We will restrict ourselves to Hopf algebras and to the quasi-Hopf algebras that they generate by a gauge transformation. Those quasi-Hopf algebras may be classified according to their primitive part g. We will furthermore restrict ourselves to Hopf algebras with g a finite dimensional semi-simple complex Lie algebra, beginning with simple ones. As aforesaid, those Hopf algebras are generally not cocommutative: thus instead of beginning as usual with the universal enveloping algebras, we will consider modules over Jimbo-Drinfeld Quantum Universal Enveloping algebras (QUE), U h (g), as a starting point. We will restrict ourselves to finite dimensional ones in order to apply the restricted duality to go from U h (g) to its dual F h (G).
The intrinsic breach of symmetry in complex Lie algebras
Since the ART is founded on a categorial framework that is defined up to an equivalence, it is invariant under isomorphisms of Lie algebras. We may thus choose any Cartan subalgebra h max of the maximal (in the sense of inclusion) simple algebra g max (of rank r max ) we will work on (we will precise it below). We may then choose a Cartan subalgebra and an ordering of its roots to define the corresponding elements of any g i of its subalgebras (with i ∈ I, the finite set of Lie algebras we will work on, h i the corresponding Cartan subalgebra and r i its rank). Any irreducible object V of ℵ, as an U h (g i )-module or U h (g i ) -representation functor ρ U h (g i ),V (or in short ρ g i ,V ), is characterized by a maximal weight λ g i ,V and will be said to be at the position V (or
λ g i ,V ).
So, like tangent spaces in General Relativity, our main objects will be linear representation, but instead of using only the four dimensional representations of the Lorentz group and seeing them as linked by a Lorentzian connection, we will work on different QUE and their different finite dimensional representations linked together by their algebraic intrinsic relations that simultaneously define their relative characteristics and (discrete) positions.
The set of all the λ g i ,V defines a set of lattices all included in the dual of the Cartan subalgebra (h max ) * . We will call the lattice of weights Λ i associated with the algebra g i the g i -type space-time (in short g i -space-time). It is not the usual space-time, but it corresponds to the first step of our algebraic construction of space-time from observables. As it is easy to check in any Serre-Chevalley basis, although g i is a complex algebra, each Λ i is a copy of (Z) r i , and any g i -space-time is thus an essentially real object. So, although all the possible choices of a Cartan subalgebra h max are linked by inner automorphisms that are complex ones, the set of finite dimensional modules defines in any isomorphic copy of h * max the same real lattice. Since a multiplicative coefficient e iφ does not preserve this lattice, we have here an algebraically defined breach of symmetry in the complex plane. This breach of symmetry is intrinsic in the sense that its existence does not result from any postulated field such as, for instance, the Higgs field, but from the fact that finite dimensional representations of a simple complex algebra are defined by a Z-lattice.
Working with both complex and real algebras
We have thus to focus on real algebras, and we will proceed very carefully by beginning with the compact form canonically associated with each complex algebra. Since any real form of any g i is defined by a conjugate linear involutive automorphism, two real forms σ and τ are said to be compatible if and only if they commute, which means that θ = στ = τ σ is a linear involutive automorphism of g i . Furthermore, since any real form is compatible with the compact one σ c , the study of real forms comes down to the study of the compact form and of its linear involutive automorphisms.
Since any linear involutive automorphism of g max that respects a subalgebra g i , induces on g i a linear involution, any real form on g max induces a real form on this subalgebra g i ; reciprocally, if we identify a specific real form τ i associated with some linear involution θ i of an algebra g i as associated with a family of true physical objects, θ i will have to be compatible (i.e. commute) with the restriction to g i of θ max . The same reasoning applies to other inclusions of algebras, and will determine a lot of necessary theoretical conditions on compatible real forms of algebras. Since the compatibility of real forms is a "commutative diagram"-type consistency condition for the theory, this way of reasoning will be the key of the determination of the real algebras that define the particles that can coexist in our universe.
Finally, the only real algebras we will be interested in, are the compact ones, and only one family of other ones. This leads to distinguish in the Weyl chamber associated with the lattice of weights Λ i those that are compact and the other ones : the first characterizes what we propose to call folded dimensions, the other ones to unfolded dimensions. Any (different from sl 2 ≈ su 1,1 ) non compact real g θ i i -space-time has unfolded dimensions that we will identify with usual space-time, but also folded dimensions (that correspond to its maximal compact subalgebra).
Therefore, any real form g θ i i of the complex Lie algebra g i of complex dimension n i and rank r i , has d i unfolded dimensions, r id i folded dimensions, n ir i dimensions that characterize the Lie algebra structure, but do not correspond to any dimension of space-time : that is the reason why we propose to call them structural dimensions.
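As a concrete illustration of this counting (using the dimensions and ranks of the algebras identified with the electron and the quarks in the next subsection), one gets, for instance,

$$so_{1,3}:\ n_i=6,\ r_i=2,\ d_i=1\ \Rightarrow\ 1\ \text{unfolded},\ 1\ \text{folded},\ 4\ \text{structural dimensions},$$
$$so_{3,5}:\ n_i=28,\ r_i=4,\ d_i=3\ \Rightarrow\ 3\ \text{unfolded},\ 1\ \text{folded},\ 24\ \text{structural dimensions}.$$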
A first algebraic classification of particles
We are now ready to give the basic correspondence with usual particles
• the neutrino corresponds to the algebra su 2 . It has no unfolded dimension, and thus no intrinsic proper time (which implies that they appear as going at the speed of light). Nevertheless, we will see that neutrinos appear to us as having a mass that comes in fact from the asymmetry of our proper time that we will describe below. This mass could therefore be seen as a "dynamical ghost" mass.
• the electron corresponds to the algebra so 1,3 , the realified algebra of sl 2 (C). It has one folded and one unfolded dimensions that supports its proper time. We will see how the unique unfolded dimension is linked to the gauge invariance group SU(1) that characterizes electromagnetic interaction. The splitting so 1,3 ≈ su 2 ⊕ su 1,1 has a very special importance since it generates a one dimensional non rigidity that we will characterize by the fine structure constant α seen as the ratio between the inertial and the electromagnetic energies of the electron. Another important fact is that the smallest representation of so 1,3 is four dimensional, which is the dimension of our usual spacetime. As an advantage, this allows us to have a complete view on its structure, and thus to deduce its equations from experimental processes. As a drawback, this has reinforced the idea that this space-time in some sense includes all the physical phenomena, which is a completely wrong idea as far as other interactions (excepting gravitation) are concerned.
Finally, the outer automorphism that corresponds to the symmetry between the two (isolated) nodes of the so 4 Dynkin diagram, induces at each position the fugitive coexistence of a representation of so 1,3 with the direct sum of two copies of su 2 that corresponds to two copies of the above algebra. Since the corresponding neutrinos, as aforesaid, leave the electron almost at the speed of light, we will say that the true physical phenomenon associated with this symmetry of the so 4 Dynkin diagram is an anti-confinement phenomenon.
• the usual quarks correspond to the algebra so 3,5 (or so 5,3 ). It has one folded and three unfolded dimensions. We will see how those three unfolded dimensions are linked to the gauge invariance group SU(3) that characterizes the strong interaction. The well known triality, that comes from the group of automorphisms of D 4 Dynkin's diagram which is S 3 , ensures the coexistence of three copies of the different representations of the algebra at each position : this very special mathematical property explains the usual confinement of three quarks by giving a precise mathematical sense to their usual three "colors", namely the three isomorphic forms (standard, spin + and spin -) of the complex algebra so 8 that necessarily do appear together on any position V independently of any complex involution that defines the considered real form of this algebra. The folding of those three in the algebra g 2 leads to protons and neutrons at the corresponding position, as one could have guessed.
The important fact is that one can compute the characteristics of all those particles from the well known characteristics of those algebras.
As aforesaid, we will give some instances of such computations in our last section.
The algebraic interpretation of the four known interactions
From ARP, natural transformations between representation functors in the theory should correspond to interactions, and reciprocally. Here is the basic correspondence with usual interactions :
• outer automorphisms of any complex algebra, that are associates with the automorphisms of its Dynkin's diagram, correspond to interactions : the one of D 2 applied to so 1,3 corresponds to electromagnetism, the triality of so 3,5 to strong interaction.
The weak interaction will appear as corresponding to the outer involutive automorphism that comes from the compact form the real form E(II) of the algebra E 6 . We do not perceive it as such, but only through "ghost effects", i.e. as representations of the particles we are made of, which induces a splitting of the corresponding E 6 -bosons (that have to be massless) into different parts.
Let us emphasize that such parts can individually appear as massive, exactly in the same way a photon would be perceived as a massive particle by one-dimensional observers with our proper time. Bosons W and Z correspond to this situation. E 6 is also the "largest" exceptional Lie algebra that has outer automorphisms, which is one reason why we can choose it as our g max .
• Weyl symmetries applied to the space-time defined by an observer correspond to changes of the Weyl chamber in the corresponding weights diagram : we propose to call changes of "flavors" the corresponding relativistic effects, since, in the case of the algebra g 2 (that corresponds to our nucleons), the six "flavors" of leptons and quarks, associated with the corresponding antiparticles, will appear as related to the twelve copies of the Weyl chamber in this algebra. The important fact is, as above, that this view of the "flavor" of particles gives another example of relativistic effects that modify the characteristics of the observables according to their position (in the general sense of ART) relative to the observer.
• Since any element of any simple compact group G c of rank r is conjugate to exactly one element of each of the |W | sheets that cover its maximal torus, there is exactly one element t in a given copy of the Weyl chamber associated with any element of G c . This allows us to see any element of SU(r) as acting on any element of any specific sheet of the Weyl chamber, by keeping this element if its image remains in the same sheet, or combining with an appropriate Weyl symmetry, that appears as a change of "flavor", if it is not the case. This allows us to encode everything in the maximal torus as explained in subsection 1.3., and then to restrict ourselves to one of its sheets. We propose to call generalized Lorentz transformations the pull back of the so defined transformations through any specific involution that defines any non compact real form.
There is also a possibility of working on any complex Cartan subalgebras of the algebra g instead of the maximal torus of G c by substituting sl r (C) to SU(r) and replacing exchanges of sheets by exchanges of roots 7 . A key fact is that those two ways are equivalent since we will need both of them : the compact real form in order to access to any other real form by an appropriate complex involution, and the complex algebra in order to give sense to this involution.
• the gravitation will appear as a direct consequence of the passing of time as we will explain below.
Some algebraic conjectures on particles and interactions
Our algebraic approach will also lead us to conjecture the existence of some unknown particles and interactions :
7 The way ART is built legitimates the use of "Weyl's unitary trick" : we will use it mainly to pull back any configuration in a Cartan subalgebra on the maximal torus of the compact real form. Beginning with a postulated space-time prohibits applying the necessary involutions to go from the compact real form to any other one. By contrast, in ART, this gives in particular a way to make probabilistic computations without having to deal with infinities by some renormalization methods. Nevertheless, we will not use those possibilities in this first paper.
• The compact form of the so 8 algebra should define particles that are, as those associated with so 3,5 , linked by the strong interaction induced by the mathematical triality. They should be also folded into compact copies of representations of g 2 . We thus propose to call them dark quarks that have to be folded into what we suggest to call dark neutrons8 . Since they correspond to a compact form, the space time associated with those particles should not have any unfolded dimension, and they should be related to usual quarks by an as above defined anticonfinement phenomenon. Dark neutrons should also get a mass from the same relativistic effects as neutrinos. Therefore, they could be part of the "dark matter" contemporary physics is looking for.
• The representations of the algebra F 4 comes itself from the folding of the outer automorphism that links the compact form of E c 6 and the afore mentioned E(II) real form of E 6 . Those three algebras should define particles and phenomena we do not perceive as they truly are. As aforesaid, the weak interaction should be one of them, but it is only a ghost effect. To access to the true effect, it is necessary to represent the above outer automorphism, and compute the corresponding Lagrangian as explained below. This Lagrangian should define a new interaction that encompasses the weak interaction. Furthermore, since both algebras F 4 and E(II) have real rank four, which explains the dimension of our usual space-time, this new interaction should correspond to high valued Lagrangians, which makes it a good candidate to be the main part of the mysterious "dark energy". On another hand, let us notice that, as made of smaller particles, we perceive those particles not as they are but through splitting effects that transform the rigid structure of any irreducible representation of F 4 into a direct sum of representations of smaller algebras. Since "branching formulas" on those special algebras are growing very fast with the maximal weights of their representations, this splitting process will induce the appearance of huge numbers of splitted particles. Since, furthermore, direct sums do not induce any rigidity, this splitting process also explains the incredible freedom of their possible arrangements.
The existence and meaning of Lagrangians
The next step is to consider the impact of the point of view functor by considering the algebra F h (G), the (restricted Hopf) dual of the corresponding U h (g), that should define what we perceive of all physical objects and phenomena.
Beginning as above with compact algebras, the first key point is that, as we said before, all the representations of all the Hopf algebras F h (G c ) are indexed by the maximal torus T G c that is a copy of (T 1 ) r G (with T 1 the trigonometric circle).
Let us then notice that the group $SU(r_G)$, the compact real form of $SL(r_G)$, defines the group of all the transformations of the maximal torus.
But only those that correspond to unfolded dimensions can generate transformations that we can perceive. With above notations, we have thus also to consider the smaller torus (T 1 ) d G and the smaller group SU(d G ). With the above definitions of electromagnetism and strong interaction, we find here the usual gauge invariance groups of the QFT, namely SU(1) and SU(3). Now, any Lie algebra sl n (C) with n > 2, which compact real form is su n , has a very specific property : by identifying it through the Killing form with its dual, one sees that the tensor product su n ⊗ su n contains, as for any simple Lie algebra, a one dimensional representation of su n (the image of the identity map, i.e. the Casimir operator of su n ), and the image of the adjoint representation of su n (that is antisymmetric). But, in the su n (or sl n (C)) cases, and only in them, there is in su n ⊗ su n a second irreducible representation of su n that is symmetric and thus defines a bilinear symmetric form L ′ on su * n ⊗ su * n that may be identified with the help of the Killing form with an endomorphism of su n . Let us emphasize that Lagrangians do not act directly on the Cartan subalgebra, but on the algebra of the endomorphisms of this algebra seen as a vector space. Therefore, although the action of any element a of GL n on the Cartan algebra h of any Lie algebra g respects h only globally -this corresponds to a simple change of observer in usual physics -, Lagrangians as corresponding through the Killing form of su n to endomorphisms of su n itself, will remain invariant : the action of a for them is the one of a simple change of base in su n .
Coming back to any simple algebra g, an outer automorphism φ of g induces a change of the primitive roots, and thus of the fundamental roots and Weyl chamber, that does not result from the action of the Weyl group. It sends correlatively the connected component of the identity of Aut(g) to another one. This change in the roots diagram induces an automorphism Φ belonging to Aut(su r ) ; Φ may be seen as above through the Killing form of su r as an element Φ of su r ⊗su r . It splits therefore into irreducible representations of su r with a one dimensional projection on the Casimir operator, which we will call the free part of the Lagrangian L Φ , and the above L ′ Φ that we will call its interactive part. The sum of both parts will be the Lagrangian of the interaction corresponding to Φ.
Since it neutralizes the component that comes from the adjoint representation of su r that generates the above connected component of the identity, the Lagrangian can be seen as characterizing a scaling on su r and a change of connected component of the identity in Aut(g) independently of any basis chosen for the Cartan subalgebra h. This explains its additivity and its independence of the basis chosen for representing g, and also of the choice of the Cartan subalgebra (since they are all conjugate) and finally of g itself (except through r and d and the Lagrangians it defines). Therefore, the choice of coherent Cartan subalgebras we made in the beginning appears now only as a commodity for computations. Furthermore, if an algebra is included in another one, the smallest defines a degenerate bilinear form on the space-time associated with the first, but the construction keeps making sense. This gives its main interest to the Lagrangian. This unfortunately does not mean that it is easy to compute. Indeed, since it is invariant under the action of the adjoint group and thus of the one of the Weyl group, the computation of the Lagrangian, in the cases where it is not reduced to its free part, has to include at least as many elements as the cardinal of the Weyl group (which is 192 for so 8 , 51840 for E 6 and 1152 for F 4 !). Although there are many symmetries to be used, we will not try to make those computations here.
Since the algebra so 1,3 has a two-dimensional Cartan subalgebra, electromagnetism has only a free part easy to identify with the usual one. The same is true for the gravitation that is, as we will see, directly connected to the Casimir operator.
Another historically important, but singular aspect of electro-magnetism has to be quoted here : it admits a true four dimensional representation, which means that it can fully be represented in our space-time that is four dimensional, but for other reasons. The successes of corresponding theories have therefore sustained the idea that this space-time should contain everything that concerns physics : this is the egocentric postulate.
The correct way to approach electromagnetism in our context is to see it as associated with a two dimensional Cartan subalgebra (forgetting it is only semi simple), which means a two dimensional space-time. The exchange of perpendicular roots (that defines so 1,3 from so 4 ) defines a rotation on the compact form that expresses the charge in a two dimensional context: it is in fact the two dimensional version of the Dirac equation. Let $\lambda_{1,2}$ be the generator of this rotation. It is easy to check (for instance by using the complex representation) that the associated (free) Lagrangian is given by $\lambda_{1,2}^2$. But, as we have said, Lagrangians that come from the Killing form are invariant under the action of the Weyl group, which in our degenerate case corresponds to the group S 4 . The true Lagrangian is therefore $\sum_{i,j=1,\dots,4;\ i\neq j}\lambda_{i,j}^2$, which corresponds to the usual one with appropriate choices of units.
Finally, let us notice that, essentially for simplicity of notations, we have presented here all the elements in the non quantized situation. They are easy to transpose to the quantized one.
The passing of time and the definition of physical observers
The change to F h (G) induced by the point of view functor has a second fundamental consequence.
Beginning with the SU 2 case, F h (SU 2 ) admits, beside one dimensional representations, a family π t (indexed by its maximal torus T 1 ) of infinite dimensional representations on the space of formal polynomials with one variable that may be seen, through formal characters, as the space of finite dimensional representations of SU 2 (with highest weight the degree of the polynomial). Let us emphasize that this representation π t does not appear in the non quantized case, and that, considered as a quantization of a Poisson-Lie group, it is associated with the Weyl symmetry w of SL 2 (C) that exchanges the compact and non compact roots (associated with SU 1,1 ) of its realified.
More precisely, those polynomials that correspond to $SL_2(\mathbb{C})$ are polynomials in $(x + \frac{1}{x})$, the variable that corresponds to the standard two-dimensional representation $V$; $(x+\frac{1}{x})^n$ corresponds to $V^{\otimes n}$ and $x^n + \frac{1}{x^n}$ to the representation with maximal weight $n$; $\pi_t$ has thus to be seen as acting on the polynomials in the variable 9 $X = (x + \frac{1}{x})$, an operation that is associated with acting on tensorial powers $V^{\otimes n}$ of $V$.
It is easy to check 10 that, with the usual notation $\begin{pmatrix} a & b \\ c & d \end{pmatrix}$ for the linear forms that define $F_h(G)$, $a$ reduces the degree by 1, while $d$ increases it also by 1, and $b$ and $c$ leave this degree unchanged. Furthermore, from the defining relations of $F_h(G)$ (seen as morphisms of the $q$-plane with $q = e^{h/2}$), the action of the commutator $[a, d]$ splits into a sum of multiples of $bc$, $cb$ and the identity, all three of which keep the degree of the polynomial unchanged: considered as acting on the space of representations, $[a, d]$ is therefore, up to a scaling, equivalent to the identity.
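For reference, in one common convention (with the quantum plane relation $xy=q\,yx$ and $q=e^{h/2}$; other conventions differ by $q\leftrightarrow q^{-1}$), the defining relations of these four linear forms read

$$ab=q\,ba,\quad ac=q\,ca,\quad bd=q\,db,\quad cd=q\,dc,\quad bc=cb,$$
$$ad-da=(q-q^{-1})\,bc,\qquad ad-q\,bc=1,$$

the last relation being the quantum determinant condition; the commutation relation for $[a,d]$ involves only $bc$, which preserves the polynomial degree, consistently with the discussion above.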
Since we have seen that maximal weights define the positions of the objects, the action of the linear form d may be seen as a move to the right on the one-dimensional space-time induced by taking the tensorial product by V , which "creates" a new instant after the last one, and "moves" to the right all the others. The action of the linear form a is reciprocal, inducing a move to the left and annihilating the first instant. We get thus creation/annihilation operators that apply directly to time.
More precisely, in our case of su 2 ⊕ su 1,1 that may be seen as corresponding to "a pure time" (it is in fact the time of an electron), d corresponds to a "creation operator" of new (discrete) instants that induces an increasing move of all the past instants on the time line : this is exactly a (discretely) mobile point on the time line ; a has exactly the opposite effect.
The sum $qa + q^{-1}d$, that is the dual of the quantized identity $K_q$ of the $q$-plane, induces therefore a double motion: the first one, that corresponds to the first coordinate in the $q$-plane, may be seen as the motion of a fixed frame expressed in a moving one, the second as a motion of a moving frame expressed in a fixed one. Exchanging left and right actions exchanges those two situations as $K_q$ and $K_{q^{-1}}$, which is perfectly consistent. $L_q = \frac{K_q - K_q^{-1}}{q - q^{-1}}$, which quantizes the usual generator of the Cartan subalgebra, has an analogous action.
In that sense, in this simple case, the passing of time may be seen as a relativistic phenomenon induced by the point of view functor applied by an observer belonging to the space generated by K q and K -1 q on himself. Using Drinfeld-Jimbo QUE, the same reasoning applies to the dual of the usual quantized Cartan generator.
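For reference, the Drinfeld-Jimbo algebra $U_q(sl_2)$ alluded to here is generated, in a standard presentation, by $E$, $F$ and $K^{\pm 1}$ with

$$KEK^{-1}=q^{2}E,\qquad KFK^{-1}=q^{-2}F,\qquad [E,F]=\frac{K-K^{-1}}{q-q^{-1}},$$

so that $\frac{K-K^{-1}}{q-q^{-1}}$ plays the role of the quantized Cartan generator whose dual is considered above.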
It is interesting to notice that making a tensor product by V is equivalent to applying the point of view functor from the point of view of V * (seen as a representation of sl 2 (C), thus differing as a sl 2 (C)module,from V by the application of the antipode S h ).
By identifying V and V * through the symplectic invariant form, one sees that the above operation of tensoring by V is equivalent to applying the usual algebraic flip that exchanges left and right operations, the antipode and the contravariant hom functor : the present instant appears so as a (reversed) point of view from the last past instant on always the same sl 2 -type particle at the same position V . One therefore can see the above equivalent of a moving frame as defined by successive applications of the point of view functor on the same particle at the position V , that is thus seen as non moving from its own point of view.
Coming now to the general case F_h(G), the existence of a surjective homomorphism from F_h(G) to F_h(SL_2(C)) (which comes, by passing to the dual, from the principal embedding of U_h(sl_2(C)) into U_h(g)) canonically associated with any choice of a Weyl chamber allows us to generalize the above construction: after any choice of an element of the maximal torus and an element of a Weyl chamber, this defines an operator π_{t,w}.
Any such operator π t,w defines as above the proper time associated with any g-type particle that takes place in the space of successive powers V ⊗n of the standard representation V . The dependence in this definition on the choice of an element of the Weyl group (and thus a Weyl chamber) justifies what we have said above on the action of this group as changing the "flavor" of particles.
Furthermore, the indexation of those representations by the maximal torus corresponds to the above gauge invariance groups, the complete one SU(r) and the observed one SU(d). The application of this last group may be seen as transferring the time direction into any other direction in the Weyl chamber: this corresponds to what we have called above a generalized Lorentz transformation. This gives the possibility to transpose the above defined proper time direction into any other direction in the Weyl chamber, correlatively extending the above definition from time to space-time.
A physical observer may now be defined as the combination of:
• a real type g_R, which means a complex simple Lie algebra g characterized by its compact form g_c and an involutive transformation that defines the considered real form,
• a position defined by a maximal weight λ_0 in h*, the dual of the Cartan subalgebra of g,
• three elements that characterize the above principal embedding of U h (sl 2 ) into U h (g), namely :
- the choice of a copy of the Weyl chamber by reference to a first one arbitrarily chosen: this is equivalent to the choice of an element of the Weyl group (or to the choice of one of the |W| copies of the maximal torus by pulling back to the compact form as explained above);
- the choice of an element ν in the Weyl chamber (or t in the above chosen sheet of the maximal torus, by pulling ν back to the compact form) by reference to a first one arbitrarily chosen; this is equivalent to the choice of a generalized Lorentz transformation, as defined above, applied to an initially chosen root diagram in order to identify the direction defined by ν with the one of the principal embedding of sl_2;
- the choice of a free Lagrangian µ², with µ = m_0 + iq_0. Here m_0 defines the embedding of the unfolded part of the realified of sl_2, while q_0 defines the compact one. By convention, we take m_0 > 0 for a physical observer, and we will say that −µ defines the corresponding anti-observer. m_0 will be called the mass of the physical observer and q_0 its charge. Since m_0 corresponds to a scaling factor applied to the dual of the algebra sl_2 that contains the dual h* of its Cartan subalgebra, it defines a scaling of h* and thus the density, relative to the dual of the Killing form, of the integer values, which means of the successive finite-dimensional representations. In that sense, m_0 defines the speed at which the time creation operator generates new instants: the higher it is, the closer (in the sense of the dual Killing form) the successively generated instants are, which is simply the generalization of the Einstein-Planck relations.
The existence of a mass (and a charge) associated with any physical observer is no longer a mysterious property of nature, nor something that particles draw from a mysterious ambient field in order to "slow" them: it is the purely logical consequence of two elementary mathematical facts, namely the passing of time induced by the point of view functor in F_h(SL_2) as explained above, and the existence of a uniquely defined (for a given root ordering) principal embedding of sl_2 in any simple Lie algebra g.
Let us emphasize that, by the main property that led us to their definition, Lagrangians are invariant under the action of the generalized Lorentz transformations that generalize the usual changes of point of view of physical observers. This means that all the physical observers that differ only by such transformations will share the same Lagrangians.
In the usual context of physics, this explains the universality of Lagrangians.
In our context, this explains first why we perceive an "existing universe" (and therefore why physics has been so trapped by the egocentric postulate). Indeed, being (up to "flavors" that appear to generate different particles) associated with the same Weyl chambers of the same algebras, our particles share the same Lagrangians despite their relative motions, and therefore appear to share the same universe. This universe is thus seen as independent of each of them and can easily be taken as the general scene where the whole story is played.
Another very important fact has to be emphasized here: there is an elegant way to express µ in a Lagrangian context. Since µ² is proportional to the Lagrangian and µ defines the above "speed" ż of the "passing of time" of this observer, ∂L/∂ż is proportional to µ. The right-hand side of the Euler-Lagrange equations will simply be the expression of that fact in a context that also takes into account the principal embedding of sl_2 in g.
Let us notice finally that the creation/annihilation operators of a type of particles seen as physical observers appear as the creation/annihilation operators of their proper time. Since, in ART, space-time is defined from particles, space is naturally expanding while time is passing. Therefore, the so-called second quantization corresponds exactly to the quantization of time, as the first one corresponds to the quantization of energy.
It induces a quantized structure of time and correlatively of space-time, as we will see now, beginning as usual with the one-dimensional case.

9. The structure of the time of a physical observer

In order to explore the structure of space-time from the point of view of a physical observer, one sees from the above construction of the F_h(G) representations that we have in fact to describe the situation for the pure time of a physical observer of type su_{1,1} ⊕ su_{1,1} (i.e. an electronic-type observer), then to extend it to a space-time by inserting it in the Weyl chamber of a bigger algebra g, and finally to apply all the possible generalized Lorentz transformations.
Let us thus consider an electronic observer at the position N, which means that it is associated with the space V^{⊗N}. This space splits into irreducible representations; the biggest one, V_N, corresponds to the weight N, has multiplicity 1 and may be seen as the actual position of the observer. The other ones have multiplicities given by the Newton (binomial) coefficients in (x + 1/x)^N. So defined and split, V^{⊗N} may be seen as the universe from the point of view of the considered observer at the instant N.
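As an illustration (a standard sl_2 computation, stated with the conventions used here): the character of V^{⊗N} is (x + 1/x)^N, so for N = 3 the weight multiplicities are 1, 3, 3, 1 on the weights 3, 1, −1, −3, and the splitting into irreducibles reads

V^{⊗3} ≅ V_3 ⊕ 2 V_1,

the top component V_3 indeed appearing with multiplicity 1 and marking the position of the observer.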
In this construction, the universe at the instant N_1 < N was isomorphic to V^{⊗N_1}, which is not the restriction of V^{⊗N} to representations of weight less than or equal to N_1, the latter corresponding to the past instants as seen from the present. This means that, as far as time itself is concerned, an observer cannot access the past without being influenced by N − N_1 successive applications of the above mix of the flip and antipode operators combined with the point of view functor.
Since the situation with su 1,1 can, as above, easily be transferred to the quantized case and to any other simple real Lie algebra g, this means that the past universe as seen from the present is not the past universe as it was when it was the present.
In other words, when we try to access the past through some experimental processes (for instance by long distance astronomical observations), we always access representations of the past as seen from the present (V^{⊗N}) and not as it was from the points of view of the successive instants (which leads to working on V_1 ⊗ ... ⊗ V_m). Let us emphasize that, by dualizing to Hilbert spaces of functions as in QFT, we get in the first case the usual Fock spaces of QFT, while in the second case we have to represent the fact that past instants were not exactly the same as present ones.
We will call the past as it is seen from the present the ghost past, and the past as it was the true past. Since ART defines space-time from observables, the notion of space-time from the point of view of a physical observer is defined by some V^{⊗N}, with V the standard representation of the Lie algebra that characterizes the type of the observer. In the above sense, space-time is thus a "ghost space-time", and, since a physical observer "knows" only his own point of view, it is doubtful that the notion of "a true space-time" from his point of view could even make sense.
Nevertheless, both "pasts" are interesting from an experimental point of view, since the study of the ghost past predicts what we perceive from the past, and the study of the true past gives access to the true story, allowing us to compute or predict physical data that can also be compared with measured values, which therefore increases the falsifiability of the theory.
In particular, we will see in our last section that it is theoretically possible, by representing the true past, to compute the characteristics of all the particles and interactions from our actual point of view. The same set of algebraic relations implies that all the computations can also be done in the ghost past that defines what we perceive today, which means that all those characteristics paradoxically appear as having been constant, despite their true variations in the true past.
We propose to call this law, inconceivable with the usual vision of an existing space-time, the ghost constancy law of constants: it means that the numerous contemporary observations that seem to show that the usual constants of physics are actually constant are in fact biased, because the models used for those studies do not represent the difference between the true past and the ghost past.
In other words, just as no observer can hope to determine directly his own motion by looking at objects attached to himself, nobody can hope to determine directly whether the constants of physics are really constant without taking into account, and compensating for, the fact that all our past is in some sense moving with us.
We are now quite far from the usual vision of space-time as a smooth preexisting Lorentzian manifold. In particular, the above process of creation, in a quantized way, of "new instants" for all the physical observers in all of space-time excludes using any preexisting differential structure. In fact, the only solid tool we have at our disposal in order to represent space-time from the point of view of a physical observer is the creation operator itself, applied to different physical observers (and different types of them).
We will proceed by successive steps beginning with, as usual, the sl 2 case.
We know that the t-indexed representations π_t(F_h(SL_2(C))) induce the move on representations of U_h(sl_2(C)) that corresponds to the passing of time, while the group of rotations U(1) generated by the standard Cartan element of su_2 acts on the torus T that indexes the representations π_t. We have to deal with this complex situation in a canonical way that will allow us to generalize our results to other algebras. The idea is to see the free linear space generated by the set of irreducible representations as a configuration space with N dimensions, and to quotient by the symmetric group S_N in order to have all the representations at the same point P.
It is then possible to define an operation of parallel transport of the 2-tensor algebra su_{1,1} ⊗ su_{1,1}, which contains the above Lagrangian, according to the Knizhnik-Zamolodchikov (KZ) equation associated with some 2-tensor r_{ij}. This operation is consistent, i.e. defines an appropriate flat connection allowing us to define a monodromy group above P, if and only if r satisfies the classical Yang-Baxter (YB) equation with spectral parameters (which corresponds to a "commutative diagram"-type consistency condition for the dodecagon diagram associated with an operator of this type).
We now have to determine the 2-tensor r_{ij} that corresponds to the passing of time of a physical observer. Since we have seen that the above defined creation/annihilation operator that induces the passing of time is the (quantized) identity, which corresponds through the Killing form to the Casimir operator, this 2-tensor has to be built from the canonical polarization of this operator, namely the canonical 2-tensor t_{ij} defined by:
t_{V,W} = (1/2) (C_{V⊗W} − C_V ⊗ Id_W − Id_V ⊗ C_W).
This tensor is invariant under gauge transformations; therefore its use is compatible with the categorial foundations of the ART.
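For instance, for sl_2 with the trace form in the defining representation and the standard basis (e, f, h) (a standard normalization, recalled here only as an illustration), the Casimir operator is C = ef + fe + (1/2) h² and its polarization is

t = e ⊗ f + f ⊗ e + (1/2) h ⊗ h.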
In order to satisfy the YB consistency condition, we then have to set:
r(z_1 − z_2) = t / (z_1 − z_2).
The KZ equation may then be written for a function f with values in the true past V_1 ⊗ ... ⊗ V_N:

∂f/∂z_j = (h/2iπ) Σ_{k≠j} [ t_{jk} / (z_j − z_k) ] f,
where the formal coefficient is introduced only in order to facilitate the formulation of the Drinfeld-Kohno theorem below.
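As an elementary illustration (the standard two-point case, with the conventions of the equation above): for N = 2 the KZ equation reduces to ∂f/∂z_1 = (h/2iπ) t_{12}/(z_1 − z_2) f, whose solutions are of the form

f(z_1, z_2) = (z_1 − z_2)^{h t_{12}/2iπ} f_0,

so that exchanging the two points along a half-turn multiplies f by e^{h t_{12}/2}, which is precisely the type of monodromy coefficient used below.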
Let us emphasize that, since we are in fact working on the realified of sl_2(C), the choice of the identity map, which corresponds to four applications of the rotation of roots induced by the outer automorphism of so_4(C), and therefore of the Casimir operator, is perfectly coherent with the (free) Lagrangian associated with the electromagnetic interaction that we have defined above. This justifies our choice of the definition of the electron and will ensure that in the ART, as in QFT, it will be possible to consider the charge as an imaginary mass, which justifies the definitions of those quantities given above.
On the other hand, we know that the Hopf algebra structure of U_h(su_{1,1}) defines on each representation V a canonical bilinear element of su_{1,1}* ⊗ su_{1,1}* that corresponds to the action of the Casimir operator on this representation, which is a scaling by the coefficient C_2(V). For the representation of sl_2(C) with maximal weight N it is given by the formula C_2(N) = (N+1)² − 1. This coefficient is also well known for the other algebras, so that the transposition of our construction to other algebras will be possible.
We can thus attach to any irreducible representation V an element of su_{1,1} ⊗ su_{1,1} defined by 1/C_2(V). Therefore, the change from the representation N to N+1 induces a change of this canonical element of su_{1,1} ⊗ su_{1,1} according to the ratio C_2(N)/C_2(N+1). The key point is the following: we may apply to this element the closed parallel transport that defines the KZ monodromy, getting a coefficient given by e^{ht/2}, with t the above canonical 2-tensor and h the formal coefficient in the KZ equation. We can therefore choose a specialization of the coefficient h in the Drinfeld-Jimbo algebra such that this coefficient corresponds to the ratio C_2(N)/C_2(N+1). We get a Lagrangian (which in this simple case is reduced to its free part) that, by parallel transport in the configuration space, is a continuous function from one representation to the next one.
In an informal way, the "jump" of the Lagrangian that is induced by the change of intrinsic form at each change of representation can be "absorbed" by the R-matrix effect of the flip that, as explained above, corresponds to the successive applications of the point of view functor. This means that when the passing of time induces a move from the position N to N+1, the quadratic form that has to be applied to define the corresponding variation of the scalar coefficient in U_h(sl_2) is k² C_2(N) (for some fixed coefficient k), and the increase of this scalar coefficient that corresponds to the passage from N to N+1 is u_N = 1/√(C_2(N)).
Finally, the scalar coefficient after N steps as seen from the point of view of a physical observer, i.e. as pulled back from its dual after application of the point of view functor, is given by the corresponding series S_N = Σ_{n=1}^{N} u_n. Now, in order to be able to represent a physical observer as stable on the maximal torus, we have to normalize the bilinear forms we use by reference to the adjoint representation, where we know that the action on h* proceeds by steps of 2 and the action on the torus has a π-periodicity. We thus have to normalize the Killing form in such a way that the image of π is 2, which gives a coefficient 2/π by reference to a standard orthonormal initial choice. We will refer to this quadratic form as the normalized quadratic form. It is the one that should be used in the computations of the MQT, since it induces the stability of any of the t-indexed representations of F_h(SU_2) that define the passing of time from the point of view of the considered observer.
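As a purely numerical illustration of this series (no additional assumption is involved): with C_2(n) = (n+1)² − 1 = n(n+2) as above, the first terms are u_1 = 1/√3, u_2 = 1/√8, u_3 = 1/√15, and since u_n behaves like 1/(n+1) for large n, the partial sums S_N grow logarithmically with N.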
Correlatively, we get a well defined sequence h i of values of h that we will call the h-sequence. Since we have seen that Lagrangians characterize the outer automorphisms of the simple Lie algebras, using an h-sequence instead of a fixed value of h is fundamental : otherwise, there would be small jumps in the Lagrangians that do not correspond to any true physical phenomenon and that, correlatively, would lead to the introduction of many corrective actions.
Let us emphasize furthermore that the structure of the time of a physical observer arises from this monodromic approach, which corresponds to a topological braided quasi-bialgebra usually denoted A_{sl_2,t} that is not coassociative, but only quasi-coassociative, and has the same product and coproduct as the non-deformed universal enveloping algebra. It thus has the same algebraic left-right symmetry as this latter algebra, which is a huge advantage, but thinking of the composition of durations as non-associative is quite difficult, and in any case not what physics currently does.
That is the reason why we have chosen to begin by working on Drinfeld-Jimbo algebras, which are coassociative. Furthermore, since by the Drinfeld-Kohno theorem, for any semi-simple Lie algebra g there exist a gauge transformation F and a C[[h]]-linear isomorphism α that link U_h(g) to (A_{g,t})^F, the representations of both quasi-bialgebras define the same categorial structure, which ensures the consistency of our whole construction, which began precisely with those categorial monoidal structures.
There is nevertheless a price to pay to go from the one to the other: we transform quasi-associativity into associativity, but we lose the algebraic left-right invariance of A_{g,t}. We will refer to this asymmetry as the asymmetry of the time of any physical observer.
From a physical point of view, it is the counterpart of having an associative law for durations, and any precise calculation should be made either on U_h(g), taking into account this asymmetry of time, or on A_{g,t}, but then taking into account the Drinfeld associator, which involves iterated integrals and the Riemann ζ function.
Nevertheless, in this introductory paper, we will compute the anomalous magnetic moment of the electron in the algebra U_h(sl_2) without compensating for the asymmetry of its proper time, and use the difference with experimental results to access directly an evaluation of this asymmetry.
Let us finally come back to the "ghost" space-time. It is easy to understand where the difference with the true space-time we have just described comes from: the variations of the specializations of h defined by the above h-sequence, which are necessary to have a continuous Lagrangian, are not taken into account, since all the KZ story is told on the same (initial) representation.
Unfortunately, contemporary physics seems to work exclusively on the "ghost" space-time from our point of view. This has tremendous consequences, since it prevents physics from accessing its unification, the understanding of the nature of gravitation, and the Mass Quantification Theory, as we will see below. Let us first come back to the Euler-Lagrange equations and their interpretation.
10. Recovering Euler-Lagrange equations
We have already given in our subsection 1.3 above the main ideas in order to understand the origin of Euler-Lagrange equations. So we only give here some complementary elements :
• We have seen that, in any given direction, the operator ∂/∂ż applied to the Lagrangian gives its free part, which corresponds to a scaling ratio in the corresponding direction. We are thus always working on multiples of 1 ⊗ 1. Since the Casimir operator, and correlatively the canonical 2-tensor, performs successively an action and its dual on each of the directions of the considered Lie algebra, those variations simply add up if one uses an orthogonal basis of the Lie algebra. But since the canonical 2-tensor t and the KZ equations are defined intrinsically, and therefore independently of the choice of any basis, the action of t makes it possible to extend our way of reasoning to all the directions to be considered.
• Remember that the universal enveloping algebra of the Lie algebra g of a simply connected Lie group G can be seen as the algebra D(G) of operators on C^∞(G) generated by the identity and all the left invariant vector fields on G. Let us apply this to the group SU(d) (or SU(r) over C). The elements of the universal enveloping algebra are now represented by partial derivative operators, which correspond to the usual expression of the right-hand side of the Euler-Lagrange equations.
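As a familiar illustration (the abelian case, recalled only to fix ideas): for the group R^n, the left invariant vector fields are the ∂/∂x_i, and the algebra they generate together with the identity is the algebra of constant-coefficient partial differential operators, which is exactly the universal enveloping algebra of the abelian Lie algebra R^n — the simplest instance of the identification used here.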
11. The unification of physics, the gravitation and a conjecture on dark energy
As for the preceding step, we have already given in subsection 1.3 the main aspects of this part, except where gravitation is concerned. Therefore, we only give here some complements and precisions.
• As far as the unification of physics is concerned, let us first notice that, despite the name we propose, ART is clearly a quantized theory. Let us emphasize that quantization in ART comes from the contravariance of the point of view functor, since it obliges us from the beginning to work on bialgebras whose primitive parts are well classified Lie algebras, and since it gives rise to the fundamental creation/annihilation operators above that sustain the whole construction. In particular, we did not introduce any physical hypothesis to get such a quantization.
On the other hand, we do not present here any formal link with the usual branches of quantum theory. But the uncertainty principle, for instance, is easy to deduce from the fact that a physical observer is only spectrally defined, despite the continuous view on itself obtained from KZ parallel transport. In the same way, the fact that exchanging algebraic right and left (i.e. changing to the linear but not algebraic dual in non-deformed algebras) exchanges (roughly) the time and energy directions should make it easy to derive the usual "first" quantization formulations that concern energy-momentum. We have also sketched the way to get the Dirac equation from the ART context. There are also many connections with QFT that we have quoted, but we did not try to formally achieve the construction of a way to derive QFT from ART.
• Far more interesting is the way we sketched in 1.3 to derive Einstein's equation from ART. The left-hand side of this equation is purely mathematical, and directly results from the choice made by Einstein to use a Lorentzian manifold in order to represent space-time. One could therefore say that deriving from quantum theory a tensor that can be identified with the usual Lagrangian achieves the unification of physics, since the right-hand side of Einstein's equation is considered as given.
In fact we have obtained a little more. Since the variations from some given point of our Lagrangians arise as a well defined KZ monodromy of the symmetric part of an algebra of automorphisms of the Cartan subalgebra that represents space-time in our context, and since the Ricci curvature tensor is its exact equivalent in a Lorentzian manifold context, both sides of Einstein's equation now have exactly the same mathematical status. Using, as Einstein did, the Euler-Lagrange equations to represent the physical universe in the right-hand side of his equation now appears as a useless detour: in accordance with Einstein's hope, the true nature of his equation is mathematical.
• As far as gravitation is concerned, let us first notice that the KZ monodromy that we have described in order to define space-time from the point of view of an observer has, from any representation V to another one, an action on su_n and thus a Lagrangian. As it comes from an sl_2-embedding effect, this Lagrangian has only a free part, and its square root may be seen as defining a scaling of the Cartan subalgebra itself. We have already analyzed this effect, which corresponds, from the definition of the KZ parallel transport, to the change of the intrinsic quadratic form associated with the change of the value of the Casimir coefficient. This leads directly to the computation of the Newton constant presented in section III.
• Coming back to Einstein's equation, the fact that gravitation arises from a passing-of-time phenomenon that in any case defines a Lagrangian leads to the conjecture that the right-hand side of Einstein's equation would not be modified by considering particles with an associated space-time of only two or three dimensions. But the left-hand side would be modified by the change of the unitary subtraction that gives the projection on the pure Ricci component in the space of Riemann curvature tensors. The law of decrease of gravitation would correlatively be slower (with a logarithmic potential in dimension 3, instead of 1/r in dimension four). This phenomenon is probably not a purely theoretical one. Indeed, as we will see in the MQT part, protons and neutrons, because they are of type g_2, which as the normal form of G_2 has rank 2, cannot be described by small balls in space but by small bars in their own space-time. Therefore, a concentration of them in one or two spatial dimensions, if they are rotating, seems conceivable and could generate a gravitational field equivalent to the existence of some "dark matter". This nevertheless remains for us only a conjecture.
• Finally, let us recall the conjecture in 1.3 about "dark energy".
If one admits that, for the aforesaid reasons, the weak interaction as we perceive it is only a "ghost effect" induced by some E_6-type interaction, there is necessarily a (heavy) Lagrangian that corresponds to it: a very good candidate for "dark energy". Furthermore, if such particles and interactions exist, their "passing of time" should generate a sequence of increasing representations that split very fast into smaller particles like the ones we are made of. This leads to an alternative scenario to the Big Bang for the cosmological preradiative era... which we did not explore at all.
12. The Mass Quantification Theory
The way to compute the mass and the charge of particles is to compute the free part of the associated Lagrangians and, after taking the square root that has a positive mass (for particles), to compare with the corresponding characteristics of the electron, which we have chosen as a reference.
In order to compute this Lagrangian, it is first necessary to position rigorously the principal embedding of sl_2(C) that defines the direction of the proper time of the considered particle: this is in fact a simple consequence of the way the creation/annihilation operator that generates the passing of time is positioned relative to F_h(G), as explained above.
The positioning of the real and imaginary parts of the sl_2 embedding then has to be done either, if it is unambiguous, by identification of an unfolded direction, or by making the embedding in the compact form and consistently applying the complex involution that defines the considered real form.
The most difficult part is to compute how the passing of the time of the considered particle has modified its free Lagrangian, since we know neither this evolution nor the number N of successive "instants" that correspond to this particle in its present proper time.
Fortunately, there is one exception: the electron, which does not correspond to a simple Lie algebra, but only to a semi-simple one. Therefore, the ratio between its electromagnetic energy (which is orthogonal to its passing of time, and thus does not change) and the one that corresponds to its mass, which evolves as explained above, may be used to compute the present value of N. This gives the apparent age of the universe (in its ghost past) and permits, as aforesaid, the computation of the Newton constant. As aforesaid, we will also use, in this first paper, the anomalous magnetic moment of the electron in order to evaluate the impact of the asymmetry of time that should be taken into account for the other computations here. By contrast with α, which characterizes our cosmological position, the anomalous moment of the electron should be computed from Drinfeld's associator in further developments of the theory.
The idea for the other particles is then to use this value of N to determine the impact of their passing of time. To this aim, one has first to determine the relative speed of the passing of time of the considered type of particle, by computing the above principal embedding and projecting on it the root that corresponds to the adjoint representation of the considered algebra (or its image through a Weyl transformation if the flavor is not the standard one), and then to compute its relative lag or advance by reference to the speed of the adjoint representation of sl_2 and, after a small conversion, relative to the one of the electron.
Once this lag or advance ratio is determined, one has to compute the impact of this lag (for instance) relative to the electron. To this aim, the idea is to consider the point of view of the electron looking at the past, which means passing to the dual and correlatively representing the annihilation operator from its point of view.
Using Yangians gives an easy way to make this computation, since motions in time can be represented by translations on the evaluation representations. Now the derivation D canonically associated with the Yangian J (which is obtained by applying successively the cocommutator and the Lie bracket on the non-deformed algebra) corresponds simply to D = d/du. Applying this twice, since we move in the adjoint algebra, we get the equation that has to be applied to any scaling to describe its evolution, namely:
d²K/du² = K.
Since the initial value is 1, and since forgetting the asymmetry of time implies a symmetric law, we get what we propose to refer to as the cosh law: the coefficient to be applied in order to compute the actual value of a particle is (in the lag case) equal to the cosh of the delay. Appropriate transformations have to be made in case of advance.
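As a minimal check of this law: the general solution of d²K/du² = K is K(u) = A e^u + B e^{-u}; the initial value K(0) = 1 together with the symmetry condition K(−u) = K(u) (the "forgotten" asymmetry) gives A = B = 1/2, hence K(u) = cosh(u), so a delay δ contributes a multiplicative coefficient cosh(δ).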
It is theoretically possible to improve those computations by adding a small part that corresponds to the asymmetry of time explained above, but we do not present any such improvement here.
Although everything should be representable and thus computable, all those difficulties lead us to present only some instances of such computations in this first paper. This is the purpose of our third section.
It is now time to go into further detail on the more theoretical aspects of the ART. After those twelve steps, we come back to the foundations.
The mathematical foundations of the Absolute Relativity Theory
Our objective in the first section was to outline the main physical issues and perspectives of the ART. Therefore we only sketched its mathematical foundations in the first steps of our overview in subsection 1.4.
Our purpose now is to give some complementary mathematical description of those foundations. In order to facilitate an independent reading of this section, we do not refer to elements given in the first section, but present here a complete set of algebraic definitions and justifications.
The algebraic approach
Since we do not allow ourselves to postulate the existence of any space-time, the first difficulty we have to overcome is to find a way to begin with our theory.
According to Einstein's intuition quoted as epigraph, algebra soon appears as the most natural way to undertake our quest.
We then begin to determine the algebraic conditions physics has to respect to be a consistent and falsifiable science : with Kant's terminology in mind, we propose to call them the transcendental conditions. Those transcendental conditions are algebraic and define a very precise framework, independent of any observer and of any observation, that also allows us to represent the perception process itself as the action of a specific contravariant functor.
This general algebraic framework has to be connected with "the reality" we perceive and that we suggest referring to as the set of "true physical phenomena".
With this aim in view, we will state two and only two physical principles:
• The first one, the absolute relativity principle, will appear as the ultimate extension of the above relativity principle.
• The second one, the absolute equivalence principle, will appear as the ultimate extension of Einstein's equivalence principle, and, in some way, as a reciprocal principle of the first one.
The preuniverse and the point of view functor
In order to avoid beginning with any specific restriction to the theory we are trying to build, it is natural to base the foundations of our construction on the very general algebraic language of graphs. As explained in Appendix I, Popper's falsifiability conditions then immediately lead us to focus on graphs that respect the axioms of categories: from now on, we will assume that the general framework of the theory we intend to build is a category that we will call the preuniverse ℵ. It will appear as a category of representation functors, namely functors between two one-object categories, each of them equipped with morphisms that characterize its own internal structure. The morphisms in ℵ will thus be morphisms of functors, i.e. natural transformations.
It is then natural to define in each category the point of view of an object a on an object b as the set of arrows Hom(b, a) ; this defines the point of view (contravariant) functor at a, Hom(., a). Now, a functor F is said to be representable and represented by a if there exists some object a in the target category of F such that F is equivalent to Hom(., a). Thus, the point of view functor Hom a (., a) will be the functor that sends a to the set of arrows that defines its algebraic morphisms as a one object category, while Hom ℵ (., ρ a ) will be the functor that sends the representation functor ρ a in ℵ to the set of natural transformations with the functor ρ a as target.
We sketch in Appendix I a way to show that some purely logical conditions on the theory we try to build, such as being consistent in the sense of categories and having "enough" representable functors, lead us to specify the preuniverse ℵ a little more: it has first to be a category of A-modules (or "representations of A") for different algebras A. Then, since each of the subcategories ℵ_A associated with a specific algebra A also has to be a monoidal category for some coherent coproduct ∆_A and counit ε_A, each (A, ∆_A, ε_A) has in fact to be a quasi-bialgebra. For technical reasons, we will restrict ourselves to those quasi-bialgebras that are equipped with an antipode operator, i.e. quasi-Hopf algebras.
More precisely, in order to begin with the simplest of those algebras and to use the restricted Hopf duality, we will focus on the category of all the finite-dimensional A_h(g)-modules for all the formal (h-adic topological) deformations A_h(g) of the universal enveloping algebras U(g) of all the finite-dimensional semi-simple complex Lie algebras g. We will mainly use the well-known Drinfeld-Jimbo Quantum Universal Enveloping algebras (QUE) U_h(g).
A first approach on particles and space-time

Classically, those modules may be split into a direct sum of irreducible submodules indexed by maximal weights that belong to the dual h* of any chosen Cartan subalgebra h of g. This leads to the idea that those irreducible modules could correspond to elementary particles, which arise, up to isomorphism, as indexed by the maximal weights included in h*.
This would mean that space-time does not preexist, but is defined as a set of indexes by each type of particle, and in particular that its dimension is defined by the rank r of the Cartan subalgebra that corresponds to their type. Now, since the lattice of maximal weights is linearly isomorphic to Z^r, any choice of a Cartan subalgebra induces a breaking of symmetry in the complex algebra h* that defines a specific embedding of R in C. Since everything in a theory is done up to an isomorphism, we will assume in all that follows that such a choice is made for the biggest algebra we will have to consider, namely the algebra E_6, and, except when otherwise specified, we will always consider that the Cartan subalgebra associated with any subalgebra of E_6 is chosen as the restriction of the one of E_6. We will apply the same rule for the linear form that defines the ordering of the weights.
Since the above definition of space-time induces a specific embedding of R in C, we will have to work on real forms of Lie algebras. Each of those algebras has a real rank d that defines what we suggest calling its unfolded dimension, and a complex rank r, the difference r − d corresponding therefore to folded dimensions.
Therefore, making the definition of space-time dependent on each type of particle, instead of postulating particles in an a priori given space-time independent of them, should enable us to understand why our space-time appears as four-dimensional.
Let us define an elementary particle of type g at the position V as the irreducible representation functor of U_h(g) on the one-object category V, itself generally characterized by a maximal weight λ_V.
For instance, the electron will be defined as the realified of the algebra sl_2(C), which has a two-dimensional real Cartan subalgebra. Then the space-time of the electron will appear as only one-dimensional, since the compact root corresponds to a folded dimension. We will see that our usual time is in fact the time of the electron, and that it arises algebraically equipped with a structure that is both spectral and continuous, far from the copy of R that usually represents time in contemporary physics. Any bigger algebra g with real rank d should in the same way generate a d-dimensional space-time.
Concerning real forms of Lie algebras, let us recall that each of them is defined by an invariant conjugate-linear involution of the corresponding complex algebra. The compact real form g_c is unique up to isomorphism and may be used to define all the other real forms of g: namely, each one can be obtained by applying an invariant linear complex involutive isomorphism θ (which can clearly not be a real one) to the compact form. Furthermore, each real form is characterized by the way θ acts on its primitive roots, which can be summarized in a Satake diagram.
Finally, let us notice that algebraically replacing the usual left action by the right one defines a second way to go from a set equipped with an algebraic structure to a one-object category: we will say that those two ways are left-right conjugates, and refer to the algebraic operation that exchanges algebraic left and right actions as the left-right conjugation, which will play a very special role in the absolute relativity theory. Except when otherwise specified, the left convention will always be used.
This conjugation, which corresponds to the change to opposite algebras (symbol op) or coalgebras (symbol cop), generally corresponds to the switch from one representation to the linear dual representation.
The dual category ℵ*

The left-right conjugation that leads to the use of the linear duality has to be clearly distinguished from the algebraic operation of "reversing the arrows" that is involved by the contravariance of the point of view functor: in the latter case, everything is replaced by its dual, which means, in the case of the above bialgebras, that coproducts are turned into products and (with the restricted dual notion) products into coproducts.
As usual, we will denote the restricted dual of the algebra U_h(g) by F_h(G). We will work on the dual algebra to describe and explore the properties of the coalgebra and comodule structures of the bialgebras and bimodules involved in our construction. F_h(G) is classically a quantization of the space F(G) of complex-valued functions on the simply connected Lie group G, and Tannaka-Krein duality allows us to see this group as the dual of F(G), seeing U_h(g) as the quantum group associated with G.
The contravariance of the point of view functor will make F_h(G) play a very central part. Its representations are indexed by elements of the Cartan subalgebra, the dual of which is seen above as a support of space-time.
Furthermore, some of those representations are defined on the space of formal polynomials in one variable, a space which is isomorphic to the space of the representations of the algebra su_{1,1}, the non-compact component of the realified of sl_2(C), the algebra we already guessed characterizes the electron. Now this algebra has a one-dimensional non-compact Cartan subalgebra, a natural support for a one-dimensional space-time, i.e. a time. The change to the dual induced by the contravariance of the point of view functor leads us to associate the electron with this polynomial representation which, as we have seen in section I, appears to be the origin of the passing of time. Our time therefore corresponds to the time of electrons (and thus of electromagnetism).
Finally, having renounced postulating any specific structure for space-time leads us to represent it with a far more sophisticated structure than any we could have postulated a priori.
The two physical principles of the ART
At this point, the preuniverse is nevertheless a pure mathematical construction that needs to be linked to physics through the representation of the phenomena we perceive.
The refutation of the egocentric postulate gives here its first dividend : since the structure of the preuniverse is not breached by any specific point of view, one can guess that up to the application of the point of view functor, any natural transformation in the preuniverse should correspond to a physical phenomenon, and, reciprocally, that any physical phenomenon should correspond to a natural transformation in this preuniverse.
The above point of view functor would then associate any g type particle at the position V with all the true physical phenomena that have an action on it, which corresponds to a natural representation of the usual concept of "the perceived physical phenomena". Finally, physical laws should be defined as families of natural transformations defined in some universal way.
Those two reciprocal ideas are expressed respectively by the absolute relativity principle and by the absolute equivalence principle.
The absolute relativity principle

Let us notice first that the contemporary way to express the old idea of "formal invariance of physical laws under a change of observers" is to say that such a change has to be defined by a natural transformation (the adjoint action of the Galileo or Poincaré groups for instance). Now, the (classical) relativity principle defines a family of natural transformations that link together Galilean observers, which effectively induces a true physical phenomenon: the conservation of energy-momentum. In the same way, Einstein's relativity principle induces the equivalence of Lorentzian observers, which also implies a true physical phenomenon: this is Einstein's famous equivalence between energy and mass.
From those two examples one sees that the first of the above guesses generalizes the idea that the compatibility of a natural transformation (like a change of observer) with the theory is the sign of a true physical phenomenon: we therefore refer to it as the absolute relativity principle (ARP): any natural transformation in the theory represents a true physical phenomenon.
The absolute equivalence principle

With his equivalence principle, Einstein went in the reciprocal direction by inferring, from the numerical identity of the gravitational mass and the inertial one, that gravitation, as a true physical phenomenon, can be cancelled by extending the category of observers to accelerated ones. This means that gravitation may also be represented by natural transformations in this extended category.
Mathematically, this assertion has a direct consequence on the structure of space-time, which has to be a differential manifold equipped with a connection (necessarily Lorentzian in order to respect special relativity locally).
This equivalence principle, which "cancels" a physical phenomenon by applying a natural transformation that embeds the category of Lorentzian observers in a bigger one (or equivalently the category of affine spaces in the bigger one of affine manifolds), will be extended as the absolute equivalence principle (AEP): any true physical phenomenon may be represented by a natural transformation of the theory.
The structure of the absolute relativity theory

Basically, taken together, the ARP and the AEP split the representation of the true physical phenomena into two parts:
1. the description and classification of the representation functors and natural transformations that are the objects and morphisms of the preuniverse ℵ,
2. the analysis of the impact of the point of view functor on those objects and morphisms, which leads to work simultaneously on ℵ and ℵ * .
By classifying all those representation functors and natural transformations between them, and combining this with the application of the point of view functor, we should thus get a natural classification of the physical particles and physical phenomena, which corresponds to the unification of all particles and interactions. As we have seen in section I, this physical unification comes with the theoretical one of the two main branches of contemporary physics.
As mentioned above, this also leads to the first Mass Quantification Theory (MQT) that should explain the existence, the nomenclature and the characteristics of the particles already identified by physics, and also predict the characteristics of some new ones.
Cutting the Gordian knot of space-time opens thus a new road to new landscapes we are now ready to explore.
So, let us now begin with the first of the two steps above.
A first classification of true physical phenomena
2.2.1 The fundamental structure of the preuniverse ℵ
The preuniverse ℵ may be seen as "encapsulating" three levels of "arrows" and universal constructions.
The first level encapsulates the basic transcendental conditions presented in Appendix I and leads us to consider R-modules for some ring R.
The second level, encapsulated in the different representation functors ρ_{A,V}, defines monoidal structures (see also Appendix I) on the category of those modules, and makes it possible to classify them according to the different (complex and real) Lie algebras g that correspond to the primitive parts of the different Hopf algebras U_h(g). This level leads to the representation of particles and correlatively to the MQT. In that sense, the existence of a well-defined nomenclature of particles is the consequence of the fact that, as explained in Appendix I, we want the theory to have "enough" objects to ensure that the composed functors defined by successive applications of the point of view functor are representable.
The third level corresponds to morphisms between those representation functors, i.e. to natural transformations that will define, as aforesaid, true natural phenomena that appear to any particle at any position through the point of view functor.
Any natural transformation between two representation functors may thus be seen as reflecting a (generally partial) formal coincidence between two such structures of successive arrows and universal constructions that "exist" independently of any other point of view functor.
As true physical phenomena are usually characterized by the fact that they do not depend on any point of view, this explains why we have chosen to associate them with natural transformations, in accordance with the above relativity and equivalence principles.
Objects and morphisms in the preuniverse ℵ
To pursue our road, we now have to go into more detail in the description of the preuniverse.
Splitting the category ℵ into subcategories ℵ_g

First, if we specify different simple complex Lie algebras g, we get different and distinct subcategories of functors that we will denote ℵ_g. Furthermore, since we have to work on the real forms of those algebras, the same classification in subcategories applies to those real algebras.
In particular, the classification of elementary particles by their type arises this way : the leptons correspond (up to relativistic effects that define their flavor as we will see later) to the real forms of sl 2 (C), the neutrinos to the compact form and the electrons to the split one, while usual quarks will appear as representations of a specific real form of so 8 (C).
Clearly, this way of seeing the particles should enable us to predict the existence of so far undetected particles. For instance, since the compact form of a Lie algebra is compatible with any other real form, the compact form of the algebra so_8 should also correspond to physical particles that should come in threes like the quarks and that we propose to call dark quarks. We correlatively propose to call dark neutrons the particles associated with their folding into g_2.
Those particles are indeed good candidates to be elements of the unknown dark matter contemporary physics is looking for.
We will also be able to predict the existence of heavier particles (associated with the algebra E_6 and its folding into F_4) that we see only through the splitting and ghost effects explained below, induced by the fact that we see them from the point of view of the lighter particles we are made of. One could conjecture that those heavy particles are at the origin of dark energy, according to a process we will try to describe at the end of our second section.
The natural transformations between g-type representation functors: different physical effects
Let us now classify the morphisms in any of the above subcategories of functors ℵ_g (with A = U_h(g) in order to simplify notation). Since each element of ℵ_g is a functor ρ_{A,V} between the one-object categories A and V, a natural transformation may be:
• either of type I, induced by an intertwining operator. Each one is defined by a morphism of representations φ : V → W that connects the two representation functors ρ_{A,V} and ρ_{A,W} by the relation: φ ∘ ρ_{A,V} = ρ_{A,W}.
As a natural transformation, each one is a morphism of ℵ_g.
Furthermore, if V and W are irreducible, Schur's lemma implies V ≈ W and that φ is a scaling. An important consequence of this fact is that, since two representations of su_{1,1} at different positions are never isomorphic, two distinct representations (or instants) of su_{1,1} cannot be related by any natural transformation, which explains why, as is well known, the past instants do not exist from the point of view of the present one.
Thus, if we restrict to irreducible representations, type I physical phenomena can only be automorphisms of any given representation.
From the universality of U(g) in the category of the representations of g, one may deduce that the elements of its center are the only ones (except trivial scalings) that induce a natural transformation on each representation functor. We will mainly be concerned with the Casimir operator C_h, which generates this center and defines the coefficients of the intrinsic duality on U_h(g)-modules, and which will appear as strongly connected to the passing of time and to gravitation.
• or of type II to type IV, induced by automorphisms of the Hopf algebra A = U_h(g). Since they have to preserve the primitive part of this algebra, those automorphisms are associated with automorphisms of the Lie algebra g. They can come from outer automorphisms of g (type II physical phenomena), or from inner automorphisms that, for a fixed choice of the Cartan subalgebra, are the product of an element of the Weyl group of g (type III physical phenomena) and a transformation that preserves the Weyl chamber (type IV physical phenomena or "generalized Lorentz transformations"). This classification first applies to complex algebras and then has to be refined by considering real forms of those algebras.
Let G = Aut(g) be the group of automorphisms of g: it is isomorphic to the adjoint Lie group associated with g, and may be obtained as a quotient of its universal covering, which is simply connected. Let G_0 = Int(g) = exp(ad g) be the group of inner automorphisms of g, which is also the connected component of the identity in G. Classically, the quotient group G/G_0 defines the group of the outer automorphisms of g. It is isomorphic to the group of automorphisms of the Dynkin diagram of g, and each of its elements corresponds to a connected component of Aut(g). By contrast with the Casimir operator, which acts differently on each representation functor, automorphisms of g act in the same way on all its representations and thus can only define "relativistic effects", namely effects that link one representation to another: they thus define true physical phenomena that connect the corresponding particles.
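For instance (standard facts about Dynkin diagrams, recalled here only as an illustration): this outer automorphism group is trivial for sl_2, is S_2 for sl_n (n ≥ 3), so_{2n} (n ≥ 5) and E_6, and is S_3 for so_8 = D_4, the latter case being the triality used below.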
Let us examine the main characteristics of each of those three types of natural transformations :
1. The existence of outer automorphisms (type II) implies that with each g-module V are necessarily associated as many non-connected copies of itself as there are such automorphisms. Furthermore, the mathematical operation of "folding" of those copies, which defines a specific subalgebra g′ of g, induces on the same space V a g′-module structure.
The first example of a type II physical phenomenon comes from the triality defined on the algebra so_8, which links together, through natural transformations at any position V, the three isomorphic fundamental representations of so_8 (standard, spin_+ and spin_−). Such representations appear to be good candidates for the three "colored" quarks, while the natural transformations that connect those functors should correspond to the strong interaction that ensures the confinement of those quarks into nucleons, which should therefore correspond to representations of the algebra g_2.
There should be another couple of type II physical phenomena that are still unexplored. Namely, S_2-type automorphisms of Dynkin diagrams induce another type of confinement, which we propose to refer to as anticonfinement, since it links for instance electrons and neutrinos, but in a repulsive way.
Those automorphisms also imply the existence of new particles, such as the aforementioned dark quarks that come from the compact form of D_4 representations folded into the above-mentioned dark neutrons.
Furthermore, we will see that there is also a type II physical phenomenon that concerns E 6 -particles with an F 4 folding we do not perceive directly.
Finally, since we will also be concerned with real representations, let us notice that, since it is an involutive automorphism, any S_2-type automorphism of a Dynkin diagram induces a modification of the real form of the algebra defined by the corresponding compact form. This is the reason why, for instance, we mentioned above the anticonfinement of electrons and neutrinos and not of particles of the same type. We will refer to those changes of types of particles as type IIr physical phenomena. In practice, those phenomena correspond to the arrows in the Satake diagram that defines each real form.
2. Type III physical phenomena, like type II, correspond to automorphisms of the Lie algebra g, but instead of outer automorphisms, we now consider elements of the inner automorphism group G_0. By contrast with type II, their action preserves each connected component of the adjoint group Aut(g).
It is known that the group of inner automorphisms of any (complex) Lie algebra g is generated by the three-dimensional algebras g_{α_i} in the adjoint representation, where α_i runs over all the roots of a root system of g, and that the Weyl group W acts simply transitively on the systems of simple roots, the W-orbit of the simple roots giving all the roots of g.
Our theory of the passing of time implies that the definition of an observer is in one-to-one correspondence with the choice of a system of simple roots that defines an sl_2-principal embedding. Thus, from the point of view of an observer, another observer associated with the same root system appears as the image of the first one through an element of W. This group therefore defines a new set of natural transformations that, although non-kinematical, depend on the choice of an observer, and thus correspond to purely relativistic effects.
We will see that the so-called "flavour" of leptons and quarks corresponds to type III physical phenomena associated with the group G 2 that defines the protons and the neutrons which we are made of.
As for type III physical phenomena, the change from complex numbers to real ones obliges us to be careful, since the composition of the conjugate linear form that defines a real Lie algebra with an involutive transformation changes the real form itself. Furthermore, the central Weyl symmetry always corresponds to the change to the dual representation, a fact with many consequences in the computation of the actual characteristics of the concerned particles in the framework of the MQT presented in section III. When necessary, we will thus specify type III physical phenomena by adding the suffix r (for real).
3. Finally, since the W -orbit of any element in the dual of the Cartan subalgebra intersects a closed Weyl chamber exactly once, we may always, by an appropriate type III transformation, send the sl 2 -principal embedding that defines, as we will see below, the proper time of an observer into the Weyl chamber associated with another one. This operation eliminates the "flavor" of the second particle relatively to the first.
We may then apply a change of basis that sends the set of primitive roots that defines the first sl 2 -principal embedding onto the corresponding one defined by the second principal embedding. Since the two systems are normalized by the Killing form of the algebra g, this transformation has to be an orthogonal one.
We will say that the corresponding physical phenomenon is a purely kinematical effect, or a relative speed effect, or else, since the first instance of such a phenomenon is clearly the Lorentz group, a generalized Lorentzian effect. Those natural transformations that arise in this way will be said to be type IV natural transformations or generalized Lorentz transformations.
We see here how restrictive the usual notions of relativity were : the kinematical effects that were the only ones concerned by those usual notions are now reduced to residual effects that play their part only after all the other ones.
The natural transformations induced by the inclusion functor between algebras
Having listed the nomenclature of natural transformations for a fixed algebra g, i.e. the morphisms of the subcategory ℵ g , we have now to study the morphisms of ℵ that arise from the inclusion of a subalgebra g ′ in a bigger algebra g. Namely, if g ′ is a subalgebra of g, the inclusion morphism associates functorially with any representation functor of ℵ g a representation functor of ℵ g ′ that splits by a natural transformation of ℵ g ′ into irreducible elements according to well known "branching rules". Furthermore, since this process is functorial, any natural transformation in ℵ g induces a natural transformation between the corresponding elements of ℵ g ′ .
This means that a type g elementary particle at the position V , seen at the same position V through the point of view functor Hom ℵ g ′ (., V ), decomposes into type g ′ particles, and that physical phenomena relating type g particles are seen through the point of view functor Hom ℵ g ′ (., ρ g ′ ,V ) as physical phenomena relating type g ′ particles.
We will refer to those natural transformations as type V and type VI respectively. We will also say that a type V natural transformation induces a splitting effect, and that a type VI one induces a ghost effect.
Those two effects express that the elements and morphisms of a category associated with an algebra g do appear from the point of view of the objects of a category associated with a subalgebra g ′ , not under their true form but as g ′ -type objects linked by specific relations that only make sense in ℵ g ′ . By contrast type g ′ objects do not exist from the point of view of a g-type object since g does not respect any of its subalgebras.
We will thus also refer to the above two effects as the weakest law. It implies, in opposition to all the contemporary theories, that the "nature" and the characteristics of an object as perceived from the point of view of another object depend on the nature and characteristics of this second object.
So, having cut the Gordian knot of the space-time gives us the freedom to explore the algebraic relations between the objects that are usually simply set down as existing in a preexisting space-time. In other words, by restricting the scope of the relativity principle to kinematics, modern physics has implicitly postulated that the appearance and the nature of the "objects", and correlatively of the space-time itself, are independent of any observation and of any observer.
With the splitting and ghost effects, the absolute relativity theory frees physics from this constraint.
The consequences of this change can not be overestimated. Let us review some of them.
Some consequences of the weakest law
We have seen above that the "passing of time" corresponds to a change of position in the dual of the Cartan subalgebra of a copy of the realified of sl 2 (C) that we will associate with the electrons. Thus in each real Lie algebra such that the principal embedding of sl 2 (C) may be done with this real form, we will get a copy of this "passing of time" consistent with the one of the electrons. This remark will help us to determine what are the real forms of the Lie algebras that can correspond to particles we perceive as able to share our proper time.
Another important consequence of the weakest law concerns nucleons. Protons (associated with G 2 ) do exist from the point of view of electrons (D 2 ⊂ G 2 ) that orbit them, but, from the weakest law, no electron does exist from the point of view of a proton ; therefore, the electrons around the nucleus may move from one to another since no one exists from the point of view of any nucleon. By contrast, each of the quarks (so 8 or D 4 ) included in a proton or a neutron (G 2 ) does exist from the point of view of this one since G 2 is a subalgebra of D 4 ; therefore, the three quarks belonging to a specific nucleon cannot be mixed up with other ones inside the nucleus.
Another very important consequence of the weakest law concerns the real form 19 E II of the algebra E 6 , the highest-dimensional non compact real form that contains in its primitive roots a copy of the realified of sl 2 (C) that represents electrons. Its real rank is 4, and, since its Dynkin diagram has an S 2 -symmetry, it has a singular subalgebra defined by folding, namely the split form F I of the algebra F 4 , the real rank of which is also 4.
Now, the complex algebra F 4 contains the direct sum algebra A 1 ⊕ G 2 . This means that any representation of E 6 (or F 4 ) may be seen as a representation of A 1 ⊕ G 2 , a direct sum that corresponds to the couple lepton-nucleon. Thus, the atoms we are made of will appear as representations of this direct sum algebra, and instead of perceiving E 6 -bosons as they are, and consequently as massless like any boson, we perceive them as split in an A 1 type (anti-) particle and a G 2 type particle 20 .
Considered separately, G 2 -bosons, since they are not the true ones for the description of the weak interaction, may naturally appear as having a mass exactly in the same way as a 1-dimensional observer with our time direction would perceive a photon as a very short-lived massive particle. This approach gives a way to have an order of magnitude of the masses of W-and Z-bosons, and therefore to explain their surprisingly high values.
At a smaller scale, the inclusion of algebras D 4 ⊂ B 4 ⊂ F 4 means that E 6 -bosons may also be seen as D 4 -representations, which means that one can also interpret their action at the level of quarks.
Let us note furthermore that, if, as above guessed, the weak interaction as a true physical phenomenon corresponds to the type II isomorphism associated with E 6 , seen with some ghost effects from our point of view, the space-time from the point of view of the observers we are, made of both neutrons and protons, containing both electrons and neutrinos, should correspond to the Cartan subalgebras of two non connected copies of those E 6 algebras that for some phenomena can be grasped globally through their folding in F 4 .
But, since the algebra F 4 is fundamentally asymmetric, the space-time associated with it in the above sense does not preserve the left-right symmetry (symmetry P) respected by the space-time associated with D 4 . Thus, the symmetry we assign to the space-time does not come only from its true structure, but also from ghost effects induced by the symmetries of the particles we are made of.
In other words, if space-time were the one associated with D 4 particles, all the physical phenomena would respect the symmetry P. The fact that this is not experimentally the case justifies the above guess (that may also be seen as a mathematical consequence of the absolute equivalence principle applied to E 6 algebras).
Finally, let us notice that this dependence of the perceived objects on the perceiving observer extends also to kinematical effects.
For instance, we have already mentioned that neutrinos have a null real rank and thus should have, as we will see, a null mass. But, from the point of view of an electron, they do appear with a "relativistic" mass that is in fact inherited from the non associativity of the structure of the time of the electrons (or nucleons) we are made of. This "relativistic" mass allows us to explain oscillations with other flavored neutrinos, therefore avoiding the contradictions that would arise from the existence of massive particles travelling at the speed of light.
All those examples are hopefully sufficient to give a first idea of the various physical phenomena explained by the weakest law : we are thus now ready to go to the next step by examining how those phenomena arise through the point of view functor.
The impact of the point of view functor
Any embedding of g ′ into g induces also a surjective homomorphism from the dual algebra F h (G ′ ) into the dual algebra F h (G) that allows us to associate with any representation of the first a representation of the second.
Let us note that in the non quantized case, such dual algebras F (G) are algebras of functions, and the functor is a trivial restriction functor. Furthermore, since all those algebras are commutative, their irreducible representations are one dimensional and thus isomorphic. Only their own dual is non trivial and corresponds to the group G itself through the Tannaka-Krein duality 21 .
In the quantized case, there is also a family of equivalent one dimensional representations parametrized by the elements of the Cartan subalgebra (or the maximal torus in the real compact case), and the weakest law in that case defines what we propose to call a projection effect : any element of the Cartan subalgebra (or maximal torus) of g can be projected onto the one of g ′ (that is included into), and defines so a one dimensional representation of F h (G). This means that the process can not be reversed, which justifies the name of weakest law we proposed before.
But, the most important fact comes from the case g ′ = sl 2 with g any Lie algebra, which defines a family of infinite dimensional representations of F h (G) on l 2 (N) with only a finite number of terms different from zero, itself isomorphic to the polynomial algebra C[X] (which is isomorphic to the ring of representations of U(sl 2 )) 22 .
In the real compact case, those representations are parametrized by the elements w of the Weyl group W g and those t of the maximal torus T g of g c23 . We will denote by ρ w,t any such representation. Those representations will play a fundamental role to explain "the passing of time" and to describe the structure of space-time, as we will see in the next subsections.
We will call this fundamental consequence of the weakest law the structural law. It will appear as the mathematical foundation of the fundamental theorem of dynamics.
Recovering space-time and Lagrangians
The paradox of space-time
Since we have just seen that space-time appears only as a by-product of the particles, in a type-by-type and relativistic way, we have now to explain why its use in physics has been so fruitful for more than four centuries.
This apparent paradox comes from two mathematical facts :
• The first one is well known : we have just seen that space-time associated with g type particles should be defined from its Cartan subalgebra h and its dual (the maximal torus T of the group G c and its dual in the real compact case). This Cartan subalgebra is commutative and may thus be seen as the product C r of r copies of C ((R/Z) r in the real compact case) with r = dim(h). Since it is a solvable Lie algebra, all its representations are one-dimensional and its only interesting structure is the one of a vector space.
But, despite its triviality, a Cartan subalgebra h encodes almost everything that concerns a complex simple Lie algebra g. Namely, the Weyl group that completely characterizes g may be defined from its restriction to h. The root system and the embeddings of subalgebras can also (up to isomorphisms) be defined on h and its dual h * . Furthermore formal characters that characterize all the representations are also defined on the Cartan subalgebra. In the real compact case, the character functions, defined on the maximal torus T, generate a topologically dense subset of the class functions, and every element of G c is conjugate to exactly | W | elements of T. The system of primitive and fundamental roots, whose permutations or embeddings in the root system define the above type II and type III natural transformations, clearly corresponds to properties of the Cartan subalgebra (or of the maximal torus).
Turning to the real non compact algebras, it is also well known that all the real forms are characterized by a Satake diagram, itself also connected to the diagram of primitive roots included in h * . Finally, all the finite dimensional representations of U(g) and its deformations U h (g) are indexed by the weight lattice that is included in h * , while, as aforesaid, the representations of the dual quantized algebra F h (G) are indexed 24 by the product of the Weyl group and the Cartan subalgebra h itself (or the maximal torus T in the real compact case).
It is thus understandable that, by putting everything "by hand" on the vector spaces that correspond to Cartan subalgebras, physics has been able to induce directly from experimental facts many properties that characterize Cartan subalgebras of specific simple Lie algebras, although, from a mathematical point of view, those properties could have been directly deduced from the analysis of each Lie algebra.
The usual inductive method of physics has also been made easier by the following, more subtle, mathematical fact that stands at the origin of Lagrangians and of their universal application.
• The Lie algebras sl n+1 of type A n with (n > 1) that act canonically on vector spaces of dimension n + 1 without any specific structure have a unique property 25 . Namely, from the Jacobi identity, the tensor product sl n+1 ⊗ sl n+1 contains, as the product g ⊗ g for any other Lie algebra g, a copy of the adjoint representation. But in the A n case, it contains also a second copy of sl n+1 that belongs to the symmetric part of sl n+1 ⊗ sl n+1 . The projection π on this second copy is given by : π(x ⊗ y) = xy + yx -(2/(n + 1)) trace(xy) · Id (a one-line check of this formula is given right after this list). Any element of Aut(sl n+1 ) 26 corresponds to an element of sl n+1 ⊗ sl n+1 through the Killing form, and, therefore, has a projection through π onto a symmetric representation of sl n+1 . This projection is clearly null for the image of the identity map that corresponds to the Casimir operator, but it can be non trivial in other cases. Thus, we may associate a bilinear symmetric form on sl r * , that is a representation of sl r , with any Cartan subalgebra of rank r with r > 2. This corresponds to the Lagrangian that we have defined and used in our first section.
Let us notice finally that for algebras with r = 2, the above component of sl r ⊗ sl r is null, and the Lagrangian is reduced to its free part as we shall see below for the electromagnetic interaction (that will appear as a type II interaction that changes the compact real form so 4 to the realified so 1,3 of sl 2 (C)).
The structure of space-time
Let us first come back to space-time as it now appears, associated with each type of Lie algebra and therefore with each type of particles.
Let us keep the above notations and define the space-time associated with g (or g-space-time) as the r -dimensional vector space defined as above by the dual h * of a Cartan subalgebra h supposed to have been chosen at the beginning of the construction.
The first two properties of this space-time are that it is canonically equipped with the bilinear symmetric form defined by the restriction to h of the Killing form kill g , and that it contains the lattice of maximal weights that indexes all the finite dimensional representations of U h (g).
Since this lattice is defined as a copy of (Z) r , it is clearly not invariant through e iφ changes of phase : space-time arises first as a discrete real mathematical object. This leads us to work on real Lie algebras without any assumption of an "external" breaking of symmetry in C. Now 27 , any real form is defined by an invariant conjugate linear form σ. Two linear forms σ and τ are said to be compatible if they commute, which means that the composite map θ = στ , that is a linear one, is involutive. Any real form is compatible with a real compact form. We may thus begin by working on the compact real form g c of g, and then go to any other real form by applying an invariant involutive complex transformation 28 .
Let us emphasize that, since such a transformation corresponds to a type II or type III natural transformation, the particles associated with two compatible real forms do coexist in space-time, and are related by a true physical phenomenon.
For instance, neutrinos do coexist with electrons and positrons 29 as associated respectively with the compact form so 4 and the realified form so 1,3 of sl 2 (C). Furthermore, as associated with the same compact form, neutrinos and antineutrinos should be the same particle, but we will see that they differ slightly due to dynamical ghost effects that generate their mass 30 as we will explain in the next subsection. In the same way, dark quarks should coexist with usual ones corresponding with the compact form so c 8 and be folded in dark neutrons associated with the compact algebra g c 2 . On the other hand, if two non compact forms are associated respectively to the involutions θ and θ ′ that do not commute, which is the general case, the corresponding τ and τ ′ do not commute and the two real forms are not compatible : the corresponding particles can not coexist, and therefore, in general only one real form of each type of algebra can coexist with the compact form. This remark has been used extensively in our section II, since we will have to find only one non compact real form for each type of particle 31 .
We have now to analyze how g c -space-time is seen through the point of view functor. Going to the corresponding analysis for other algebras will then be easy by applying the transpose of the above involution, which is legitimate since, by tensoring by C, we can take everything on the complex field, and then apply the above complex involution. Now, beginning with the su 2 case, it is known 32 that the representations of F ε (SU 2 ) are parametrized by the elements t of the maximal torus, with t ∈ C, |t| = 1, and are either one-dimensional τ t or infinite dimensional π t given by, for linear forms a, b, c, d arranged as the 2 × 2 matrix with rows (a, b) and (c, d) in the usual presentation :
τ t (a) = t ; τ t (b) = 0 ; τ t (c) = 0 ; τ t (d) = t -1 ; π t (a)(e k ) = (1 -ε -2k ) 1/2 e k-1 ; π t (b)(e k ) = -ε -k-1 t -1 e k ; π t (c)(e k ) = ε -k t e k ; π t (d)(e k ) = (1 -ε -2k-2 ) 1/2 e k+1 ,
with the e k belonging to the formal one variable real polynomial algebra.
The first one may be connected with zero dimensional symplectic leaves of the Poisson Lie group SU 2 , while the second corresponds to the two-dimensional ones (associated with the opposite face of the sphere S 2 , and therefore to the application of the Weyl symmetry).
Finally we see that, in that case, the representations of U ε (su 2 ) are parametrized by a lattice in the dual of the maximal torus, and that representations of the dual Hopf algebra F ε (SU 2 ) are parametrized by the maximal torus itself. Furthermore, if, by putting X = (x + 1/x), we identify the above polynomial algebra with the algebra of characters, and correlatively of representations, of U ε (su 2 ), we see that successive applications of the diagonal elements of the dual Hopf algebra induce a move towards representations of increasing degrees, which we will refer to as a growth of representations and that may be seen as corresponding to successive applications of the point of view functor : this will be the origin of the passing of time we will describe in the next subsection. Reciprocally, a diagonal element of the universal enveloping algebra generates a one-parameter group that induces a continuous move on the representations of F ε (SU 2 ) as parametrized by t.
We have thus a perfectly reciprocal situation. It is well-known that particles mysteriously have a double nature, corpuscular and undulatory ; the structure of the time that arises canonically as we just have seen shows that in some sense, time itself has a very precise double structure, spectral and undulatory, but there is nothing mysterious in this situation : the one corresponds to the Quantum Universal Algebra ; the second to its (restricted) dual as it appears after application of the point of view functor.
32 See for instance [V.C.-A.P.], page 437.
Such a structure that arises canonically in our context was probably difficult a priori to postulate, although it is not so far from many recent approaches.
As mentioned before, this situation may be generalized to any compact algebra g c by using the principal embedding of su 2 in the Weyl chamber of g c , and then composing with applications of the Weyl group : this gives the aforesaid ρ w,t representations 33 .
We therefore have exactly the same situation for any real compact algebra g c as the one we have just described for su 2 , except that we have now an r -dimensional torus, and that there is no longer one single copy of the algebra of representations, but as many as there are elements of the Weyl group W , each copy being connected to the others by a type III transformation.
Applied to g 2 (or so 8 ), this situation gives rise to the "flavors" of particles: their proper time may stand, up to a generalized Lorentz transformation, in any of the copies of the Weyl chamber of g 2 that corresponds to the nucleons we are made of. Furthermore, since generalized Lorentz transformations can partially cover two distinct copies of the Weyl chamber of g, oscillations of "flavors" can arise.
The structure of space-time is finally totally defined by the characteristics of the algebra it is generated by : it nevertheless always has a quite sophisticated fibered continuous structure combined with a discrete one, each one of those two components acting on the second.
The Mass Quantification Theory
As announced at the twelfth step of section 1.4, the general methodology that we will use will take as a reference for our time the time of the electron. Then, we will do the calculations in this time frame considered as embedded in the algebra that characterizes the type of particle chosen, along the direction defined by the principal embedding of sl 2 , sometimes composed with the appropriate transformation of the Weyl group of the Cartan subalgebra.
The principal embedding and the Weyl group are indeed the two elements that define at each point of the compact representation associated with the particle considered the creation / annihilation operators which generate the time of this particle. The MQT therefore implies a preliminary step that will make more precise our vision of the electron.
It will be the first point of this section ; the second one will be the calculation of the Newton "constant", and the third point the application of the general guidelines defined in the previous sections, from a frame tied to an electron as an observer, to some examples that will illustrate the possibilities offered by the MQT.
The computations given as examples below, by no means exhaustive, have the main goal of showing that the ART provides numerous numerical results that can be checked by measurements, and that the MQT can appear as a new branch of physics. The authors recognize that these calculations have to be refined and extensively completed for other types of particles.
Calculations for the electron
The electron is defined from the realification of the complex algebra sl 2 (C). The tensorial product by the adjoint representation will amount to a displacement of two units in the dual of the Cartan subalgebra. On the other hand, it is also, on the adjoint representation of SU 2 (which is like the sphere SO 3 ), a rotation of angle π on the maximal torus.
A necessary condition of coherence for the fact that the KZ-monodromy that defines the passing of time is not changed by the rotation on the torus on which the representations of F h (SL 2 (C)) are indexed, is that the quadratic form which connects the Cartan subalgebra to its dual be such that to 2 on the dual corresponds π on the torus.
The time unit for the time of the electron corresponds therefore to 4 units on the dual (this corresponds to two successive monodromies in the adjoint algebra, each one of them inverting the algebra orientation because of the flip operator) or to a translation of 4 in terms of the Yangians.
Thus a creation cycle of one unit of time for the electron is like a 2 π rotation on the maximal torus.
The involution that generates the real form of sl 2 (C) from the compact form of SO 4 (C) being also the transformation that exchanges algebraically the left with the right, one period of time of the electron will then correspond to 2 π on the pure imaginary direction. Now, as we have done before, let us represent the Cartan subalgebra in the 0 and 1 directions (or the real and imaginary ones) associated with the space-time of the electron, which corresponds to su 1,1 . The dimensions 2 and 3 will correspond to the compact part, and therefore will be associated with its charge. The "hyperbolic" rotation in su 1,1 is the image of the rotation in su 2 before the application of the involution which transforms the compact form into the real one.
The ratio between these two rotations is therefore a characteristic of the relative energies of these two components (electromagnetism and mass). It is thus given by the fine structure constant α. We just showed that, because of our normalisation, we have to use instead α ′ = α/(2π), which characterizes the ratio between the energy of the electron that comes from its mass and the electromagnetic energy.
This modelling allows several calculations. The figure before the application of the Weyl transformation contains a right-left symmetry if we read the product e 0 ∧ e 1 as the dual of e 2 ∧ e 3 (Hodge- * operator).
1. The rotation that happens on directions 0 and 1 before the application of the involution induces an equivalent rotation on directions 2 and 3 because of the Hodge- * operator.
2. The rotation in the 2 and 3 directions that corresponds to the electron charge is thus changed by a precession coming from the previously described rotation that comes from the passing of time of the electron. As a first approximation, it is then p = 1 -1/(1 + α ′ ).
To be more precise, the measure to consider on the maximal torus has to take into account the fact that this torus is in reality a main circle of a sphere of radius 1 that stands for SO 3 . The norm to take into account at the second order is therefore 1 -(1/3) α ′ 2 (the second order expansion of the Riemannian form in the neighborhood of a point), and thus we obtain p = α ′ -α ′ 2 -(1/3) α ′ 2 .
If we compare this result with the coefficients that come from the QFT, usually expressed as a function of α ′′ = α/π and that are (1/2) α ′′ -0.3285 α ′′ 2 , we see that the difference at the second order is, taking into account the measured value α ′ = 0.00116140981411, (1/3 -0.3285) × α ′′ 2 ≃ 6.6057 × 10 -10 .
The QFT includes higher components that come either from the electromagnetic part or from potential other particles. We did not try in this first paper to represent these phenomena. Roughly, our first approximation gives p calc ≃ 0.001159611317, to be compared to the measured value p exp ≃ 0.001159652.
In our calculation we did not take into account the asymmetry of time which comes from the passage from the KZ-algebra to the DZ-algebra. We can guess that at least a part of the difference comes from this approximation.
3. On the other hand, the fact that the left-right conjugation exchanges the directions 0 and 1 with the directions 2 and 3 implies that the precession p is the right equivalent of α, the fine structure constant. Each term is calculated by supposing the other fixed. We will then introduce the coefficient ψ = α ′ /p to represent the impact of the left conjugation on the electron representation.
4. We will calculate now the position N of the electron in its own mobile frame. By reasoning in the "ghost time", we consider that the electron in the past is like it is now. So by getting N we will get the age of the universe.
We have seen that a move of two units in the dual of the Cartan subalgebra is equivalent, because of the Casimir coefficient, to a variation of 1/√(C 2 (n)) in the subalgebra itself (cf. the previous sections I and II). This reasoning applies to the electron and its associated bosons (that is to say to all the successive values of n), and therefore we have to consider that the variation of the quadratic form on the Cartan subalgebra is given by the sum Σ N = ∑ N k=1 1/√((k + 1) 2 -1).
This length is measured in the basic algebra, and thus in the dual algebra of the one the time belongs to. Since the energy is given by ∂L/∂ ż with L the above defined Lagrangian, it is merely proportional to the quantity ż in the dual of time. It is thus natural to define its inverse as the energy associated to the position N.
Since even and odd representations correspond to distinct group representations, and thus to distinct quantum groups, it will be more convenient to apply the above construction separately to even and odd representations, which will correspond to distinct particles. One then gets two distinct summations :
S N = ∑ N k=1 2/√((2k -1 + 1) 2 -1) and T N = ∑ N k=1 2/√((2k + 1) 2 -1).
A small computation shows that for N > 10 6 , the sum Σ N = S N + T N may be approximated (with a less than 10 -11 precision) by : Σ N = ln(2N) + 0.08396412352.
From the modelling of the electron in the MQT previously described we have seen that the fine structure constant α corresponds to the ratio between the electromagnetic energy (which does not vary with time) and the energy corresponding to the inertial mass (which decreases with the above law) of the electron. It can thus be used to compute 2N with the above formula.
More precisely, we will get from hyperbolic trigonometric considerations 34 that we must have Σ N = (2/π) √(1/α 2 -1), and thus 2N = exp((2/π) √(1/α 2 -1) -0.08396412352).
We state again α ′ = α/(2π), and the measured value α ′ ≃ 0.00116140981411 gives 2N = 7.08432804709 × 10 37 .
To come back to the international system of units, we use the numerical value of the speed of light c = 299792458 m/s to define the unit of length and the Planck constant h = 6.6260755 × 10 -34 kg.m 2 /s to define the unit of mass. We get h/c 2 = 7.37250327 × 10 -51 kg.s. With the mass of the electron m e = 9.109384 × 10 -31 kg, this leads to the period of the electron T e = 8.09355159 × 10 -21 s, which defines the apparent age (or "ghost" age) of the universe from our point of view as (2N)T e = 5.73373745 × 10 17 s ≈ 18.18 × 10 9 years 35 .
The Newton constant calculation
The ART makes gravitation a consequence of the passing of time. It should then be possible to calculate the value of the Newton "constant" (remember that it appears as such from the present of an observer looking at his "ghost" past) from the previously calculated value of N.
The variation of the measurement unit in the direction of time corresponds to the component of the trace operator projected on the identity (cf. the coefficient τ 2 ). A given variation in 2 dimensions will therefore have a spatial impact twice the one in 4 dimensions. This is equivalent to replacing N by N/2. So to find the electron we have to change the coupling coefficient by two. Instead of calculating the Newton constant by itself, we will instead compare two dimensionless terms that should be equal if the intrinsic coupling coefficient is indeed 1 as the ART predicts. Such a check is obviously the same as the calculation of the constant, depending only on the choice of units.
The right term is given by the effects of the two coefficients which, after being multiplied, make the volume form change (here the volume form is a surface form for the space-time of the electron). If we take into account the coefficient 2 aforementioned, we obtain, with α ≃ 2π × 0.00116140981411 and the previously computed age of the universe N ≃ 7.08432804709 × 10 37 :
ω t /ω 0 = 2 × α/√(1 -α 2 ) × 1/N = 2.06019465385 × 10 -40
35 This age is of the same order of magnitude as the one actually recognized by contemporary physics. It is nevertheless slightly higher, but the absolute relativity theory leads to a scenario for the cosmological beginning of the universe that goes far more slowly than the usual ones.
But the most significant confirmation of this value will be the computation below of the Newton constant that results from the absolute relativity theory that is based on this value and fits perfectly with experimental values.
On the other hand, if m e is the mass of the electron, and if we take for G the value measured as of today, that is 6.67259 × 10 -11 in the SI unit system, and if we do not forget that the initial mass to take into account is modified by the duality, that is (ψα ′ ) -1 instead of α ′ -1 , with α ′ = α/(2π), the volume form should be modified by the dimensionless coefficient defined by :
ω t /ω 0 = G ((ψα ′ ) -1 m) 2 /(c h), which is (6.67259 × 10 -11 × (1/(1.0015156511 × 0.00116140981411) × 9.109384 × 10 -31 ) 2 )/(299792458 × 6.626176 × 10 -34 ) ≃ 2.06016657202 × 10 -40
The ratio between the two coefficients is therefore 2.06019465385 × 10 -40 / 2.06016657202 × 10 -40 ≃ 1.00001363085, so the difference is less than 0.002 %, which corresponds to the margin of error on the numerical coefficients used and to some evolutions of the measured physical data, and also comes from the asymmetry of time ... We have therefore confirmed that
G = α 3 c h / ((2πψm) 2 N √(1 -α 2 ))
The quality and precision of this result confirms at the same time the generalised Einstein equation we have established (and especially its dependence on the dimension of the space-time associated with the particle) and also our calculation of the age of the universe, especially because it involves the exponential of a number around 100 : the precision of around 0.001 % that we have obtained shows that the precision before the application of this exponential was around 10 -7 .
Mass calculations of the proton and the neutron
Having done the calculations in the case of the electron, now we only have to follow the guidelines given in the previous parts of this text. The proton and neutron will then be of type G 2 , and we will look first at the particle without an electric charge.
We will work in the normal (split) real form g of the complex Lie algebra g 2 of the complex Lie group G 2 .
We use the usual euclidean representation of the roots of g 2 , with the longest one having a length of √ 2. In that case, the first simple root is α 1 = (-1/3, 2/3, -1/3) and the second simple root α 2 = (1, -1, 0). The weight of the adjoint representation is the longest fundamental weight
α 6 = 3 α 1 + 2 α 2 = (1, 0, -1)
The principal embedding of sl 2 (C) in g 2 has the following representation f = 18 α 1 + 10 α2 = (4, 2, -6) and a norm of 56.
The cosine of the principal embedding angle a with the adjoint representation root is then cos a = (α 6 , f )/√((α 6 , α 6 )(f, f )) = 5/(2 √ 7).
Because we want to compare the time of the electron with that of a particle whose time is lagged, we have to work in the dual of its Cartan subalgebra, that is the one obtained after the Weyl symmetry is applied. It is the algebra of the coroots, in which the fundamental embedding has the representation 5 α 1 + 3 α 2 (the half sum of the positive roots) after the appropriate exchange and renormalization of the two simple roots. In that case, with the same convention for the norm (the longest root has a √ 2 length), its norm is 56/3, that is a third of the norm of the initial principal embedding vector before the transformation.
Therefore, the Dynkin index of the principal embedding being 28, we have to divide it by 3 when we work in the dual because of the time lag. By taking into account the standard representation of the electron we begin with, we also have to correct it by the ratio of the Casimir coefficients determining the passage from the adjoint to the standard representation in sl 2 (C), that is 8/3.
Finally, the ratio of the intrinsic forms we have to consider is i = 28 × (1/3) × (8/3) = 224/9.
Now we can compute the lag in the direction of the principal embedding of the time of a particle of type G 2 compared to the one of the electron. This "lag" will allow us to get the cosh t that has to be used, and then we will get the mass of the particle considered.
In the mobile frame of the electron, the change in the coefficient is therefore given by t = (1 -cos a) × (2 √(1 -α 2 )/α) × (2/π) × (1/i) ≃ 0.386181207486.
The appearance of the coefficient 2/π was explained earlier. Taking into account the precession, we find for this particle, which we call the neutron, the following mass m n , if m e = 9.10938215(45) × 10 -31 kg is the mass of the electron : m n = 2 cosh t × m e /(α ′ (1 + α)) ≃ 1.67488836 × 10 -27 kg, to be compared with the measured mass m ′ n = 1.67492729(28) × 10 -27 kg, giving a ratio of m ′ n /m n ≃ 1.00002, i.e. about 0.002 % of error.
If we calculate now on the other face of SO 4 (C), using a rotation of π/2 compared to the initial direction of the electron, we should see a particle with the appearance of an electric charge opposite to the one of the electron, and for the calculation we have to use the coefficient ψ ≃ 1.0015156511 defined and calculated earlier, replacing α by α/ψ everywhere it appears. So the new "lag" t ′ becomes
t ′ = (1 -cos a) × (2 ψ √(1 -(α/ψ) 2 )/α) × (2/π) × (1/i) ≃ 0.38676658329
and the mass of the particle that we can call the proton is m p = 2 cosh t ′ × m e /(ψ α ′ (1 + α/ψ)) ≃ 1.672744 × 10 -27 kg, which has to be compared with the measured value of m ′ p = 1.672621637(83) × 10 -27 kg, giving a ratio of m p /m ′ p ≃ 1.00007, i.e. about 0.007 % of error.
Mass calculations for the electrons τ and µ
Here, the methodology is the same as before, but simpler because we do not have to compare the metric on the embedding with the one of g 2 . We just move the copy of sl 2 in g 2 by the Weyl rotation of the g 2 root diagram of angle π/3. The first application of this rotation makes us cross the first zone (angle π/4) of the root diagram of so 4 , which implies using the dual, and this has the following consequences :
1. Use the cosh -1 instead of the cosh to calculate the "lag" t.
2. Use the factor 1/2 instead of 2 in front of the cosh t. Indeed, the π/2 rotation of the roots of the realified of sl 2 needed to pull back the Weyl chamber of sl 2 into a position that contains the µ direction is equivalent to the application of the Weyl symmetry that corresponds to the change to the dual.
To determine the position on the embedding of the electron, we just have to project on it with the π/3 angle. Therefore,
t = cos(π/3) × cosh -1 (1/α) ≃ 2.8066887241
Then we use the electromagnetic mass of the electron, in GeV/c 2 units, that is m e ≃ 0.440023544386 GeV/c 2 , to get the mass m µ of the electron µ : m µ = m e × 2/cosh t ≃ 0.105931407278 GeV/c 2 , to compare with the measured mass of 0.1056 GeV/c 2 .
For the electron τ , after another application of the Weyl rotation of angle π/3 of the root diagram of so 4 , we again cross the second zone of it by going through the angle π/2, which implies we go again to the dual and have to multiply by (1/2) cosh t instead of dividing by it, and use also the ψ coefficient. The "lag" becomes t ′ , and the mass m τ of the electron τ is m τ = m e × (1/2) cosh t ′ ψ ≃ 1.82146 GeV/c 2 , to compare with the measured value of 1.784 GeV/c 2 .
4 Appendix I : On the foundations of the Absolute Relativity Theory
The transcendental conditions referred to in the first section can be expressed in a formal mathematical way by using the language of categories. Since it is not the main purpose of the theory, we only briefly outline here the framework of such a mathematical formulation, which has been useful to make the main choices that led to the ART.
From graphs to categories
Representing the objects and morphisms used by a theory as points and arrows of a graph seems to be the most general algebraic way to begin with a theoretical construction.
It is then natural, as mentioned in section one, to define the point of view of an object a on an object b as the arrows that have b as domain (or source) and a as codomain (or target). We will assume that any so defined point of view is a set.
If the objects of the graph are distinct, applying this operation on all the objects of the graph generates sets of arrows that are not intersecting, which forbids making any comparisons between the different points of view of those objects. It is thus impossible to refute the theory by exhibiting contradictions between the different points of view represented, which could lead to solipsistic approaches.
Furthermore, there could be isolated objects that have no connection with anyone, including themselves ; such objects do not have any point of view and do not belong to the point of view of anyone. Thus, their withdrawal would not change the theory, which therefore is not well defined.
By contrast, if we assume that the graph is equipped with a law of composition of arrows, there begins to be a possibility to make tests of consistency. Therefore, requiring the composition law to be associative is a necessary and sufficient condition to make consistency tests on both objects and morphisms.
We will also assume that there is an identity arrow attached to each object, which means that each object has a non empty point of view on itself and that one of the corresponding arrows acts as an identity for the operation of composition of arrows. Those conditions are those that define a metacategory. For technical reasons, we will assume that this metacategory is in fact a category and that this category is small enough to see each object as an element of a "big enough" set 36 . We will call this category, which contains the objects and arrows used to describe the theory, the preuniverse, and we will denote it by ℵ.
The above operation of taking the point of view becomes a contravariant functor that we will call the point of view functor. According to the Yoneda Lemma, the preuniverse may be embedded in the category of point of view functors by the operation that associates the functor hom(., a) with any object a, and the corresponding natural transformation with the arrows between any two objects a and b. The Yoneda Lemma implies that one can hope to access the preuniverse through the category of all the point of view functors with the corresponding natural transformations.
A functor F in a category C will be said to be representable if there exists some object a belonging to C such that hom C (., a) is naturally equivalent to F .
The point of view functor is contravariant which means that it is a covariant functor from the opposite category that is obtained by reversing all the arrows.
The next condition we will require for the theory we are beginning to build is that it has "enough" objects to make the usual composed functors representable : this will mean that the theory is required to be able to represent by an object the usual composed functors, in a way we will now make precise.
From categories to additive categories and R -modules
We follow here [S.M.L.] pp.198-201 and 209.
Since we want to work on sets of arrows coming to a point, it is natural to require that the product of two hom-sets hom(b, a) and hom(c, a) defines a new object d that does not depend on a, which is, as above, a condition for authorizing comparisons between points of view. This corresponds to the existence of a coproduct associated with any pair of objects. In the same way, by considering the opposite category, we see that the category needs also to contain a product of any two objects.
One may ask also that there is an object the point of view of which contains all the objects of the category : otherwise one can formally add it to the category. Imposing the same condition on the opposite category leads to postulate the existence in the category of an initial object. In order to have some invariance under the "taking the opposite" operation, one postulates that those two objects coincide, which means that the category has a zero element.
Let us add two other technical conditions, namely :
-every arrow in the category has a kernel and a cokernel ; -every monic arrow is a kernel, and every epi is a cokernel.
In an informal language, the first of those conditions is again a condition requiring the category to have enough objects to make the point of view functor precise enough to allow representing those fundamental (abstract) characteristics of arrows. The second one asserts that the category also has enough arrows in order to make possible the reciprocal way.
The key point is that those conditions are sufficient to ensure that the considered category is an Abelian category.
Finally, an embedding theorem, referred to as the Lubkin-Heron-Freyd-Mitchell theorem in the above reference, allows us to consider the category as a category of R-modules for some suitable ring R.
We are therefore now working on such a category. For the sake of simplicity, we restrict ourselves here to categories of R-modules that are also complex vector spaces.
4.3 From additive categories to monoidal categories of modules over quasi-Hopf algebras.
Since we access everything through the above point of view functor, one can ask that there should also be objects in the category that represent (as an appropriate adjoint) the successive applications of this functor. This leads to consider that we have also to be able to define tensor products of the R-modules we are now working on. This means that R has to be a quasi-bialgebra. Since we want furthermore to have the possibility to exchange left and right, we need an antipode operator.
This leads to the idea of restricting ourselves to the cases where R is a quasi-Hopf algebra.
26 Considered as the tensorial product sl n+1 ⊗ sl * n+1 .
27 We follow here [A.L.O.-E.B.V.], page 133.
since, if three objects a, b and c are linked by arrows ρ c,b from c to b and ρ b,a from b to a, there is a necessary relation between the points of view of a and b, namely : the composed arrow ρ b,a • ρ c,b has to belong to the set hom(c, a) that represents the point of view of a on c. This gives a possibility to test the consistency of the points of view of the objects a and b on the object c, but does not allow any consistency test of those points of view on arrows. We have thus to consider a fourth object d and an arrow ρ d,c . From the point of view of b, the arrow ρ d,c associates an arrow belonging to hom(c, b) with any arrow ρ d,b . The same applies to the point of view of a on the same arrow ρ d,c . If we restrict ourselves to the case where the arrows from the point of view of a are defined through b, it is now easy to express the consistency condition, that is : ρ d,a = ρ d,b • ρ b,a = (ρ d,c • ρ c,b ) • ρ b,a = ρ d,c • (ρ c,b • ρ b,a ) = ρ d,c • ρ c,a , which corresponds to the associativity of the composition law.
Albert Einstein, The Meaning of Relativity, sixth edition, Routledge Classics, London, 2003, p. 170.
In the same way, the recent loop quantum gravity theory (LQG) assigns a specific "loop" structure to space-time, but does not challenge either its a priori existence. Its first successes demonstrate how crucial for physics is the problem of space-time.
All those elements come from their equivalent in the complex Cartan subalgebra, and we will use systematically the equivalence between working on complex Lie algebras or on their compact real form.
5 See for instance [V.C.-A.P.], pp.374-391 for all notations and results concerning
Yangians. 6 In order to represent also the annihilation operator, one should work on affine
We will understand later on why there should not be white dark protons.
We use here the standard representation of the algebra in order to get a one unit progression. The same reasoning applied to the adjoint representation is more physical since it does not exchange at each step bosonic and fermionic representations. To use the square of X instead of X gives the transposition.
10 We give a complete expression of those representations in section II. References are in [V.C.-A.P.], pp. 435-439..
The choice of the principal embedding is the only consistent one for a given Lie algebra after choosing a Cartan subalgebra, a set of roots and an ordering of them (that defines the Weyl chamber). See for instance [A.L.O.-E.B.V.], pp. 193-203.
See for instance [C.K.] pp.451-455 for exact definitions, and then pp. 455-479 for the KZ equation and Kohno-Drinfeld theorem. We refer also to [V.C.-A.P.] pp. 537-550 for the connection with Yang-Baxter equation.
To be precise, we need to work with h a transcendental number in order to respect the h-adic structure of Drinfeld-Jimbo algebras. We do not address this question here since we can replace the above choice by arbitrarily close transcendental values, getting only a function arbitrarily close to a continuous function, which is enough for the computations we intend to propose.
See for instance [C.K.] page 460.
See for instance[S.H.], pp. 107-108.
See for instance [C.K.], pp. 368-371.
See for instance [A.L.O.-E.B.V.],, chapter 4, sections 1 to 4, pp.127-162.
We refer here to the true mathematical triality that corresponds to outer automorphisms of the so 8 algebra, and not to the one usually quoted in physics and associated to su 3 , that refers only to order 3 elements of the Weyl group of this algebra. See the next subsection for an explanation of the role of the algebra su 3 instead of so 8 in the usual interpretation of the strong interaction.
See for instance [A.L.O.-E.B.V.], table 4, pp. 229-231 for all those real form discussions.
Let us notice that the fact that su 2 (resp. su 1 ) is the real compact form of the 2-dimensional (resp. 1-dimensional) linear group that acts on any Cartan subalgebra of g 2 (resp. su 2 ) explains why the gauge invariance group associated with the weak interaction is usually represented by the product SU 1 × SU 2 .
Let us emphasize here that, the Lie algebra being defined on the tangent space at the identity element of a Lie group, the non quantized figure is not stable under the adjoint action of the group G on itself. Since the adjoint action defines natural transformations, beginning with the non quantized case would have led to global consistency difficulties. That is the reason why we have always considered the quantized case instead of beginning with the simpler non quantized one. See [V.C.-A.P.] pp. 182-183 for a way (due to Reshetikhin) that precisely builds the quantization from the adjoint action of the group through an appropriate use of the Baker-Campbell-Hausdorff formula.
See for instance [V.C.-A.P.] pp. 234-238 and pp. 435-439 for the compact case.
23 See for instance [V.C.-A.P.] pp. 433-439.
See for instance [V.C.-A.P.], Theorem 13.1.9, page 438.
This fact is easy to check directly by using plethysm. See also [V.C.-A.P.], p. 387 for a reference.
It is interesting to notice that there always exists on the compact form a Haar measure that authorizes convergent integral calculus. The pull back of this calculus on a non compact real form defines canonically a way to make convergent calculus on this non compact real form, which is generally itself equipped with a divergent invariant measure. This remark could explain in our context the success of the renormalization methods of QFT, but we did not go further in this direction.
29 This fact can be confirmed by a well known fact on Lie groups. The group SO 4 splits into two copies of the group SO 3 . Since the multiplication by i in C acts as an orthogonal rotation on the real plane, the Weyl symmetry may be seen as applying such a rotation on the real plane, which corresponds exactly to the Lie bracket [i, j] in the su 2 algebra. Applied on any two directions, this means that applying the above involution is equivalent to going from the (-) connection on the sphere, which is the usual one without torsion, to the (+) connection that has a torsion defined by the Lie bracket on the tangent space at the neutral element. The compact form then appears as the torsion-free manifold associated with this second one. There is another possibility to define the (+) connection by left-right conjugation, while the compact form is unique. See for details [S.H.], pp. 102-104.
30 This mass may be increased by type III transformations that induce mu and tau neutrinos.
31 This uniqueness of the non compact real form does not imply that the corresponding particle appears only under one form : indeed, as we have seen before, there can be relativistic type III effects that change the "flavor" of the particles, and also purely dynamical ones, as we will explain below.
This comes directly from the exchange of the directions 0 and 1 (that is, the direction of the real axis and the direction of the imaginary one) after we take into account the invariant quadratic form and the passage from a Euclidean form to a Lorentzian one.
This refers to the notion of universe as described for instance in [S.M.L.], page 12.
Acknowledgement
The authors are very grateful to Professor Sergiu Klainerman (Princeton University, Department of Mathematics) for many exchanges about the work presented in this paper. At various stages of the development of the theory, Jean-Marc Oury presented the evolutions of his ideas to Professor Sergiu Klainerman, and benefited from his questions as well as from his pointing out aspects of the theory that required further clarifications.
Antoine Detaille
A complete answer to the strong density problem in Sobolev spaces with values into compact manifolds
We consider the problem of strong density of smooth maps in the Sobolev space 𝑊 𝑠,𝑝 (𝑄 𝑚 ; 𝒩), where 0 < 𝑠 < +∞, 1 ≤ 𝑝 < +∞, 𝑄 𝑚 is the unit cube in ℝ 𝑚 , and 𝒩 is a smooth compact connected Riemannian manifold without boundary. Our main result fully answers the strong density problem in the whole range 0 < 𝑠 < +∞: the space 𝒞 ∞ (𝑄 𝑚 ; 𝒩) is dense in 𝑊 𝑠,𝑝 (𝑄 𝑚 ; 𝒩) if and only if 𝜋 [𝑠𝑝] (𝒩) = {0}. This completes the results of Bethuel (𝑠 = 1), Brezis and Mironescu (0 < 𝑠 < 1), and Bousquet, Ponce, and Van Schaftingen (𝑠 = 2, 3, . . . ). We also consider the case of more general domains 𝛺, in the setting studied by Hang and Lin when 𝑠 = 1.
Introduction
We address here the question of the density of smooth maps in Sobolev spaces 𝑊 𝑠,𝑝 (𝛺; 𝒩) of maps with values into a compact manifold 𝒩. Here and in the sequel, 1 ≤ 𝑝 < +∞ and 0 < 𝑠 < +∞. Recall the following well-known fundamental result in the theory of classical real-valued Sobolev spaces: if 𝛺 ⊂ ℝ 𝑚 is a sufficiently smooth open set, then 𝒞 ∞ (𝛺; ℝ) is dense in 𝑊 𝑠,𝑝 (𝛺; ℝ). The reader may consult, for instance, [START_REF] Brezis | Functional analysis, Sobolev spaces and partial differential equations[END_REF] or [START_REF] Willem | Functional analysis, Cornerstones[END_REF] for a proof in the case where 𝛺 is a smooth domain, or [START_REF] Adams | Sobolev spaces[END_REF] in the case where 𝛺 satisfies the weaker segment condition. Here,
𝒞 ∞ (𝛺) = {𝑢 |𝛺 : 𝑢 ∈ 𝒞 ∞ (ℝ 𝑚 )}.
More difficult is the analogue question of the density of smooth maps in Sobolev spaces when the target 𝒩 is a manifold. In what follows, we let 𝒩 be a smooth compact connected Riemannian manifold without boundary, isometrically embedded in ℝ 𝜈 . The latter assumption is not restrictive, since we may always find such an embedding provided that we choose 𝜈 ∈ ℕ sufficiently large; see [START_REF] Nash | 𝐶 1 isometric imbeddings[END_REF] and [START_REF]The imbedding problem for Riemannian manifolds[END_REF]. The natural analogue question is whether 𝒞 ∞ (𝛺; 𝒩) is dense in 𝑊 𝑠,𝑝 (𝛺; 𝒩). Here, the space 𝑊 𝑠,𝑝 (𝛺; 𝒩) is the set of all maps 𝑢 ∈ 𝑊 𝑠,𝑝 (𝛺; ℝ 𝜈 ) such that 𝑢(𝑥) ∈ 𝒩 for almost every 𝑥 ∈ 𝛺. Due to the presence of the manifold constraint, 𝑊 𝑠,𝑝 (𝛺; 𝒩) is in general not a vector space, but it is nevertheless a metric space endowed with the distance defined by 𝑑 𝑊 𝑠,𝑝 (𝛺) (𝑢, 𝑣) = ∥𝑢 -𝑣 ∥ 𝑊 𝑠,𝑝 (𝛺) .
The space 𝒞 ∞ (𝛺; 𝒩) is defined analogously as the set of all 𝒞 ∞ (𝛺; ℝ 𝜈 ) maps taking their values into 𝒩.
Note that the usual technique for proving density of smooth maps, relying on regularization by convolution, is not applicable in this context, since in general it does not preserve the constraint that the maps take their values into 𝒩. In the range 𝑠𝑝 ≥ 𝑚, however, density always holds. Indeed, in this range, Sobolev maps are continuous, or belong to the set VMO of functions with vanishing mean oscillation. One may therefore proceed as in the classical case, via regularization and nearest point projection onto 𝒩; see [START_REF]Boundary regularity and the Dirichlet problem for harmonic maps[END_REF] and [START_REF] Brezis | Degree theory and BMO. I. Compact manifolds without boundaries[END_REF].
The case 𝑠𝑝 < 𝑚 is way more delicate. Schoen and Uhlenbeck [START_REF]Boundary regularity and the Dirichlet problem for harmonic maps[END_REF] were the first to observe that density may fail in this range, due to the presence of topological obstructions. More precisely, they showed that the map 𝑢 : 𝔹 3 → 𝕊 2 defined by 𝑢(𝑥) = 𝑥 |𝑥| may not be approximated by smooth functions in 𝑊 1,2 (𝔹 3 ; 𝕊 2 ). This was subsequently generalized by Bethuel and Zheng [4, Theorem 2] and finally by Escobedo [START_REF] Escobedo | Some remarks on the density of regular mappings in Sobolev classes of 𝑆 𝑀 -valued functions[END_REF], leading to the conclusion that 𝒞 ∞ (𝛺; 𝒩) is never dense in 𝑊 𝑠,𝑝 (𝛺; 𝒩) when 𝜋 [𝑠𝑝] (𝒩) ≠ {0}.
Here, 𝜋 ℓ (𝒩) is the ℓ -th homotopy group of 𝒩, and [𝑠𝑝] denotes the integer part of 𝑠𝑝. For further use, note that the condition 𝜋 [𝑠𝑝] (𝒩) = {0} means that every continuous map 𝑓 : 𝕊 [𝑠𝑝] → 𝒩 may be extended to a continuous map 𝑔 : 𝔹 [𝑠𝑝]+1 → 𝒩.
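To illustrate the condition (a standard fact from algebraic topology, recalled only for orientation): for 𝒩 = 𝕊^𝑘 one has 𝜋_𝑘(𝕊^𝑘) ≅ ℤ ≠ {0}; in particular, the identity map of 𝕊^𝑘 admits no continuous extension 𝔹^{𝑘+1} → 𝕊^𝑘, since such an extension would provide a retraction of the ball onto its boundary. This is exactly the obstruction behind the example of Schoen and Uhlenbeck recalled above, where 𝑚 = 3, 𝑠 = 1, 𝑝 = 2, so that [𝑠𝑝] = 2 and 𝜋_2(𝕊^2) ≅ ℤ.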
A natural question is whether the condition 𝜋_{[𝑠𝑝]}(𝒩) = {0} is also sufficient for the density of 𝒞^∞(𝛺; 𝒩) in 𝑊^{𝑠,𝑝}(𝛺; 𝒩). A remarkable result of Bethuel [START_REF] Bethuel | The approximation problem for Sobolev maps between two manifolds[END_REF] asserts that, when 𝑠 = 1, 1 ≤ 𝑝 < 𝑚, and 𝛺 is a cube, the condition 𝜋_{[𝑠𝑝]}(𝒩) ≠ {0} is the only obstruction to strong density of 𝒞^∞(𝛺; 𝒩) in 𝑊^{1,𝑝}(𝛺; 𝒩). Bethuel's result has been extended to other values of 𝑠 and 𝑝, but not all (see below). Our first main result provides a complete generalization of Bethuel's result (covering all values of 𝑠 and 𝑝).
Theorem 1.1. If 𝑠𝑝 < 𝑚, then 𝒞^∞(𝑄^𝑚; 𝒩) is dense in 𝑊^{𝑠,𝑝}(𝑄^𝑚; 𝒩) if and only if 𝜋_{[𝑠𝑝]}(𝒩) = {0}.
The case of more general domains is more involved since the topology of the domain also comes into play, as it was first noticed by Hang and Lin [START_REF] Hang | Topology of Sobolev mappings[END_REF]. We investigate this question in Section 8, establishing counterparts of Theorem 1.1 when the domain is a smooth bounded open set, or even a smooth compact manifold of dimension 𝑚; see Theorems 8.3 and 8.4 below.
When 𝜋 [𝑠𝑝] (𝒩) ≠ {0}, density fails, and a natural question in this context is whether one can find a suitable substitute for the class 𝒞 ∞ (𝑄 𝑚 ; 𝒩). This is indeed the case provided that we replace smooth functions on 𝛺 by functions that are smooth on 𝛺 except on some singular set whose dimension depends on [𝑠𝑝]. This direction of research also originates in Bethuel's paper [START_REF] Bethuel | The approximation problem for Sobolev maps between two manifolds[END_REF]. (For subsequent results, see below.)
We define the class ℛ_𝑖(𝛺; 𝒩) as the set of maps 𝑢 : 𝛺 → 𝒩 which are smooth on 𝛺 \ 𝑇, where 𝑇 is a finite union of 𝑖-dimensional planes, and such that for every 𝑗 ∈ ℕ^* and 𝑥 ∈ 𝛺 \ 𝑇,
|𝐷^𝑗 𝑢(𝑥)| ≤ 𝐶 / dist(𝑥, 𝑇)^𝑗
for some constant 𝐶 > 0 depending on 𝑢 and 𝑗. We establish the density of the class ℛ_{𝑚−[𝑠𝑝]−1} in the full range 0 < 𝑠 < +∞ when 𝛺 = 𝑄^𝑚.
Theorem 1.2. If 𝑠𝑝 < 𝑚, then ℛ_{𝑚−[𝑠𝑝]−1}(𝑄^𝑚; 𝒩) is dense in 𝑊^{𝑠,𝑝}(𝑄^𝑚; 𝒩).
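For orientation, a standard example of a map in such a class (not needed in the sequel): for 𝒩 = 𝕊^{𝑚−1} and 𝑚 − 1 ≤ 𝑠𝑝 < 𝑚, so that 𝑚 − [𝑠𝑝] − 1 = 0, the map 𝑢(𝑥) = 𝑥/|𝑥| belongs to ℛ_0(𝑄^𝑚; 𝕊^{𝑚−1}): it is smooth outside the 0-dimensional set 𝑇 = {0}, and since it is homogeneous of degree 0, each derivative 𝐷^𝑗 𝑢 is homogeneous of degree −𝑗, whence
|𝐷^𝑗 𝑢(𝑥)| = |𝑥|^{−𝑗} |𝐷^𝑗 𝑢(𝑥/|𝑥|)| ≤ 𝐶_𝑗 / dist(𝑥, 𝑇)^𝑗.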
We mention that, in some sense, the class ℛ 𝑚-[𝑠𝑝]-1 (𝑄 𝑚 ; 𝒩) is the best dense class in 𝑊 𝑠,𝑝 (𝑄 𝑚 ; 𝒩) one can hope for. More precisely, the singular set cannot be taken of smaller dimension: the class ℛ 𝑖 (𝑄 𝑚 ; 𝒩) with 𝑖 < 𝑚 -[𝑠𝑝] -1 is never dense in 𝑊 𝑠,𝑝 (𝑄 𝑚 ; 𝒩) if 𝜋 [𝑠𝑝] (𝒩) ≠ {0}; see the discussion in [START_REF]Strong density for higher order Sobolev spaces into compact manifolds[END_REF].
In addition to its own importance, Theorem 1.2 is crucial in establishing Theorem 1.1. In Section 6, we explain how to deal with more general domains. We show that Theorem 1.2 has a valid counterpart on bounded domains 𝛺 that merely satisfy the segment condition or when the domain is instead a smooth compact manifold of dimension 𝑚; see Theorems 6.3 and 6.4 below.
Theorems 1.1 and 1.2 where known for some values of 𝑠 and 𝑝. As mentioned above, the case 𝑠 = 1 was established by Bethuel in his seminal paper [START_REF] Bethuel | The approximation problem for Sobolev maps between two manifolds[END_REF]. Progress was then made by Brezis and Mironescu [START_REF]Density in 𝑊 𝑠,𝑝 (𝛺; 𝑁)[END_REF] and by Bousquet, Ponce, and Van Schaftingen [START_REF]Strong density for higher order Sobolev spaces into compact manifolds[END_REF]. Using an ad hoc method based on homogeneous extension, Brezis and Mironescu were able to completely solve the case 0 < 𝑠 < 1. On the other hand, Bousquet, Ponce, and Van Schaftingen introduced several important tools that are tailored to higher order Sobolev spaces, which allowed them to give a full answer to the strong density problem in the case 𝑠 = 2, 3, . . . Their approach incorporates and adapts major concepts from Bethuel's proof for 𝑠 = 1 (among which the method of good and bad cubes, which lies at the core of the proof) and from Brezis and Li [START_REF] Brezis | Topology and Sobolev spaces[END_REF]. It turns out that this approach in [START_REF]Strong density for higher order Sobolev spaces into compact manifolds[END_REF] extends to noninteger values of 𝑠. This is the main contribution of our paper. Other special cases were obtained by Bethuel and Zheng [START_REF] Bethuel | Density of smooth functions between two manifolds in Sobolev spaces[END_REF], Escobedo [START_REF] Escobedo | Some remarks on the density of regular mappings in Sobolev classes of 𝑆 𝑀 -valued functions[END_REF], Hajłasz [START_REF] Hajłasz | Approximation of Sobolev mappings[END_REF], Bethuel [START_REF]Approximations in trace spaces defined between manifolds[END_REF], Rivière [START_REF] Rivière | Dense subsets of 𝐻 1/2 (𝑆 2 , 𝑆 1 )[END_REF], Bousquet [START_REF] Bousquet | Topological singularities in 𝑊 𝑠,𝑝 (𝑆 𝑁 , 𝑆 1 )[END_REF], Mucci [START_REF] Mucci | Strong density results in trace spaces of maps between manifolds[END_REF], and Bousquet, Ponce, and Van Schaftingen [START_REF] Bousquet | Density of smooth maps for fractional Sobolev spaces 𝑊 𝑠,𝑝 into ℓ simply connected manifolds when 𝑠 ⩾ 1[END_REF][START_REF]Strong approximation of fractional Sobolev maps[END_REF]. However, the case where 𝑠 > 1 is not an integer and 𝒩 is a general manifold is not covered by these contributions and is the main novelty of Theorem 1.1.
The method of homogeneous extension used in [START_REF]Density in 𝑊 𝑠,𝑝 (𝛺; 𝑁)[END_REF] to settle the case where 0 < 𝑠 < 1 was shown by the authors themselves not to work when 𝑠 = 1; see [START_REF]Density in 𝑊 𝑠,𝑝 (𝛺; 𝑁)[END_REF]Lemma 4.9]. On the contrary, as we explained above, the approach in [START_REF]Strong density for higher order Sobolev spaces into compact manifolds[END_REF] can be adapted to handle noninteger values of 𝑠. It is our goal here to explain in detail this adapted construction, introducing the modifications and new ideas that are required to make it suitable for the fractional order setting. This does not only prove the density of smooth maps in the remaining case where 𝑠 > 1 is not an integer, but it also provides a unified proof covering the full range 0 < 𝑠 < +∞, including the case 0 < 𝑠 < 1 originally treated via a different approach.
This paper is organized as follows. In Sections 3 to 5, we develop the tools that we need to prove Theorem 1.2, following the approach in [START_REF]Strong density for higher order Sobolev spaces into compact manifolds[END_REF] and extending the auxiliary results to the noninteger case. With these tools at hand, we proceed in Section 6 with the proof of Theorem 1.2. For the sake of simplicity, we first deal with the model case 𝛺 = 𝑄 𝑚 , before explaining how to handle more general domains. In Section 8, we present the proof of Theorem 1.1 and the counterpart of Theorem 1.1 in general domains. The proofs rely on an additional tool presented in Section 7.
Before delving into technicalities, we start by presenting in Section 2 a sketch of the proof of Theorems 1.2 and 1.1. Our objective is to give an overview of the general strategy of the proof while avoiding giving too much details at this stage. We hope that Section 2 will provide the reader with some intuition on the basic ideas behind the different tools that will be used, and show how each of them fits into the big picture of the proof, before we move to a more detailed presentation in the next sections. Section 2 also gathers the main useful definitions and basic auxiliary results used throughout the paper.
Definitions and sketch of the proof
From now on, we write 𝑠 = 𝑘 + 𝜎 with 𝑘 ∈ ℕ and 𝜎 ∈ [0, 1). We recall that the Sobolev space 𝑊^{𝑘,𝑝}(𝛺) is the set of all 𝑢 ∈ 𝐿^𝑝(𝛺) such that for every 𝑗 ∈ {1, . . . , 𝑘}, the weak derivative 𝐷^𝑗 𝑢 belongs to 𝐿^𝑝(𝛺). This space is endowed with the norm defined by
∥𝑢∥_{𝑊^{𝑘,𝑝}(𝛺)} = ∥𝑢∥_{𝐿^𝑝(𝛺)} + ∑_{𝑗=1}^{𝑘} ∥𝐷^𝑗 𝑢∥_{𝐿^𝑝(𝛺)}.
When 𝜎 ∈ (0, 1), the fractional Sobolev space 𝑊^{𝜎,𝑝}(𝛺) is the set of all measurable maps 𝑢 : 𝛺 → ℝ such that |𝑢|_{𝑊^{𝜎,𝑝}(𝛺)} < +∞, where the Gagliardo seminorm |•|_{𝑊^{𝜎,𝑝}(𝛺)} is defined by
|𝑢|_{𝑊^{𝜎,𝑝}(𝛺)} = ( ∫_𝛺 ∫_𝛺 |𝑢(𝑥) − 𝑢(𝑦)|^𝑝 / |𝑥 − 𝑦|^{𝑚+𝜎𝑝} d𝑥 d𝑦 )^{1/𝑝}.
It is endowed with the norm ∥𝑢∥ 𝑊 𝜎,𝑝 (𝛺) = ∥𝑢∥ 𝐿 𝑝 (𝛺) + |𝑢| 𝑊 𝜎,𝑝 (𝛺) .
When 𝜎 ∈ (0, 1) and 𝑘 ≥ 1, the Sobolev space 𝑊 𝑠,𝑝 (𝛺) is the set of all 𝑢 ∈ 𝑊 𝑘,𝑝 (𝛺) such that 𝐷 𝑘 𝑢 ∈ 𝑊 𝜎,𝑝 (𝛺), endowed with the norm ∥𝑢 ∥ 𝑊 𝑠,𝑝 (𝛺) = ∥𝑢∥ 𝑊 𝑘,𝑝 (𝛺) + |𝐷 𝑘 𝑢| 𝑊 𝜎,𝑝 (𝛺) .
When working specifically with the Gagliardo seminorm, we shall often consider implicitly that 𝜎 ≠ 0. We also mention that here, we consider 𝐿 𝑝 (𝛺) maps as measurable functions 𝑢 : 𝛺 → ℝ (and not classes of functions), i.e., we do not identify two maps that are almost everywhere equal. As we will see, this will be of importance in the course of Section 3.
Throughout the paper, we make intensive use of decompositions of domains into suitable families of cubes. For this purpose, we introduce a few notations. Given 𝜂 > 0 and 𝑎 ∈ ℝ 𝑚 , we denote by 𝑄 𝑚 𝜂 (𝑎) the cube of center 𝑎 and radius 𝜂 in ℝ 𝑚 , the radius of a cube being half of the length of its edges. When 𝑎 = 0, we abbreviate 𝑄 𝑚 𝜂 (0) = 𝑄 𝑚 𝜂 . We also abbreviate 𝑄 𝑚 1 = 𝑄 𝑚 . A cubication 𝒦 𝑚 𝜂 of radius 𝜂 > 0 is any subset of 𝑄 𝑚 𝜂 + 2𝜂ℤ 𝑚 . Given ℓ ∈ {0, . . . , 𝑚}, the ℓ -skeleton of 𝒦 𝑚 𝜂 is the set 𝒦 ℓ 𝜂 of all faces of dimension ℓ of all cubes in 𝒦 𝑚 𝜂 . A subskeleton of dimension ℓ of 𝒦 𝑚 𝜂 is any subset of 𝒦 ℓ 𝜂 . Given a skeleton 𝒮 ℓ , we denote by 𝑆 ℓ the union of all elements of 𝒮 ℓ , that is,
𝑆^ℓ = ⋃_{𝜎^ℓ ∈ 𝒮^ℓ} 𝜎^ℓ.
Given a skeleton 𝒮 ℓ , the dual skeleton of 𝒮 ℓ is the skeleton 𝒯 ℓ * of dimension ℓ * = 𝑚-ℓ -1 consisting in all cubes of the form 𝜎 ℓ * + 𝑎 -𝑥, where 𝜎 ℓ * ∈ 𝒮 ℓ * , 𝑎 is the center and 𝑥 a vertex of a cube of 𝒮 𝑚 with 𝑥 ∈ 𝜎 ℓ * . The dimension ℓ * is the largest possible so that 𝑆 ℓ ∩ 𝑇 ℓ * = ∅. Here,
𝑇^{ℓ*} = ⋃_{𝜎^{ℓ*} ∈ 𝒯^{ℓ*}} 𝜎^{ℓ*}.
For further use, we note that 𝑆^ℓ is a homotopy retract of 𝑆^𝑚 \ 𝑇^{ℓ*}; see e.g. [34, Section 1] or [START_REF] Bousquet | Density of smooth maps for fractional Sobolev spaces 𝑊 𝑠,𝑝 into ℓ simply connected manifolds when 𝑠 ⩾ 1[END_REF]Lemma 2.3].
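To fix ideas in the setting of the illustrations below, take 𝑚 = 2 and ℓ = 1. For a cubication of 𝑄^2 by squares of radius 𝜂, the 1-skeleton consists of the edges of these squares and the 0-skeleton of their vertices, while the dual skeleton of the 1-skeleton has dimension
ℓ^* = 𝑚 − ℓ − 1 = 0
and consists of the centers of the squares. In this case the retraction mentioned above can be realized explicitly: on each square, projecting radially (with respect to the ∞-norm) from the center onto the boundary retracts the square minus its center onto its boundary, and gluing these maps retracts 𝑆^2 \ 𝑇^0 onto the grid of edges 𝑆^1.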
Given a map Φ : ℝ 𝑚 → ℝ 𝑚 , the geometric support of Φ is defined by Supp Φ = {𝑥 ∈ ℝ 𝑚 : Φ(𝑥) ≠ 𝑥}.
This should not be confused with the analytic support of a map 𝜑 : ℝ 𝑚 → ℝ, defined by supp 𝜑 = {𝑥 ∈ ℝ 𝑚 : 𝜑(𝑥) ≠ 0}.
We now present the sketch of the proof of Theorem 1.2. We also include graphical illustrations of the various constructions involved in the proof, with 𝑚 = 2 and [𝑠𝑝] = 1. As we explained in the introduction, we follow the approach of Bousquet, Ponce, and Van Schaftingen [START_REF]Strong density for higher order Sobolev spaces into compact manifolds[END_REF], and we provide the necessary tools and ideas to adapt their method to the fractional setting. Let 𝑢 ∈ 𝑊 𝑠,𝑝 (𝑄 𝑚 ; 𝒩). For the sake of simplicity, we assume that 𝑢 is defined in a neighborhood of 𝑄 𝑚 . The starting point is Bethuel's concept of good cubes and bad cubes that we now present. Let 𝒦 𝑚 𝜂 be a cubication of 𝑄 𝑚 , that is, 𝐾 𝑚 𝜂 = 𝑄 𝑚 . Here, 𝜂 > 0 is such that 1/𝜂 ∈ ℕ * . (Actually, for technical reasons, we will need to work on a cubication of a slightly larger cube than 𝑄 𝑚 , but for this informal exposition, let us stick to a cubication of 𝑄 𝑚 for the sake of simplicity.) We fix 0 < 𝜌 < 1 2 and define the family ℰ 𝑚 𝜂 of all bad cubes as the set of cubes 𝜎 𝑚 ∈ 𝒦 𝑚 𝜂 such that
(1/𝜂^{𝑚−𝑠𝑝}) ∥𝐷𝑢∥^{𝑠𝑝}_{𝐿^{𝑠𝑝}(𝜎^𝑚+𝑄^𝑚_{2𝜌𝜂})} > 𝑐 if 𝑠 ≥ 1, or (1/𝜂^{𝑚−𝑠𝑝}) |𝑢|^𝑝_{𝑊^{𝑠,𝑝}(𝜎^𝑚+𝑄^𝑚_{2𝜌𝜂})} > 𝑐 if 0 < 𝑠 < 1, (2.1)
where 𝑐 > 0 is a small parameter to be determined later on. The remaining cubes are the good cubes. From now on, we shall assume that we are in the case 𝑠 ≥ 1, in order to avoid having to distinguish two cases. (The case 0 < 𝑠 < 1 is similar.) The condition defining the good cubes ensures that 𝑢 does not oscillate too much on such cubes. On the contrary, one cannot control the behavior of 𝑢 on bad cubes, but we can show that there are not too many of them. Indeed, each bad cube contributes with at least 𝑐𝜂^{𝑚/(𝑠𝑝)−1} to the energy of 𝑢, which limits the number of such cubes.
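To make this counting quantitative, here is a rough version of the argument (the constants depend only on 𝑚 and 𝜌). If 𝜎^𝑚 ∈ ℰ^𝑚_𝜂, then ∥𝐷𝑢∥^{𝑠𝑝}_{𝐿^{𝑠𝑝}(𝜎^𝑚+𝑄^𝑚_{2𝜌𝜂})} > 𝑐𝜂^{𝑚−𝑠𝑝}. Since the enlarged cubes 𝜎^𝑚 + 𝑄^𝑚_{2𝜌𝜂} overlap at most a bounded number of times, summing over all bad cubes gives
#ℰ^𝑚_𝜂 ≤ 𝐶 ∥𝐷𝑢∥^{𝑠𝑝}_{𝐿^{𝑠𝑝}} / (𝑐𝜂^{𝑚−𝑠𝑝}), and hence |𝐸^𝑚_𝜂| ≤ (2𝜂)^𝑚 #ℰ^𝑚_𝜂 ≤ 𝐶′ 𝜂^{𝑠𝑝} ∥𝐷𝑢∥^{𝑠𝑝}_{𝐿^{𝑠𝑝}} / 𝑐,
so that the total measure of the bad cubes tends to 0 as 𝜂 → 0.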
On Figure 2.1, one finds a possible decomposition of 𝑄 2 in 16 cubes, which corresponds to 𝜂 = 1 4 . Here, the three cubes in red are bad cubes, while green cubes are good cubes. For technical reasons that will become clear later on, it is useful to work on a set slightly larger than the union of bad cubes. We therefore let 𝒰 𝑚 𝜂 be the set of all cubes in 𝒦 𝑚 𝜂 that intersect some bad cube in ℰ 𝑚 𝜂 . This fact is ignored in our graphical illustrations, which are drawn as if 𝒰 𝑚 𝜂 = 𝒦 𝑚 𝜂 . This allows us to keep readable pictures with large cubes. Nevertheless, the reader should keep in mind that all constructions explained below are actually performed not only on the red cubes, but also on all green cubes adjacent to them, and that decompositions could possibly consist in many small cubes.
Figure 2.1: Good and bad cubes
We now turn to the construction of the maps of the class ℛ_{𝑚−[𝑠𝑝]−1} approximating 𝑢. The first tool is the opening, which is explained in Section 3. This technique originates in the work of Brezis and Li [START_REF] Brezis | Topology and Sobolev spaces[END_REF] about the topology of Sobolev spaces of maps between manifolds. We open the map 𝑢 in order to obtain a map 𝑢^op_𝜂 which, on a neighborhood of the [𝑠𝑝]-skeleton 𝑈^{[𝑠𝑝]}_𝜂, locally depends on only [𝑠𝑝] variables, and the construction does not increase too much the energy of 𝑢 on this neighborhood.
On Figure 2.2, one finds an illustration of the opening procedure when [𝑠𝑝] = 1. The map 𝑢 is opened on the blue region, where it therefore satisfies VMO estimates. The next step is to regularize the opened map. Here we rely on the method of adaptative smoothing, whose principle is to allow the convolution parameter to depend on the point where the convolution is evaluated. This technique was made popular by the work of Schoen and Uhlenbeck [START_REF] Schoen | A regularity theory for harmonic maps[END_REF], where it was used in the study of the regularity of harmonic maps with values into a manifold.
More precisely, given 𝜓 ∈ 𝒞^∞(𝑄^𝑚), we let
𝜑_𝜓 ∗ 𝑢(𝑥) = ∫_{𝐵^𝑚_1} 𝜑(𝑧) 𝑢(𝑥 + 𝜓(𝑥)𝑧) d𝑧.
To pursue the proof, we choose a suitable map 𝜓 𝜂 ∈ 𝒞 ∞ (𝐵 𝑚 1 ), whose construction depends on 𝜂 and will be explained later on, and we define 𝑢 sm 𝜂 = 𝜑 𝜓 𝜂 * 𝑢 op 𝜂 . This convolution procedure guarantees that the resulting map 𝑢 sm 𝜂 is smooth, but has the drawback that it need no longer take its values into 𝒩, since the convolution product is in general not compatible with non convex constraints. We therefore need to estimate the distance between 𝑢 sm 𝜂 and 𝒩. By straightforward computations, we write
|𝑢^sm_𝜂(𝑥) − 𝑢^op_𝜂(𝑧)| ≤ 𝐶_1 ⨏_{𝑄^𝑚_{𝜓_𝜂(𝑥)}(𝑥)} |𝑢^op_𝜂(𝑦) − 𝑢^op_𝜂(𝑧)| d𝑦.
Averaging over all 𝑧 ∈ 𝑄^𝑚_{𝜓_𝜂(𝑥)}(𝑥), since 𝑢^op_𝜂(𝑧) ∈ 𝒩, we deduce that
dist(𝑢^sm_𝜂(𝑥), 𝒩) ≤ 𝐶_1 ⨏_{𝑄^𝑚_{𝜓_𝜂(𝑥)}(𝑥)} ⨏_{𝑄^𝑚_{𝜓_𝜂(𝑥)}(𝑥)} |𝑢^op_𝜂(𝑦) − 𝑢^op_𝜂(𝑧)| d𝑦 d𝑧.
Here we see the usefulness of the opening construction performed at the previous step: since 𝑢^op_𝜂 is a VMO function close to 𝑈^{[𝑠𝑝]}_𝜂, the right-hand side of the above estimate may be made arbitrarily small in this region provided that we choose 𝜓_𝜂(𝑥) sufficiently small. On the good cubes, we pursue the estimate by invoking the Poincaré-Wirtinger inequality to write
dist(𝑢^sm_𝜂(𝑥), 𝒩) ≤ 𝐶_2 (1/𝜓_𝜂(𝑥)^{𝑚/(𝑠𝑝)−1}) ∥𝐷𝑢^op_𝜂∥_{𝐿^{𝑠𝑝}(𝑄^𝑚_{𝜓_𝜂(𝑥)}(𝑥))} ≤ 𝐶_3 (1/𝜓_𝜂(𝑥)^{𝑚/(𝑠𝑝)−1}) ∥𝐷𝑢∥_{𝐿^{𝑠𝑝}(𝑄^𝑚_{𝜓_𝜂(𝑥)}(𝑥))}. (2.2)
If we choose 𝜓 𝜂 (𝑥) of order 𝜂, then on the right-hand side of (2.2), we find precisely the energy of 𝑢 which is controlled on the good cubes. Therefore, choosing suitably the constant 𝑐 > 0 in (2.1), on the good cubes, 𝑢 sm 𝜂 will be 𝛿-close to 𝒩, for some given arbitrarily small number 𝛿 > 0. To summarize, we are invited to choose the convolution parameter very small on bad cubes, near the [𝑠𝑝]-skeleton, and of order 𝜂 on good cubes. Between those two regimes, we need a transition region in order to allow 𝜓 𝜂 to change of magnitude, which is precisely the reason to introduce both families 𝒰 𝑚 𝜂 and ℰ 𝑚 𝜂 instead of working directly on bad cubes. The precise way to perform this construction is explained in Section 4, and gathering the estimates on good and bad cubes, we conclude that 𝑢 sm 𝜂 is close to 𝒩 on the good cubes, and on the part of bad cubes close to the [𝑠𝑝]-skeleton.
It therefore remains to deal with the part of bad cubes far from the [𝑠𝑝]-skeleton, where we have no control on the distance between 𝑢 sm 𝜂 and 𝒩 (which corresponds to the red region in Figure 2.2). This is the purpose of the last tool we need, which is called thickening. The method is inspired from the use of homogeneous extension by Bethuel in the case 𝑠 = 1. We illustrate the idea when 𝑠 = 1 and 𝑚 -1 < 𝑝 < 𝑚. Given a map 𝑣 ∈ 𝒞 ∞ (𝑄 𝑚 ), we define 𝑤 on 𝑄 𝑚 by
𝑤(𝑥) = 𝑣(𝑥/|𝑥|_∞).
Here we recall that
|•| ∞ stands for the ∞-norm in ℝ 𝑚 , defined for 𝑥 = (𝑥 1 , . . . , 𝑥 𝑚 ) ∈ ℝ 𝑚 by |𝑥| ∞ = max 1≤𝑖≤𝑚 |𝑥 𝑖 |.
Using radial integration, we see that 𝑤 ∈ 𝑊 1,𝑝 (𝑄 𝑚 ) and
∥𝐷𝑤∥^𝑝_{𝐿^𝑝(𝑄^𝑚)} ≤ 𝐶_4 ∥𝐷𝑣∥^𝑝_{𝐿^𝑝(𝜕𝑄^𝑚)} ∫_0^1 𝑟^{𝑚−𝑝−1} d𝑟 ≤ 𝐶_5 ∥𝐷𝑣∥^𝑝_{𝐿^𝑝(𝜕𝑄^𝑚)}.
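For the reader's convenience, here is a sketch of where this bound comes from. Since 𝑤(𝑥) = 𝑣(𝑥/|𝑥|_∞), the chain rule gives |𝐷𝑤(𝑥)| ≤ 𝐶 |𝐷𝑣(𝑥/|𝑥|_∞)| / |𝑥|_∞ for 𝑥 ≠ 0. Slicing 𝑄^𝑚 by the boundaries 𝜕𝑄^𝑚_𝑟 = 𝑟 𝜕𝑄^𝑚, 0 < 𝑟 < 1, and using the change of variables 𝑥 = 𝑟𝑦 with 𝑦 ∈ 𝜕𝑄^𝑚 on each slice, we find
∫_{𝑄^𝑚} |𝐷𝑤|^𝑝 ≤ 𝐶 ∫_0^1 ∫_{𝜕𝑄^𝑚_𝑟} |𝐷𝑣(𝑥/𝑟)|^𝑝 𝑟^{−𝑝} dℋ^{𝑚−1}(𝑥) d𝑟 = 𝐶 ∥𝐷𝑣∥^𝑝_{𝐿^𝑝(𝜕𝑄^𝑚)} ∫_0^1 𝑟^{𝑚−𝑝−1} d𝑟.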
Here we use the assumption 𝑝 < 𝑚. Hence, 𝑤 is a 𝑊 1,𝑝 (𝑄 𝑚 ) map that depends only on the values of 𝑣 on 𝜕𝑄 𝑚 . We may iterate this construction on faces by downward induction on the dimension to construct a map which only depends on the values of 𝑣 on the [𝑝]-skeleton of 𝑄 𝑚 . Two major difficulties arise when we try to adapt this construction to general Sobolev maps and spaces. First, it requires to work with slices of Sobolev maps on sets of zero measure. But more importantly, gluing such constructions on two cubes sharing a common face is a delicate matter. This is already the case when 𝑠 = 1 if 𝑝 < 𝑚 -1, since the resulting maps do not coincide on the whole common face, and gets worse when 𝑠 > 1 + 1 𝑝 as the derivatives do not match at the interface. We bypass this difficulty be working with a more involved version of homogeneous extension, the thickening procedure.
Let 𝒯^{[𝑠𝑝]*}_𝜂 denote the dual skeleton of 𝒰^{[𝑠𝑝]}_𝜂. The homogeneous extension, in the form presented above, associates with a map 𝑣 : 𝑈^{[𝑠𝑝]}_𝜂 → ℝ^𝜈 a map 𝑤 : 𝑈^𝑚_𝜂 \ 𝑇^{[𝑠𝑝]*}_𝜂 → ℝ^𝜈. Thickening is a refined version of this construction; applied to 𝑢^sm_𝜂, it yields a map 𝑢^th_𝜂, and we will show that the singularities created on 𝑇^{[𝑠𝑝]*}_𝜂 by the thickening are sufficiently mild so that 𝑢^th_𝜂 belongs to the class ℛ_{𝑚−[𝑠𝑝]−1}(𝑄^𝑚; ℝ^𝜈).
One finds an illustration of the thickening procedure on Figure 2.3. The values of 𝑢 on the dark blue region are propagated into the light blue region. This process creates point singularities on the centers of bad cubes, which are represented by the intersection of all the black lines. The maps 𝑢_𝜂 = Π • 𝑢^th_𝜂, where Π denotes the nearest point projection onto 𝒩, therefore belong to ℛ_{𝑚−[𝑠𝑝]−1}(𝑄^𝑚; 𝒩), and they are actually the approximations of 𝑢 that we were looking for. The only step that is required to obtain this conclusion is to show the convergence 𝑢_𝜂 → 𝑢 in 𝑊^{𝑠,𝑝} as 𝜂 → 0. This is done in Section 6, and amounts to a careful combination of the estimates obtained at each step of the construction. Except for the adaptative smoothing, all the modifications performed on 𝑢 are localized in a neighborhood of 𝑈^𝑚_𝜂. The main ingredient to reach the conclusion 𝑢_𝜂 → 𝑢 is therefore the fact that there are not too many bad cubes, and that actually the measure of the union of all bad cubes decays at a sufficiently high rate.
The density of the class ℛ being established, we may then move to the density of smooth maps under the assumption 𝜋 [𝑠𝑝] (𝒩) = {0}. For this, it suffices to show that maps 𝑢 𝜂 of the class ℛ 𝑚-[𝑠𝑝]-1 (𝑄 𝑚 , 𝒩) as constructed in the first part of the proof above may be approximated by smooth maps. Under the assumption 𝜋 [𝑠𝑝] (𝒩) = {0}, for any given arbitrarily small number 𝛿 > 0, one may find a smooth map 𝑢 ex 𝛿 such that 𝑢 ex 𝛿 coincides with 𝑢 𝜂 everywhere on 𝑄 𝑚 except on 𝑇
[𝑠𝑝] * 𝜂 +𝑄 𝑚 𝛿 . This is explained in Section 8, in connection with the notion of extension property introduced by Hang and Lin [START_REF] Hang | Topology of Sobolev mappings[END_REF].
The map 𝑢 ex 𝛿 allows us to remove the singularities of 𝑢 𝜂 , but this topological construction does not allow to conclude that 𝑢 ex 𝛿 is close to 𝑢 𝜂 with respect to the 𝑊 𝑠,𝑝 distance, since 𝑢 ex 𝛿 could have arbitrarily large energy on the set 𝑇
[𝑠𝑝] * 𝜂 + 𝑄 𝑚 𝛿 where it differs from 𝑢 𝜂 . To overcome this issue, we use a scaling argument to obtain a better extension. Again, we illustrate the method on the model case where 𝑠 = 1 and 𝑚 -1 < 𝑝 < 𝑚. Assume that 𝑣 ∈ 𝑊 1,𝑝 (𝑄 𝑚 ) and that 𝑤 ∈ 𝒞 ∞ (𝑄 𝑚 ) coincides with 𝑣 on 𝑄 𝑚 \ 𝑄 𝑚 𝛿 , where 0 < 𝛿 < 1 2 . Given 0 < 𝜏 < 1, we define 𝑤 𝜏 on 𝑄 𝑚 by
𝑤_𝜏(𝑥) = 𝑤(𝑥) if 𝑥 ∈ 𝑄^𝑚 \ 𝐵^𝑚_{2𝛿}, 𝑤_𝜏(𝑥) = 𝑤(𝑥/𝜏) if 𝑥 ∈ 𝐵^𝑚_{𝜏𝛿}, and 𝑤_𝜏(𝑥) = 𝑤((𝑥/|𝑥|)((|𝑥| − 𝜏𝛿)/(2 − 𝜏) + 𝛿)) if 𝑥 ∈ 𝐵^𝑚_{2𝛿} \ 𝐵^𝑚_{𝜏𝛿}.
This corresponds to shrinking 𝑤 from 𝐵 𝑚 𝛿 to 𝐵 𝑚 𝜏𝛿 while keeping it unchanged on 𝑄 𝑚 \𝐵 𝑚 2𝛿 , filling the transition region by linear interpolation. By a change of variable, we estimate
∥𝐷𝑤 𝜏 ∥ 𝑝 𝐿 𝑝 (𝑄 𝑚 ) = ∥𝐷𝑤 𝜏 ∥ 𝑝 𝐿 𝑝 (𝐵 𝑚 𝜏𝛿 ) + ∥𝐷𝑤 𝜏 ∥ 𝑝 𝐿 𝑝 (𝑄 𝑚 \𝐵 𝑚 𝜏𝛿 ) ≤ 𝐶 6 𝜏 𝑚-𝑝 ∥𝐷𝑤∥ 𝑝 𝐿 𝑝 (𝐵 𝑚 𝛿 ) + 𝐶 7 ∥𝐷𝑤∥ 𝑝 𝐿 𝑝 (𝑄 𝑚 \𝐵 𝑚 𝛿 ) .
Since 𝑣 = 𝑤 on 𝑄 𝑚 \ 𝑄 𝑚 𝛿 , we deduce that
∥𝐷𝑤 𝜏 ∥ 𝑝 𝐿 𝑝 (𝑄 𝑚 ) ≤ 𝐶 6 𝜏 𝑚-𝑝 ∥𝐷𝑤∥ 𝑝 𝐿 𝑝 (𝐵 𝑚 𝛿 ) + 𝐶 7 ∥𝐷𝑣∥ 𝑝 𝐿 𝑝 (𝑄 𝑚 \𝐵 𝑚 𝛿 ) .
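To make the role of the exponent explicit: on 𝐵^𝑚_{𝜏𝛿} we have 𝑤_𝜏(𝑥) = 𝑤(𝑥/𝜏), so the change of variables 𝑦 = 𝑥/𝜏 gives
∥𝐷𝑤_𝜏∥^𝑝_{𝐿^𝑝(𝐵^𝑚_{𝜏𝛿})} = ∫_{𝐵^𝑚_{𝜏𝛿}} 𝜏^{−𝑝} |𝐷𝑤(𝑥/𝜏)|^𝑝 d𝑥 = 𝜏^{𝑚−𝑝} ∥𝐷𝑤∥^𝑝_{𝐿^𝑝(𝐵^𝑚_𝛿)},
which is the origin of the factor 𝜏^{𝑚−𝑝} above and which tends to 0 as 𝜏 → 0 precisely because 𝑝 < 𝑚.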
Choosing 𝜏 sufficiently small, depending on 𝛿 and on 𝑤, we may therefore ensure that ∥𝐷𝑤_𝜏∥^𝑝_{𝐿^𝑝(𝑄^𝑚)} ≤ 𝐶_8 ∥𝐷𝑣∥^𝑝_{𝐿^𝑝(𝑄^𝑚)}. In Section 7, we explain the technique of shrinking, which is actually a more involved version of this scaling argument, devised in [START_REF]Strong density for higher order Sobolev spaces into compact manifolds[END_REF]Section 8] to handle lower order skeletons and higher order regularity.
An illustration of this idea is available on Figure 2.4. The point singularities in Figure 2.3 have been patched with a topological extension, which has been shrunk into the small region in gray to obtain a map with controlled energy.
This allows to proceed with the proof of Theorem 1.1 in Section 8. The strategy is exactly the same as in the model example above: we start with the smooth extension 𝑢 ex 𝛿 provided by topological arguments, we shrink it to a map 𝑢 sh 𝛿,𝜏 , and we use carefully the estimates available for shrinking to choose the parameter 𝜏 > 0 in order to obtain a better extension with control of the energy. As 𝛿 → 0, this provides an approximation of 𝑢 𝜂 by smooth maps with values into 𝒩, which is enough to prove Theorem 1.1 since we already obtained the density of class ℛ.
After this sketch of our proofs, we move to the detailed construction of the different tools that were described above. The proofs being rather long and technical, we hope that this informal presentation will help the reader to identify and keep in mind the main ideas behind each step of the construction.
We end this section with two lemmas that will be used repeatedly in the sequel. Most of our constructions on cubications will be built blockwise: we start from a building block defined on a cube, and we glue copies of this block on each cube of the skeleton to obtain a map defined on the whole skeleton. When establishing Sobolev estimates for such constructions, integer order estimates on the skeleton are readily obtained from corresponding estimates on each cube by additivity of the integral. On the contrary, the Gagliardo seminorm is not additive due to its nonlocal nature. We bypass this obstruction by relying on the lemmas below.
Lemma 2.1. Let 𝛿 > 0 and let 𝛺 = ⋃_{𝑖∈𝐼} 𝛺_𝑖, where 𝐼 is finite or countable and 𝛺_𝑖 ⊂ ℝ^𝑚 for every 𝑖 ∈ 𝐼. Set 𝛺_{𝑖,𝛿} = {𝑥 ∈ 𝛺: dist(𝑥, 𝛺_𝑖) < 𝛿}. For every 𝑢 : 𝛺 → ℝ measurable, one has
|𝑢|^𝑝_{𝑊^{𝜎,𝑝}(𝛺)} ≤ ∑_{𝑖∈𝐼} |𝑢|^𝑝_{𝑊^{𝜎,𝑝}(𝛺_{𝑖,𝛿})} + 𝐶𝛿^{−𝜎𝑝} ∥𝑢∥^𝑝_{𝐿^𝑝(𝛺)}
for some constant 𝐶 > 0 depending on 𝑚, 𝜎, and 𝑝.
This lemma acts as a replacement for the additivity for the Gagliardo seminorm. Similar kind of results were already present in the work of Bourdaud concerning the continuity of the composition operator on Sobolev or Besov spaces; see e.g. [START_REF] Bourdaud | Fonctions qui opèrent sur les espaces de Besov et de Triebel[END_REF] and the references therein. The price to pay to have a decomposition of the Gagliardo seminorm is that we need some margin of security between the different parts of the domain on which we split the energy, and that an additional term involving the 𝐿 𝑝 norm of the map under consideration shows up, which deteriorates as the margin of security shrinks. In the sequel, Lemma 2.1 will often be employed by taking the 𝛺 𝑖 to be rectangles, which therefore suggests to have at our disposal estimates on rectangles slightly larger than the 𝛺 𝑖 . Here we use the term rectangle to denote any product of 𝑚 intervals with non-empty interior. We reserve the word cube for the case where all the intervals have the same length.
Proof. Let 𝑥, 𝑦 ∈ 𝛺. By assumption, either 𝑥, 𝑦 ∈ 𝛺 𝑖,𝛿 for some 𝑖 ∈ 𝐼, or |𝑥 -𝑦| ≥ 𝛿. Otherwise stated,
𝛺 × 𝛺 ⊂ {(𝑥, 𝑦) ∈ 𝛺 × 𝛺: |𝑥 − 𝑦| ≥ 𝛿} ∪ ⋃_{𝑖∈𝐼} 𝛺_{𝑖,𝛿} × 𝛺_{𝑖,𝛿}.
Therefore,
|𝑢|^𝑝_{𝑊^{𝜎,𝑝}(𝛺)} ≤ ∑_{𝑖∈𝐼} |𝑢|^𝑝_{𝑊^{𝜎,𝑝}(𝛺_{𝑖,𝛿})} + ∫_{{(𝑥,𝑦)∈𝛺×𝛺: |𝑥−𝑦|≥𝛿}} |𝑢(𝑥) − 𝑢(𝑦)|^𝑝 / |𝑥 − 𝑦|^{𝑚+𝜎𝑝} d𝑥 d𝑦.
We estimate
∫_{{(𝑥,𝑦)∈𝛺×𝛺: |𝑥−𝑦|≥𝛿}} |𝑢(𝑥) − 𝑢(𝑦)|^𝑝 / |𝑥 − 𝑦|^{𝑚+𝜎𝑝} d𝑥 d𝑦 ≤ 2^𝑝 ∫_𝛺 |𝑢(𝑥)|^𝑝 ( ∫_{ℝ^𝑚\𝐵^𝑚_𝛿(𝑥)} 1 / |𝑥 − 𝑦|^{𝑚+𝜎𝑝} d𝑦 ) d𝑥 ≤ 𝐶_1 𝛿^{−𝜎𝑝} ∫_𝛺 |𝑢(𝑥)|^𝑝 d𝑥.
This completes the proof of the lemma.
□
It is also possible to obtain a replacement for the additivity for the Gagliardo seminorm without a term involving the 𝐿^𝑝 norm of the map under consideration. The price to pay is that such an estimate only applies on finite decompositions, hence not covering the case where an infinite number of sets is involved. If 𝑄 ⊂ ℝ^𝑚 is a rectangle, then 𝜆𝑄 is the rectangle having the same center as 𝑄 and sidelengths multiplied by 𝜆.
Lemma 2.2. Let 0 < 𝜆 < 1 and 𝑄 ⊂ ℝ^𝑚 be a rectangle. For every 𝛺 ⊂ ℝ^𝑚 such that 𝑄 \ 𝜆𝑄 ⊂ 𝛺 and every 𝑢 : 𝛺 → ℝ measurable, we have
|𝑢|_{𝑊^{𝜎,𝑝}(𝛺)} ≤ 𝐶 ( |𝑢|_{𝑊^{𝜎,𝑝}(𝛺∩𝑄)} + |𝑢|_{𝑊^{𝜎,𝑝}(𝛺\𝜆𝑄)} )
for some constant 𝐶 > 0 depending on 𝑚, 𝜎, 𝑝, 𝜆, and the ratio between the largest and the smallest side of 𝑄. Lemma 2.2 is inspired from [24, Lemma 2.2], and we follow their proof. At the core of the argument lies a very classical averaging argument, which was already present in the proof of Besov's lemma; see e.g. [1, Proof of Lemma 7.44]. A similar idea is also used in the proof of Morrey's embedding. This type of argument will be used in multiple occasions in this paper.
We note that the constant 𝐶 necessarily diverges to +∞ as 𝜆 → 1. Moreover, one cannot deduce an improved version of Lemma 2.1 without the 𝐿 𝑝 norm by applying Lemma 2.2 inductively, since the constant 𝐶 is actually larger than 1. Hence, we may iterate the lemma to obtain an estimate for a decomposition into a finite number of sets, but the constant depends on the number of sets.
Proof. Observe that 𝛺 × 𝛺 is covered by (𝛺 ∩ 𝑄) × (𝛺 ∩ 𝑄), (𝛺 \ 𝜆𝑄) × (𝛺 \ 𝜆𝑄), and the two symmetric sets (𝛺 ∩ 𝜆𝑄) × (𝛺 \ 𝑄) and (𝛺 \ 𝑄) × (𝛺 ∩ 𝜆𝑄), so that it suffices to estimate the contribution of the last two; by symmetry, we only consider the first of them. We write
∫_{𝛺∩𝜆𝑄} ∫_{𝛺\𝑄} |𝑢(𝑥) − 𝑢(𝑦)|^𝑝 / |𝑥 − 𝑦|^{𝑚+𝜎𝑝} d𝑦 d𝑥 ≤ 2^{𝑝−1} ( ∫_{𝛺∩𝜆𝑄} ∫_{𝛺\𝑄} ⨏_{𝑄\𝜆𝑄} |𝑢(𝑥) − 𝑢(𝑧)|^𝑝 / |𝑥 − 𝑦|^{𝑚+𝜎𝑝} d𝑧 d𝑦 d𝑥 + ∫_{𝛺∩𝜆𝑄} ∫_{𝛺\𝑄} ⨏_{𝑄\𝜆𝑄} |𝑢(𝑧) − 𝑢(𝑦)|^𝑝 / |𝑥 − 𝑦|^{𝑚+𝜎𝑝} d𝑧 d𝑦 d𝑥 ). (2.3)
Let 𝑐 > 0 be the length of the smallest side of 𝑄. Since |𝑥 -𝑦| ≥ 𝑐(1-𝜆) whenever 𝑥 ∈ 𝜆𝑄 and 𝑦 ∈ 𝛺 \ 𝑄, first integrating with respect to 𝑦 in the first term on the right-hand side of (2.3), we find
∫_{𝛺∩𝜆𝑄} ∫_{𝛺\𝑄} ⨏_{𝑄\𝜆𝑄} |𝑢(𝑥) − 𝑢(𝑧)|^𝑝 / |𝑥 − 𝑦|^{𝑚+𝜎𝑝} d𝑧 d𝑦 d𝑥 ≤ 𝐶_1 (1 / (𝑐^{𝜎𝑝}(1 − 𝜆)^{𝜎𝑝})) ∫_{𝛺∩𝜆𝑄} ⨏_{𝑄\𝜆𝑄} |𝑢(𝑥) − 𝑢(𝑧)|^𝑝 d𝑧 d𝑥.
Moreover, if 𝑥 ∈ 𝜆𝑄 and 𝑧 ∈ 𝑄, then |𝑥 − 𝑧| ≤ 𝐶_2 𝑐, where 𝐶_2 depends on 𝑚 and the ratio between the largest and the smallest side of 𝑄, while |𝑄 \ 𝜆𝑄| ≥ (1 − 𝜆^𝑚)𝑐^𝑚. Hence,
∫_{𝛺∩𝜆𝑄} ∫_{𝛺\𝑄} ⨏_{𝑄\𝜆𝑄} |𝑢(𝑥) − 𝑢(𝑧)|^𝑝 / |𝑥 − 𝑦|^{𝑚+𝜎𝑝} d𝑧 d𝑦 d𝑥 ≤ 𝐶_3 (1 / ((1 − 𝜆)^{𝜎𝑝}(1 − 𝜆^𝑚))) ∫_{𝛺∩𝜆𝑄} ∫_{𝑄\𝜆𝑄} |𝑢(𝑥) − 𝑢(𝑧)|^𝑝 / |𝑥 − 𝑧|^{𝑚+𝜎𝑝} d𝑧 d𝑥 ≤ 𝐶_4 |𝑢|^𝑝_{𝑊^{𝜎,𝑝}(𝛺∩𝑄)}.
For the second term in the right-hand side of (2.3), we start by noting that if 𝑥 ∈ 𝜆𝑄 and 𝑦 ∈ 𝜕(𝑟𝑄) for some 𝑟 ≥ 1, then |𝑥 -𝑦| ≥ 𝑐(𝑟 -𝜆). On the other hand, if 𝑦 ∈ 𝜕(𝑟𝑄) and 𝑧 ∈ 𝑄 \ 𝜆𝑄, then
|𝑦 − 𝑧| ≤ 𝐶_5 𝑐(𝑟 + 1) = 𝐶_5 𝑐 ((𝑟 + 1)/(𝑟 − 𝜆)) (𝑟 − 𝜆) ≤ 𝐶_6 𝑐(𝑟 − 𝜆),
where 𝐶 5 depends on the ratio between the largest and the smallest side of 𝑄. Therefore, for any 𝑥 ∈ 𝜆𝑄, 𝑦 ∈ 𝛺 \ 𝑄, and 𝑧 ∈ 𝑄 \ 𝜆𝑄, we have |𝑦 -𝑧| ≤ 𝐶 6 |𝑥 -𝑦|. Hence, we obtain
∫_{𝛺∩𝜆𝑄} ∫_{𝛺\𝑄} ⨏_{𝑄\𝜆𝑄} |𝑢(𝑧) − 𝑢(𝑦)|^𝑝 / |𝑥 − 𝑦|^{𝑚+𝜎𝑝} d𝑧 d𝑦 d𝑥 ≤ 𝐶_7 (𝜆^𝑚 / (1 − 𝜆^𝑚)) ∫_{𝛺\𝑄} ∫_{𝑄\𝜆𝑄} |𝑢(𝑧) − 𝑢(𝑦)|^𝑝 / |𝑦 − 𝑧|^{𝑚+𝜎𝑝} d𝑧 d𝑦 ≤ 𝐶_8 |𝑢|^𝑝_{𝑊^{𝜎,𝑝}(𝛺\𝜆𝑄)}.
Gathering the estimates for both terms in the right-hand side of (2.3) yields the conclusion.
□
3 Opening
This section is devoted to the opening procedure. We follow the approach of Bousquet, Ponce, and Van Schaftingen [9, Section 2], who adapted to higher order regularity a construction of Brezis and Li [START_REF] Brezis | Topology and Sobolev spaces[END_REF]. The main result of this section is the following fractional counterpart of [9, Proposition 2.1].
(d) for every 0 < 𝑠 < +∞, ∥𝑢 • Φ -𝑢 ∥ 𝐿 𝑝 (𝜔) ≤ 𝐶 ∥𝑢 ∥ 𝐿 𝑝 (𝑈 ℓ +𝑄 𝑚 2𝜌𝜂 ) ;
for some constant 𝐶 > 0 depending on 𝑚, 𝑠, 𝑝, and 𝜌.
Recall that Supp Φ denotes the geometric support of Φ, defined as Supp Φ = {𝑥 ∈ ℝ 𝑚 : Φ(𝑥) ≠ 𝑥}.
Crucial to the proof of Theorem 1.2 are the estimates in (iii) with 𝜔 = 𝑈 ℓ + 𝑄 𝑚 2𝜌𝜂 . They imply that the opening procedure does not increase too much the energy of the map 𝑢 where it is modified. Proposition 3.1 will be used in the proof of Theorem 1.2 in order to prove that a map can be opened by paying the price of an arbitrarily small increase of the norm.
The map Φ will be constructed blockwise: for every 𝑑 ∈ {0, . . . , ℓ } and every 𝜎 𝑑 ∈ 𝒰 𝑑 , we construct an opening map Φ 𝜎 𝑑 around the face 𝜎 𝑑 , and then we suitably combine those maps together to yield the desired map Φ. The construction of the building block Φ 𝜎 𝑑 is performed in Proposition 3.2 below. Before giving a precise statement, we first introduce, for the convenience of the reader, some additional notation.
The construction of the map Φ provided by Proposition 3.2 involves four parameters 0 < 𝜌 < 𝑟 < 𝑟 < 𝜌 < 1. These parameters being fixed, we introduce the rectangles
𝑄 1 = 𝑄 1,𝜂 = 𝑄 𝑑 (1-𝜌)𝜂 × 𝑄 𝑚-𝑑 𝜌𝜂 , 𝑄 2 = 𝑄 2,𝜂 = 𝑄 𝑑 (1-𝑟)𝜂 × 𝑄 𝑚-𝑑 𝑟𝜂 , 𝑄 3 = 𝑄 3,𝜂 = 𝑄 𝑑 (1-𝑟)𝜂 × 𝑄 𝑚-𝑑 𝑟𝜂 and 𝑄 4 = 𝑄 4,𝜂 = 𝑄 𝑑 (1-𝜌)𝜂 × 𝑄 𝑚-𝑑 𝜌𝜂 . (3.1)
The rectangle 𝑄 1 is the place where the opening construction is actually performed: the map Φ only depends on the first 𝑑 variables on 𝑄 1 . The rectangle 𝑄 2 contains the support of the map Φ, that is, Φ coincides with the identity outside of 𝑄 2 . The region between 𝑄 1 and the exterior of 𝑄 2 serves as a transition region between both regimes. From now on we shall keep using the notation 𝑄 1 , . . . , 𝑄 4 for the sake of conciseness and because it makes more apparent the inclusion relations between the four rectangles: observe that 𝑄 1 ⊂ 𝑄 2 ⊂ 𝑄 3 ⊂ 𝑄 4 . The dependence with respect to the parameters 𝜌, 𝑟, 𝑟, 𝜌 and 𝜂 will be implicit. Proposition 3.2. Let 𝑑 ∈ {0, . . . , 𝑚 -1}, 𝜂 > 0, and 0 < 𝜌 < 𝑟 < 𝑟 < 𝜌 < 1. For every 𝑢 ∈ 𝑊 𝑠,𝑝 (𝑄 4 ; ℝ 𝜈 ), there exists a smooth map Φ : 𝑄 4 → 𝑄 4 such that (i) Φ(𝑥 ′ , 𝑥 ′′ ) = (𝑥 ′ , 𝜁(𝑥)) for every 𝑥 = (𝑥 ′ , 𝑥 ′′ ) ∈ 𝑄 4 , where 𝜁 : 𝑄 4 → 𝑄 𝑚-𝑑 𝜌𝜂 is smooth;
(ii) for every
𝑥 ′ ∈ 𝑄 𝑑 (1-𝜌)𝜂 , Φ is constant on {𝑥 ′ } × 𝑄 𝑚-𝑑 𝜌𝜂 ;
(iii) Supp Φ ⊂ 𝑄 2 and Φ(𝑄 2 ) ⊂ 𝑄 2 ;
(iv) 𝑢 • Φ ∈ 𝑊^{𝑠,𝑝}(𝑄_3; ℝ^𝜈), and moreover, the following estimates hold:
a) if 0 < 𝑠 < 1, then |𝑢 • Φ|_{𝑊^{𝑠,𝑝}(𝑄_3)} ≤ 𝐶|𝑢|_{𝑊^{𝑠,𝑝}(𝑄_4)};
b) if 𝑠 ≥ 1, then for every 𝑗 ∈ {1, . . . , 𝑘},
∥𝐷^𝑗(𝑢 • Φ)∥_{𝐿^𝑝(𝑄_3)} ≤ 𝐶 ∑_{𝑖=1}^{𝑗} 𝜂^{𝑖−𝑗} ∥𝐷^𝑖 𝑢∥_{𝐿^𝑝(𝑄_4)},
and, when 𝜎 > 0,
|𝐷^𝑗(𝑢 • Φ)|_{𝑊^{𝜎,𝑝}(𝑄_3)} ≤ 𝐶 ∑_{𝑖=1}^{𝑗} ( 𝜂^{𝑖−𝑗} |𝐷^𝑖 𝑢|_{𝑊^{𝜎,𝑝}(𝑄_4)} + 𝜂^{𝑖−𝑗−𝜎} ∥𝐷^𝑖 𝑢∥_{𝐿^𝑝(𝑄_4)} );
c) for every 0 < 𝑠 < +∞, ∥𝑢 • Φ∥_{𝐿^𝑝(𝑄_3)} ≤ 𝐶 ∥𝑢∥_{𝐿^𝑝(𝑄_4)};
for some constant 𝐶 > 0 depending on 𝑚, 𝑠, 𝑝, 𝜌, 𝑟, 𝑟, and 𝜌.
We comment on the domains involved in the estimates of item (iv) in Proposition 3.2 above. We need estimates on the rectangle 𝑄 3 instead of the smaller rectangle 𝑄 2 containing the support of Φ, in order to have enough room to apply Lemmas 2.1 and 2.2 as substitutes for the additivity of the integral when proving the fractional estimates in Proposition 3.1. Moreover, we only control the energy on 𝑄 3 by the energy on the larger rectangle 𝑄 4 due to the averaging process involved in the proof of Proposition 3.2, as we will see later on.
Taking Proposition 3.2 granted for the moment, we proceed with the proof of Proposition 3.1. Before providing a detailed rigorous proof, we sketch the argument.
We first open the map 𝑢 around each vertex of 𝒰 0 by applying Proposition 3.2 with 𝑑 = 0 and using parameters 𝜌 = 2𝜌 and 𝜌 = 𝜌 0 < 2𝜌. This produces a map 𝑢 0 which is constant on cubes of radius 𝜌 0 𝜂 around each vertex of 𝒰 0 . We next open the map 𝑢 0 around each edge of 𝒰 1 using Proposition 3.2 with 𝑑 = 1, 𝜌 = 𝜌 0 , and 𝜌 = 𝜌 1 < 𝜌 0 . One may see that the geometric supports of the building blocks around each face do not overlap, so that we may glue them together to obtain a well-defined map on the whole 𝛺. This construction yields a map 𝑢 1 which is constant on all (𝑚 -1)-cubes of radius 𝜌 1 𝜂 which are orthogonal to the edges of 𝒰 1 , provided that they lie at distance at least 𝜌 0 𝜂 from the endpoints of the edges. But the map 𝑢 1 is constructed from the map 𝑢 0 which was constant on the cubes of radius 𝜌 0 𝜂 centered at the vertices of 𝒰 ℓ . Hence we conclude that the map 𝑢 1 is constant on all (𝑚 -1)-cubes of radius 𝜌 1 𝜂 which are orthogonal to the edges of 𝒰 ℓ . We then pursue this construction by induction until we reach the desired dimension, which yields a map Φ as in Proposition 3.1.
An illustration of this construction on one cube for 𝑚 = 2 and ℓ = 1 is presented in Figure 3.1. On the left part of the figure, one sees the result of opening around vertices. The map 𝑢 becomes constant on the dark blue squares, and is left unchanged on the white region, the light blue region serving as a transition. The central part of the figure shows the opening step around edges. The map 𝑢 becomes constant on the segments orthogonal to the edges of 𝒰 1 that are sufficiently far from the vertices, some of which being represented in black. The regions involved in the construction at the previous step, when opening around the vertices, are depicted in light colors, to show how all the regions are located relatively to each other. One sees that the opening regions around vertices and edges connect perfectly. The right part of the figure shows the combination of both steps. The map 𝑢 becomes constant on all segments orthogonal to the edges of 𝒰 1 . The construction sketched above is strongly inspired by [9, Proposition 2.1], but nevertheless significantly different from [9, Proposition 2.1]. Indeed, in our approach, at each step of the iterative process, the sets on which we apply opening around each face (the building blocks of our global construction) do not overlap. Hence, gathering the constructions made around each face yields a globally well-defined map on the whole 𝛺, regardless of the map we started with at the beginning of the step. For instance, the map 𝑢 1 described in the above sketch is well-defined on the whole 𝛺, regardless of the form of the map 𝑢 0 . On the other hand, the construction in [START_REF]Strong density for higher order Sobolev spaces into compact manifolds[END_REF] relies on the fact that, at the 𝑑-th step of the iterative process, we work with a map that has already been opened around the 𝑖-faces for 𝑖 < 𝑑. Indeed the constructions made at step 𝑑 are not compatible near the lower dimensional faces, where they overlap. Our approach simplifies the proof of Sobolev estimates, especially in the fractional case where one needs some margin of security to apply Lemmas 2.1 and 2.2, but also for the case of integer order estimates (already treated in [START_REF]Strong density for higher order Sobolev spaces into compact manifolds[END_REF]).
Proof of Proposition 3.1. As announced, we construct a family of maps (Φ 𝑑 ) 0≤𝑑≤ℓ by induction. For the convenience of notation we set Φ -1 = id. Assuming that the maps Φ -1 , . . . , Φ 𝑑-1 have already been constructed, we set
𝑢 𝑑 = 𝑢 • Φ -1 • • • • • Φ 𝑑-1 . Let (𝜌 𝑑 ) 0≤𝑑≤ℓ ,
(𝑟 𝑑 ) 0≤𝑑≤ℓ and (𝑟 𝑑 ) 0≤𝑑≤ℓ be decreasing sequences such that
𝜌 = 𝜌 ℓ < 𝑟 ℓ < 𝑟 ℓ < 𝜌 ℓ -1 < • • • < 𝜌 𝑑 < 𝑟 𝑑 < 𝑟 𝑑 < 𝜌 𝑑-1 < • • • < 𝜌 0 < 𝑟 0 < 𝑟 0 < 2𝜌.
For every 𝑑 ∈ {0, . . . , ℓ } and every 𝜎 𝑑 ∈ 𝒰 𝑑 , there is an isometry 𝑇 𝜎 𝑑 of ℝ 𝑚 mapping 𝑄 𝑑 𝜂 × {0} 𝑚-ℓ onto 𝜎 𝑑 . Via this isometry, we apply Proposition 3.2 to 𝑢 𝑑 around 𝜎 𝑑 with parameters 𝜌 = 𝜌 𝑑 , 𝑟 = 𝑟 𝑑 , 𝑟 = 𝑟 𝑑 and 𝜌 = 𝜌 𝑑-1 -with the convention that 𝜌 -1 = 2𝜌 -in order to obtain a map Φ 𝜎 𝑑 : 𝑇 𝜎 𝑑 (𝑄 4 ) → 𝑇 𝜎 𝑑 (𝑄 4 ) such that, for every 𝑥 ′ ∈ 𝜎 𝑑 with dist (𝑥 ′ , 𝜕𝜎 𝑑 ) > 𝜌 𝑑-1 , Φ 𝜎 𝑑 is constant on the cube orthogonal to 𝜎 𝑑 of radius 𝜌 𝑑 𝜂 passing through 𝑥 ′ . We then define Φ 𝑑 :
ℝ 𝑚 → ℝ 𝑚 by Φ 𝑑 (𝑥) = Φ 𝜎 𝑑 (𝑥) if 𝑥 ∈ 𝑇 𝜎 𝑑 (𝑄 4 ), 𝑥 otherwise.
This map is well-defined since Supp Φ 𝜎 𝑑 ⊂ 𝑇 𝜎 𝑑 (𝑄 2 ) and 𝑇 𝜎 𝑑
1 (𝑄 2 ) ∩ 𝑇 𝜎 𝑑 2 (𝑄 2 ) = ∅ if 𝜎 𝑑 1 ≠ 𝜎 𝑑 2 . Finally, we set Φ = Φ 0 • • • • • Φ ℓ .
By induction and using the definition of the maps Φ 𝜎 𝑑 provided by Proposition 3.2, we observe that Φ satisfies properties (i) and (ii). We now turn to properties (iii) and (iv). Let 𝑈 ℓ + 𝑄 𝑚 2𝜌𝜂 ⊂ 𝜔 ⊂ 𝛺. Notice that it suffices to prove property (iii) with Φ replaced by Φ 𝑑 and 𝑢 replaced by 𝑢 𝑑 , as one may then conclude by induction.
We start with the estimates for integer order derivatives. Let 𝑗 ∈ {1, . . . , 𝑘} and 𝑑 ∈ {0, . . . , ℓ }. By the additivity of the integral, we have
∥𝐷^𝑗(𝑢_𝑑 • Φ_𝑑)∥^𝑝_{𝐿^𝑝(𝜔)} ≤ ∑_{𝜎^𝑑 ∈ 𝒰^𝑑} ∥𝐷^𝑗(𝑢_𝑑 • Φ_𝑑)∥^𝑝_{𝐿^𝑝(𝑇_{𝜎^𝑑}(𝑄_3))} + ∥𝐷^𝑗(𝑢_𝑑 • Φ_𝑑)∥^𝑝_{𝐿^𝑝(𝜔 \ ⋃_{𝜎^𝑑 ∈ 𝒰^𝑑} 𝑇_{𝜎^𝑑}(𝑄_3))}. Since Supp Φ_{𝜎^𝑑} ⊂ 𝑇_{𝜎^𝑑}(𝑄_2) ⊂ 𝑇_{𝜎^𝑑}(𝑄_3), we find
∥𝐷^𝑗(𝑢_𝑑 • Φ_𝑑)∥_{𝐿^𝑝(𝜔 \ ⋃_{𝜎^𝑑 ∈ 𝒰^𝑑} 𝑇_{𝜎^𝑑}(𝑄_3))} = ∥𝐷^𝑗 𝑢_𝑑∥_{𝐿^𝑝(𝜔 \ ⋃_{𝜎^𝑑 ∈ 𝒰^𝑑} 𝑇_{𝜎^𝑑}(𝑄_3))},
while the estimate given by Proposition 3.2 yields
𝜂^𝑗 ∥𝐷^𝑗(𝑢_𝑑 • Φ_𝑑)∥_{𝐿^𝑝(𝑇_{𝜎^𝑑}(𝑄_3))} = 𝜂^𝑗 ∥𝐷^𝑗(𝑢_𝑑 • Φ_{𝜎^𝑑})∥_{𝐿^𝑝(𝑇_{𝜎^𝑑}(𝑄_3))} ≤ 𝐶_1 ∑_{𝑖=1}^{𝑗} 𝜂^𝑖 ∥𝐷^𝑖 𝑢_𝑑∥_{𝐿^𝑝(𝑇_{𝜎^𝑑}(𝑄_4))}.
Combining both above estimates and using the fact that the number of overlaps between one given set of the form 𝑇 𝜎 𝑑 (𝑄 4 ) and all the other such sets is bounded from above by a number depending only on 𝑚, we deduce that
𝜂^𝑗 ∥𝐷^𝑗(𝑢_𝑑 • Φ_𝑑)∥_{𝐿^𝑝(𝜔)} ≤ 𝐶_2 ∑_{𝑖=1}^{𝑗} 𝜂^𝑖 ∥𝐷^𝑖 𝑢_𝑑∥_{𝐿^𝑝(𝜔)} for every 𝑗 ∈ {1, . . . , 𝑘}.
Since Supp Φ ⊂ 𝑈 ℓ +𝑄 𝑚 2𝜌𝜂 , the estimate (b) of point (iv) follows directly from estimate (b) of point (iii) using again the additivity of the integral. The estimates for the 𝐿 𝑝 norm of 𝑢 • Φ (estimates (d)) are proven similarly.
The estimates for the Gagliardo seminorm are proved similarly, replacing the additivity of the integral by Lemma 2.1. Indeed, if 𝑘 ≥ 1, this lemma ensures that
|𝐷^𝑗(𝑢_𝑑 • Φ_𝑑)|^𝑝_{𝑊^{𝜎,𝑝}(𝜔)} ≤ ∑_{𝜎^𝑑 ∈ 𝒰^𝑑} |𝐷^𝑗(𝑢_𝑑 • Φ_𝑑)|^𝑝_{𝑊^{𝜎,𝑝}(𝑇_{𝜎^𝑑}(𝑄_3))} + |𝐷^𝑗(𝑢_𝑑 • Φ_𝑑)|^𝑝_{𝑊^{𝜎,𝑝}(𝜔\Supp Φ_𝑑)} + 𝐶_3 𝜂^{−𝜎𝑝} ∥𝐷^𝑗(𝑢_𝑑 • Φ_𝑑)∥^𝑝_{𝐿^𝑝(𝜔)} for every 𝑗 ∈ {1, . . . , 𝑘}.
Note that here, we made use of the fact that the distance between the support of the map provided by Proposition 3.2 and the complement of 𝑄 3 is bounded from below by a constant multiple of 𝜂. Estimate (c) of point (iii) then follows as for the integer order estimate. To obtain the estimate (c) for point (iv), we observe that actually dist (Supp Φ, 𝜔 \ (𝑈 ℓ + 𝑄 𝑚 2𝜌𝜂 )) is bounded from below by a constant multiple of 𝜂. We conclude by making again use of Lemma 2.1 along with the integer order estimate that we already obtained. Indeed, we have
|𝐷 𝑗 (𝑢 • Φ) -𝐷 𝑗 𝑢| 𝑝 𝑊 𝜎,𝑝 (𝜔) ≤ |𝐷 𝑗 (𝑢 • Φ) -𝐷 𝑗 𝑢| 𝑝 𝑊 𝜎,𝑝 (𝑈 ℓ +𝑄 𝑚 2𝜌𝜂 ) + |𝐷 𝑗 (𝑢 • Φ) -𝐷 𝑗 𝑢| 𝑝 𝑊 𝜎,𝑝 (𝜔\Supp Φ) + 𝜂 -𝜎𝑝 ∥𝐷 𝑗 (𝑢 • Φ) -𝐷 𝑗 𝑢∥ 𝑝 𝐿 𝑝 (𝜔) .
The first term is upper bounded using the triangle inequality and estimate (c) of (iii), the second one vanishes by definition of the geometric support and the third one is the integer order term that we already estimated (item (b) of (iv)). The case 0 < 𝑠 < 1 is handled in the same way, replacing 𝐷 𝑗 𝑢 by 𝑢.
□
We now turn to the proof of Proposition 3.2. Consider some fixed Borel map 𝑢 : 𝑄 𝑚 → 𝒩 such that 𝑢 ∈ 𝑊 𝑠,𝑝 (𝑄 𝑚 ). In order to prove that there exists some map Φ (depending on 𝑢) such that 𝑢 • Φ ∈ 𝑊 𝑠,𝑝 (𝑄 𝑚 ; 𝒩) along with the corresponding estimates, it will be convenient to rely on some genericity arguments using the framework of Fuglede maps, in a formalism developed by Bousquet, Ponce, and Van Schaftingen in [START_REF]Generic topological screening and approximation of Sobolev maps[END_REF]; our presentation is limited to the tools instrumental in our proofs. The results below are taken from [START_REF]Generic topological screening and approximation of Sobolev maps[END_REF], sometimes with slight modifications. Nevertheless, we reproduce the proofs here for the convenience of the reader.
We start with the following lemma, suited for 𝐿 𝑝 regularity, which gives a criterion to detect a family of maps 𝛾 such that composition with 𝛾 is compatible with 𝐿 𝑝 convergence.
Lemma 3.3. Let (𝑋 , 𝒳, 𝜇) be a measure space, 𝑢 : 𝑋 → ℝ a measurable map which does not vanish 𝜇-almost everywhere, and (𝑢 𝑛 ) 𝑛∈ℕ a sequence of maps in 𝐿 𝑝 (𝑋 , 𝜇) such that 𝑢 𝑛 → 𝑢 in 𝐿 𝑝 (𝑋 , 𝜇). There exists a summable function 𝑤 : 𝑋 → [0, +∞] satisfying ∫ 𝑋 𝑤 d𝜇 > 0 and a subsequence (𝑢 𝑛 𝑖 ) 𝑖∈ℕ such that for every measure space (𝑌, 𝒴, 𝜆) and every measurable map
𝛾 : 𝑌 → 𝑋 satisfying 𝑤 • 𝛾 ∈ 𝐿 1 (𝑌, 𝜆), we have 𝑢 𝑛 𝑖 • 𝛾 ∈ 𝐿 𝑝 (𝑌, 𝜆), 𝑢 𝑛 𝑖 • 𝛾 → 𝑢 • 𝛾 in 𝐿 𝑝 (𝑌, 𝜆),
and
∫_𝑌 |𝑢 • 𝛾|^𝑝 d𝜆 ≤ 2 ( ∫_𝑌 𝑤 • 𝛾 d𝜆 / ∫_𝑋 𝑤 d𝜇 ) ∫_𝑋 |𝑢|^𝑝 d𝜇.
We insist on the fact that the map 𝑤 depends on 𝑢. Even modifying 𝑢 on a null set may change the map 𝑤 given by Lemma 3.3.
Proof. Choose a sequence (𝜅 𝑖 ) 𝑖∈ℕ diverging to +∞ such that 𝜅 𝑖 ≥ 1 for every 𝑖 ∈ ℕ. Then extract a subsequence (𝑢 𝑛 𝑖 ) 𝑖∈ℕ so that
∥𝑢∥_{𝐿^𝑝(𝑋,𝜇)} + ∑_{𝑖∈ℕ} 𝜅_𝑖 ∥𝑢_{𝑛_𝑖} − 𝑢∥_{𝐿^𝑝(𝑋,𝜇)} < 2^{1/𝑝} ∥𝑢∥_{𝐿^𝑝(𝑋,𝜇)}
and define 𝑤 : 𝑋 → [0, +∞] by
𝑤 = ( |𝑢| + ∑_{𝑖∈ℕ} 𝜅_𝑖 |𝑢_{𝑛_𝑖} − 𝑢| )^𝑝.
We deduce from the triangle inequality and Fatou's lemma that 𝑤 is summable with
( ∫_𝑋 𝑤 d𝜇 )^{1/𝑝} = ∥𝑤^{1/𝑝}∥_{𝐿^𝑝(𝑋,𝜇)} ≤ ∥𝑢∥_{𝐿^𝑝(𝑋,𝜇)} + ∑_{𝑖∈ℕ} 𝜅_𝑖 ∥𝑢_{𝑛_𝑖} − 𝑢∥_{𝐿^𝑝(𝑋,𝜇)} < 2^{1/𝑝} ∥𝑢∥_{𝐿^𝑝(𝑋,𝜇)}. (3.2)
Since 𝜅_𝑖 ≥ 1, we have |𝑢|^𝑝 ≤ 𝑤 and 𝜅_𝑖^𝑝 |𝑢_{𝑛_𝑖} − 𝑢|^𝑝 ≤ 𝑤 pointwise. Hence, if 𝛾 : 𝑌 → 𝑋 is measurable and 𝑤 • 𝛾 ∈ 𝐿^1(𝑌, 𝜆), then 𝑢_{𝑛_𝑖} • 𝛾 ∈ 𝐿^𝑝(𝑌, 𝜆), ∥𝑢_{𝑛_𝑖} • 𝛾 − 𝑢 • 𝛾∥^𝑝_{𝐿^𝑝(𝑌,𝜆)} ≤ 𝜅_𝑖^{−𝑝} ∫_𝑌 𝑤 • 𝛾 d𝜆 → 0 as 𝑖 → +∞, and
∫ 𝑌 |𝑢 • 𝛾| 𝑝 d𝜆 ≤ ∫ 𝑌 𝑤 • 𝛾 d𝜆.
Combining the above inequality with (3.2) provides us with the desired estimate, and therefore concludes the proof.
□
Using the previous lemma, we may now obtain a criterion to detect a family of maps 𝛾 such that composition with 𝛾 is compatible with 𝑊 𝑘,𝑝 regularity, along with the corresponding estimates. Once again, note that the map 𝑤 given by the lemma below depends on 𝑢. Lemma 3.4. Let 𝛺 ⊂ ℝ 𝑚 be an open set and 𝑢 ∈ 𝑊 𝑘,𝑝 (𝛺). There exists a summable map
𝑤 : 𝛺 → [0, +∞] such that ∫ 𝛺 𝑤 = 1
and such that for every open set 𝜔 ⊂ ℝ 𝑀 and every map 𝛾 ∈ 𝒞 ∞ (𝜔; 𝛺) with bounded derivatives, if 𝑤 • 𝛾 is summable, then we have 𝑢 • 𝛾 ∈ 𝑊 𝑘,𝑝 (𝜔), the derivatives 𝐷 𝑗 (𝑢 • 𝛾) are given by the classical Faà di Bruno formula, and
∥𝐷^𝑗 𝑢 • 𝛾∥_{𝐿^𝑝(𝜔)} ≤ 𝐶 ( ∫_𝜔 𝑤 • 𝛾 )^{1/𝑝} ∥𝐷^𝑗 𝑢∥_{𝐿^𝑝(𝛺)} for every 𝑗 ∈ {0, . . . , 𝑘},
for some constant 𝐶 > 0 depending on 𝑚, 𝑀, 𝑘, and 𝑝.
We note for further use that, under the assumptions of Lemma 3.4, applying the Faà di Bruno formula, we may estimate 𝐷 𝑗 (𝑢 • 𝛾) as follows:
∥𝐷^𝑗(𝑢 • 𝛾)∥_{𝐿^𝑝(𝜔)} ≤ 𝐶 ( ∫_𝜔 𝑤 • 𝛾 )^{1/𝑝} ∑_{𝑖=1}^{𝑗} ∑_{1≤𝑡_1≤⋯≤𝑡_𝑖, 𝑡_1+⋯+𝑡_𝑖=𝑗} ∥𝐷^{𝑡_1}𝛾∥_{𝐿^∞(𝜔)} ⋯ ∥𝐷^{𝑡_𝑖}𝛾∥_{𝐿^∞(𝜔)} ∥𝐷^𝑖 𝑢∥_{𝐿^𝑝(𝛺)}. (3.3)
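For instance, for 𝑗 = 2 the Faà di Bruno formula reduces to 𝐷^2(𝑢 • 𝛾) = 𝐷^2𝑢 • 𝛾 [𝐷𝛾, 𝐷𝛾] + 𝐷𝑢 • 𝛾 [𝐷^2𝛾], and (3.3) reads
∥𝐷^2(𝑢 • 𝛾)∥_{𝐿^𝑝(𝜔)} ≤ 𝐶 ( ∫_𝜔 𝑤 • 𝛾 )^{1/𝑝} ( ∥𝐷𝛾∥^2_{𝐿^∞(𝜔)} ∥𝐷^2𝑢∥_{𝐿^𝑝(𝛺)} + ∥𝐷^2𝛾∥_{𝐿^∞(𝜔)} ∥𝐷𝑢∥_{𝐿^𝑝(𝛺)} ),
the two terms corresponding respectively to 𝑖 = 2 with (𝑡_1, 𝑡_2) = (1, 1), and to 𝑖 = 1 with 𝑡_1 = 2.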
We also make an important remark about a measurability issue. In Lemma 3.3, we worked with arbitrary measure spaces. On the other hand, here we implicitly assume that ℝ and ℝ 𝑚 are endowed with the Borel 𝜎-algebra (and not the Lebesgue 𝜎-algebra) in order to ensure that continuous maps are measurable.
Proof. We may assume, without loss of generality, that 𝑢 and its 𝑘 first derivatives are not almost everywhere equal to 0. Let (𝑢 𝑛 ) 𝑛∈ℕ be a sequence of smooth maps converging to 𝑢 in 𝑊 𝑘,𝑝 (𝛺). We apply inductively Lemma 3.3 to 𝐷 𝑖 𝑢 for 𝑖 ∈ {0, . . . , 𝑘} to obtain summable maps 𝑤 𝑖 : 𝛺 → [0, +∞] satisfying ∫ 𝛺 𝑤 𝑖 > 0 and a subsequence (𝑢 𝑛 𝑙 ) 𝑙∈ℕ such that, for every measurable map 𝛾 : 𝜔 → 𝛺 such that 𝑤 𝑖 • 𝛾 is summable,
𝐷^𝑖 𝑢_{𝑛_𝑙} • 𝛾 → 𝐷^𝑖 𝑢 • 𝛾 in 𝐿^𝑝(𝜔), and ∫_𝜔 |𝐷^𝑖 𝑢 • 𝛾|^𝑝 ≤ 2 ( ∫_𝜔 𝑤_𝑖 • 𝛾 / ∫_𝛺 𝑤_𝑖 ) ∫_𝛺 |𝐷^𝑖 𝑢|^𝑝. Let 𝑤 = (1/(𝑘 + 1)) ∑_{𝑖=0}^{𝑘} 𝑤_𝑖 / ∫_𝛺 𝑤_𝑖.
It is readily seen that
∫ 𝛺 𝑤 = 1.
Observe also that
𝑤 𝑖 ≤ (𝑘 + 1)𝑤 ∫ 𝛺 𝑤 𝑖 . Therefore, if 𝑤 • 𝛾 is summable, we find that 𝐷 𝑖 𝑢 • 𝛾 ∈ 𝐿 𝑝 (𝜔) with ∫ 𝜔 |𝐷 𝑖 𝑢 • 𝛾| 𝑝 ≤ 2(𝑘 + 1) ∫ 𝜔 𝑤 • 𝛾 ∫ 𝛺 |𝐷 𝑖 𝑢| 𝑝 . (3.4)
If in addition 𝛾 is smooth and has bounded derivatives, since
𝐷 𝑖 𝑢 𝑛 𝑙 • 𝛾 → 𝐷 𝑖 𝑢 • 𝛾 in 𝐿 𝑝 (𝜔), 𝐷 𝑖 (𝑢 𝑛 𝑙 • 𝛾) converges in 𝐿 𝑝 (𝜔)
to a map which coincides with the function one would obtain by applying the Faà di Bruno formula to compute 𝐷 𝑖 (𝑢 • 𝛾). Hence, the closure property for Sobolev spaces ensures that 𝑢 • 𝛾 ∈ 𝑊 𝑘,𝑝 (𝜔) and that the Faà di Bruno formula actually applies. The estimates for 𝐷 𝑗 𝑢 • 𝛾 are already contained in inequality (3.4), and therefore the proof is complete.
□
After dealing with integer order Sobolev spaces, we present the next lemma, which contains the construction of a detector for maps preserving fractional Sobolev regularity under composition.
Lemma 3.5. Let 𝛺 ⊂ ℝ^𝑚 be a measurable set, let 𝑢 ∈ 𝑊^{𝜎,𝑝}(𝛺), and define 𝑤 : 𝛺 → [0, +∞] for every 𝑥 ∈ 𝛺 by
𝑤(𝑥) = ∫_𝛺 |𝑢(𝑥) − 𝑢(𝑦)|^𝑝 / |𝑥 − 𝑦|^{𝑚+𝜎𝑝} d𝑦.
Assume moreover that there exists 𝑐 > 0 such that |𝐵 𝑚 𝜆 (𝑧) ∩ 𝛺| ≥ 𝑐𝜆 𝑚 for every 𝑧 ∈ 𝛺 and 0 < 𝜆 ≤ 1 2 diam 𝛺. For every open set 𝜔 ⊂ ℝ 𝑀 and every Lipschitz map
𝛾 : 𝜔 → 𝛺, if 𝑤 • 𝛾 is summable, then we have 𝑢 • 𝛾 ∈ 𝑊 𝜎,𝑝 (𝜔) with |𝑢 • 𝛾| 𝑊 𝜎,𝑝 (𝜔) ≤ 𝐶|𝛾| 𝜎 𝒞 0,1 (𝜔) ∫ 𝜔 𝑤 • 𝛾 1 𝑝
for some constant 𝐶 > 0 depending on 𝑚, 𝑀, 𝜎, 𝑝, and 𝑐.
We recall that |𝛾| 𝒞 0,1 (𝜔) denotes the Lipschitz seminorm of 𝛾, defined by
|𝛾|_{𝒞^{0,1}(𝜔)} = sup_{𝑥,𝑦∈𝜔, 𝑥≠𝑦} |𝛾(𝑥) − 𝛾(𝑦)| / |𝑥 − 𝑦|.
In contrast to what happens for integer order Sobolev spaces, here we have an explicit expression for 𝑤 depending on 𝑢. It is useful to observe that
∫ 𝛺 𝑤 = |𝑢| 𝑝 𝑊 𝜎,𝑝 (𝛺) .
We also comment on the assumption on the volumes of balls in 𝛺, which will be crucial during the proof. It is in particular satisfied if 𝛺 is a cube. Indeed, in this case, any ball centered at a point of 𝛺 with radius less that 1 2 diam 𝛺 has at least one quadrant in 𝛺, which implies that 𝛺 satisfies the assumptions of Lemma 3.5. To prove Proposition 3.2, we only need to apply Lemma 3.5 on cubes, but later on in Section 7, we will need to use a similar technique on more general domains, whence our motivation for already presenting a more general statement here.
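One way to make this quadrant argument quantitative (with a non-optimal constant): let 𝛺 = 𝑄^𝑚_𝑅(𝑎), 𝑧 ∈ 𝛺, and 0 < 𝜆 ≤ (1/2) diam 𝛺 = √𝑚 𝑅. In each coordinate direction, the segment from 𝑧 towards the farther face of the cube has length at least 𝑅 ≥ 𝜆/√𝑚. The axis-parallel box with one corner at 𝑧, oriented along these directions and with all sides equal to 𝜆/√𝑚, has diameter 𝜆, hence is contained in 𝐵^𝑚_𝜆(𝑧) ∩ 𝛺, and its volume is (𝜆/√𝑚)^𝑚. The assumption of Lemma 3.5 therefore holds in this case with 𝑐 = 𝑚^{−𝑚/2}.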
Proof. For every 𝑥, 𝑦 ∈ 𝜔, we let
ℬ_{𝑥,𝑦} = 𝐵^𝑚_{|𝛾(𝑥)−𝛾(𝑦)|}((𝛾(𝑥) + 𝛾(𝑦))/2) ∩ 𝛺. We write
|𝑢 • 𝛾(𝑥) − 𝑢 • 𝛾(𝑦)|^𝑝 ≤ 𝐶_1 ( ⨏_{ℬ_{𝑥,𝑦}} |𝑢 • 𝛾(𝑥) − 𝑢(𝑧)|^𝑝 d𝑧 + ⨏_{ℬ_{𝑥,𝑦}} |𝑢(𝑧) − 𝑢 • 𝛾(𝑦)|^𝑝 d𝑧 ).
Note that 𝐵^𝑚_{|𝛾(𝑥)−𝛾(𝑦)|/2}(𝛾(𝑥)) ∩ 𝛺 ⊂ ℬ_{𝑥,𝑦}.
Since
|𝛾(𝑥) − 𝛾(𝑦)|/2 ≤ (1/2) diam 𝛺, we deduce that |ℬ_{𝑥,𝑦}| ≥ 𝐶_2 |𝛾(𝑥) − 𝛾(𝑦)|^𝑚.
Moreover, we observe that for every 𝑧 ∈ ℬ 𝑥,𝑦 , we have
|𝛾(𝑥) − 𝑧| ≤ |(𝛾(𝑥) + 𝛾(𝑦))/2 − 𝑧| + (1/2)|𝛾(𝑥) − 𝛾(𝑦)| ≤ (3/2)|𝛾(𝑥) − 𝛾(𝑦)|,
and similarly |𝛾(𝑦) − 𝑧| ≤ (3/2)|𝛾(𝑥) − 𝛾(𝑦)|. Hence,
|𝑢 • 𝛾(𝑥) − 𝑢 • 𝛾(𝑦)|^𝑝 ≤ 𝐶_3 ( ∫_{ℬ_{𝑥,𝑦}} |𝑢 • 𝛾(𝑥) − 𝑢(𝑧)|^𝑝 / |𝛾(𝑥) − 𝑧|^𝑚 d𝑧 + ∫_{ℬ_{𝑥,𝑦}} |𝑢(𝑧) − 𝑢 • 𝛾(𝑦)|^𝑝 / |𝛾(𝑦) − 𝑧|^𝑚 d𝑧 ).
Dividing by |𝑥 -𝑦| 𝑀+𝜎𝑝 and integrating over 𝜔 × 𝜔, we deduce that
|𝑢 • 𝛾|^𝑝_{𝑊^{𝜎,𝑝}(𝜔)} ≤ 𝐶_4 ∫_𝜔 ∫_𝜔 ∫_{ℬ_{𝑥,𝑦}} |𝑢 • 𝛾(𝑥) − 𝑢(𝑧)|^𝑝 / ( |𝑥 − 𝑦|^{𝑀+𝜎𝑝} |𝛾(𝑥) − 𝑧|^𝑚 ) d𝑧 d𝑦 d𝑥.
We use Tonelli's theorem to deduce that
|𝑢 • 𝛾|^𝑝_{𝑊^{𝜎,𝑝}(𝜔)} ≤ 𝐶_4 ∫_𝛺 ∫_𝜔 ∫_{𝒴_{𝑥,𝑧}} |𝑢 • 𝛾(𝑥) − 𝑢(𝑧)|^𝑝 / ( |𝑥 − 𝑦|^{𝑀+𝜎𝑝} |𝛾(𝑥) − 𝑧|^𝑚 ) d𝑦 d𝑥 d𝑧,
where 𝒴 𝑥,𝑧 is the set of all 𝑦 ∈ 𝜔 such that 𝑧 ∈ ℬ 𝑥,𝑦 , that is,
𝒴 𝑥,𝑧 = {𝑦 ∈ 𝜔: |𝛾(𝑥) + 𝛾(𝑦) -2𝑧| < 2|𝛾(𝑥) -𝛾(𝑦)|}.
Observe that
𝒴_{𝑥,𝑧} ⊂ {𝑦 ∈ ℝ^𝑀 : |𝛾(𝑥) − 𝑧| < (3/2)|𝛾(𝑥) − 𝛾(𝑦)|} ⊂ {𝑦 ∈ ℝ^𝑀 : |𝛾(𝑥) − 𝑧| < (3/2)|𝛾|_{𝒞^{0,1}(𝜔)} |𝑥 − 𝑦|} = ℝ^𝑀 \ 𝐵^𝑀_𝑟(𝑥),
where
𝑟 = 𝑟(𝑥, 𝑧) = 2|𝛾(𝑥) − 𝑧| / ( 3|𝛾|_{𝒞^{0,1}(𝜔)} ).
Hence,
|𝑢 • 𝛾|^𝑝_{𝑊^{𝜎,𝑝}(𝜔)} ≤ 𝐶_4 ∫_𝛺 ∫_𝜔 ∫_{ℝ^𝑀\𝐵^𝑀_𝑟(𝑥)} |𝑢 • 𝛾(𝑥) − 𝑢(𝑧)|^𝑝 / ( |𝑥 − 𝑦|^{𝑀+𝜎𝑝} |𝛾(𝑥) − 𝑧|^𝑚 ) d𝑦 d𝑥 d𝑧 ≤ 𝐶_5 |𝛾|^{𝜎𝑝}_{𝒞^{0,1}(𝜔)} ∫_𝛺 ∫_𝜔 |𝑢 • 𝛾(𝑥) − 𝑢(𝑧)|^𝑝 / |𝛾(𝑥) − 𝑧|^{𝑚+𝜎𝑝} d𝑥 d𝑧,
which concludes the proof.
□
Now that we have at our disposal a criterion to detect a family of maps that preserve membership in Sobolev spaces after composition, it would be useful to know if, given a detector 𝑤 associated to a fixed map 𝑢 ∈ 𝑊 𝑠,𝑝 , we may actually construct many smooth maps 𝛾 such that 𝑤 • 𝛾 is summable. This is based on a genericity argument, and is the purpose of the next lemma, whose proof relies on an averaging argument initially due to Federer and Fleming [START_REF] Federer | Normal and integral currents[END_REF]. Our presentation and proof are taken from [9, Lemma 2.5]. Lemma 3.6. Let 𝜔, 𝛺, and 𝑃 ⊂ ℝ 𝑚 be measurable sets, with 0 < |𝑃| < +∞. Let Φ : 𝜔+𝑃 → 𝛺 and 𝑤 : 𝛺 → [0, +∞] be measurable maps. For every 𝑎 ∈ 𝑃, define the map Φ 𝑎 : 𝜔 → ℝ 𝑚 by Φ 𝑎 (𝑥) = Φ(𝑥 -𝑎) + 𝑎. Assume that, for every 𝑎 ∈ 𝑃 and 𝑥 ∈ 𝜔, Φ 𝑎 (𝑥) ∈ 𝛺. There exists a subset 𝐴 ⊂ 𝑃 of positive measure such that, for every 𝑎 ∈ 𝐴, we have
∫_𝜔 𝑤 • Φ_𝑎 ≤ 𝐶 ( |𝜔 + 𝑃| / |𝑃| ) ∫_𝛺 𝑤 for some constant 𝐶 > 0.
The constant 𝐶 in the above estimate does not depend on the different parameters involved in the statement of the lemma. However, as we shall see in the proof, the measure of the set 𝐴 may be taken arbitrarily close to |𝑃| provided that we enlarge 𝐶 accordingly.
Proof. We are going to estimate the average
⨏ 𝑃 ∫ 𝜔 𝑤 • Φ 𝑎 d𝑎.
By a change of variable by translation and Tonelli's theorem, we compute that
∫_𝑃 ∫_𝜔 𝑤 • Φ_𝑎 d𝑎 = ∫_𝑃 ∫_{𝜔+𝑎} 𝑤(Φ(𝑦) + 𝑎) d𝑦 d𝑎 ≤ ∫_{𝜔+𝑃} ∫_{𝑃∩(𝑦−𝜔)} 𝑤(Φ(𝑦) + 𝑎) d𝑎 d𝑦 ≤ ∫_{𝜔+𝑃} ∫_𝛺 𝑤(𝑥) d𝑥 d𝑦 = |𝜔 + 𝑃| ∫_𝛺 𝑤. Therefore,
⨏_𝑃 ∫_𝜔 𝑤 • Φ_𝑎 d𝑎 ≤ ( |𝜔 + 𝑃| / |𝑃| ) ∫_𝛺 𝑤.
Hence, for every 0 < 𝜃 < 1, there exists a subset 𝐴 ⊂ 𝑃 with measure |𝐴| ≥ 𝜃|𝑃| such that, for every 𝑎 ∈ 𝐴, we have
∫_𝜔 𝑤 • Φ_𝑎 ≤ (1/(1 − 𝜃)) ( |𝜔 + 𝑃| / |𝑃| ) ∫_𝛺 𝑤,
and the proof of the lemma is complete.
□
With all these tools at our disposal, we are now ready to prove Proposition 3.2. We start by constructing one model map Φ satisfying the geometric properties in the conclusion of the proposition. Then we use the previous lemmas to show that Φ 𝑎 satisfies all the conclusions of Proposition 3.2 for some 𝑎 ∈ ℝ 𝑚 .
Proof of Proposition 3.2. We use the notation introduced in (3.1). We start with the construction of the model map Φ. Let 𝜆 > 0 be such that
𝜆 < min 𝑟 -𝜌 2 , 𝑟 -𝑟 2 , 𝜌 -𝑟 2 .
We define Φ : 𝑄 4,1 → 𝑄 4,1 by
Φ(𝑥′, 𝑥″) = (𝑥′, 𝜙(𝑥′/𝜂, 𝑥″/𝜂) 𝑥″),
where
𝜙 : 𝑄 4,1 → [0, 1] is a smooth function such that (a) for 𝑥 ∈ 𝑄 1,1 + 𝐵 𝑚 𝜆 , 𝜙(𝑥) = 0; (b) for 𝑥 ∈ (𝑄 4,1 \ 𝑄 2,1 ) + 𝐵 𝑚 𝜆 , 𝜙(𝑥) = 1.
Recall that the 𝑄 𝑖,1 are the rectangles defined in (3.1) with parameter 𝜂 = 1. By scaling, we have
∥𝐷 𝑗 Φ∥ 𝐿 ∞ (𝑄 4 ) ≤ 𝐶 1 𝜂 1-𝑗 for every 𝑗 ∈ {1, . . . , 𝑘 + 1}.
Now we set Φ 𝑎 (𝑥) = Φ(𝑥 -𝑎) + 𝑎 for every 𝑎 ∈ 𝐵 𝑚 𝜆𝜂 . By construction, Φ 𝑎 satisfies the geometric properties (i) to (iii) for every 𝑎 ∈ 𝐵 𝑚 𝜆𝜂 .
We now turn to the Sobolev estimates (iv). In the case where 𝑘 = 0, we apply Lemma 3.5 to 𝑢, with 𝛺 = 𝑄 4 . Let 𝑤 : 𝑄 4 → [0, +∞] be the corresponding detector. By Lemma 3.6 with 𝜔 = 𝑄 3 , 𝛺 = 𝑄 4 , and 𝑃 = 𝐵 𝑚 𝜆𝜂 , there exists 𝑎 ∈ 𝐵 𝑚 𝜆𝜂 such that
∫_{𝑄_3} 𝑤 • Φ_𝑎 ≤ 𝐶_2 ( |𝑄_3 + 𝐵^𝑚_{𝜆𝜂}| / |𝐵^𝑚_{𝜆𝜂}| ) ∫_{𝑄_4} 𝑤.
Since 𝑄 3 has sides whose length is proportional to 𝜂, this implies that
∫_{𝑄_3} 𝑤 • Φ_𝑎 ≤ 𝐶_3 ∫_{𝑄_4} 𝑤. (3.5)
Therefore, 𝑢 • Φ_𝑎 ∈ 𝑊^{𝑠,𝑝}(𝑄_4) and
|𝑢 • Φ_𝑎|_{𝑊^{𝑠,𝑝}(𝑄_3)} ≤ 𝐶_4 |Φ_𝑎|^𝑠_{𝒞^{0,1}(𝑄_3)} ( ∫_{𝑄_3} 𝑤 • Φ_𝑎 )^{1/𝑝}.
Combining the estimate on the derivative of Φ 𝑎 , equation (3.5) and the remark following Lemma 3.5, we conclude that
|𝑢 • Φ 𝑎 | 𝑊 𝑠,𝑝 (𝑄 3 ) ≤ 𝐶 5 |𝑢| 𝑊 𝑠,𝑝 (𝑄 4 ) .
The 𝐿 𝑝 estimate is obtained as in the case 𝑘 ≥ 1 below, and this concludes the proof when 0 < 𝑠 < 1.
If now 𝑘 ≥ 1, we apply Lemma 3.4 to 𝑢 to obtain a detector 𝑤 0 : 𝑄 4 → [0, +∞] and we apply Lemma 3.5 to 𝐷 𝑗 𝑢 for every 𝑗 ∈ {1, . . . , 𝑘} to obtain a detector 𝑤 𝑗 : 𝑄 4 → [0, +∞]. (In the case where 𝜎 = 0, we skip this second step and only construct 𝑤 0 . In the sequel we continue to speak about 𝑤 𝑗 for 𝑗 ∈ {0, . . . , 𝑘}, it is implicit that when 𝜎 = 0 we only consider 𝑤 0 .) Then we invoke Lemma 3.6 to find some 𝑎 ∈ 𝐵 𝑚 𝜆𝜂 such that
∫_{𝑄_3} 𝑤_𝑗 • Φ_𝑎 ≤ 𝐶_6 ( |𝑄_3 + 𝐵^𝑚_{𝜆𝜂}| / |𝐵^𝑚_{𝜆𝜂}| ) ∫_{𝑄_4} 𝑤_𝑗 for every 𝑗 ∈ {0, . . . , 𝑘}. (3.6)
It is indeed possible to choose the same 𝑎 simultaneously for each 𝑤_𝑗 since the set 𝐴 in Lemma 3.6 can be chosen of measure arbitrarily close to |𝐵^𝑚_{𝜆𝜂}|. For the integer order derivatives, using the estimates on the derivatives of Φ_𝑎 and the fact that
∫_{𝑄_4} 𝑤_0 = 1, we immediately deduce that 𝑢 • Φ_𝑎 ∈ 𝑊^{𝑘,𝑝}(𝑄_3),
∥𝐷^𝑗(𝑢 • Φ_𝑎)∥_{𝐿^𝑝(𝑄_3)} ≤ 𝐶_7 ∑_{𝑖=1}^{𝑗} 𝜂^{𝑖−𝑗} ∥𝐷^𝑖 𝑢∥_{𝐿^𝑝(𝑄_4)}, and ∥𝑢 • Φ_𝑎∥_{𝐿^𝑝(𝑄_3)} ≤ ∥𝑢∥_{𝐿^𝑝(𝑄_4)}.
In the case 𝑘 = 0, we may still obtain the 𝐿 𝑝 estimate at order 0 above, constructing the detector 𝑤 0 with the help of Lemma 3.3 instead of Lemma 3.4 and using again the fact that we may choose a suitable 𝑎 for several detectors simultaneously. Dealing with fractional order derivatives requires additional computations. We continue to work with 𝑎 as in (3.6). Using the Faà di Bruno formula -which is indeed valid for 𝑢 • Φ 𝑎 by Lemma 3.4 -and the multilinearity of the differential, we find
|𝐷^𝑗(𝑢 • Φ_𝑎)(𝑥) − 𝐷^𝑗(𝑢 • Φ_𝑎)(𝑦)|^𝑝 ≤ 𝐶_8 ∑_{𝑖=1}^{𝑗} ( |𝐷^𝑖 𝑢 • Φ_𝑎(𝑥) − 𝐷^𝑖 𝑢 • Φ_𝑎(𝑦)|^𝑝 𝜂^{(𝑖−𝑗)𝑝} + ∑_{𝑡=1}^{𝑗} |𝐷^𝑖 𝑢 • Φ_𝑎(𝑥)|^𝑝 𝜂^{(𝑖−1−𝑗+𝑡)𝑝} |𝐷^𝑡 Φ_𝑎(𝑥) − 𝐷^𝑡 Φ_𝑎(𝑦)|^𝑝 ). (3.7)
When dividing (3.7) by |𝑥 − 𝑦|^{𝑚+𝜎𝑝} and integrating over 𝑄_3 × 𝑄_3, the first term on the right-hand side gives 𝜂^{(𝑖−𝑗)𝑝} |𝐷^𝑖 𝑢 • Φ_𝑎|^𝑝_{𝑊^{𝜎,𝑝}(𝑄_3)}. As in the case 0 < 𝑠 < 1, we may estimate it as
𝜂^{(𝑖−𝑗)𝑝} |𝐷^𝑖 𝑢 • Φ_𝑎|^𝑝_{𝑊^{𝜎,𝑝}(𝑄_3)} ≤ 𝐶_9 𝜂^{(𝑖−𝑗)𝑝} |𝐷^𝑖 𝑢|^𝑝_{𝑊^{𝜎,𝑝}(𝑄_4)}.
For the second term on the right-hand side of (3.7), we use an optimization argument. For every 𝑟 > 0, we write
∫_{𝑄_3} |𝐷^𝑡 Φ_𝑎(𝑥) − 𝐷^𝑡 Φ_𝑎(𝑦)|^𝑝 / |𝑥 − 𝑦|^{𝑚+𝜎𝑝} d𝑦 ≤ 𝐶_{10} ( ∫_{𝐵^𝑚_𝑟(𝑥)} 𝜂^{−𝑡𝑝} / |𝑥 − 𝑦|^{𝑚+𝜎𝑝−𝑝} d𝑦 + ∫_{ℝ^𝑚\𝐵^𝑚_𝑟(𝑥)} 𝜂^{(1−𝑡)𝑝} / |𝑥 − 𝑦|^{𝑚+𝜎𝑝} d𝑦 ) ≤ 𝐶_{11} ( 𝜂^{−𝑡𝑝} 𝑟^{𝑝−𝜎𝑝} + 𝜂^{(1−𝑡)𝑝} 𝑟^{−𝜎𝑝} ).
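Note that the first term on the right-hand side is increasing in 𝑟 while the second one is decreasing, and the two balance exactly when 𝑟 = 𝜂, where both equal 𝜂^{(1−𝑡−𝜎)𝑝}; this is what motivates the choice made next.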
Letting 𝑟 = 𝜂, we find
∫_{𝑄_3} |𝐷^𝑡 Φ_𝑎(𝑥) − 𝐷^𝑡 Φ_𝑎(𝑦)|^𝑝 / |𝑥 − 𝑦|^{𝑚+𝜎𝑝} d𝑦 ≤ 𝐶_{12} 𝜂^{(1−𝑡−𝜎)𝑝}.
Therefore,
∫_{𝑄_3} ∫_{𝑄_3} |𝐷^𝑖 𝑢 • Φ_𝑎(𝑥)|^𝑝 𝜂^{(𝑖−1−𝑗+𝑡)𝑝} |𝐷^𝑡 Φ_𝑎(𝑥) − 𝐷^𝑡 Φ_𝑎(𝑦)|^𝑝 / |𝑥 − 𝑦|^{𝑚+𝜎𝑝} d𝑥 d𝑦 ≤ 𝐶_{13} 𝜂^{(𝑖−𝑗−𝜎)𝑝} ∫_{𝑄_3} |𝐷^𝑖 𝑢 • Φ_𝑎(𝑥)|^𝑝 d𝑥 ≤ 𝐶_{14} 𝜂^{(𝑖−𝑗−𝜎)𝑝} ∫_{𝑄_4} |𝐷^𝑖 𝑢|^𝑝,
where the last inequality follows from Lemma 3.4. Gathering the estimates for both terms in (3.7) yields the desired fractional estimate and concludes the proof.
□
We conclude this section with two additional results which are the counterparts of [9, Addendum 1 and 2 to Proposition 2.1] in the context of fractional order estimates. From now on, we place ourselves under the assumptions of Proposition 3.1. The first proposition ensures that the opening procedure does not increase too much the energy on one given cube. Proposition 3.7. Let 𝒦 𝑚 be a cubication containing 𝒰 𝑚 .
(a) If 𝑠 ≥ 1 and if 𝑢 ∈ 𝑊 1,𝑠𝑝 (𝐾 𝑚 + 𝑄 𝑚 2𝜌𝜂 ; ℝ 𝜈 ), then the map Φ : ℝ 𝑚 → ℝ 𝑚 provided by Proposition 3.1 can be chosen with the additional property that 𝑢 • Φ ∈ 𝑊 1,𝑠𝑝 (𝐾 𝑚 + 𝑄 𝑚 𝜌𝜂 ; ℝ 𝜈 ), and for every 𝜎 𝑚 ∈ 𝒦 𝑚 ,
∥𝐷(𝑢 • Φ)∥ 𝐿 𝑠𝑝 (𝜎 𝑚 +𝑄 𝑚 𝜌𝜂 ) ≤ 𝐶 ′ ∥𝐷𝑢∥ 𝐿 𝑠𝑝 (𝜎 𝑚 +𝑄 𝑚 2𝜌𝜂 )
for some constant 𝐶 ′ > 0 depending on 𝑚, 𝑠, 𝑝, and 𝜌.
(b) If 0 < 𝑠 < 1, then the map Φ : ℝ 𝑚 → ℝ 𝑚 provided by Proposition 3.1 can be chosen with the additional property that 𝑢 • Φ ∈ 𝑊 𝑠,𝑝 (𝐾 𝑚 + 𝑄 𝑚 𝜌𝜂 ; ℝ 𝜈 ), and for every
𝜎 𝑚 ∈ 𝒦 𝑚 , |𝑢 • Φ| 𝑊 𝑠,𝑝 (𝜎 𝑚 +𝑄 𝑚 𝜌𝜂 ) ≤ 𝐶 ′ |𝑢| 𝑊 𝑠,𝑝 (𝜎 𝑚 +𝑄 𝑚 2𝜌𝜂 )
for some constant 𝐶 ′ > 0 depending on 𝑚, 𝑠, 𝑝, and 𝜌.
Proof. (a) In the case 𝑠 ≥ 1, since the choice of the parameter 𝑎 involved in the construction of the map provided by Proposition 3.2 is made over a set of positive measure, according to the remark following Lemma 3.6, we may assume that the maps Φ 𝜎 𝑑 involved in the construction of the map Φ satisfy in addition the conclusion of Proposition 3.2 with parameters 1 and 𝑠𝑝.
We keep the notation used in the proof of Proposition 3.1. Let 𝑑 ∈ {0, . . . , ℓ }. We are going to prove that
∥𝐷(𝑢 𝑑 • Φ 𝑑 )∥ 𝐿 𝑠𝑝 (𝜎 𝑚 +𝑄 𝑚 𝜌 𝑑 𝜂 ) ≤ 𝐶 ′ ∥𝐷𝑢 𝑑 ∥ 𝐿 𝑠𝑝 (𝜎 𝑚 +𝑄 𝑚 𝜌 𝑑-1 𝜂 ) ,
and the conclusion will follow by induction. By our additional assumption on the maps Φ 𝜎 𝑑 , we have
∥𝐷(𝑢 𝑑 • Φ 𝑑 )∥ 𝐿 𝑠𝑝 (𝑇 𝜎 𝑑 (𝑄 3 )) ≤ ∥𝐷(𝑢 𝑑 • Φ 𝑑 )∥ 𝐿 𝑠𝑝 (𝑇 𝜎 𝑑 (𝑄 4 )) ≤ 𝐶 1 ∥𝐷𝑢 𝑑 ∥ 𝐿 𝑠𝑝 (𝑇 𝜎 𝑑 (𝑄 4 ))
for every 𝜎 𝑑 ∈ 𝒰 𝑑 . We conclude by using the fact that
Supp Φ_𝑑 ⊂ ⋃_{𝜎^𝑑 ∈ 𝒰^𝑑} 𝑇_{𝜎^𝑑}(𝑄_2) ⊂ ⋃_{𝜎^𝑑 ∈ 𝒰^𝑑} 𝑇_{𝜎^𝑑}(𝑄_3)
along with the additivity of the integral. (b) The proof of the case 0 < 𝑠 < 1 is identical, except that we replace the additivity of the integral by Lemma 2.2. Here we use the fact that the number of 𝑑-faces of a given cube depends only on 𝑑 and 𝑚, and that the geometric support of Φ 𝜎 𝑑 is contained in 𝑇 𝜎 𝑑 (𝑄 2 ), which is slightly smaller than 𝑇 𝜎 𝑑 (𝑄 3 ). This justifies the application of Lemma 2.2.
□
The second proposition gives VMO-type estimates for the opened map. As we mentioned in our informal presentation, such estimates are one of the main features of the opening procedure, and they follow from the fact that 𝑢 • Φ behaves locally as a map of ℓ variables in 𝑈 ℓ + 𝑄 𝑚 𝜌𝜂 .
Proposition 3.8. Under the assumptions of Propositions 3.1 and 3.7, the map Φ : ℝ 𝑚 → ℝ 𝑚 satisfies the following estimates:
(a) if 𝑠 ≥ 1, then (i) it holds that lim 𝑟→0 sup 𝑄 𝑚 𝑟 (𝑎)⊂𝑈 ℓ +𝑄 𝑚 𝜌𝜂 𝑟 ℓ 𝑠𝑝 -1 ⨏ 𝑄 𝑚 𝑟 (𝑎) ⨏ 𝑄 𝑚 𝑟 (𝑎) |𝑢 • Φ(𝑥) -𝑢 • Φ(𝑦)| d𝑥d𝑦 = 0;
(ii) for every 𝜎 𝑚 ∈ 𝒰 𝑚 and every
𝑄 𝑚 𝑟 (𝑎) ⊂ 𝑈 ℓ + 𝑄 𝑚 𝜌𝜂 with 𝑎 ∈ 𝜎 𝑚 , ⨏ 𝑄 𝑚 𝑟 (𝑎) ⨏ 𝑄 𝑚 𝑟 (𝑎) |𝑢 • Φ(𝑥) -𝑢 • Φ(𝑦)| d𝑥d𝑦 ≤ 𝐶 ′′ 𝑟 1-ℓ 𝑠𝑝 𝜂 𝑚-ℓ 𝑠𝑝 ∥𝐷𝑢 ∥ 𝐿 𝑠𝑝 (𝜎 𝑚 +𝑄 𝑚 2𝜌𝜂 ) ; (b) if 0 < 𝑠 < 1, then (i) it holds that lim 𝑟→0 sup 𝑄 𝑚 𝑟 (𝑎)⊂𝑈 ℓ +𝑄 𝑚 𝜌𝜂 𝑟 ℓ 𝑝 -𝑠 ⨏ 𝑄 𝑚 𝑟 (𝑎) ⨏ 𝑄 𝑚 𝑟 (𝑎) |𝑢 • Φ(𝑥) -𝑢 • Φ(𝑦)| d𝑥d𝑦 = 0;
(ii) for every 𝜎 𝑚 ∈ 𝒰 𝑚 and every
𝑄 𝑚 𝑟 (𝑎) ⊂ 𝑈 ℓ + 𝑄 𝑚 𝜌𝜂 with 𝑎 ∈ 𝜎 𝑚 , ⨏ 𝑄 𝑚 𝑟 (𝑎) ⨏ 𝑄 𝑚 𝑟 (𝑎) |𝑢 • Φ(𝑥) -𝑢 • Φ(𝑦)| d𝑥d𝑦 ≤ 𝐶 ′′ 𝑟 𝑠-ℓ 𝑝 𝜂 𝑚-ℓ 𝑝 |𝑢| 𝑊 𝑠,𝑝 (𝜎 𝑚 +𝑄 𝑚 2𝜌𝜂 ) ;
for some constant 𝐶 ′′ > 0 depending on 𝑚, 𝑠, 𝑝, and 𝜌.
Proof. We start with the proof of items (i). Let 𝑎 ∈ ℝ 𝑚 and 𝑟 > 0 be such that 𝑄 𝑚 𝑟 (𝑎) ⊂ 𝑈 ℓ + 𝑄 𝑚 𝜌𝜂 , and write 𝑄 𝑚 𝑟 (𝑎) = 𝑄 ℓ 𝑟 (𝑎 ′ ) × 𝑄 𝑚-ℓ 𝑟 (𝑎 ′′ ). Then 𝑎 ∈ 𝑈 ℓ + 𝑄 𝑚 𝜌𝜂-𝑟 , and hence there exists 𝜏 ℓ ∈ 𝒰 ℓ such that 𝑄 𝑚 𝑟 (𝑎) ⊂ 𝜏 ℓ + 𝑄 𝑚 𝜌𝜂 . We may assume that 𝜏 ℓ = 𝑄 ℓ 𝜂 × {0} 𝑚-ℓ . Recall that the map Φ is constant on each (𝑚 - ℓ)-dimensional cube orthogonal to 𝑄 ℓ (1+𝜌)𝜂 × {0} 𝑚-ℓ . Hence we may define 𝑣 :
𝑄 ℓ (1+𝜌)𝜂 → ℝ 𝜈 by 𝑣(𝑥 ′ ) = 𝑢 • Φ(𝑥 ′ , 𝑎 ′′ ).
Using Proposition 3.7, we deduce that
𝑣 ∈ 𝑊 1,𝑠𝑝 (𝑄 ℓ (1+𝜌)𝜂 ; ℝ 𝜈 ) in the 𝑠 ≥ 1 case, respec- tively 𝑣 ∈ 𝑊 𝑠,𝑝 (𝑄 ℓ (1+𝜌)𝜂 ; ℝ 𝜈 ) in the 0 < 𝑠 < 1 case. We next note that ⨏ 𝑄 𝑚 𝑟 (𝑎) ⨏ 𝑄 𝑚 𝑟 (𝑎) |𝑢 • Φ(𝑥) -𝑢 • Φ(𝑦)| d𝑥d𝑦 = ⨏ 𝑄 ℓ 𝑟 (𝑎 ′ ) ⨏ 𝑄 ℓ 𝑟 (𝑎 ′ ) |𝑣(𝑥 ′ ) -𝑣(𝑦 ′ )| d𝑥 ′ d𝑦 ′ .
The Poincaré-Wirtinger inequality implies that
⨏ 𝑄 ℓ 𝑟 (𝑎 ′ ) ⨏ 𝑄 ℓ 𝑟 (𝑎 ′ ) |𝑣(𝑥 ′ ) -𝑣(𝑦 ′ )| d𝑥 ′ d𝑦 ′ ≤ 𝐶 1 𝑟 1-ℓ 𝑠𝑝 ∥𝐷𝑣∥ 𝐿 𝑠𝑝 (𝑄 ℓ 𝑟 (𝑎 ′ )) if 𝑠 ≥ 1, respectively ⨏ 𝑄 ℓ 𝑟 (𝑎 ′ ) ⨏ 𝑄 ℓ 𝑟 (𝑎 ′ ) |𝑣(𝑥 ′ ) -𝑣(𝑦 ′ )| d𝑥 ′ d𝑦 ′ ≤ 𝐶 2 𝑟 𝑠-ℓ 𝑝 |𝑣| 𝑊 𝑠,𝑝 (𝑄 ℓ 𝑟 (𝑎 ′ ))
if 0 < 𝑠 < 1. It then suffices to invoke Lebesgue's lemma to obtain both items (i).
We now turn to the proof of items (ii). When 𝑠 ≥ 1, we observe that
∥𝐷(𝑢 • Φ)∥ 𝐿 𝑠𝑝 (𝑄 ℓ 𝑟 (𝑎 ′ )×𝑄 𝑚-ℓ 𝜌𝜂 (𝑎 ′′ )) = (2𝜌𝜂) 𝑚-ℓ 𝑠𝑝 ∥𝐷𝑣 ∥ 𝐿 𝑠𝑝 (𝑄 ℓ 𝑟 (𝑎 ′ )) , (3.8)
and hence, assuming in addition that 𝑎 ∈ 𝜎 𝑚 with 𝜎 𝑚 ∈ 𝒰 𝑚 , we have
∥𝐷𝑣∥ 𝐿 𝑠𝑝 (𝑄 ℓ 𝑟 (𝑎 ′ )) = 1 (2𝜌𝜂) 𝑚-ℓ 𝑠𝑝 ∥𝐷(𝑢 • Φ)∥ 𝐿 𝑠𝑝 (𝑄 ℓ 𝑟 (𝑎 ′ )×𝑄 𝑚-ℓ 𝜌𝜂 (𝑎 ′′ )) ≤ 1 (2𝜌𝜂) 𝑚-ℓ 𝑠𝑝 ∥𝐷(𝑢 • Φ)∥ 𝐿 𝑠𝑝 (𝜎 𝑚 +𝑄 𝑚 𝜌𝜂 ) .
Thus,
⨏ 𝑄 𝑚 𝑟 (𝑎) ⨏ 𝑄 𝑚 𝑟 (𝑎) |𝑢 • Φ(𝑥) -𝑢 • Φ(𝑦)| d𝑥d𝑦 ≤ 𝐶 3 𝑟 1-ℓ 𝑠𝑝 (2𝜌𝜂) 𝑚-ℓ 𝑠𝑝 ∥𝐷(𝑢 • Φ)∥ 𝐿 𝑠𝑝 (𝜎 𝑚 +𝑄 𝑚 𝜌𝜂 ) .
Proposition 3.7 implies that
∥𝐷(𝑢 • Φ)∥ 𝐿 𝑠𝑝 (𝜎 𝑚 +𝑄 𝑚 𝜌𝜂 ) ≤ 𝐶 4 ∥𝐷𝑢∥ 𝐿 𝑠𝑝 (𝜎 𝑚 +𝑄 𝑚 2𝜌𝜂 ) ,
which yields the desired conclusion.
For the case 0 < 𝑠 < 1, we follow the same path, replacing inequality (3.8) by the fact that
|𝑣| 𝑊 𝑠,𝑝 (𝑄 ℓ 𝑟 (𝑎 ′ )) = 1 (2𝜌𝜂) 𝑚-ℓ 𝑝 ∫ 𝑄 𝑚-ℓ 𝜌𝜂 (𝑎 ′′ ) |𝑢 • Φ(•, 𝑥 ′′ )| 𝑝 𝑊 𝑠,𝑝 (𝑄 ℓ 𝑟 (𝑎 ′ )) d𝑥 ′′ 1 𝑝 ≤ 𝐶 5 1 𝜂 𝑚-ℓ 𝑝 |𝑢| 𝑊 𝑠,𝑝 (𝑄 ℓ 𝑟 (𝑎 ′ )×𝑄 𝑚-ℓ 𝜌𝜂 (𝑎 ′′ )) .
This concludes the proof of the proposition.
□
4 Adaptative smoothing
In this section, we present the adaptative smoothing, which consists in a regularization by convolution, 𝑥 ↦ 𝜑 𝜓 * 𝑢(𝑥), where the convolution parameter 𝜓 is allowed to depend on the point 𝑥 at which the regularized map is computed. Already implicit in the proof of the 𝐻 = 𝑊 theorem of Meyers and Serrin, this method was made popular by Schoen and Uhlenbeck [30, Section 3]. The approach we follow here is an adaptation, suited to fractional Sobolev spaces, of the one in [9, Section 3]. Let us now be more specific. Let 𝜑 be a mollifier, i.e., 𝜑 ∈ 𝒞 ∞ c (𝐵 𝑚 1 ), 𝜑 ≥ 0 in 𝐵 𝑚 1 , 𝜑 is radial, and
∫ 𝐵 𝑚 1 𝜑 = 1.
Let 𝑢 ∈ 𝐿 1 loc (𝛺), and consider a map 𝜓 ∈ 𝒞 ∞ (𝛺; (0, +∞)). For every 𝑥 ∈ 𝛺 satisfying dist (𝑥, 𝜕𝛺) ≥ 𝜓(𝑥), we may define
𝜑 𝜓 * 𝑢(𝑥) = ∫ 𝐵 𝑚 1 𝜑(𝑧)𝑢(𝑥 + 𝜓(𝑥)𝑧) d𝑧.
A change of variable yields
𝜑 𝜓 * 𝑢(𝑥) = 1 𝜓(𝑥) 𝑚 ∫ 𝐵 𝑚 𝜓(𝑥) (𝑥) 𝜑 𝑦 -𝑥 𝜓(𝑥) 𝑢(𝑦) d𝑦. (4.1)
In particular, 𝜑 𝜓 * 𝑢 is smooth.
Let us first note a straightforward inequality. Let 𝜔 ⊂ {𝑥 ∈ 𝛺: dist (𝑥, 𝜕𝛺) ≥ 𝜓(𝑥)}, so that 𝜑 𝜓 * 𝑢 is well-defined on 𝜔. For any 𝑥 ∈ 𝜔, we write
𝜑 𝜓 * 𝑢(𝑥) -𝑢(𝑥) = ∫ 𝐵 𝑚 1 𝜑(𝑧)(𝑢(𝑥 + 𝜓(𝑥)𝑧) -𝑢(𝑥)) d𝑧.
Therefore, by Minkowski's inequality, we find
∥𝜑 𝜓 * 𝑢 -𝑢∥ 𝐿 𝑝 (𝜔) ≤ ∫ 𝐵 𝑚 1 𝜑(𝑧) ∫ 𝜔 |𝑢(𝑥 + 𝜓(𝑥)𝑧) -𝑢(𝑥)| 𝑝 d𝑥 1 𝑝 d𝑧 ≤ sup 𝑣∈𝐵 𝑚 1 ∥𝜏 𝜓𝑣 (𝑢) -𝑢∥ 𝐿 𝑝 (𝜔) , (4.2)
where 𝜏 𝜓𝑣 (𝑢)(𝑥) = 𝑢(𝑥 + 𝜓(𝑥)𝑣). Our main task in this section will be to obtain estimates in the spirit of (4.2) for maps in 𝑊 𝑠,𝑝 (𝛺; ℝ 𝜈 ).
Before stating the main result of this section, we pause to explain the role of an important assumption. In the sequel, we will assume that ∥𝐷𝜓∥ 𝐿 ∞ (𝛺) < 1. We illustrate the usefulness of this condition in the simpler context of 𝐿 𝑝 estimates. We start by using Minkowski's inequality to write
∥𝜑 𝜓 * 𝑢 ∥ 𝐿 𝑝 (𝜔) ≤ ∫ 𝐵 𝑚 1 𝜑(𝑧) ∫ 𝜔 |𝑢(𝑥 + 𝜓(𝑥)𝑧)| 𝑝 d𝑥 1 𝑝 d𝑧.
Next we use the change of variable 𝑤 = 𝑥 + 𝜓(𝑥)𝑧. Note that the map Ψ : 𝜔 → 𝛺 defined by Ψ(𝑥) = 𝑥 + 𝜓(𝑥)𝑧 satisfies 𝐷Ψ(𝑥) = id +𝐷𝜓(𝑥) ⊗ 𝑧. Therefore, by rank-one perturbation of the identity (see e.g. [29, Section 3.8]), we deduce that
jac Ψ = |det (id +𝐷𝜓 ⊗ 𝑧)| = |1 + 𝐷𝜓 • 𝑧| ≥ 1 -∥𝐷𝜓∥ 𝐿 ∞ (𝛺) for 𝑧 ∈ 𝐵 𝑚 1 .
Thanks to the assumption ∥𝐷𝜓∥ 𝐿 ∞ (𝛺) < 1, the linear map 𝐷Ψ(𝑥) is invertible for 𝑧 ∈ 𝐵 𝑚 1 , so that the above change of variable is well-defined with Jacobian less than
1 1-∥𝐷𝜓∥ 𝐿 ∞ (𝜔) . We conclude that ∥𝜑 𝜓 * 𝑢 ∥ 𝐿 𝑝 (𝜔) ≤ 1 (1 -∥𝐷𝜓∥ 𝐿 ∞ (𝜔) ) 1 𝑝 ∥𝑢∥ 𝐿 𝑝 (𝛺) . (4.3)
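For completeness, the rank-one determinant identity invoked above can be verified directly (a standard linear algebra fact, recorded here for convenience): for vectors 𝑎, 𝑏 ∈ ℝ 𝑚, the matrix 𝑎 ⊗ 𝑏 has rank at most one and its only possibly nonzero eigenvalue is ⟨𝑎, 𝑏⟩, whence
\[
\det(\mathrm{id} + a \otimes b) = 1 + \langle a, b\rangle ,
\qquad\text{and in particular}\qquad
\det\bigl(\mathrm{id} + D\psi(x) \otimes z\bigr) = 1 + D\psi(x)\cdot z .
\]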
We now state the main result of this section, which is the counterpart of [9, Proposition 3.2] in the context of fractional Sobolev spaces.
Proposition 4.1. Let 𝜑 ∈ 𝒞 ∞ c (𝐵 𝑚 1
) be a mollifier and let 𝜓 ∈ 𝒞 ∞ (𝛺) be a nonnegative function such that ∥𝐷𝜓∥ 𝐿 ∞ (𝛺) < 1. For every 𝑢 ∈ 𝑊 𝑠,𝑝 (𝛺; ℝ 𝜈 ) and every open set 𝜔 ⊂ {𝑥 ∈ 𝛺: dist (𝑥, 𝜕𝛺) > 𝜓(𝑥)}, we have 𝜑 𝜓 * 𝑢 ∈ 𝑊 𝑠,𝑝 (𝜔; ℝ 𝜈 ), and moreover, the following estimates hold:
(i) (a) if 0 < 𝑠 < 1, then |𝜑 𝜓 * 𝑢| 𝑊 𝑠,𝑝 (𝜔) ≤ 𝐶 1 (1 -∥𝐷𝜓∥ 𝐿 ∞ (𝜔) ) 2 𝑝 |𝑢| 𝑊 𝑠,𝑝 (𝛺) ; (b) if 𝑠 ≥ 1, then for every 𝑗 ∈ {1, . . . , 𝑘}, 𝜂 𝑗 ∥𝐷 𝑗 (𝜑 𝜓 * 𝑢)∥ 𝐿 𝑝 (𝜔) ≤ 𝐶 1 (1 -∥𝐷𝜓∥ 𝐿 ∞ (𝜔) ) 1 𝑝 𝑗 𝑖=1
𝜂 𝑖 ∥𝐷 𝑖 𝑢∥ 𝐿 𝑝 (𝛺) ;
(c) if 𝑠 ≥ 1 and 𝜎 ≠ 0, then for every 𝑗 ∈ {1, . . . , 𝑘},
𝜂 𝑗+𝜎 |𝐷 𝑗 (𝜑 𝜓 * 𝑢)| 𝑊 𝜎,𝑝 (𝜔) ≤ 𝐶 1 (1 -∥𝐷𝜓∥ 𝐿 ∞ (𝜔) ) 2 𝑝 𝑗 𝑖=1 𝜂 𝑖 ∥𝐷 𝑖 𝑢∥ 𝐿 𝑝 (𝛺) + 𝜂 𝑖+𝜎 |𝐷 𝑖 𝑢| 𝑊 𝜎,𝑝 (𝛺) ; (ii) (a) if 0 < 𝑠 < 1, then |𝜑 𝜓 * 𝑢 -𝑢| 𝑊 𝑠,𝑝 (𝜔) ≤ sup 𝑣∈𝐵 𝑚 1 |𝜏 𝜓𝑣 (𝑢) -𝑢| 𝑊 𝑠,𝑝 (𝜔) ; (b) if 𝑠 ≥ 1, then for every 𝑗 ∈ {1, . . . , 𝑘}, 𝜂 𝑗 ∥𝐷 𝑗 (𝜑 𝜓 * 𝑢) -𝐷 𝑗 𝑢∥ 𝐿 𝑝 (𝜔) ≤ sup 𝑣∈𝐵 𝑚 1 𝜂 𝑗 ∥𝜏 𝜓𝑣 (𝐷 𝑗 𝑢) -𝐷 𝑗 𝑢∥ 𝐿 𝑝 (𝜔) + 𝐶 1 (1 -∥𝐷𝜓∥ 𝐿 ∞ (𝜔) ) 1 𝑝 𝑗 𝑖=1
𝜂 𝑖 ∥𝐷 𝑖 𝑢∥ 𝐿 𝑝 (𝐴) ;
(c) if 𝑠 ≥ 1 and 𝜎 ≠ 0, then for every 𝑗 ∈ {1, . . . , 𝑘},
𝜂 𝑗+𝜎 |𝐷 𝑗 (𝜑 𝜓 * 𝑢) -𝐷 𝑗 𝑢| 𝑊 𝜎,𝑝 (𝜔) ≤ sup 𝑣∈𝐵 𝑚 1 𝜂 𝑗+𝜎 |𝜏 𝜓𝑣 (𝐷 𝑗 𝑢) -𝐷 𝑗 𝑢| 𝑊 𝜎,𝑝 (𝜔) + 𝐶 1 (1 -∥𝐷𝜓∥ 𝐿 ∞ (𝜔) ) 2 𝑝 𝑗 𝑖=1 𝜂 𝑖 ∥𝐷 𝑖 𝑢 ∥ 𝐿 𝑝 (𝐴) + 𝜂 𝑖+𝜎 |𝐷 𝑖 𝑢| 𝑊 𝜎,𝑝 (𝐴) ;
for some constant 𝐶 > 0 depending on 𝑚, 𝑠, and 𝑝, where
𝐴 = 𝑥∈𝜔∩supp 𝐷𝜓 𝐵 𝑚 𝜓(𝑥) (𝑥)
and 𝜂 > 0 satisfies 𝜂 𝑗 ∥𝐷 𝑗 𝜓∥ 𝐿 ∞ ≤ 𝜂 for every 𝑗 ∈ {2, . . . , 𝑘 + 1}.
Proof. The proof of item (i) is completely analogous to the proof of item (ii) and uses the same ingredients. Hence we focus on item (ii), and we explain in the end what should be changed in order to get (i).
We start with the integer order estimate in the case 𝑠 ≥ 1. By the Faà di Bruno formula, for every 𝑥 ∈ 𝜔, we have
𝐷 𝑗 (𝜑 𝜓 * 𝑢)(𝑥) = ∫ 𝐵 𝑚 1 𝜑(𝑧)𝐷 𝑗 𝑢(𝑥 + 𝜓(𝑥)𝑧)[(id +𝐷𝜓(𝑥) ⊗ 𝑧) 𝑗 ] d𝑧 + 𝑗-1 𝑖=1 𝑛(𝑖,𝑗) 𝑙=1 ∫ 𝐵 𝑚 1 𝜑(𝑧)𝐷 𝑖 𝑢(𝑥 + 𝜓(𝑥)𝑧)[𝐿 𝑖,𝑙,𝑗 (𝑥, 𝑧)] d𝑧, (4.4)
where 𝑛(𝑖, 𝑗) ∈ ℕ * and 𝐿 𝑖,𝑙,𝑗 (𝑥, 𝑧) is a linear mapping ℝ 𝑗×𝑚 → ℝ 𝑖×𝑚 depending on 𝜓 and its derivatives. More precisely, each entry of 𝐿 𝑖,𝑙,𝑗 (𝑥, 𝑧) is either id +𝐷𝜓(𝑥) ⊗ 𝑧 or 𝐷 𝑡 𝜓(𝑥) ⊗ 𝑧 for some 𝑡 ∈ {2, . . . , 𝑗}, and the orders of the derivatives of 𝜓 appearing in the 𝑖 entries of 𝐿 𝑖,𝑙,𝑗 (𝑥, 𝑧) sum to 𝑗. Moreover, since 𝑖 < 𝑗, at least one entry of 𝐿 𝑖,𝑙,𝑗 (𝑥, 𝑧) has the form 𝐷 𝑡 𝜓(𝑥) ⊗ 𝑧, and thus the second integral in (4.4) lives only on supp 𝐷𝜓. Therefore, taking into account the assumption ∥𝐷 𝑡 𝜓∥ 𝐿 ∞ ≤ 𝜂 1-𝑡 , we deduce that
|𝐷 𝑖 𝑢(𝑥 + 𝜓(𝑥)𝑧)[𝐿 𝑖,𝑙,𝑗 (𝑥, 𝑧)]| ≤ 𝐶 1 |𝐷 𝑖 𝑢(𝑥 + 𝜓(𝑥)𝑧)|𝜂 𝑖-𝑗 𝜒 supp 𝐷𝜓 (𝑥).
On the other hand, we note that, by 𝑗-linearity of 𝐷 𝑗 𝑢, we may write 𝐷 𝑗 𝑢(𝑥 + 𝜓(𝑥)𝑧)[(id +𝐷𝜓(𝑥) ⊗ 𝑧) 𝑗 ] as the sum of 𝐷 𝑗 𝑢(𝑥 + 𝜓(𝑥)𝑧) and 2 𝑗 -1 terms which are the composition of 𝐷 𝑗 𝑢(𝑥 + 𝜓(𝑥)𝑧) with a 𝑗-linear map ℝ 𝑗×𝑚 → ℝ 𝑗×𝑚 whose entries are either id or 𝐷𝜓(𝑥) ⊗ 𝑧, with at least one of them being the latter. Hence, since
∥𝐷𝜓∥ 𝐿 ∞ < 1, each of those 2 𝑗 -1 last terms is bounded by |𝐷 𝑗 𝑢(𝑥 + 𝜓(𝑥)𝑧)|𝜒 supp 𝐷𝜓 (𝑥). For instance, if 𝑗 = 2, then 𝐷 2 𝑢(𝑥 + 𝜓(𝑥)𝑧)[(id +𝐷𝜓(𝑥) ⊗ 𝑧) 2 ] = 𝐷 2 𝑢(𝑥 + 𝜓(𝑥)𝑧) + 𝐷 2 𝑢(𝑥 + 𝜓(𝑥)𝑧)[id, 𝐷𝜓(𝑥) ⊗ 𝑧] + 𝐷 2 𝑢(𝑥 + 𝜓(𝑥)𝑧)[𝐷𝜓(𝑥) ⊗ 𝑧, id] + 𝐷 2 𝑢(𝑥 + 𝜓(𝑥)𝑧)[𝐷𝜓(𝑥) ⊗ 𝑧, 𝐷𝜓(𝑥) ⊗ 𝑧].
We observe that indeed, the three last terms are obtained by composition of 𝐷 2 𝑢(𝑥 + 𝜓(𝑥)𝑧) with a bilinear map, at least one of whose entries being 𝐷𝜓(𝑥) ⊗ 𝑧.
As a consequence, for every 𝑥 ∈ 𝜔, we may write
|𝐷 𝑗 (𝜑 𝜓 * 𝑢)(𝑥) -𝐷 𝑗 𝑢(𝑥)| ≤ ∫ 𝐵 𝑚 1 𝜑(𝑧)|𝐷 𝑗 𝑢(𝑥 + 𝜓(𝑥)𝑧) -𝐷 𝑗 𝑢(𝑥)| d𝑧 + 𝐶 2 𝑗 𝑖=1 𝜂 𝑖-𝑗 𝜒 supp 𝐷𝜓 (𝑥) ∫ 𝐵 𝑚 1 𝜑(𝑧)|𝐷 𝑖 𝑢(𝑥 + 𝜓(𝑥)𝑧)| d𝑧.
Minkowski's inequality ensures that
∥𝐷 𝑗 (𝜑 𝜓 * 𝑢) -𝐷 𝑗 𝑢 ∥ 𝐿 𝑝 (𝜔) ≤ ∫ 𝐵 𝑚 1 𝜑(𝑧) ∫ 𝜔 |𝐷 𝑗 𝑢(𝑥 + 𝜓(𝑥)𝑧) -𝐷 𝑗 𝑢(𝑥)| 𝑝 d𝑥 1 𝑝 + 𝐶 𝑗 𝑖=1 𝜂 𝑖-𝑗 ∫ 𝜔∩supp 𝐷𝜓 |𝐷 𝑖 𝑢(𝑥 + 𝜓(𝑥)𝑧)| 𝑝 d𝑥 1 𝑝 d𝑧.
For the first term, we note that
∫ 𝜔 |𝐷 𝑗 𝑢(𝑥 + 𝜓(𝑥)𝑧) -𝐷 𝑗 𝑢(𝑥)| 𝑝 d𝑥 1 𝑝 ≤ sup 𝑣∈𝐵 𝑚 1 ∥𝜏 𝜓𝑣 (𝐷 𝑗 𝑢) -𝐷 𝑗 𝑢∥ 𝐿 𝑝 (𝜔) .
For the second term, we use the change of variable 𝑤 = 𝑥 + 𝜓(𝑥)𝑧 that we considered before. Taking into account the definition of the set 𝐴, we have 𝑤 ∈ 𝐵 𝜓(𝑥) (𝑥) ⊂ 𝐴, and therefore
∫ 𝜔∩supp 𝐷𝜓 |𝐷 𝑖 𝑢(𝑥 + 𝜓(𝑥)𝑧)| 𝑝 d𝑥 1 𝑝 ≤ 1 (1 -∥𝐷𝜓∥ 𝐿 ∞ (𝜔) ) 1 𝑝 ∫ 𝐴 |𝐷 𝑖 𝑢(𝑤)| 𝑝 d𝑤 1 𝑝
.
We obtain the desired estimate by using the fact that 𝜑 has integral equal to 1.
The estimate in the fractional case 0 < 𝑠 < 1 is straightforward. Indeed, we first write
|𝜑 𝜓 * 𝑢(𝑥)-𝑢(𝑥)-𝜑 𝜓 * 𝑢(𝑦)+𝑢(𝑦)| ≤ ∫ 𝐵 𝑚 1 𝜑(𝑧)|𝑢(𝑥 +𝜓(𝑥)𝑧)-𝑢(𝑥)-𝑢(𝑦 +𝜓(𝑦)𝑧)+𝑢(𝑦)| d𝑧.
Minkowski's inequality then implies that
|𝜑 𝜓 * 𝑢 -𝑢| 𝑊 𝑠,𝑝 (𝜔) ≤ ∫ 𝐵 𝑚 1 𝜑(𝑧) ∫ 𝜔 ∫ 𝜔 |𝑢(𝑥 + 𝜓(𝑥)𝑧) -𝑢(𝑥) -𝑢(𝑦 + 𝜓(𝑦)𝑧) + 𝑢(𝑦)| 𝑝 |𝑥 -𝑦| 𝑚+𝑠𝑝 d𝑥d𝑦 1 𝑝 d𝑧 ≤ sup 𝑣∈𝐵 𝑚 1 |𝜏 𝜓𝑣 (𝑢) -𝑢| 𝑊 𝑠,𝑝 (𝜔) .
For the fractional estimate when 𝑠 ≥ 1, we again use equation (4.4) and the observations following this equation. Let 𝑥, 𝑦 ∈ 𝜔. We proceed by distinction of cases, using the multilinearity of the differential.
Case 1: 𝑥, 𝑦 ∈ supp 𝐷𝜓. For the terms with 𝑖 < 𝑗, using the 𝑗-linearity of 𝐷 𝑗 𝑢, we estimate
|𝐷 𝑖 𝑢(𝑥 + 𝜓(𝑥)𝑧)[𝐿 𝑖,𝑙,𝑗 (𝑥, 𝑧)] -𝐷 𝑖 𝑢(𝑦 + 𝜓(𝑦)𝑧)[𝐿 𝑖,𝑙,𝑗 (𝑦, 𝑧)]| ≤ 𝐶 3 𝑗 𝑡=1 𝜂 𝑖-1-𝑗+𝑡 |𝐷 𝑡 𝜓(𝑥) -𝐷 𝑡 𝜓(𝑦)||𝐷 𝑖 𝑢(𝑥 + 𝜓(𝑥)𝑧)| + 𝜂 𝑖-𝑗 |𝐷 𝑖 𝑢(𝑥 + 𝜓(𝑥)𝑧) -𝐷 𝑖 𝑢(𝑦 + 𝜓(𝑦)𝑧)| .
On the other hand, for the term involving the derivative of order 𝑗 of 𝑢, we have
|𝐷 𝑗 𝑢(𝑥 +𝜓(𝑥)𝑧)[(id +𝐷𝜓(𝑥)⊗ 𝑧) 𝑗 ]-𝐷 𝑗 𝑢(𝑥)-𝐷 𝑗 𝑢(𝑦 +𝜓(𝑦)𝑧)[(id +𝐷𝜓(𝑦)⊗ 𝑧) 𝑗 ]+𝐷 𝑗 𝑢(𝑦)| ≤ |𝐷 𝑗 𝑢(𝑥 + 𝜓(𝑥)𝑧) -𝐷 𝑗 𝑢(𝑥) -𝐷 𝑗 𝑢(𝑦 + 𝜓(𝑦)𝑧) + 𝐷 𝑗 𝑢(𝑦)| + 𝐶 4 |𝐷 𝑗 𝑢(𝑥 + 𝜓(𝑥)𝑧) -𝐷 𝑗 𝑢(𝑦 + 𝜓(𝑦)𝑧)| + 𝐶 5 |𝐷 𝑗 𝑢(𝑥 + 𝜓(𝑥)𝑧)||𝐷𝜓(𝑥) -𝐷𝜓(𝑦)|. Now, for 𝑡 ∈ {1, . . . , 𝑗}, we estimate ∫ 𝜔 |𝐷 𝑡 𝜓(𝑥) -𝐷 𝑡 𝜓(𝑦)| 𝑝 |𝑥 -𝑦| 𝑚+𝜎𝑝 d𝑦 ≤ 𝜂 -𝑡𝑝 ∫ 𝐵 𝑚 𝑟 (𝑥) 1 |𝑥 -𝑦| 𝑚+(𝜎-1)𝑝 d𝑦 + 𝐶 6 𝜂 (1-𝑡)𝑝 ∫ ℝ 𝑚 \𝐵 𝑚 𝑟 (𝑥) 1 |𝑥 -𝑦| 𝑚+𝜎𝑝 d𝑦 ≤ 𝐶 7 𝜂 -𝑡𝑝 𝑟 (1-𝜎)𝑝 + 𝜂 (1-𝑡)𝑝 𝑟 -𝜎𝑝 for every 𝑟 > 0. Letting 𝑟 = 𝜂 yields ∫ 𝜔 |𝐷 𝑡 𝜓(𝑥) -𝐷 𝑡 𝜓(𝑦)| 𝑝 |𝑥 -𝑦| 𝑚+𝜎𝑝 d𝑦 ≤ 𝐶 8 𝜂 (1-𝑡-𝜎)𝑝 . (4.5)
Therefore, using Minkowski's inequality on the expression obtained from (4.4), we deduce that
∫ 𝜔∩supp 𝐷𝜓 ∫ 𝜔∩supp 𝐷𝜓 |𝐷 𝑗 (𝜑 𝜓 * 𝑢)(𝑥) -𝐷 𝑗 𝑢(𝑥) -𝐷 𝑗 (𝜑 𝜓 * 𝑢)(𝑦) + 𝐷 𝑗 𝑢(𝑦)| 𝑝 |𝑥 -𝑦| 𝑚+𝜎𝑝 d𝑥d𝑦 1 𝑝 ≤ ∫ 𝐵 𝑚 1 𝜑(𝑧) ∫ 𝜔∩supp 𝐷𝜓 ∫ 𝜔∩supp 𝐷𝜓 |𝐷 𝑗 𝑢(𝑥 + 𝜓(𝑥)𝑧) -𝐷 𝑗 𝑢(𝑥) -𝐷 𝑗 𝑢(𝑦 + 𝜓(𝑦)𝑧) + 𝐷 𝑗 𝑢(𝑦)| 𝑝 |𝑥 -𝑦| 𝑚+𝜎𝑝 d𝑥d𝑦 1 𝑝 + 𝐶 9 𝑗 𝑖=1 𝜂 𝑖-𝑗 ∫ 𝜔∩supp 𝐷𝜓 ∫ 𝜔∩supp 𝐷𝜓 |𝐷 𝑖 𝑢(𝑥 + 𝜓(𝑥)𝑧) -𝐷 𝑖 𝑢(𝑦 + 𝜓(𝑦)𝑧)| 𝑝 |𝑥 -𝑦| 𝑚+𝜎𝑝 d𝑥d𝑦 1 𝑝 + 𝐶 10 𝑗 𝑖=1 𝜂 𝑖-𝑗-𝜎 ∫ 𝜔∩supp 𝐷𝜓 |𝐷 𝑖 𝑢(𝑥 + 𝜓(𝑥)𝑧)| 𝑝 d𝑥 1 𝑝 d𝑧.
Case 2: without loss of generality, 𝑥 ∈ supp 𝐷𝜓 and 𝑦 ∉ supp 𝐷𝜓. In this case, since each 𝐿 𝑖,𝑙,𝑗 (𝑦, 𝑧) has at least one entry equal to 𝐷 𝑡 𝜓(𝑦), we find
𝐷 𝑖 𝑢(𝑦 + 𝜓(𝑦)𝑧)[𝐿 𝑖,𝑙,𝑗 (𝑦, 𝑧)] = 0 = 𝐷 𝑖 𝑢(𝑥 + 𝜓(𝑥)𝑧)[𝐿 𝑖,𝑙,𝑗 (𝑦, 𝑧)].
Hence,
|𝐷 𝑖 𝑢(𝑥 + 𝜓(𝑥)𝑧)[𝐿 𝑖,𝑙,𝑗 (𝑥, 𝑧)] -𝐷 𝑖 𝑢(𝑦 + 𝜓(𝑦)𝑧)[𝐿 𝑖,𝑙,𝑗 (𝑦, 𝑧)]| ≤ 𝑗 𝑡=1 𝜂 𝑖-1-𝑗+𝑡 |𝐷 𝑡 𝜓(𝑥) -𝐷 𝑡 𝜓(𝑦)||𝐷 𝑖 𝑢(𝑥 + 𝜓(𝑥)𝑧)|.
On the other hand, we have
|𝐷 𝑗 𝑢(𝑥 +𝜓(𝑥)𝑧)[(id +𝐷𝜓(𝑥)⊗ 𝑧) 𝑗 ]-𝐷 𝑗 𝑢(𝑥)-𝐷 𝑗 𝑢(𝑦 +𝜓(𝑦)𝑧)[(id +𝐷𝜓(𝑦)⊗ 𝑧) 𝑗 ]+𝐷 𝑗 𝑢(𝑦)| ≤ |𝐷 𝑗 𝑢(𝑥 + 𝜓(𝑥)𝑧) -𝐷 𝑗 𝑢(𝑥) -𝐷 𝑗 𝑢(𝑦 + 𝜓(𝑦)𝑧) + 𝐷 𝑗 𝑢(𝑦)| + 𝐶 11 |𝐷 𝑗 𝑢(𝑥 + 𝜓(𝑥)𝑧)||𝐷𝜓(𝑥) -𝐷𝜓(𝑦)|.
We then argue as in Case 1, using (4.5) to deal with the terms containing |𝐷 𝑡 𝜓(𝑥) -𝐷 𝑡 𝜓(𝑦)|, and we deduce that
∫ 𝜔\supp 𝐷𝜓 ∫ 𝜔∩supp 𝐷𝜓 |𝐷 𝑗 (𝜑 𝜓 * 𝑢)(𝑥) -𝐷 𝑗 𝑢(𝑥) -𝐷 𝑗 (𝜑 𝜓 * 𝑢)(𝑦) + 𝐷 𝑗 𝑢(𝑦)| 𝑝 |𝑥 -𝑦| 𝑚+𝜎𝑝 d𝑥d𝑦 1 𝑝 ≤ ∫ 𝐵 𝑚 1 𝜑(𝑧) ∫ 𝜔\supp 𝐷𝜓 ∫ 𝜔∩supp 𝐷𝜓 |𝐷 𝑗 𝑢(𝑥 + 𝜓(𝑥)𝑧) -𝐷 𝑗 𝑢(𝑥) -𝐷 𝑗 𝑢(𝑦 + 𝜓(𝑦)𝑧) + 𝐷 𝑗 𝑢(𝑦)| 𝑝 |𝑥 -𝑦| 𝑚+𝜎𝑝 d𝑥d𝑦 1 𝑝 + 𝐶 12 𝑗 𝑖=1
𝜂 d𝑧.
For the first term, we observe once again that
∫ 𝜔 ∫ 𝜔 |𝐷 𝑗 𝑢(𝑥 + 𝜓(𝑥)𝑧) -𝐷 𝑗 𝑢(𝑥) -𝐷 𝑗 𝑢(𝑦 + 𝜓(𝑦)𝑧) + 𝐷 𝑗 𝑢(𝑦)| 𝑝 |𝑥 -𝑦| 𝑚+𝜎𝑝 d𝑥d𝑦 1 𝑝 ≤ sup 𝑣∈𝐵 𝑚 1 |𝜏 𝜓𝑣 (𝐷 𝑗 𝑢) -𝐷 𝑗 𝑢| 𝑊 𝜎,𝑝 (𝜔) .
For the third term, we use the change of variable 𝑤 = 𝑥 + 𝜓(𝑥)𝑧, and we find
∫ 𝜔∩supp 𝐷𝜓 |𝐷 𝑖 𝑢(𝑥 + 𝜓(𝑥)𝑧)| 𝑝 d𝑥 ≤ 1 (1 -∥𝐷𝜓∥ 𝐿 ∞ (𝜔) ) ∫ 𝐴 |𝐷 𝑖 𝑢(𝑤)| 𝑝 d𝑤 ≤ 1 (1 -∥𝐷𝜓∥ 𝐿 ∞ (𝜔) ) 2 ∥𝐷 𝑖 𝑢∥ 𝑝 𝐿 𝑝 (𝐴) .
For the second term, we make use of the change of variables 𝑤 = 𝑥 + 𝜓(𝑥)𝑧 and w̃ = 𝑦 + 𝜓(𝑦)𝑧. Observe that |𝑤 - w̃| ≤ 2|𝑥 - 𝑦|, and hence
∫ 𝜔∩supp 𝐷𝜓 ∫ 𝜔∩supp 𝐷𝜓 |𝐷 𝑖 𝑢(𝑥 + 𝜓(𝑥)𝑧) -𝐷 𝑖 𝑢(𝑦 + 𝜓(𝑦)𝑧)| 𝑝 |𝑥 -𝑦| 𝑚+𝜎𝑝 d𝑥d𝑦 ≤ 𝐶 14 1 (1 -∥𝐷𝜓∥ 𝐿 ∞ (𝜔) ) 2 ∫ 𝐴 ∫ 𝐴 |𝐷 𝑖 𝑢(𝑤) -𝐷 𝑖 𝑢(w̃)| 𝑝 |𝑤 - w̃| 𝑚+𝜎𝑝 d𝑤 dw̃.
Using the fact that 𝜑 has integral equal to 1, this concludes the proof of the fractional estimate when 𝑠 ≥ 1. The proof of assertion (i) follows the same strategy. The only change is that, instead of grouping the term 𝐷 𝑗 𝑢(𝑥 + 𝜓(𝑥)𝑧) coming from the first term in (4.4) with the 𝐷 𝑗 𝑢, we have to estimate it as all the other terms. Unlike the 2 𝑗 -1 terms involving a derivative of order 𝑗 of 𝑢, this term does not vanish outside of the support of 𝐷𝜓. This explains the presence of the norm on the whole 𝛺 in estimates (i).
□
Adaptative smoothing is a very useful tool to approximate a 𝑊 𝑠,𝑝 map by smooth maps, but it has a major drawback in the context of Sobolev spaces with values into manifolds. Indeed, if 𝑢 ∈ 𝑊 𝑠,𝑝 (𝛺; 𝒩), in general 𝜑 𝜓 * 𝑢 does not take values into 𝒩, since the convolution product is in general not compatible with the constraint. Therefore, it will be crucial in the proof of Theorem 1.2 to be able to estimate the distance between the smoothed maps and the manifold. We close this section by a discussion devoted to this purpose, which also sheds light on how to use the estimates obtained during the opening procedure in the previous section.
We work in a slightly more general setting, assuming that 𝑢 ∈ 𝑊 𝑠,𝑝 (𝛺; ℝ 𝜈 ) is such that 𝑢(𝑥) ∈ 𝐹 for almost every 𝑥 ∈ 𝛺, where 𝐹 ⊂ ℝ 𝜈 is an arbitrary closed set. We place ourselves under the assumptions of Propositions 3.1, 3.7, and 3.8. We denote by Φ op 𝜂 the map provided by Proposition 3.1 and we set 𝑢
op 𝜂 = 𝑢 • Φ op 𝜂 . Let 𝑢 sm 𝜂 = 𝜑 𝜓 𝜂 * 𝑢 op 𝜂
, where 𝜑 is a fixed mollifier, and the variable regularization parameter 𝜓 𝜂 is to be chosen later on, depending on 𝜂.
Let 0 < 𝜌 < 𝜌 be fixed, and assume that 𝒰 𝑚 𝜂 is a subskeleton of some skeleton 𝒦 𝑚 𝜂 such that 𝐾 𝑚 𝜂 ⊂ 𝜔. To fix the ideas, one may keep in mind that 𝐾 𝑚 𝜂 = 𝜔 in the case where 𝜔 can be decomposed as a finite union of cubes of radius 𝜂. We consider a subskeleton
ℰ 𝑚 𝜂 of 𝒰 𝑚 𝜂 such that 𝐸 𝑚 𝜂 ⊂ Int 𝑈 𝑚 𝜂 (4.6)
in the relative topology of 𝐾 𝑚 𝜂 . (Later on in the proof of Theorem 1.2, ℰ 𝑚 𝜂 will be the class of all bad cubes.)
Given a set 𝑆 ⊂ ℝ 𝜈 , the directed Hausdorff distance from 𝑆 to 𝐹 is defined as
Dist 𝐹 (𝑆) = sup{dist (𝑥, 𝐹) : 𝑥 ∈ 𝑆}.
Our objective is to show that, for a suitable choice of 𝜓 𝜂 and 𝑟 > 0, we have
Dist 𝐹 (𝑢 sm 𝜂 ((𝐾 𝑚 \ 𝑈 𝑚 𝜂 ) ∪ (𝑈 ℓ 𝜂 + 𝑄 𝑚 𝜌𝜂 ))) ≤ max max 𝜎 𝑚 ∈𝒦 𝑚 𝜂 \ℰ 𝑚 𝜂 𝐶 1 1 𝜂 𝑚 𝑠𝑝 -1 ∥𝐷𝑢∥ 𝐿 𝑠𝑝 (𝜎 𝑚 +𝑄 𝑚 2𝜌𝜂 ) , sup 𝑥∈𝑈 ℓ 𝜂 +𝑄 𝑚 𝜌𝜂 𝐶 2 ⨏ 𝑄 𝑚 𝑟 (𝑥) ⨏ 𝑄 𝑚 𝑟 (𝑥) |𝑢 op 𝜂 (𝑦) -𝑢 op 𝜂 (𝑧)| d𝑦d𝑧 (4.7) if 𝑠 ≥ 1, respectively Dist 𝐹 (𝑢 sm 𝜂 ((𝐾 𝑚 \ 𝑈 𝑚 𝜂 ) ∪ (𝑈 ℓ 𝜂 + 𝑄 𝑚 𝜌𝜂 ))) ≤ max max 𝜎 𝑚 ∈𝒦 𝑚 𝜂 \ℰ 𝑚 𝜂 𝐶 1 1 𝜂 𝑚 𝑝 -𝑠 |𝑢| 𝑊 𝑠,𝑝 (𝜎 𝑚 +𝑄 𝑚 2𝜌𝜂 ) , sup 𝑥∈𝑈 ℓ 𝜂 +𝑄 𝑚 𝜌𝜂 𝐶 2 ⨏ 𝑄 𝑚 𝑟 (𝑥) ⨏ 𝑄 𝑚 𝑟 (𝑥) |𝑢 op 𝜂 (𝑦) -𝑢 op 𝜂 (𝑧)| d𝑦d𝑧 (4.8)
if 0 < 𝑠 < 1. We note that, in order to make the right-hand side of (4.7), respectively (4.8), small, we need to take 𝑟 sufficiently small, and also to have control on the 𝐿 𝑠𝑝 norm of 𝐷𝑢, respectively the 𝑊 𝑠,𝑝 norm of 𝑢, on the cubes in 𝒦 𝑚 𝜂 \ ℰ 𝑚 𝜂 . This will be our motivation for the choice of good and bad cubes in the proof of Theorem 1.2.
We proceed with the proof of (4.7) and (4.8). Since 𝑢 op 𝜂 takes its values in 𝐹 almost everywhere, the representation (4.1) yields, for every 𝑥 at which 𝑢 sm 𝜂 is defined,
dist (𝑢 sm 𝜂 (𝑥), 𝐹) ≤ 𝐶 3 ⨏ 𝑄 𝑚 𝜓 𝜂 (𝑥) (𝑥) ⨏ 𝑄 𝑚 𝜓 𝜂 (𝑥) (𝑥) |𝑢 op 𝜂 (𝑦) - 𝑢 op 𝜂 (𝑧)| d𝑦d𝑧. (4.9)
If 𝑄 𝑚 𝜓 𝜂 (𝑥) (𝑥) ⊂ 𝑈 ℓ 𝜂 + 𝑄 𝑚 𝜌𝜂 and ℓ ≤ 𝑠𝑝, Proposition 3.8 ensures that the right-hand side of (4.9) can be made arbitrarily small if we take 𝜓 𝜂 (𝑥) sufficiently small. This invites us to choose 𝜓 𝜂 to be very small in a neighborhood of 𝑈 ℓ .
On the other hand, the Poincaré-Wirtinger inequality ensures that
dist (𝑢 sm 𝜂 (𝑥), 𝐹) ≤ 𝐶 4 (1/𝜓 𝜂 (𝑥))^{𝑚/(𝑠𝑝) - 1} ∥𝐷𝑢 op 𝜂 ∥ 𝐿 𝑠𝑝 (𝑄 𝑚 𝜓 𝜂 (𝑥) (𝑥)) (4.10)
if 𝑠 ≥ 1, respectively
dist (𝑢 sm 𝜂 (𝑥), 𝐹) ≤ 𝐶 5 (1/𝜓 𝜂 (𝑥))^{𝑚/𝑝 - 𝑠} |𝑢 op 𝜂 | 𝑊 𝑠,𝑝 (𝑄 𝑚 𝜓 𝜂 (𝑥) (𝑥)) (4.11)
if 0 < 𝑠 < 1. These estimates are only useful in the region where we can control the 𝐿 𝑠𝑝 norm of 𝐷𝑢 or the 𝑊 𝑠,𝑝 norm of 𝑢, that is, on the good cubes. On the other hand, since 𝑠𝑝 < 𝑚, (4.10) and (4.11) suggest that 𝜓 𝜂 should not be too small. We now pause to explain the construction of a function 𝜓 𝜂 suited for our approximation results. As explained in Section 2, we distinguish between three regimes. In 𝑈 ℓ 𝜂 + 𝑄 𝑚 𝜌𝜂 , we take 𝜓 𝜂 very small, according to Proposition 3.8. On the good cubes, we take 𝜓 𝜂 of order 𝜂, in order to apply (4.10), respectively (4.11). Between these two regimes, we need a transition region in order for 𝜓 𝜂 to change its order of magnitude. Here the second part of Proposition 3.8 comes into play. Indeed, if 𝑥 ∈ 𝜎 𝑚 for some 𝜎 𝑚 ∈ 𝒰 𝑚 𝜂 , item (ii) of Proposition 3.8 provides a quantitative bound on the right-hand side of (4.9) in terms of 𝜓 𝜂 (𝑥) and of the energy of 𝑢 on 𝜎 𝑚 + 𝑄 𝑚 2𝜌𝜂 . We then choose a function 𝜁 𝜂 ∈ 𝒞 ∞ (ℝ 𝑚 ; [0, 1]) such that 𝜁 𝜂 = 0 on 𝐸 𝑚 𝜂 , 𝜁 𝜂 = 1 outside Int 𝑈 𝑚 𝜂 , and 𝜂 𝑗 ∥𝐷 𝑗 𝜁 𝜂 ∥ 𝐿 ∞ ≤ 𝐶 8 for every 𝑗 ∈ {1, . . . , 𝑘 + 1}, for some constant 𝐶 8 > 0 depending only on 𝑚.
Now we pick 0 < 𝑟 < 𝑡 and we let
𝜓 𝜂 = 𝑡𝜁 𝜂 + 𝑟(1 -𝜁 𝜂 ).
Therefore, 𝜓 𝜂 = 𝑟 on 𝐸 𝑚 𝜂 and 𝜓 𝜂 = 𝑡 on 𝐾 𝑚 𝜂 \ 𝑈 𝑚 𝜂 . As we observed, we will need to take 𝑟 very small, while keeping 𝑡 of order 𝜂. We choose
𝑡 = min(𝜅/𝐶 8 , 𝜌 - 𝜌) 𝜂 (4.14)
for some fixed 0 < 𝜅 < 1. Therefore, ∥𝐷𝜓 𝜂 ∥ 𝐿 ∞ ≤ 𝐶 8 𝑡/𝜂 ≤ 𝜅 < 1 and 𝜂 𝑗 ∥𝐷 𝑗 𝜓 𝜂 ∥ 𝐿 ∞ ≤ 𝐶 8 𝑡 ≤ 𝜂 for every 𝑗 ∈ {2, . . . , 𝑘 + 1}, so that 𝜓 𝜂 satisfies the assumptions of Proposition 4.1. At this stage, we thus have at our disposal a map 𝑢 sm 𝜂 for which we may estimate its distance to 𝑢 in 𝑊 𝑠,𝑝 . Moreover, even though 𝑢 sm 𝜂 does not necessarily take values into 𝐹, we are able to control the distance between 𝑢 sm 𝜂 and 𝐹 everywhere on the cubication 𝐾 𝑚 𝜂 , except on the cubes in 𝒰 𝑚 𝜂 , far from their ℓ -skeleton. Therefore, our next step is to be able to modify 𝑢 sm 𝜂 into a new map which, on the cubes in 𝒰 𝑚 𝜂 , depends only on the values of 𝑢 sm 𝜂 near the ℓ -skeleton of the cubes, while controlling the 𝑊 𝑠,𝑝 distance between 𝑢 sm 𝜂 and this new map.
5 Thickening
This section is devoted to the thickening procedure. As we explained in Section 2, this technique is reminiscent of the homogeneous extension method, which was used by Bethuel to deal with the case 𝑠 = 1; see Bethuel's paper The approximation problem for Sobolev maps between two manifolds. This approach is valid for 𝑊 𝑠,𝑝 maps with 𝑠 < 1 + 1/𝑝 (but not beyond 𝑠 = 1 + 1/𝑝). In order to deal with 𝑊 𝑠,𝑝 maps with arbitrary 𝑠, a new tool, thickening, is needed. Its construction was performed by Bousquet, Ponce, and Van Schaftingen in [9, Section 4], which also contains the analytic estimates that make thickening instrumental in the proof of Theorem 1.2 for integer 𝑠. In this section, we establish the fractional counterparts of the estimates in [9]. The main feature of this section is the need for new techniques, taking into account the geometry of the thickening maps, that we develop in order to obtain fractional estimates. This will become transparent, e.g., in the proof of estimates (a) and (c) in Proposition 5.3, relying crucially on estimate (5.4). Proposition 5.1. Let 𝛺 ⊂ ℝ 𝑚 be open, ℓ ∈ {0, . . . , 𝑚 -1}, 𝜂 > 0, 0 < 𝜌 < 1, 𝒮 𝑚 be a cubication in ℝ 𝑚 of radius 𝜂, 𝒰 𝑚 be a subskeleton of 𝒮 𝑚 such that 𝑈 𝑚 + 𝑄 𝑚 𝜌𝜂 ⊂ 𝛺, and 𝒯 ℓ * be the dual skeleton of 𝒰 ℓ . There exists a smooth map Φ :
ℝ 𝑚 \ 𝑇 ℓ * → ℝ 𝑚 such that (i) Φ is injective; (ii) for every 𝜎 𝑚 ∈ 𝒮 𝑚 , Φ(𝜎 𝑚 \ 𝑇 ℓ * ) ⊂ 𝜎 𝑚 \ 𝑇 ℓ * ; (iii) Supp Φ ⊂ 𝑈 𝑚 + 𝑄 𝑚 𝜌𝜂 and Φ(𝑈 𝑚 \ 𝑇 ℓ * ) ⊂ 𝑈 ℓ + 𝑄 𝑚 𝜌𝜂 ;
(iv) for every 𝑗 ∈ ℕ * and for every 𝑥 ∈ (𝑈 𝑚 + 𝑄 𝑚 𝜌𝜂 ) \ 𝑇 ℓ * ,
|𝐷 𝑗 Φ(𝑥)| ≤ 𝐶 𝜂 dist(𝑥, 𝑇 ℓ * ) 𝑗
for some constant 𝐶 > 0 depending on 𝑗, 𝑚 and 𝜌.
If in addition ℓ + 1 > 𝑠𝑝, then for every 𝑢 ∈ 𝑊 𝑠,𝑝 (𝛺; ℝ 𝜈 ), we have 𝑢 • Φ ∈ 𝑊 𝑠,𝑝 (𝛺; ℝ 𝜈 ), and moreover, the following estimates hold:
(a) if 0 < 𝑠 < 1, then 𝜂 𝑠 |𝑢 • Φ -𝑢| 𝑊 𝑠,𝑝 (𝛺) ≤ 𝐶 ′ 𝜂 𝑠 |𝑢| 𝑊 𝑠,𝑝 (𝑈 𝑚 +𝑄 𝑚 𝜌𝜂 ) + ∥𝑢 ∥ 𝐿 𝑝 (𝑈 𝑚 +𝑄 𝑚 𝜌𝜂 ) ; (b) if 𝑠 ≥ 1, then for every 𝑗 ∈ {1, . . . , 𝑘}, 𝜂 𝑗 ∥𝐷 𝑗 (𝑢 • Φ) -𝐷 𝑗 𝑢∥ 𝐿 𝑝 (𝛺) ≤ 𝐶 ′ 𝑗 𝑖=1 𝜂 𝑖 ∥𝐷 𝑖 𝑢∥ 𝐿 𝑝 (𝑈 𝑚 +𝑄 𝑚 𝜌𝜂 ) ;
(c) if 𝑠 ≥ 1 and 𝜎 ≠ 0, then for every 𝑗 ∈ {1, . . . , 𝑘},
𝜂 𝑗+𝜎 |𝐷 𝑗 (𝑢 • Φ) -𝐷 𝑗 𝑢| 𝑊 𝜎,𝑝 (𝛺) ≤ 𝐶 ′ 𝑗 𝑖=1 𝜂 𝑖 ∥𝐷 𝑖 𝑢 ∥ 𝐿 𝑝 (𝑈 𝑚 +𝑄 𝑚 𝜌𝜂 ) + 𝜂 𝑖+𝜎 |𝐷 𝑖 𝑢| 𝑊 𝜎,𝑝 (𝑈 𝑚 +𝑄 𝑚 𝜌𝜂 ) ; (d) for every 0 < 𝑠 < +∞, ∥𝑢 • Φ -𝑢∥ 𝐿 𝑝 (𝛺) ≤ 𝐶 ′ ∥𝑢∥ 𝐿 𝑝 (𝑈 𝑚 +𝑄 𝑚 𝜌𝜂 ) ;
for some constant 𝐶 ′ > 0 depending on 𝑚, 𝑠, 𝑝, and 𝜌.
We emphasize that, unlike for opening in Section 3, the map Φ constructed in Proposition 5.1 above is independent of the map 𝑢 ∈ 𝑊 𝑠,𝑝 it shall be composed with.
Similarly to opening, crucial to the proof of Theorem 1.2 is the fact that the thickening procedure increases the energy of the map 𝑢 at most by a constant factor in the region where 𝑢 is modified. This, in turn, implies that the distance between 𝑢 and 𝑢 • Φ is controlled by the energy of 𝑢 on 𝑈 𝑚 + 𝑄 𝑚 𝜌𝜂 , as stated in the conclusion of Proposition 5.1. In the proof of Theorem 1.2, this will be used in combination with the fact that the measure of the set 𝑈 𝑚 + 𝑄 𝑚 𝜌𝜂 tends to 0 as 𝜂 → 0 in order to ensure that 𝑢 • Φ is close to 𝑢 when 𝜂 is sufficiently small.
As for the opening, Proposition 5.1 is proved blockwise: we first construct, in Proposition 5.2, a map, still denoted Φ, which thickens each face of 𝒯 ℓ * . We then suitably glue those maps to obtain a thickening map as in Proposition 5.1. Before giving the description of the building blocks used in the proof of Proposition 5.1, we introduce some additional notation similarly to what we did for opening. The construction of the map in Proposition 5.2 below involves three parameters 0 < 𝜌 < 𝜌 < 𝜌 < 1. These parameters being fixed, we define the rectangles
𝑄 1 = 𝑄 𝑑 (1-𝜌)𝜂 × 𝑄 𝑚-𝑑 𝜌𝜂 , 𝑄 2 = 𝑄 𝑑 (1-𝜌)𝜂 × 𝑄 𝑚-𝑑 𝜌𝜂 , 𝑄 3 = 𝑄 𝑑 (1-𝜌)𝜂 × 𝑄 𝑚-𝑑 𝜌𝜂 . (5.1)
Note that 𝑄 1 ⊂ 𝑄 2 ⊂ 𝑄 3 . We also set 𝑇 = {0} 𝑑 × 𝑄 𝑚-𝑑 𝜌𝜂 , the part of the dual skeleton contained in 𝑄 3 . The rectangle 𝑄 3 contains the geometric support of Φ, that is, Φ = id outside of 𝑄 3 . The rectangle 𝑄 2 is the region where the thickening procedure is fully performed: the set 𝑇 ∩ 𝑄 2 is entirely mapped outside of 𝑄 1 , in 𝑄 2 \ 𝑄 1 . The region 𝑄 3 \ 𝑄 2 serves as a transition, on which the map Φ becomes less and less singular, until it reaches the exterior of 𝑄 3 where it coincides with the identity.
This section is organized as follows. First we describe the geometric construction of the building blocks for thickening. Then we prove the analytic estimates satisfied by the composition of a map 𝑢 ∈ 𝑊 𝑠,𝑝 with those building blocks. Finally, we explain the construction of the global thickening map based on the aforementioned building blocks, and we prove all properties stated in the conclusion of Proposition 5.1.
We start by stating the geometric properties satisfied by the building blocks, which do not depend on the map 𝑢 to which thickening is applied. The map Φ constructed in Proposition 5.2 is exactly the map given by [9, Proposition 4.3]. Hence, we shall not give a complete proof of Proposition 5.2, but we will limit ourselves to recall, for the convenience of the reader, the main steps in the construction of the map Φ.
The main difference with [9] is the proof of the Sobolev estimates. In [9], they were obtained on the whole 𝛺 as a corollary of the geometric properties of the map Φ, by the use of the change of variable theorem. This approach does not seem to work to deal with the Gagliardo seminorm. Hence, we first establish the estimates for the building blocks, and we deduce the global estimates by gluing, as for opening. To do so, we need to take into account some additional features of the map Φ, that are part of its construction in [9, Proof of Proposition 4.3] but do not appear in the conclusion of Proposition 4.3 in [9].
The construction of the map Φ involves another map 𝜁 : ℝ 𝑚 → ℝ, which we describe hereafter. For (𝑥 ′ , 𝑥 ′′ ) ∈ ℝ 𝑑 × ℝ 𝑚-𝑑 , we define
𝜁(𝑥 ′ , 𝑥 ′′ ) = ( |𝑥 ′ |^2 + 𝜂^2 𝜃(𝑥 ′′ /𝜂)^2 )^{1/2} . (5.2)
In [START_REF]Strong density for higher order Sobolev spaces into compact manifolds[END_REF], 𝜃 :
ℝ 𝑚-𝑑 → [0, 1] is an arbitrary smooth map such that 𝜃(𝑥 ′′ ) = 0 if 𝑥 ′′ ∈ 𝑄 𝑚-𝑑 𝜌 and 𝜃(𝑥 ′′ ) = 1 if 𝑥 ′′ ∈ ℝ 𝑚-𝑑 \ 𝑄 𝑚-𝑑 𝜌 .
For our purposes, we need to be more precise in our choice of 𝜃. We would like to choose 𝜃 to be nondecreasing with respect to cubes, that is, 𝜃(𝑥 ′′ ) depends only on the ∞-norm of 𝑥 ′′ and 𝜃(𝑥 ′′ ) ≤ 𝜃(𝑦 ′′ ) if |𝑥 ′′ | ∞ ≤ |𝑦 ′′ | ∞ . However, this is not possible if we want 𝜃 to be smooth, since the ∞-norm is not differentiable. Nevertheless, we may choose 𝜃 sufficiently close to being nondecreasing with respect to cubes for our purposes, by replacing the ∞-norm by some 𝑞-norm for 𝑞 sufficiently large.
More precisely, we take 1 < 𝑞 < +∞ sufficiently large, depending on 𝜌 and 𝜌, so that there exists 0 < 𝑟 1 < 𝑟 2 satisfying
𝑄 𝑚-𝑑 𝜌 ⊂ {𝑥 ′′ ∈ ℝ 𝑚-𝑑 : |𝑥 ′′ | 𝑞 < 𝑟 1 } ⊂ {𝑥 ′′ ∈ ℝ 𝑚-𝑑 : |𝑥 ′′ | 𝑞 < 𝑟 2 } ⊂ 𝑄 𝑚-𝑑 𝜌 .
This is indeed possible since 𝑄 𝑚-𝑑 𝜌 and 𝑄 𝑚-𝑑 𝜌 are respectively the balls of radius 𝜌 and 𝜌 with respect to the ∞-norm in ℝ 𝑚-𝑑 , and since the 𝑞-norm converges uniformly on compact sets to the ∞-norm as 𝑞 → +∞. We then pick a nondecreasing smooth map θ : ℝ + → [0, 1] such that θ(𝑟) = 0 if 0 ≤ 𝑟 ≤ 𝑟 1 and θ(𝑟) = 1 if 𝑟 ≥ 𝑟 2 . We finally set 𝜃(𝑥 ′′ ) = θ(|𝑥 ′′ | 𝑞 ). Since 1 < 𝑞 < +∞, 𝜃 is smooth on ℝ 𝑚-𝑑 , and, by our choice of 𝑞, 𝑟 1 , and 𝑟 2 , we indeed have 𝜃(𝑥 ′′ ) = 0 if 𝑥 ′′ ∈ 𝑄 𝑚-𝑑 𝜌 and 𝜃(𝑥
′′ ) = 1 if 𝑥 ′′ ∈ ℝ 𝑚-𝑑 \ 𝑄 𝑚-𝑑 𝜌 .
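To make the phrase "𝑞 sufficiently large" concrete, here is one admissible (though by no means unique) quantitative choice. Writing 𝜌 < 𝜌′ for the radii of the inner and outer cubes in the inclusions above, and using the elementary comparison |𝑥″|∞ ≤ |𝑥″|𝑞 ≤ (𝑚 − 𝑑)^{1/𝑞} |𝑥″|∞, it suffices to take
\[
q > \frac{\ln(m-d)}{\ln(\rho'/\rho)}
\quad\Longrightarrow\quad
(m-d)^{1/q}\rho < \rho',
\qquad\text{so that } r_1 = (m-d)^{1/q}\rho \text{ and } r_2 = \rho' \text{ are admissible.}
\]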
With the description of the map 𝜁 at our disposal, we are now ready to state Proposition 5.2. Recall that the rectangles 𝑄 𝑖 in (5.1) depend on 𝑑 and 𝜂. Proposition 5.2. Let 𝑑 ∈ {1, . . . , 𝑚}, 𝜂 > 0, and 0 < 𝜌 < 𝜌 < 𝜌. There exists a smooth function Φ :
ℝ 𝑚 \ 𝑇 → ℝ 𝑚 such that (i) Φ is injective; (ii) Supp Φ ⊂ 𝑄 3 ; (iii) Φ(𝑄 2 \ 𝑇) ⊂ 𝑄 2 \ 𝑄 1 ; (iv) for every 𝑥 ∈ 𝑄 3 \ 𝑇, |𝐷 𝑗 Φ(𝑥)| ≤ 𝐶 𝜂 𝜁 𝑗 (𝑥)
for every 𝑗 ∈ ℕ * for some constant 𝐶 > 0 depending on 𝑗, 𝑚, 𝜌, 𝜌, and 𝜌;
(v) for every 𝑥 ∈ ℝ 𝑚 \ 𝑇, jac Φ(𝑥) ≥ 𝐶 ′ 𝜂 𝛽 𝜁 𝛽 (𝑥) for every 0 < 𝛽 < 𝑑, for some constant 𝐶 ′ > 0 depending on 𝛽, 𝑚, 𝜌, 𝜌, and 𝜌.
Proof. As we already mentioned, the desired map Φ is provided by [9, Proposition 4.3]. Hence, we limit ourselves to briefly recall its construction for the convenience of the reader, and we refer to [START_REF]Strong density for higher order Sobolev spaces into compact manifolds[END_REF] for the complete proof of its properties. For technical reasons, we start by constructing an intermediate map Ψ : ℝ 𝑚 \ 𝑇 → ℝ 𝑚 as follows; see [9, Lemma 4.5]. We define
𝐵 1 = 𝐵 𝑑 (1-𝜌)𝜂 × 𝑄 𝑚-𝑑 𝜌𝜂 , 𝐵 2 = 𝐵 𝑑 (1-𝜌)𝜂 × 𝑄 𝑚-𝑑 𝜌𝜂 , 𝐵 3 = 𝐵 𝑑 (1-𝜌)𝜂 × 𝑄 𝑚-𝑑 𝜌𝜂 .
The map Ψ is constructed to satisfy the conclusion of Proposition 5.2 with the rectangles 𝑄 𝑖 replaced by the corresponding cylinders 𝐵 𝑖 for 𝑖 ∈ {1, 2, 3}. Since 𝐵 𝑖 ⊂ 𝑄 𝑖 , it will then suffice to compose Ψ with a suitable diffeomorphism Θ : ℝ 𝑚 → ℝ 𝑚 that dilates 𝐵 1 to a set containing 𝑄 1 in order to obtain the desired map Φ.
We choose a smooth map 𝜑 :
(0, +∞) → [1, +∞) such that (a) for 0 < 𝑟 ≤ 1 -𝜌, 𝜑(𝑟) = 1 -𝜌 𝑟 1 + 𝑏 ln( 1 𝑟 ) ; (b) for 𝑟 ≥ 1 -𝜌, 𝜑(𝑟) = 1;
(c) the function 𝑟 ∈ (0, +∞) ↦ → 𝑟𝜑(𝑟) is increasing. This is possible provided that we choose 𝑏 > 0 such that
(1 -𝜌) 1 + 𝑏 ln 1 1-𝜌 < 1 -𝜌.
Then we define 𝜆 :
ℝ 𝑚 \ 𝑇 → [1, +∞) by 𝜆(𝑥) = 𝜑 𝜁(𝑥) 𝜂 ,
and finally
Ψ(𝑥 ′ , 𝑥 ′′ ) = (𝜆(𝑥 ′ , 𝑥 ′′ )𝑥 ′ , 𝑥 ′′ ).
The injectivity of Ψ is a consequence of assumption (c) on 𝜑. The fact that Supp Ψ ⊂ 𝐵 3 relies on assumption (b) on 𝜑, since we may observe that 𝜁(𝑥) ≥ (1 -𝜌)𝜂 if 𝑥 ∈ ℝ 𝑚 \ 𝐵 3 , and therefore 𝜆(𝑥) = 1. Combining the observation that, using (c) again,
𝑟𝜑(𝑟) ≥ lim 𝑟→0 𝑟𝜑(𝑟) = 1 -𝜌 with the fact that 𝜁(𝑥) = |𝑥 ′ | if 𝑥 = (𝑥 ′ , 𝑥 ′′ ) ∈ 𝐵 2 , we find that Ψ(𝐵 2 \𝑇) ⊂ 𝐵 2 \ 𝐵 1 .
In order to obtain (iv) on 𝐵 3 \ 𝑇, we estimate |𝐷 𝑗 𝜆(𝑥)| with the help of the Faà di Bruno formula, and then conclude using Leibniz's rule. The proof of estimate (v) is more delicate. The Jacobian of Ψ may be explicitly evaluated as the determinant of a rank-one perturbation of a diagonal map, as we did in the proof of (4.3), and one then uses the properties of 𝜑 and 𝜁 to get the required lower bound on the obtained expression. We refer the reader to [9, Lemma 4.5] for the details.
It remains to correct the fact that we worked with the cylinders 𝐵 𝑖 instead of the rectangles 𝑄 𝑖 . This essentially amounts to constructing a suitable deformation of ℝ 𝑚 with bounded derivatives and a suitable lower bound on the Jacobian. We let Θ : ℝ 𝑚 → ℝ 𝑚 be a diffeomorphism whose geometric support is contained in 𝑄 3 , which maps 𝐵 2 \ 𝐵 1 on a set contained in 𝑄 2 \ 𝑄 1 -that is, Θ dilates 𝐵 1 on a set containing 𝑄 1 -and satisfies the estimates
𝜂 𝑗-1 |𝐷 𝑗 Θ| ≤ 𝐶 1 and 0 < 𝐶 2 ≤ jac Θ ≤ 𝐶 3 on ℝ 𝑚 .
We refer the reader to [9, Lemma 4.4] for the precise construction of this diffeomorphism.
Finally, we let Φ = Θ • Ψ. We observe that this construction satisfies the geometric properties (i) to (iii). The estimate (v) on the Jacobian readily follows from the composition formula for the Jacobian. To get (iv), we invoke the Faà di Bruno formula to compute
|𝐷 𝑗 Φ(𝑥)| ≤ 𝐶 4 𝑗 𝑖=1 1≤𝑡 1 ≤•••≤𝑡 𝑖 𝑡 1 +•••+𝑡 𝑖 =𝑗 |𝐷 𝑖 Θ(𝑥)||𝐷 𝑡 1 Ψ(𝑥)| • • • |𝐷 𝑡 𝑖 Ψ(𝑥)| ≤ 𝐶 5 𝑗 𝑖=1 1≤𝑡 1 ≤•••≤𝑡 𝑖 𝑡 1 +•••+𝑡 𝑖 =𝑗 𝜂 1-𝑖 𝜂 𝜁 𝑡 1 (𝑥) • • • 𝜂 𝜁 𝑡 𝑖 (𝑥) ≤ 𝐶 6 𝜂 𝜁 𝑗 (𝑥)
.
This concludes the proof of the proposition.
□
Now that we have the building block Φ, we move to the Sobolev estimates satisfied by the composition 𝑢 • Φ.
Proposition 5.3. Let 𝑑 ∈ {1, . . . , 𝑚} with 𝑑 > 𝑠𝑝, let Φ : ℝ 𝑚 \ 𝑇 → ℝ 𝑚 be the map provided by Proposition 5.2, and let 𝜔 ⊂ ℝ 𝑚 be an open set such that
𝑄 3 ⊂ 𝜔 ⊂ 𝐵 𝑚 𝑐𝜂 and |𝐵 𝑚 𝑟 (𝑥) ∩ 𝜔| ≥ 𝑐 ′ 𝑟 𝑚 for every 𝑥 ∈ 𝜔 and every 0 < 𝑟 ≤ diam 𝜔. (5.3)
For every 𝑢 ∈ 𝑊 𝑠,𝑝 (𝜔; ℝ 𝜈 ), we have 𝑢 • Φ ∈ 𝑊 𝑠,𝑝 (𝜔; ℝ 𝜈 ), and moreover, the following estimates hold:
(a) if 0 < 𝑠 < 1, then |𝑢 • Φ| 𝑊 𝑠,𝑝 (𝜔) ≤ 𝐶 |𝑢| 𝑊 𝑠,𝑝 (𝜔) ;
(b) if 𝑠 ≥ 1, then for every 𝑗 ∈ {1, . . . , 𝑘}, 𝜂 𝑗 ∥𝐷 𝑗 (𝑢 • Φ)∥ 𝐿 𝑝 (𝜔) ≤ 𝐶 Σ_{𝑖=1}^{𝑗} 𝜂 𝑖 ∥𝐷 𝑖 𝑢∥ 𝐿 𝑝 (𝜔) ;
(c) if 𝑠 ≥ 1 and 𝜎 ≠ 0, then for every 𝑗 ∈ {1, . . . , 𝑘}, 𝜂 𝑗+𝜎 |𝐷 𝑗 (𝑢 • Φ)| 𝑊 𝜎,𝑝 (𝜔) ≤ 𝐶 Σ_{𝑖=1}^{𝑗} ( 𝜂 𝑖 ∥𝐷 𝑖 𝑢∥ 𝐿 𝑝 (𝜔) + 𝜂 𝑖+𝜎 |𝐷 𝑖 𝑢| 𝑊 𝜎,𝑝 (𝜔) ) ;
(d) for every 0 < 𝑠 < +∞, ∥𝑢 • Φ∥ 𝐿 𝑝 (𝜔) ≤ 𝐶 ∥𝑢∥ 𝐿 𝑝 (𝜔) ;
for some constant 𝐶 > 0 depending on 𝑠, 𝑚, 𝑝, 𝑐, 𝑐 ′ , 𝜌, 𝜌, and 𝜌.
We comment on the assumptions (5.3) on 𝜔. In this section, 𝜔 will be a rectangle whose sidelengths are constant multiples of 𝜂. However, in Section 7, we will use Proposition 7.3, which is very similar to Proposition 5.3, with a more complicated 𝜔. This is why we stated Proposition 5.3 in a rather general form.
The assumption that 𝜔 is contained in some ball having radius of order 𝜂 is purely technical. It ensures that estimate (iv) of Proposition 5.2 still applies for 𝑥 ∈ 𝜔 \ 𝑇, possibly increasing the constant. Indeed, since Supp Φ ⊂ 𝑄 3 , this estimate clearly still applies when 𝑥 ∉ 𝑄 3 , but the constant deteriorates as |𝑥 ′ | → +∞, since then Φ = id while 𝜁(𝑥) → +∞. We could bypass this restriction, but it would not be useful since anyway we intend to apply Proposition 5.3 to domains 𝜔 satisfying this requirement.
To prove Proposition 5.3, we need a technical lemma. As it was the case for opening, in the proof of the fractional Sobolev estimates, we will need to estimate terms of the form |𝐷 𝑡 Φ(𝑥) -𝐷 𝑡 Φ(𝑦)|. However, unlike for the opening, we cannot upper bound such terms by a simple application of the mean value theorem along a segment connecting 𝑥 and 𝑦, since such a segment could potentially get very close to -or even cross -the dual skeleton 𝑇 where Φ is singular. The following lemma provides us with a suitable path along which to apply the mean value theorem. Lemma 5.4. For every 𝑥, 𝑦 ∈ ℝ 𝑚 \ 𝑇, there exists a Lipschitz path 𝛾 :
[0, 1] → ℝ 𝑚 \ 𝑇 from 𝑥 to 𝑦 such that |𝛾| 𝒞 0,1 ([0,1]) ≤ 𝐶|𝑥 -𝑦|
for some constant 𝐶 > 0 depending only on 𝑚, and such that 𝜁 ≥ min(𝜁(𝑥), 𝜁(𝑦)) along 𝛾, where 𝜁 is the map defined in (5.2).
Proof. We recall the well-known fact that, given 𝑥, 𝑦 on a sphere, there exists a Lipschitz path on this sphere connecting those two points, with Lipschitz constant less than 𝐶 1 |𝑥 -𝑦|. Indeed, it suffices to take the shortest arc of great circle joining 𝑥 to 𝑦. The same fact holds for any 𝑞-sphere with 1 ≤ 𝑞 ≤ +∞. This can be deduced from the Euclidean case using the changing norm projection defined by 𝑥 ↦ (|𝑥| 𝑞 /|𝑥| 2 ) 𝑥, which is a Lipschitz map.
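In the Euclidean case this can be quantified explicitly (a standard computation, included here for convenience): if 𝑎 and 𝑏 lie on the sphere of radius 𝑅 centered at the origin and 𝜑 ∈ [0, 𝜋] denotes the angle between them, then |𝑎 − 𝑏| = 2𝑅 sin(𝜑/2) and sin 𝑡 ≥ 2𝑡/𝜋 on [0, 𝜋/2], so the shorter arc of great circle joining 𝑎 to 𝑏 has length
\[
R\varphi \le \frac{\pi}{2}\,\bigl(2R\sin(\varphi/2)\bigr) = \frac{\pi}{2}\,|a-b| ,
\]
and its constant-speed parametrization on [0, 1] is Lipschitz with constant at most (𝜋/2)|𝑎 − 𝑏|.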
The desired path is then obtained as follows. If 𝑥 = (𝑥 ′ , 𝑥 ′′ ) and 𝑦 = (𝑦 ′ , 𝑦 ′′ ), we first go from 𝑥 to (𝑦 ′ , 𝑥 ′′ ) by following successively an arc of great circle and a straight line in the first 𝑑 components, while keeping the 𝑚 -𝑑 last components fixed. Then we go from (𝑦 ′ , 𝑥 ′′ ) to 𝑦 by following a path on a 𝑞-sphere as above, where 𝑞 is the parameter used in the definition of 𝜁, followed by a straight line in the 𝑚 -𝑑 last components, while keeping the first 𝑑 components fixed. By construction, using the observations above, this path has Lipschitz constant less than 𝐶 2 |𝑥 -𝑦|. Moreover, since 𝜁 only depends on the 2-norm of the 𝑑 first components and on the 𝑞-norm of the 𝑚 -𝑑 last components and is increasing with respect to both these parameters, we conclude that the constructed path has all the expected properties.
□
We may now prove Proposition 5.3.
Proof of Proposition 5.3. The integer order estimates were obtained in [9, Corollary 4.2]. Since the proof in the fractional case relies, in part, on the calculations in the integer case, we reproduce here, for the convenience of the reader, the proof in [9]. When 𝑠 ≥ 1, we have 𝑑 > 𝑠𝑝 ≥ 1, and hence the dimension of 𝑇, which equals 𝑚 - 𝑑, is at most 𝑚 - 2. Therefore, in order to prove that 𝑢 • Φ ∈ 𝑊 𝑘,𝑝 (𝜔; ℝ 𝜈 ), it suffices to prove that
∫ 𝜔\𝑇 |𝐷 𝑗 (𝑢 • Φ)| 𝑝 < +∞ for every 𝑗 ∈ {0, . . . , 𝑘}.
By the Faà di Bruno formula, we estimate for every 𝑗 ∈ {1, . . . , 𝑘} and 𝑥 ∈ 𝜔 \ 𝑇
|𝐷 𝑗 (𝑢 • Φ)(𝑥)| 𝑝 ≤ 𝐶 1 𝑗 𝑖=1 1≤𝑡 1 ≤•••≤𝑡 𝑖 𝑡 1 +•••+𝑡 𝑖 =𝑗 |𝐷 𝑖 𝑢(Φ(𝑥))| 𝑝 |𝐷 𝑡 1 Φ(𝑥)| 𝑝 • • • |𝐷 𝑡 𝑖 Φ(𝑥)| 𝑝 .
Let 0 < 𝛽 < 𝑑. Using the estimates on the derivatives and the Jacobian of Φ, we find
|𝐷 𝑡 𝑙 Φ| ≤ 𝐶 2 (jac Φ) 𝑡 𝑙 𝛽 𝜂 𝑡 𝑙 -1 ,
and therefore
|𝐷 𝑗 (𝑢 • Φ)(𝑥)| 𝑝 ≤ 𝐶 3 𝑗 𝑖=1 1≤𝑡 1 ≤•••≤𝑡 𝑖 𝑡 1 +•••+𝑡 𝑖 =𝑗 |𝐷 𝑖 𝑢(Φ(𝑥))| 𝑝 (jac Φ(𝑥)) 𝑡 1 𝑝 𝛽 𝜂 (𝑡 1 -1)𝑝 • • • (jac Φ(𝑥)) 𝑡 𝑖 𝑝 𝛽 𝜂 (𝑡 𝑖 -1)𝑝 ≤ 𝐶 4 𝑗 𝑖=1 |𝐷 𝑖 𝑢(Φ(𝑥))| 𝑝 (jac Φ(𝑥)) 𝑗𝑝 𝛽 𝜂 (𝑗-𝑖)𝑝
.
Since 𝑗𝑝 ≤ 𝑠𝑝 < 𝑑, we may choose 𝛽 = 𝑗𝑝. Hence,
|𝐷 𝑗 (𝑢 • Φ)(𝑥)| 𝑝 ≤ 𝐶 4 𝑗 𝑖=1 |𝐷 𝑖 𝑢(Φ(𝑥))| 𝑝 jac Φ(𝑥) 𝜂 (𝑗-𝑖)𝑝 .
Since Φ is injective and Supp Φ ⊂ 𝑄 3 , we have Φ(𝜔 \ 𝑇) ⊂ 𝜔. Hence, the change of variable theorem ensures that
∫ 𝜔\𝑇 𝜂 𝑗𝑝 |𝐷 𝑗 (𝑢 • Φ)| 𝑝 ≤ ∫ 𝜔\𝑇 𝐶 4 𝑗 𝑖=1 𝜂 𝑖𝑝 |𝐷 𝑖 𝑢(Φ(𝑥))| 𝑝 jac Φ(𝑥) d𝑥 ≤ 𝐶 4 𝑗 𝑖=1 ∫ 𝜔 𝜂 𝑖𝑝 |𝐷 𝑖 𝑢| 𝑝 .
The proof of the zero order estimate (valid in the full range 0 < 𝑠 < +∞) is straightforward using the same change of variable, noting that in particular, jac Φ ≥ 𝐶 5 > 0. In particular, we have 𝑢 • Φ ∈ 𝑊 𝑘,𝑝 (𝜔; ℝ 𝜈 ).
We now turn to the proof of the fractional estimate in the case 0 < 𝑠 < 1.
Step 1: Mean value-type estimate. We prove that, for every 𝑥, 𝑦 ∈ 𝜔 \ 𝑇,
|Φ(𝑥) -Φ(𝑦)| |𝑥 -𝑦| ≤ 𝐶 6 𝜂 𝜁(𝑦)
.
(5.4)
It suffices to consider the case when 𝜁(𝑥) ≤ 𝜁(𝑦). First assume that 𝜁(𝑦) ≤ 2𝜁(𝑥). In this case, we use the mean value theorem with the path 𝛾 provided by Lemma 5.4 along with the estimate satisfied by 𝐷Φ to write
|Φ(𝑥) -Φ(𝑦)| |𝑥 -𝑦| ≤ 𝐶 7 𝜂 𝜁(𝑥) ≤ 𝐶 8 𝜂 𝜁(𝑦)
.
Consider now the case where 2𝜁(𝑥) ≤ 𝜁(𝑦). Observe that we have 𝜁(𝑦) -𝜁(𝑥) ≤ 𝐶 9 |𝑥 -𝑦| -this can be seen as a consequence of the triangle inequality for the Euclidean norm. Hence,
𝐶 9 |𝑥 -𝑦| ≥ 𝜁(𝑦) -𝜁(𝑥) ≥ 1 2 𝜁(𝑦).
On the other hand, since 𝜔 ⊂ 𝐵 𝑚 𝑐𝜂 , we have |Φ(𝑥) -Φ(𝑦)| ≤ 𝐶 10 𝜂. This concludes the proof of (5.4).
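To spell out the triangle inequality argument: with 𝜁 as in (5.2), 𝜁(𝑥) is the Euclidean norm of the vector (𝑥′, 𝜂𝜃(𝑥″/𝜂)), so that
\[
\zeta(y) \le \zeta(x) + \bigl|\bigl(y'-x',\, \eta\,\theta(y''/\eta)-\eta\,\theta(x''/\eta)\bigr)\bigr|
\le \zeta(x) + |y'-x'| + \|D\theta\|_{L^\infty}\,|y''-x''| ,
\]
which yields 𝜁(𝑦) − 𝜁(𝑥) ≤ 𝐶 9 |𝑥 − 𝑦| with 𝐶 9 depending only on 𝜃.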
Step 2: Averaging. We write
∬ (𝜔\𝑇)×(𝜔\𝑇) 𝜁(𝑥)≤𝜁(𝑦) |𝑢 • Φ(𝑥) -𝑢 • Φ(𝑦)| 𝑝 |𝑥 -𝑦| 𝑚+𝑠𝑝 d𝑥d𝑦 ≤ 𝐶 11 ∫ 𝜔\𝑇 ∫ 𝜔\𝑇 ⨏ ℬ 𝑥,𝑦 |𝑢 • Φ(𝑥) -𝑢(𝑧)| 𝑝 |𝑥 -𝑦| 𝑚+𝑠𝑝 d𝑧d𝑥d𝑦,
where we have defined
ℬ 𝑥,𝑦 = 𝐵 𝑚 |Φ(𝑥)-Φ(𝑦)| Φ(𝑥) + Φ(𝑦) 2 ∩ 𝜔.
We observe that
𝐵 𝑚 |Φ(𝑥)-Φ(𝑦)| 2 (Φ(𝑥)) ∩ 𝜔 ⊂ ℬ 𝑥,𝑦 .
Therefore, since
|Φ(𝑥)-Φ(𝑦)| 2 ≤ 1 2 diam 𝜔, we find |ℬ 𝑥,𝑦 | ≥ 𝐶 12 |Φ(𝑥) -Φ(𝑦)| 𝑚 .
Here, we have used the volume assumption (5.3). Hence,
∫ 𝜔\𝑇 ∫ 𝜔\𝑇 ⨏ ℬ 𝑥,𝑦 |𝑢 • Φ(𝑥) -𝑢(𝑧)| 𝑝 |𝑥 -𝑦| 𝑚+𝑠𝑝 d𝑧d𝑥d𝑦 ≤ 𝐶 13 ∫ 𝜔\𝑇 ∫ 𝜔\𝑇 ∫ ℬ 𝑥,𝑦 |𝑢 • Φ(𝑥) -𝑢(𝑧)| 𝑝 |Φ(𝑥) -Φ(𝑦)| 𝑚 |𝑥 -𝑦| 𝑚+𝑠𝑝 d𝑧d𝑥d𝑦 ≤ 𝐶 14 ∫ 𝜔\𝑇 ∫ 𝜔\𝑇 ∫ ℬ 𝑥,𝑦 |𝑢 • Φ(𝑥) -𝑢(𝑧)| 𝑝 |Φ(𝑥) -𝑧| 𝑚 |𝑥 -𝑦| 𝑚+𝑠𝑝 d𝑧d𝑥d𝑦,
where we made use of the fact that |Φ(𝑥) -𝑧| ≤ 3 2 |Φ(𝑥) -Φ(𝑦)| whenever 𝑧 ∈ ℬ 𝑥,𝑦 . Invoking Tonelli's theorem, we find
∫ 𝜔\𝑇 ∫ 𝜔\𝑇 ∫ ℬ 𝑥,𝑦 |𝑢 • Φ(𝑥) -𝑢(𝑧)| 𝑝 |Φ(𝑥) -𝑧| 𝑚 |𝑥 -𝑦| 𝑚+𝑠𝑝 d𝑧d𝑥d𝑦 = ∫ 𝜔\𝑇 ∫ 𝜔\𝑇 ∫ 𝒴 𝑥,𝑧 |𝑢 • Φ(𝑥) -𝑢(𝑧)| 𝑝 |Φ(𝑥) -𝑧| 𝑚 |𝑥 -𝑦| 𝑚+𝑠𝑝 d𝑦d𝑧d𝑥,
where 𝒴 𝑥,𝑧 is the set of all those 𝑦 ∈ 𝜔 \ 𝑇 such that 𝑧 ∈ ℬ 𝑥,𝑦 , that is,
𝒴 𝑥,𝑧 = {𝑦 ∈ 𝜔 \ 𝑇: |Φ(𝑥) + Φ(𝑦) -2𝑧| ≤ 2|Φ(𝑥) -Φ(𝑦)|}.
Since |Φ(𝑥) + Φ(𝑦) -2𝑧| ≥ 2|Φ(𝑥) -𝑧| -|Φ(𝑥) -Φ(𝑦)|, we find, using (5.4),
𝒴 𝑥,𝑧 ⊂ 𝑦 ∈ ℝ 𝑚 : |Φ(𝑥) -𝑧| ≤ 3 2 |Φ(𝑥) -Φ(𝑦)| ⊂ 𝑦 ∈ ℝ 𝑚 : |Φ(𝑥) -𝑧| ≤ 𝐶 15 𝜂 𝜁(𝑥) |𝑥 -𝑦| . Therefore, ∫ 𝒴 𝑥,𝑧 1 |𝑥 -𝑦| 𝑚+𝑠𝑝 d𝑦 ≤ 𝐶 16 𝜂 𝑠𝑝 𝜁(𝑥) 𝑠𝑝 |Φ(𝑥) -𝑧| 𝑠𝑝 .
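Indeed, the second inclusion above says that every 𝑦 ∈ 𝒴 𝑥,𝑧 satisfies |𝑥 − 𝑦| ≥ 𝜁(𝑥)|Φ(𝑥) − 𝑧|/(𝐶 15 𝜂), and integrating the kernel |𝑥 − 𝑦|^{−𝑚−𝑠𝑝} outside a ball of that radius gives
\[
\int_{\{|x-y|\ge R\}} \frac{\mathrm{d}y}{|x-y|^{m+sp}} = \frac{C}{R^{sp}}
\qquad\text{with}\quad R = \frac{\zeta(x)\,|\Phi(x)-z|}{C_{15}\,\eta} ,
\]
which is the estimate stated above.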
We conclude that
∬ (𝜔\𝑇)×(𝜔\𝑇) 𝜁(𝑥)≤𝜁(𝑦) |𝑢 • Φ(𝑥) -𝑢 • Φ(𝑦)| 𝑝 |𝑥 -𝑦| 𝑚+𝑠𝑝 d𝑥d𝑦 ≤ 𝐶 17 ∫ 𝜔\𝑇 ∫ 𝜔 |𝑢 • Φ(𝑥) -𝑢(𝑧)| 𝑝 |Φ(𝑥) -𝑧| 𝑚+𝑠𝑝 𝜂 𝑠𝑝 𝜁(𝑥) 𝑠𝑝 d𝑧d𝑥.
Step 3: Change of variable. Since 0 < 𝑠𝑝 < 𝑑, we may apply estimate (v) of Proposition 5.2 with 𝛽 = 𝑠𝑝. Taking into account the fact that Φ is injective and Φ(𝜔 \ 𝑇) ⊂ 𝜔, we deduce from the change of variable theorem that
|𝑢 • Φ| 𝑝 𝑊 𝑠,𝑝 (𝜔) ≤ 𝐶 18 ∫ 𝜔\𝑇 ∫ 𝜔 |𝑢 • Φ(𝑥) -𝑢(𝑧)| 𝑝 |Φ(𝑥) -𝑧| 𝑚+𝑠𝑝 jac Φ(𝑥) d𝑧d𝑥 ≤ 𝐶 18 ∫ 𝜔 ∫ 𝜔 |𝑢(𝑦) -𝑢(𝑧)| 𝑝 |𝑦 -𝑧| 𝑚+𝑠𝑝 d𝑧d𝑦.
This concludes the proof in the case 0 < 𝑠 < 1.
We finish with the fractional estimate in the case 𝑠 ≥ 1.
Step 1: Estimate of |𝐷 𝑗 𝑢(𝑥) -𝐷 𝑗 𝑢(𝑦)|. Consider 𝑥, 𝑦 ∈ 𝜔 \ 𝑇 such that, without loss of generality, 𝜁(𝑥) ≤ 𝜁(𝑦). As in the previous sections, using the Faà di Bruno formula, the multilinearity of the differential, and the estimates on the derivatives of Φ, we write
|𝐷 𝑗 (𝑢 • Φ)(𝑥) -𝐷 𝑗 (𝑢 • Φ)(𝑦)| ≤ 𝐶 19 𝑗 𝑖=1 |𝐷 𝑖 𝑢 • Φ(𝑥) -𝐷 𝑖 𝑢 • Φ(𝑦)| 𝜂 𝑖 𝜁(𝑦) 𝑗 + 𝑗 𝑡=1 |𝐷 𝑖 𝑢 • Φ(𝑥)||𝐷 𝑡 Φ(𝑥) -𝐷 𝑡 Φ(𝑦)| 𝜂 𝑖-1
𝜁(𝑥) 𝑗-𝑡 . (5.5)
Step 2: Estimate of the second term in (5.5). We proceed as we did in Section 3 and in the proof of Proposition 4.1. Combining Lemma 5.4 with the estimate on 𝐷^{𝑡+1}Φ when |𝑥 − 𝑦| ≤ 𝜁(𝑥) and the estimate on 𝐷^{𝑡}Φ when |𝑥 − 𝑦| ≥ 𝜁(𝑥), the same optimization argument as for (4.5), with 𝜁(𝑥) playing the role of 𝜂 in the choice of the radius, yields, for every 𝑥 ∈ 𝜔 \ 𝑇,
∫_{ {𝑦 ∈ 𝜔\𝑇 : 𝜁(𝑦) ≥ 𝜁(𝑥)} } |𝐷 𝑡 Φ(𝑥) - 𝐷 𝑡 Φ(𝑦)| 𝑝 / |𝑥 - 𝑦| 𝑚+𝜎𝑝 d𝑦 ≤ 𝐶 20 𝜂 𝑝 / 𝜁(𝑥) (𝑡+𝜎)𝑝 .
Therefore,
∬_{ (𝜔\𝑇)×(𝜔\𝑇), 𝜁(𝑥)≤𝜁(𝑦) } |𝐷 𝑖 𝑢 • Φ(𝑥)| 𝑝 |𝐷 𝑡 Φ(𝑥) - 𝐷 𝑡 Φ(𝑦)| 𝑝 / |𝑥 - 𝑦| 𝑚+𝜎𝑝 · 𝜂 (𝑖-1)𝑝 / 𝜁(𝑥) (𝑗-𝑡)𝑝 d𝑥d𝑦 ≤ 𝐶 21 ∫ 𝜔\𝑇 |𝐷 𝑖 𝑢 • Φ(𝑥)| 𝑝 𝜂 𝑖𝑝 / 𝜁(𝑥) (𝑗+𝜎)𝑝 d𝑥.
Now, by the estimate satisfied by jac Φ, we have, for 0 < 𝛽 < 𝑑,
∫ 𝜔\𝑇 |𝐷 𝑖 𝑢 • Φ(𝑥)| 𝑝 𝜂 𝑖𝑝 𝜁(𝑥) (𝑗+𝜎)𝑝 d𝑥 ≤ 𝐶 22 𝜂 𝑖𝑝-(𝑗+𝜎)𝑝 ∫ 𝜔\𝑇 |𝐷 𝑖 𝑢 • Φ(𝑥)| 𝑝 (jac Θ(𝑥)) (𝑗+𝜎)𝑝 𝛽 d𝑥.
Since 𝑑 > 𝑠𝑝 ≥ (𝑗 + 𝜎)𝑝, we may choose 𝛽 = (𝑗 + 𝜎)𝑝. We conclude by using the change of variable theorem that
∫ 𝜔\𝑇 |𝐷 𝑖 𝑢 • Φ(𝑥)| 𝑝 𝜂 𝑖𝑝 𝜁(𝑥) (𝑗+𝜎)𝑝 d𝑥 ≤ 𝐶 23 𝜂 (𝑖-𝑗-𝜎)𝑝 ∫ 𝜔 |𝐷 𝑖 𝑢| 𝑝 .
Step 3: Estimate of the first term in (5.5): averaging. We use the same methodology as for the case 0 < 𝑠 < 1. Hence, we only write the main steps of the reasoning. We write
∬ (𝜔\𝑇)×(𝜔\𝑇) 𝜁(𝑥)≤𝜁(𝑦) |𝐷 𝑖 𝑢 • Φ(𝑥) -𝐷 𝑖 𝑢 • Φ(𝑦)| 𝑝 |𝑥 -𝑦| 𝑚+𝜎𝑝 𝜂 𝑖𝑝 𝜁(𝑦) 𝑗𝑝 d𝑥d𝑦 ≤ ∬ (𝜔\𝑇)×(𝜔\𝑇) 𝜁(𝑥)≤𝜁(𝑦) ⨏ ℬ 𝑥,𝑦 |𝐷 𝑖 𝑢 • Φ(𝑥) -𝐷 𝑖 𝑢(𝑧)| 𝑝 |𝑥 -𝑦| 𝑚+𝜎𝑝 𝜂 𝑖𝑝 𝜁(𝑦) 𝑗𝑝 d𝑧d𝑥d𝑦 + ∬ (𝜔\𝑇)×(𝜔\𝑇) 𝜁(𝑥)≤𝜁(𝑦) ⨏ ℬ 𝑥,𝑦 |𝐷 𝑖 𝑢(𝑧) -𝐷 𝑖 𝑢 • Φ(𝑦)| 𝑝 |𝑥 -𝑦| 𝑚+𝜎𝑝 𝜂 𝑖𝑝 𝜁(𝑦) 𝑗𝑝 d𝑧d𝑥d𝑦 ≤ ∫ 𝜔\𝑇 ∫ 𝜔\𝑇 ⨏ ℬ 𝑥,𝑦 |𝐷 𝑖 𝑢 • Φ(𝑥) -𝐷 𝑖 𝑢(𝑧)| 𝑝 |𝑥 -𝑦| 𝑚+𝜎𝑝 𝜂 𝑖𝑝 𝜁(𝑥) 𝑗𝑝 d𝑧d𝑥d𝑦.
Observe that here it is important that we wrote estimate (5.5) with 1 𝜁(𝑦) on the first term in the right-hand side, so that we may further upper bound 1 𝜁(𝑦) by 1 𝜁(𝑥) . We then pursue exactly as in the case 0 < 𝑠 < 1. Using the volume assumption (5.3), we find
∫ 𝜔\𝑇 ∫ 𝜔\𝑇 ⨏ ℬ 𝑥,𝑦 |𝐷 𝑖 𝑢 • Φ(𝑥) -𝐷 𝑖 𝑢(𝑧)| 𝑝 |𝑥 -𝑦| 𝑚+𝜎𝑝 𝜂 𝑖𝑝 𝜁(𝑥) 𝑗𝑝 d𝑧d𝑥d𝑦 ≤ 𝐶 24 ∫ 𝜔\𝑇 ∫ 𝜔\𝑇 ∫ ℬ 𝑥,𝑦 |𝐷 𝑖 𝑢 • Φ(𝑥) -𝐷 𝑖 𝑢(𝑧)| 𝑝 |Φ(𝑥) -𝑧| 𝑚 |𝑥 -𝑦| 𝑚+𝜎𝑝 𝜂 𝑖𝑝 𝜁(𝑥) 𝑗𝑝 d𝑧d𝑥d𝑦.
Relying on Tonelli's theorem, we deduce that
∫ 𝜔\𝑇 ∫ 𝜔\𝑇 ∫ ℬ 𝑥,𝑦 |𝐷 𝑖 𝑢 • Φ(𝑥) -𝐷 𝑖 𝑢(𝑧)| 𝑝 |Φ(𝑥) -𝑧| 𝑚 |𝑥 -𝑦| 𝑚+𝜎𝑝 𝜂 𝑖𝑝 𝜁(𝑥) 𝑗𝑝 d𝑧d𝑥d𝑦 = ∫ 𝜔\𝑇 ∫ 𝜔\𝑇 ∫ 𝒴 𝑥,𝑧 |𝐷 𝑖 𝑢 • Φ(𝑥) -𝐷 𝑖 𝑢(𝑧)| 𝑝 |Φ(𝑥) -𝑧| 𝑚 |𝑥 -𝑦| 𝑚+𝜎𝑝 𝜂 𝑖𝑝 𝜁(𝑥) 𝑗𝑝 d𝑦d𝑧d𝑥.
As in the case 0 < 𝑠 < 1, using (5.4) we find ∫_{𝒴 𝑥,𝑧} |𝑥 - 𝑦|^{-𝑚-𝜎𝑝} d𝑦 ≤ 𝐶 25 𝜂 𝜎𝑝 / (𝜁(𝑥) 𝜎𝑝 |Φ(𝑥) - 𝑧| 𝜎𝑝 ). Invoking the estimate on jac Φ with 𝛽 = (𝑗 + 𝜎)𝑝 < 𝑑 together with the change of variable theorem, we deduce that
∬_{ (𝜔\𝑇)×(𝜔\𝑇), 𝜁(𝑥)≤𝜁(𝑦) } |𝐷 𝑖 𝑢 • Φ(𝑥) - 𝐷 𝑖 𝑢 • Φ(𝑦)| 𝑝 / |𝑥 - 𝑦| 𝑚+𝜎𝑝 · 𝜂 𝑖𝑝 / 𝜁(𝑦) 𝑗𝑝 d𝑥d𝑦 ≤ 𝐶 27 𝜂 (𝑖-𝑗)𝑝 ∫ 𝜔 ∫ 𝜔 |𝐷 𝑖 𝑢(𝑦) - 𝐷 𝑖 𝑢(𝑧)| 𝑝 / |𝑦 - 𝑧| 𝑚+𝜎𝑝 d𝑧d𝑦.
Gathering the estimates for both terms in (5.5) we obtain the desired conclusion, hence finishing the proof of the proposition.
□
Now that we have constructed the building block for the thickening procedure, we are ready to proceed with the proof of Proposition 5.1. We start by presenting an informal explanation of the construction to clarify the method.
We first apply thickening around the vertices of the dual skeleton 𝒯 ℓ * , which are actually the centers of the cubes in 𝒰 𝑚 , with parameters 0 < 𝜌 𝑚 < 𝜏 𝑚-1 < 𝜌 𝑚-1 . This maps the complement of the center of each cube on a neighborhood of the faces of the cube. Then we apply thickening around the edges of the dual skeleton, which are segments of lines passing through the center of the (𝑚 -1)-faces of 𝒰 𝑚 , with parameters 𝜌 𝑚-1 < 𝜏 𝑚-2 < 𝜌 𝑚-2 . This maps the part of the complement of the edges of 𝒯 ℓ * lying at distance at most 𝜌 𝑚-1 of the (𝑚 -1)-faces of 𝒰 𝑚 on a neighborhood of the (𝑚 -2)-faces of 𝒰 𝑚 . But since at the previous step the complement of the centers of the cubes was already mapped in a neighborhood of the faces of width 𝜌 𝑚-1 , we deduce that the whole complement of the 1-skeleton of 𝒯 ℓ * is mapped on a neighborhood of 𝒰 𝑚-2 . We pursue this procedure by induction until we reach dimension ℓ * with respect to the dual skeleton -which corresponds to dimension ℓ with respect to 𝒰 𝑚 -and this produces the required map Φ.
We fix parameters 0 < 𝜌 𝑚 < 𝜏 𝑚-1 < 𝜌 𝑚-1 < • • • < 𝜌 ℓ +1 < 𝜏 ℓ < 𝜌 ℓ = 𝜌. The map Φ is defined by downward induction. For 𝑑 = 𝑚, we let Φ 𝑑 = id. Then, if 𝑑 ∈ {ℓ + 1, . . . , 𝑚}, given 𝜎 𝑑 ∈ 𝒰 𝑑 , we identify 𝜎 𝑑 with 𝑄 𝑑 𝜂 × {0} 𝑚-𝑑 and 𝑇 (𝑑-1) * ∩ (𝜎 𝑑 + 𝑄 𝑚 𝜏 𝑑-1 𝜂 ) with {0} 𝑑 × 𝑄 𝑚-𝑑 𝜏 𝑑-1 𝜂 , and we let Φ 𝜎 𝑑 be the map given by Proposition 5.2 applied around 𝜎 𝑑 with parameters 𝜌 = 𝜌 𝑑 , 𝜌 = 𝜏 𝑑-1 , and 𝜌 = 𝜌 𝑑-1 . We let Ψ 𝑑 : ℝ 𝑚 \ 𝑇 (𝑑-1) * → ℝ 𝑚 be defined by
Ψ 𝑑 (𝑥) = Φ 𝜎 𝑑 (𝑥) if 𝑥 ∈ 𝑇 𝜎 𝑑 (𝑄 3 ) for some 𝜎 𝑑 ∈ 𝒰 𝑑 , 𝑥 otherwise,
where 𝑇 𝜎 𝑑 is an isometry mapping 𝑄 𝑑 𝜂 × {0} 𝑚-𝑑 on 𝜎 𝑑 . Finally, we define Φ 𝑑-1 = Ψ 𝑑 • Φ 𝑑 . The desired map is given by Φ = Φ ℓ .
As we mentioned, properties (i) to (iv) are already contained in [9, Proposition 4.1], so that we only need to prove estimates (a) to (d). We first prove estimates with Φ replaced by Ψ 𝑑 for every 𝑑 ∈ {ℓ + 1, . . . , 𝑚}. We let 𝜔 = 𝑄 𝑑
(1-𝜌 𝑚 )𝜂 × 𝑄 𝑚-𝑑 𝜌𝜂 . Note that 𝜔 satisfies the assumptions of Proposition 5.3. In particular, we have 𝑄 3 ⊂ 𝜔 ⊂ 𝑈 𝑚 + 𝑄 𝑚 𝜌𝜂 for every 𝑑 ∈ {ℓ + 1, . . . , 𝑚}. We apply Proposition 5.3 to find that, for every 𝜎 𝑑 ∈ 𝒰 𝑑 , the following estimates hold:
(a) if 0 < 𝑠 < 1, then |𝑢 • Φ 𝜎 𝑑 | 𝑊 𝑠,𝑝 (𝑇 𝜎 𝑑 (𝜔)) ≤ 𝐶 1 |𝑢| 𝑊 𝑠,𝑝 (𝑇 𝜎 𝑑 (𝜔)) ;
(b) if 𝑠 ≥ 1, then for every 𝑗 ∈ {1, . . . , 𝑘},
𝜂 𝑗 ∥𝐷 𝑗 (𝑢 • Φ 𝜎 𝑑 )∥ 𝐿 𝑝 (𝑇 𝜎 𝑑 (𝜔)) ≤ 𝐶 2 𝑗 𝑖=1
𝜂 𝑖 ∥𝐷 𝑖 𝑢∥ 𝐿 𝑝 (𝑇 𝜎 𝑑 (𝜔)) ;
(c) if 𝑠 ≥ 1 and 𝜎 ≠ 0, then for every 𝑗 ∈ {1, . . . , 𝑘},
𝜂 𝑗+𝜎 |𝐷 𝑗 (𝑢 • Φ 𝜎 𝑑 )| 𝑊 𝜎,𝑝 (𝑇 𝜎 𝑑 (𝜔)) ≤ 𝐶 3 𝑗 𝑖=1 𝜂 𝑖 ∥𝐷 𝑖 𝑢∥ 𝐿 𝑝 (𝑇 𝜎 𝑑 (𝜔)) + 𝜂 𝑖+𝜎 |𝐷 𝑖 𝑢| 𝑊 𝜎,𝑝 (𝑇 𝜎 𝑑 (𝜔)) ; (d) for every 0 < 𝑠 < +∞, ∥𝑢 • Φ 𝜎 𝑑 ∥ 𝐿 𝑝 (𝑇 𝜎 𝑑 (𝜔)) ≤ 𝐶 4 ∥𝑢 ∥ 𝐿 𝑝 (𝑇 𝜎 𝑑 (𝜔)) .
Using the additivity of the integral for integer order estimates and Lemma 2.1 for fractional order estimates, we find that (a) if 0 < 𝑠 < 1, then
|𝑢 • Ψ 𝑑 | 𝑝 𝑊 𝑠,𝑝 (𝑈 𝑚 +𝑄 𝑚 𝜌𝜂 ) ≤ 𝐶 5 𝜎 𝑑 ∈𝒰 𝑑 |𝑢 • Φ 𝜎 𝑑 | 𝑝 𝑊 𝑠,𝑝 (𝑇 𝜎 𝑑 (𝜔)) + 𝐶 6 |𝑢 • Ψ 𝑑 | 𝑝 𝑊 𝑠,𝑝 ((𝑈 𝑚 +𝑄 𝑚 𝜌𝜂 )\Supp Ψ 𝑑 ) + 𝐶 7 𝜂 -𝑠𝑝 ∥𝑢 • Ψ 𝑑 ∥ 𝑝 𝐿 𝑝 (𝑈 𝑚 +𝑄 𝑚 𝜌𝜂 ) ; (b) if 𝑠 ≥ 1, then for every 𝑗 ∈ {1, . . . , 𝑘}, ∥𝐷 𝑗 (𝑢 • Ψ 𝑑 )∥ 𝑝 𝐿 𝑝 (𝑈 𝑚 +𝑄 𝑚 𝜌𝜂 ) ≤ 𝐶 8 𝜎 𝑑 ∈𝒰 𝑑 ∥𝐷 𝑗 (𝑢 • Φ 𝜎 𝑑 )∥ 𝑝 𝐿 𝑝 (𝑇 𝜎 𝑑 (𝜔)) + 𝐶 9 ∥𝐷 𝑗 (𝑢 • Ψ 𝑑 )∥ 𝑝 𝐿 𝑝 ((𝑈 𝑚 +𝑄 𝑚 𝜌𝜂 )\Supp Ψ 𝑑 ) ;
(c) if 𝑠 ≥ 1 and 𝜎 ≠ 0, then for every 𝑗 ∈ {1, . . . , 𝑘},
|𝐷 𝑗 (𝑢 • Ψ 𝑑 )| 𝑝 𝑊 𝜎,𝑝 (𝑈 𝑚 +𝑄 𝑚 𝜌𝜂 ) ≤ 𝐶 10 𝜎 𝑑 ∈𝒰 𝑑 |𝐷 𝑗 (𝑢 • Φ 𝜎 𝑑 )| 𝑝 𝑊 𝜎,𝑝 (𝑇 𝜎 𝑑 (𝜔)) + 𝐶 11 |𝐷 𝑗 (𝑢 • Ψ 𝑑 )| 𝑝 𝑊 𝜎,𝑝 ((𝑈 𝑚 +𝑄 𝑚 𝜌𝜂 )\Supp Ψ 𝑑 ) + 𝐶 12 𝜂 -𝜎𝑝 ∥𝐷 𝑗 (𝑢 • Ψ 𝑑 )∥ 𝑝 𝐿 𝑝 (𝑈 𝑚 +𝑄 𝑚 𝜌𝜂 ) ; (d) for every 0 < 𝑠 < +∞, ∥𝑢 • Ψ 𝑑 ∥ 𝑝 𝐿 𝑝 (𝑈 𝑚 +𝑄 𝑚 𝜌𝜂 ) ≤ 𝐶 13 𝜎 𝑑 ∈𝒰 𝑑 ∥𝑢 • Φ 𝜎 𝑑 ∥ 𝑝 𝐿 𝑝 (𝑇 𝜎 𝑑 (𝜔)) + 𝐶 14 ∥𝑢 • Ψ 𝑑 ∥ 𝑝 𝐿 𝑝 ((𝑈 𝑚 +𝑄 𝑚 𝜌𝜂 )\Supp Ψ 𝑑 ) .
Combining both sets of estimates, by downward induction, we deduce that
(a) if 0 < 𝑠 < 1, then
𝜂 𝑠 |𝑢 • Φ| 𝑊 𝑠,𝑝 (𝑈 𝑚 +𝑄 𝑚 𝜌𝜂 ) ≤ 𝐶 15 ( 𝜂 𝑠 |𝑢| 𝑊 𝑠,𝑝 (𝑈 𝑚 +𝑄 𝑚 𝜌𝜂 ) + ∥𝑢∥ 𝐿 𝑝 (𝑈 𝑚 +𝑄 𝑚 𝜌𝜂 ) );
(b) if 𝑠 ≥ 1, then for every 𝑗 ∈ {1, . . . , 𝑘},
𝜂 𝑗 ∥𝐷 𝑗 (𝑢 • Φ)∥ 𝐿 𝑝 (𝑈 𝑚 +𝑄 𝑚 𝜌𝜂 ) ≤ 𝐶 16 Σ_{𝑖=1}^{𝑗} 𝜂 𝑖 ∥𝐷 𝑖 𝑢∥ 𝐿 𝑝 (𝑈 𝑚 +𝑄 𝑚 𝜌𝜂 ) ;
(c) if 𝑠 ≥ 1 and 𝜎 ≠ 0, then for every 𝑗 ∈ {1, . . . , 𝑘},
𝜂 𝑗+𝜎 |𝐷 𝑗 (𝑢 • Φ)| 𝑊 𝜎,𝑝 (𝑈 𝑚 +𝑄 𝑚 𝜌𝜂 ) ≤ 𝐶 17 Σ_{𝑖=1}^{𝑗} ( 𝜂 𝑖 ∥𝐷 𝑖 𝑢∥ 𝐿 𝑝 (𝑈 𝑚 +𝑄 𝑚 𝜌𝜂 ) + 𝜂 𝑖+𝜎 |𝐷 𝑖 𝑢| 𝑊 𝜎,𝑝 (𝑈 𝑚 +𝑄 𝑚 𝜌𝜂 ) );
(d) for every 0 < 𝑠 < +∞,
∥𝑢 • Φ∥ 𝐿 𝑝 (𝑈 𝑚 +𝑄 𝑚 𝜌𝜂 ) ≤ 𝐶 18 ∥𝑢∥ 𝐿 𝑝 (𝑈 𝑚 +𝑄 𝑚 𝜌𝜂 ) .
We next prove that the map 𝑢 𝜂 actually belongs to the class ℛ 𝑚-[𝑠𝑝]-1 (𝐾 𝑚 𝜂 ; 𝒩). This follows from property (iv) in Proposition 5.1. Indeed, since 𝑚 -[𝑠𝑝]-1 = ℓ * , the singular set of 𝑢 th 𝜂 , and hence of 𝑢 𝜂 , is as in the definition of ℛ 𝑚-[𝑠𝑝]-1 (𝐾 𝑚 𝜂 ; 𝒩). Therefore, it only remains to prove the estimates on the derivatives of 𝑢 𝜂 . Since 𝑢 sm 𝜂 and Π are smooth, we deduce from the Faà di Bruno formula that
|𝐷 𝑗 𝑢 𝜂 | ≤ 𝐶 1 𝑗 𝑖=1 1≤𝑡 1 ≤•••≤𝑡 𝑖 𝑡 1 +•••+𝑡 𝑖 =𝑗 |𝐷 𝑖 (Π • 𝑢 sm 𝜂 )||𝐷 𝑡 1 Φ th 𝜂 | • • • |𝐷 𝑡 𝑖 Φ th 𝜂 | ≤ 𝐶 2 𝑗 𝑖=1 1≤𝑡 1 ≤•••≤𝑡 𝑖 𝑡 1 +•••+𝑡 𝑖 =𝑗 |𝐷 𝑡 1 Φ th 𝜂 | • • • |𝐷 𝑡 𝑖 Φ th 𝜂 |.
By property (iv) in Proposition 5.1, we conclude that, for 𝑥 ∈ (𝑈 𝑚 𝜂 + 𝑄 𝑚 𝜌𝜂 ) \ 𝑇 ℓ * 𝜂 ,
|𝐷 𝑗 𝑢 𝜂 (𝑥)| ≤ 𝐶 3 Σ_{𝑖=1}^{𝑗} Σ_{1≤𝑡 1 ≤•••≤𝑡 𝑖 , 𝑡 1 +•••+𝑡 𝑖 =𝑗} ( 𝜂 / dist(𝑥, 𝑇 ℓ * ) )^{𝑡 1} • • • ( 𝜂 / dist(𝑥, 𝑇 ℓ * ) )^{𝑡 𝑖} ≤ 𝐶 4 𝜂 𝑗 / dist(𝑥, 𝑇 ℓ * ) 𝑗 .
(5.9)
Combining (5.9) with the fact that, clearly, 𝑢 𝜂 is smooth outside 𝑈 𝑚 𝜂 + 𝑄 𝑚 𝜌𝜂 , we find that 𝑢 𝜂 belongs indeed to ℛ 𝑚-[𝑠𝑝]-1 (𝐾 𝑚 𝜂 ; 𝒩). With all these observations and tools at our disposal, we are finally ready to proceed with the proof of Theorem 1.2. It only remains to explain carefully how to implement the aforementioned steps and to check that the estimates obtained at each step combine to yield 𝑢 𝜂 → 𝑢 in 𝑊 𝑠,𝑝 as 𝜂 → 0.
6 Density of the class ℛ
This section is devoted to the proof of the density of the class ℛ 𝑚-[𝑠𝑝]-1 (𝛺; 𝒩) in 𝑊 𝑠,𝑝 (𝛺; 𝒩). For the sake of clarity, we start by proving the result when the domain 𝛺 is a cube, which is the case covered by Theorem 1.2 stated in the introduction. In a second step, we explain how to deal with more general domains.
As we explained, the major part of the work that remains to be done is to suitably estimate the 𝑊 𝑠,𝑝 distance between the maps 𝑢 𝜂 and 𝑢. As the reader may have noticed, the Sobolev estimates obtained in Sections 3 to 5 deteriorate as 𝜂 → 0. For instance, the term involving the 𝐿 𝑝 norm of 𝐷 𝑖 𝑢 in the estimate of the 𝑗-order derivative blows up at rate 𝜂 𝑖-𝑗 . As we shall see in the proof, this blow-up is compensated by the fact that the measure of the set 𝑈 𝑚 𝜂 + 𝑄 𝑚 2𝜌𝜂 decays sufficiently fast as 𝜂 → 0. For the integer order terms, this is exploited by a combination of the Hölder and Gagliardo-Nirenberg inequalities. The treatment of fractional order terms is more involved, and we bring ourselves back to the integer order setting with the help of the following lemma. Lemma 6.1. Let 𝛺 ⊂ ℝ 𝑚 be a convex set and let 𝜔 ⊂ 𝛺. For every 𝑞, 𝑟 ≥ 𝑝 and every
𝑢 ∈ 𝑊 1,1 loc (𝛺), |𝑢| 𝑊 𝜎,𝑝 (𝜔) ≤ 𝐶 |𝜔| 1 𝑝 -𝜎 𝑟 -1-𝜎 𝑞 ∥𝐷𝑢∥ 𝜎 𝐿 𝑟 (𝛺) ∥𝑢∥ 1-𝜎 𝐿 𝑞 (𝜔)
for some constant 𝐶 > 0 depending only on 𝑚.
Proof. By density, we may assume that 𝑢 ∈ 𝒞 ∞ (𝛺). We once again rely on an optimization technique. For every 𝜌 > 0, we write
|𝑢| 𝑝 𝑊 𝜎,𝑝 (𝜔) ≤ ∫ 𝜔 ∫ 𝜔\𝐵 𝑚 𝜌 (𝑥) |𝑢(𝑥) -𝑢(𝑦)| 𝑝 |𝑥 -𝑦| 𝑚+𝜎𝑝 d𝑦d𝑥 + ∫ 𝜔 ∫ 𝜔∩𝐵 𝑚 𝜌 (𝑥) |𝑢(𝑥) -𝑢(𝑦)| 𝑝 |𝑥 -𝑦| 𝑚+𝜎𝑝 d𝑦d𝑥.
The first term is readily estimated as
∫ 𝜔 ∫ 𝜔\𝐵 𝑚 𝜌 (𝑥) |𝑢(𝑥) -𝑢(𝑦)| 𝑝 |𝑥 -𝑦| 𝑚+𝜎𝑝 d𝑦d𝑥 ≤ 𝐶 1 𝜌 -𝜎𝑝 ∫ 𝜔 |𝑢| 𝑝 ≤ 𝐶 1 𝜌 -𝜎𝑝 |𝜔| 1- 𝑝 𝑞 ∫ 𝜔 |𝑢| 𝑞 𝑝 𝑞 .
For the second term, we start by using the mean value theorem along with Jensen's inequality to find
∫ 𝜔 ∫ 𝜔∩𝐵 𝑚 𝜌 (𝑥) |𝑢(𝑥) -𝑢(𝑦)| 𝑝 |𝑥 -𝑦| 𝑚+𝜎𝑝 d𝑦d𝑥 ≤ ∫ 1 0 ∫ 𝜔 ∫ 𝜔∩𝐵 𝑚 𝜌 (𝑥) |𝐷𝑢(𝑥 + 𝑡(𝑦 -𝑥))| 𝑝 |𝑥 -𝑦| 𝑚+(𝜎-1)𝑝 d𝑦d𝑥d𝑡.
Here we use the convexity of 𝛺 to ensure that 𝑥 + 𝑡(𝑦 -𝑥) ∈ 𝛺 for every 𝑥, 𝑦 ∈ 𝜔 and 𝑡 ∈ [0, 1]. We use the change of variable ℎ = 𝑦 -𝑥 and Tonelli's theorem to deduce that
∫ 𝜔 ∫ 𝜔∩𝐵 𝑚 𝜌 (𝑥) |𝑢(𝑥) -𝑢(𝑦)| 𝑝 |𝑥 -𝑦| 𝑚+𝜎𝑝 d𝑦d𝑥 ≤ ∫ 1 0 ∫ (𝜔-𝜔)∩𝐵 𝑚 𝜌 ∫ 𝜔∩(𝜔-ℎ) |𝐷𝑢(𝑥 + 𝑡 ℎ)| 𝑝 | ℎ| 𝑚+(𝜎-1)𝑝 d𝑥dℎd𝑡.
By convexity of 𝛺, if 𝑥 ∈ 𝜔 ∩ (𝜔ℎ), we have 𝑥 + 𝑡 ℎ ∈ 𝛺 for every 𝑡 ∈ [0, 1]. Moreover, the measure of the set (𝜔 ∩ (𝜔ℎ)) + 𝑡 ℎ is less than |𝜔|. Hence,
∫ 𝜔 ∫ 𝜔∩𝐵 𝑚 𝜌 (𝑥) |𝑢(𝑥) -𝑢(𝑦)| 𝑝 |𝑥 -𝑦| 𝑚+𝜎𝑝 d𝑦d𝑥 ≤ |𝜔| 1-𝑝 𝑟 ∫ 1 0 ∫ (𝜔-𝜔)∩𝐵 𝑚 𝜌 1 | ℎ| 𝑚+(𝜎-1)𝑝 ∫ 𝛺 |𝐷𝑢(𝑧)| 𝑟 d𝑧 𝑝 𝑟 dℎd𝑡.
We conclude that
∫ 𝜔 ∫ 𝜔∩𝐵 𝑚 𝜌 (𝑥) |𝑢(𝑥) -𝑢(𝑦)| 𝑝 |𝑥 -𝑦| 𝑚+𝜎𝑝 d𝑦d𝑥 ≤ 𝐶 2 𝜌 (1-𝜎)𝑝 |𝜔| 1-𝑝 𝑟 ∫ 𝛺 |𝐷𝑢| 𝑟 𝑝 𝑟
.
We may assume that 𝐷𝑢 does not vanish identically, otherwise there is nothing to prove. We insert
𝜌 = |𝜔| 1 𝑟 -1 𝑞 ∥𝑢∥ 𝐿 𝑞 (𝜔) ∥𝐷𝑢∥ 𝐿 𝑝 (𝛺) ,
and we find
|𝑢| 𝑊 𝜎,𝑝 (𝜔) ≤ 𝐶 3 |𝜔| 1 𝑝 -𝜎 𝑟 -1-𝜎 𝑞 ∥𝑢 ∥ 1-𝜎 𝐿 𝑞 (𝜔) ∥𝐷𝑢∥ 𝜎 𝐿 𝑟 (𝛺)
. The proof of the lemma is complete.
□
We finally prove Theorem 1.2. Recall that 𝛺 = 𝑄 𝑚 . Note that, in Sections 3 to 5, no assumptions were required on 𝛺. During the proof, we shall carefully indicate whenever restrictions on 𝛺 are needed. Then, we shall explain how the proof should be modified when 𝛺 is not a cube, which will lead to a counterpart of Theorem 1.2 for more general domains 𝛺.
Proof of Theorem 1.2. Let 𝑢 ∈ 𝑊 𝑠,𝑝 (𝑄 𝑚 ; 𝒩). Note that, for every 𝛾 > 0, the map 𝑢 𝛾 defined by 𝑢 𝛾 (𝑥) = 𝑢 𝑥 1+2𝛾 belongs to 𝑊 𝑠,𝑝 (𝑄 𝑚 1+2𝛾 ) and satisfies 𝑢 𝛾 → 𝑢 in 𝑊 𝑠,𝑝 (𝑄 𝑚 ) as 𝛾 → 0. Therefore, we may assume that 𝑢 ∈ 𝑊 𝑠,𝑝 (𝑄 𝑚 1+2𝛾 ; 𝒩). Here we used the fact that 𝛺 = 𝑄 𝑚 , but we could work instead with any domain on which such a dilation argument may be implemented.
Let 0 < 𝜂 < 𝛾 and 0 < 𝜌 < 1 2 , so that 2𝜌𝜂 < 𝛾. Guided by the observations at the end of Sections 4 and 5, we define the following families of cubes. We let 𝒦 𝑚 𝜂 be a cubication of 𝑄 𝑚 1+𝛾 , that is, 𝐾 𝑚 𝜂 = 𝑄 𝑚 1+𝛾 . This uses that 𝑄 𝑚 1+𝛾 is a cube, but the important fact is that 𝑄 𝑚 ⊂ 𝐾 𝑚 𝜂 ⊂ 𝑄 𝑚 1+𝛾 . Then, following (5.8), we construct the set of bad cubes ℰ 𝑚 𝜂 as the family of all cubes 𝜎 𝑚 ∈ 𝒦 𝑚 𝜂 such that
∥𝐷𝑢∥ 𝐿 𝑠𝑝 (𝜎 𝑚 +𝑄 𝑚 2𝜌𝜂 ) > 𝜂^{𝑚/(𝑠𝑝) - 1} 𝜄/𝐶 if 𝑠 ≥ 1, (6.1)
respectively
|𝑢| 𝑊 𝑠,𝑝 (𝜎 𝑚 +𝑄 𝑚 2𝜌𝜂 ) > 𝜂^{𝑚/𝑝 - 𝑠} 𝜄/𝐶 if 0 < 𝑠 < 1, (6.2)
where 𝜄 > 0 is the radius of a tubular neighborhood of 𝒩. We also define 𝒰 𝑚 𝜂 to be the set of all cubes in 𝒦 𝑚 𝜂 intersecting a cube in ℰ 𝑚 𝜂 . Doing so, we indeed have 𝐸 𝑚 𝜂 ⊂ Int 𝑈 𝑚 𝜂 in the relative topology of 𝐾 𝑚 𝜂 . We apply opening to the map 𝑢 choosing ℓ = [𝑠𝑝]. Let Φ op 𝜂 : ℝ 𝑚 → ℝ 𝑚 be the smooth map provided by Proposition 3.1 applied to 𝑢 with 𝛺 = 𝑄 𝑚 1+2𝛾 , and define
𝑢 op 𝜂 = 𝑢 • Φ op 𝜂 .
Hence, we find that (a) if 0 < 𝑠 < 1, then
𝜂 𝑠 |𝑢 op 𝜂 -𝑢| 𝑊 𝑠,𝑝 (𝑄 𝑚 1+2𝛾 ) ≤ 𝐶 1 𝜂 𝑠 |𝑢| 𝑊 𝑠,𝑝 (𝑈 ℓ 𝜂 +𝑄 𝑚 2𝜌𝜂 ) + ∥𝑢∥ 𝐿 𝑝 (𝑈 ℓ 𝜂 +𝑄 𝑚 2𝜌𝜂 ) ; (b) if 𝑠 ≥ 1, then for every 𝑗 ∈ {1, . . . , 𝑘}, 𝜂 𝑗 ∥𝐷 𝑗 𝑢 op 𝜂 -𝐷 𝑗 𝑢∥ 𝐿 𝑝 (𝑄 𝑚 1+2𝛾 ) ≤ 𝐶 2 𝑗 𝑖=1 𝜂 𝑖 ∥𝐷 𝑖 𝑢∥ 𝐿 𝑝 (𝑈 ℓ 𝜂 +𝑄 𝑚 2𝜌𝜂 ) ;
(c) if 𝑠 ≥ 1 and 𝜎 ≠ 0, then for every 𝑗 ∈ {1, . . . , 𝑘},
𝜂 𝑗+𝜎 |𝐷 𝑗 𝑢 op 𝜂 -𝐷 𝑗 𝑢| 𝑊 𝜎,𝑝 (𝑄 𝑚 1+2𝛾 ) ≤ 𝐶 3 𝑗 𝑖=1 𝜂 𝑖 ∥𝐷 𝑖 𝑢∥ 𝐿 𝑝 (𝑈 ℓ 𝜂 +𝑄 𝑚 2𝜌𝜂 ) + 𝜂 𝑖+𝜎 |𝐷 𝑖 𝑢| 𝑊 𝜎,𝑝 (𝑈 ℓ 𝜂 +𝑄 𝑚 2𝜌𝜂 ) ; (d) for every 0 < 𝑠 < +∞, ∥𝑢 op 𝜂 -𝑢 ∥ 𝐿 𝑝 (𝑄 𝑚 1+2𝛾 ) ≤ 𝐶 4 ∥𝑢∥ 𝐿 𝑝 (𝑈 ℓ 𝜂 +𝑄 𝑚 2𝜌𝜂 ) .
Then we apply adaptative smoothing to the map 𝑢 op 𝜂 . We let 𝜓 𝜂 be the function constructed at the end of Section 4, associated with the cubication 𝒦 𝑚 𝜂 and the families 𝒰 𝑚 𝜂 and ℰ 𝑚 𝜂 introduced above, and we define 𝑢 sm 𝜂 = 𝜑 𝜓 𝜂 * 𝑢 op 𝜂 , where 𝜑 is a fixed mollifier. Since 𝜓 𝜂 satisfies the assumptions of Proposition 4.1, applied with 𝛺 = 𝑄 𝑚 1+2𝛾 and 𝜔 = 𝑄 𝑚 1+𝛾 , the following estimates hold:
(a) if 0 < 𝑠 < 1, then
𝜂 𝑠 |𝑢 sm 𝜂 - 𝑢 op 𝜂 | 𝑊 𝑠,𝑝 (𝑄 𝑚 1+𝛾 ) ≤ sup 𝑣∈𝐵 𝑚 1 𝜂 𝑠 |𝜏 𝜓 𝜂 𝑣 (𝑢 op 𝜂 ) - 𝑢 op 𝜂 | 𝑊 𝑠,𝑝 (𝑄 𝑚 1+𝛾 ) ;
(b) if 𝑠 ≥ 1, then for every 𝑗 ∈ {1, . . . , 𝑘},
𝜂 𝑗 ∥𝐷 𝑗 𝑢 sm 𝜂 - 𝐷 𝑗 𝑢 op 𝜂 ∥ 𝐿 𝑝 (𝑄 𝑚 1+𝛾 ) ≤ sup 𝑣∈𝐵 𝑚 1 𝜂 𝑗 ∥𝜏 𝜓 𝜂 𝑣 (𝐷 𝑗 𝑢 op 𝜂 ) - 𝐷 𝑗 𝑢 op 𝜂 ∥ 𝐿 𝑝 (𝑄 𝑚 1+𝛾 ) + 𝐶 5 Σ_{𝑖=1}^{𝑗} 𝜂 𝑖 ∥𝐷 𝑖 𝑢 op 𝜂 ∥ 𝐿 𝑝 (𝐴) ;
(c) if 𝑠 ≥ 1 and 𝜎 ≠ 0, then for every 𝑗 ∈ {1, . . . , 𝑘},
𝜂 𝑗+𝜎 |𝐷 𝑗 𝑢 sm 𝜂 -𝐷 𝑗 𝑢 op 𝜂 | 𝑊 𝜎,𝑝 (𝑄 𝑚 1+𝛾 ) ≤ sup 𝑣∈𝐵 𝑚 1 𝜂 𝑗+𝜎 |𝜏 𝜓 𝜂 𝑣 (𝐷 𝑗 𝑢 op 𝜂 ) -𝐷 𝑗 𝑢 op 𝜂 | 𝑊 𝜎,𝑝 (𝑄 𝑚 1+𝛾 ) + 𝐶 6 𝑗 𝑖=1 𝜂 𝑖 ∥𝐷 𝑖 𝑢 op 𝜂 ∥ 𝐿 𝑝 (𝐴) + 𝜂 𝑖+𝜎 |𝐷 𝑖 𝑢 op 𝜂 | 𝑊 𝜎,𝑝 (𝐴) ; (d) for every 0 < 𝑠 < +∞, ∥𝑢 sm 𝜂 -𝑢 op 𝜂 ∥ 𝐿 𝑝 (𝑄 𝑚 1+𝛾 ) ≤ sup 𝑣∈𝐵 𝑚 1 ∥𝜏 𝜓 𝜂 𝑣 (𝑢 op 𝜂 ) -𝑢 op 𝜂 ∥ 𝐿 𝑝 (𝑄 𝑚 1+𝛾 ) .
Here,
𝐴 = 𝑥∈𝑄 𝑚 1+𝛾 ∩supp 𝐷𝜓 𝜂 𝐵 𝑚 𝜓 𝜂 (𝑥) (𝑥).
By the triangle inequality, for every 𝑣 ∈ 𝐵 𝑚 1 , we have
∥𝜏 𝜓 𝜂 𝑣 (𝐷 𝑗 𝑢 op 𝜂 ) -𝐷 𝑗 𝑢 op 𝜂 ∥ 𝐿 𝑝 (𝑄 𝑚 1+𝛾 ) ≤ ∥𝜏 𝜓 𝜂 𝑣 (𝐷 𝑗 𝑢 op 𝜂 ) -𝜏 𝜓 𝜂 𝑣 (𝐷 𝑗 𝑢)∥ 𝐿 𝑝 (𝑄 𝑚 1+𝛾 ) + ∥𝜏 𝜓 𝜂 𝑣 (𝐷 𝑗 𝑢) -𝐷 𝑗 𝑢∥ 𝐿 𝑝 (𝑄 𝑚 1+𝛾 ) + ∥𝐷 𝑗 𝑢 -𝐷 𝑗 𝑢 op 𝜂 ∥ 𝐿 𝑝 (𝑄 𝑚 1+𝛾 ) .
By the change of variable theorem, we find
∥𝜏 𝜓 𝜂 𝑣 (𝐷 𝑗 𝑢 op 𝜂 ) -𝜏 𝜓 𝜂 𝑣 (𝐷 𝑗 𝑢)∥ 𝐿 𝑝 (𝑄 𝑚 1+𝛾 ) ≤ 𝐶 7 ∥𝐷 𝑗 𝑢 op 𝜂 -𝐷 𝑗 𝑢∥ 𝐿 𝑝 (𝑄 𝑚 1+𝛾 ) .
A similar estimate holds for the Gagliardo seminorm. Furthermore, observing that supp 𝐷𝜓 𝜂 ⊂ 𝑈 𝑚 𝜂 and using that 𝜓 𝜂 ≤ 𝜌𝜂, we have 𝐴 ⊂ 𝑈 𝑚 𝜂 + 𝑄 𝑚 𝜌𝜂 . Combining this with estimate (iii) in Proposition 3.1 applied with 𝜔 = 𝑈 𝑚 𝜂 + 𝑄 𝑚 𝜌𝜂 , we deduce that (a) if 0 < 𝑠 < 1, then
𝜂 𝑠 |𝑢 sm 𝜂 -𝑢 op 𝜂 | 𝑊 𝑠,𝑝 (𝑄 𝑚 1+𝛾 ) ≤ sup 𝑣∈𝐵 𝑚 1 𝜂 𝑠 |𝜏 𝜓 𝜂 𝑣 (𝑢) -𝑢| 𝑊 𝑠,𝑝 (𝑄 𝑚 1+𝛾 ) + 𝐶 8 𝜂 𝑠 |𝑢| 𝑊 𝑠,𝑝 (𝑈 𝑚 𝜂 +𝑄 𝑚 2𝜌𝜂 ) + ∥𝑢∥ 𝐿 𝑝 (𝑈 𝑚 𝜂 +𝑄 𝑚 2𝜌𝜂 ) ;
(b) if 𝑠 ≥ 1, then for every 𝑗 ∈ {1, . . . , 𝑘},
𝜂 𝑗 ∥𝐷 𝑗 𝑢 sm 𝜂 -𝐷 𝑗 𝑢 op 𝜂 ∥ 𝐿 𝑝 (𝑄 𝑚 1+𝛾 ) ≤ sup 𝑣∈𝐵 𝑚 1 𝜂 𝑗 ∥𝜏 𝜓 𝜂 𝑣 (𝐷 𝑗 𝑢) -𝐷 𝑗 𝑢∥ 𝐿 𝑝 (𝑄 𝑚 1+𝛾 ) + 𝐶 9 𝑗 𝑖=1 𝜂 𝑖 ∥𝐷 𝑖 𝑢∥ 𝐿 𝑝 (𝑈 𝑚 𝜂 +𝑄 𝑚 2𝜌𝜂 ) ;
(c) if 𝑠 ≥ 1 and 𝜎 ≠ 0, then for every 𝑗 ∈ {1, . . . , 𝑘},
𝜂 𝑗+𝜎 |𝐷 𝑗 𝑢 sm 𝜂 -𝐷 𝑗 𝑢 op 𝜂 | 𝑊 𝜎,𝑝 (𝑄 𝑚 1+𝛾 ) ≤ sup 𝑣∈𝐵 𝑚 1 𝜂 𝑗+𝜎 |𝜏 𝜓 𝜂 𝑣 (𝐷 𝑗 𝑢) -𝐷 𝑗 𝑢| 𝑊 𝜎,𝑝 (𝑄 𝑚 1+𝛾 ) + 𝐶 10 𝑗 𝑖=1 𝜂 𝑖 ∥𝐷 𝑖 𝑢∥ 𝐿 𝑝 (𝑈 𝑚 𝜂 +𝑄 𝑚 2𝜌𝜂 ) + 𝜂 𝑖+𝜎 |𝐷 𝑖 𝑢| 𝑊 𝜎,𝑝 (𝑈 𝑚 𝜂 +𝑄 𝑚 2𝜌𝜂 ) ; (d) for every 0 < 𝑠 < +∞, ∥𝑢 sm 𝜂 -𝑢 op 𝜂 ∥ 𝐿 𝑝 (𝑄 𝑚 1+𝛾 ) ≤ sup 𝑣∈𝐵 𝑚 1 ∥𝜏 𝜓 𝜂 𝑣 (𝑢) -𝑢∥ 𝐿 𝑝 (𝑄 𝑚 1+𝛾 ) + 𝐶 11 ∥𝑢∥ 𝐿 𝑝 (𝑈 𝑚 𝜂 +𝑄 𝑚 2𝜌𝜂 ) .
Finally, we apply thickening to the map 𝑢 sm 𝜂 . Choose 0 < 𝜌 < 𝜌, let Φ th 𝜂 : ℝ 𝑚 \ 𝑇 ℓ * 𝜂 → ℝ 𝑚 be the smooth map given by Proposition 5.1 applied with parameter 𝜌 and with 𝛺 = 𝑄 𝑚 1+𝛾 , and set
𝑢 th 𝜂 = 𝑢 sm 𝜂 • Φ th 𝜂 .
This map coincides with 𝑢 sm
𝜂 outside of 𝑈 𝑚 𝜂 + 𝑄 𝑚 𝜌𝜂 . Since ℓ + 1 > 𝑠𝑝, Proposition 5.1 ensures that 𝑢 th 𝜂 ∈ 𝑊 𝑠,𝑝 (𝑄 𝑚 1+𝛾 ; ℝ 𝜈 ), and moreover, the following estimates hold:
(a) if 0 < 𝑠 < 1, then
𝜂 𝑠 |𝑢 th 𝜂 -𝑢 sm 𝜂 | 𝑊 𝑠,𝑝 (𝑄 𝑚 1+𝛾 ) ≤ 𝐶 12 𝜂 𝑠 |𝑢 sm 𝜂 | 𝑊 𝑠,𝑝 (𝑈 𝑚 𝜂 +𝑄 𝑚 𝜌𝜂 ) + ∥𝑢 sm 𝜂 ∥ 𝐿 𝑝 (𝑈 𝑚 𝜂 +𝑄 𝑚 𝜌𝜂 ) ; (b) if 𝑠 ≥ 1, then for every 𝑗 ∈ {1, . . . , 𝑘}, 𝜂 𝑗 ∥𝐷 𝑗 𝑢 th 𝜂 -𝐷 𝑗 𝑢 sm 𝜂 ∥ 𝐿 𝑝 (𝑄 𝑚 1+𝛾 ) ≤ 𝐶 13 𝑗 𝑖=1 𝜂 𝑖 ∥𝐷 𝑖 𝑢 sm 𝜂 ∥ 𝐿 𝑝 (𝑈 𝑚 𝜂 +𝑄 𝑚 𝜌𝜂 ) ;
(c) if 𝑠 ≥ 1 and 𝜎 ≠ 0, then for every 𝑗 ∈ {1, . . . , 𝑘},
𝜂 𝑗+𝜎 |𝐷 𝑗 𝑢 th 𝜂 -𝐷 𝑗 𝑢 sm 𝜂 | 𝑊 𝜎,𝑝 (𝑄 𝑚 1+𝛾 ) ≤ 𝐶 14 𝑗 𝑖=1 𝜂 𝑖 ∥𝐷 𝑖 𝑢 sm 𝜂 ∥ 𝐿 𝑝 (𝑈 𝑚 𝜂 +𝑄 𝑚 𝜌𝜂 ) +𝜂 𝑖+𝜎 |𝐷 𝑖 𝑢 sm 𝜂 | 𝑊 𝜎,𝑝 (𝑈 𝑚 𝜂 +𝑄 𝑚 𝜌𝜂 ) ; (d) for every 0 < 𝑠 < +∞, ∥𝑢 th 𝜂 -𝑢 sm 𝜂 ∥ 𝐿 𝑝 (𝑄 𝑚 1+𝛾 ) ≤ 𝐶 15 ∥𝑢 sm 𝜂 ∥ 𝐿 𝑝 (𝑈 𝑚 𝜂 +𝑄 𝑚 𝜌𝜂 ) .
Hence, invoking estimate (i) in Proposition 4.1 with 𝛺 = 𝑈 𝑚 𝜂 +𝑄 𝑚 (𝜌+𝜌)𝜂 and 𝜔 = 𝑈 𝑚 𝜂 +𝑄 𝑚 𝜌𝜂 , and then estimate (iii) in Proposition 3.1 with 𝜔 = 𝑈 𝑚 𝜂 + 𝑄 𝑚 (𝜌+𝜌)𝜂 , we obtain (a) if 0 < 𝑠 < 1, then
𝜂 𝑠 |𝑢 th 𝜂 -𝑢 sm 𝜂 | 𝑊 𝑠,𝑝 (𝑄 𝑚 1+𝛾 ) ≤ 𝐶 16 𝜂 𝑠 |𝑢| 𝑊 𝑠,𝑝 (𝑈 𝑚 𝜂 +𝑄 𝑚 2𝜌𝜂 ) + ∥𝑢 ∥ 𝐿 𝑝 (𝑈 𝑚 𝜂 +𝑄 𝑚 2𝜌𝜂 ) ;
(b) if 𝑠 ≥ 1, then for every 𝑗 ∈ {1, . . . , 𝑘},
𝜂 𝑗 ∥𝐷 𝑗 𝑢 th 𝜂 -𝐷 𝑗 𝑢 sm 𝜂 ∥ 𝐿 𝑝 (𝑄 𝑚 1+𝛾 ) ≤ 𝐶 17 𝑗 𝑖=1 𝜂 𝑖 ∥𝐷 𝑖 𝑢 ∥ 𝐿 𝑝 (𝑈 𝑚 𝜂 +𝑄 𝑚 2𝜌𝜂 ) ;
(c) if 𝑠 ≥ 1 and 𝜎 ≠ 0, then for every 𝑗 ∈ {1, . . . , 𝑘},
𝜂 𝑗+𝜎 |𝐷 𝑗 𝑢 th 𝜂 -𝐷 𝑗 𝑢 sm 𝜂 | 𝑊 𝜎,𝑝 (𝑄 𝑚 1+𝛾 ) ≤ 𝐶 18 𝑗 𝑖=1 𝜂 𝑖 ∥𝐷 𝑖 𝑢∥ 𝐿 𝑝 (𝑈 𝑚 𝜂 +𝑄 𝑚 2𝜌𝜂 ) +𝜂 𝑖+𝜎 |𝐷 𝑖 𝑢| 𝑊 𝜎,𝑝 (𝑈 𝑚 𝜂 +𝑄 𝑚 2𝜌𝜂 ) ; (d) for every 0 < 𝑠 < +∞, ∥𝑢 th 𝜂 -𝑢 sm 𝜂 ∥ 𝐿 𝑝 (𝑄 𝑚 1+𝛾 ) ≤ 𝐶 19 ∥𝑢 ∥ 𝐿 𝑝 (𝑈 𝑚 𝜂 +𝑄 𝑚 2𝜌𝜂 ) .
Using the triangle inequality, we conclude that (a) if 0 < 𝑠 < 1, then
𝜂 𝑠 |𝑢 th 𝜂 -𝑢| 𝑊 𝑠,𝑝 (𝑄 𝑚 1+𝛾 ) ≤ sup 𝑣∈𝐵 𝑚 1 𝜂 𝑠 |𝜏 𝜓 𝜂 𝑣 (𝑢) -𝑢| 𝑊 𝑠,𝑝 (𝑄 𝑚 1+𝛾 ) + 𝐶 20 𝜂 𝑠 |𝑢| 𝑊 𝑠,𝑝 (𝑈 𝑚 𝜂 +𝑄 𝑚 2𝜌𝜂 ) + ∥𝑢∥ 𝐿 𝑝 (𝑈 𝑚 𝜂 +𝑄 𝑚 2𝜌𝜂 ) ; (b) if 𝑠 ≥ 1, then for every 𝑗 ∈ {1, . . . , 𝑘}, 𝜂 𝑗 ∥𝐷 𝑗 𝑢 th 𝜂 -𝐷 𝑗 𝑢∥ 𝐿 𝑝 (𝑄 𝑚 1+𝛾 ) ≤ sup 𝑣∈𝐵 𝑚 1 𝜂 𝑗 ∥𝜏 𝜓 𝜂 𝑣 (𝐷 𝑗 𝑢) -𝐷 𝑗 𝑢∥ 𝐿 𝑝 (𝑄 𝑚 1+𝛾 ) + 𝐶 21 𝑗 𝑖=1 𝜂 𝑖 ∥𝐷 𝑖 𝑢 ∥ 𝐿 𝑝 (𝑈 𝑚 𝜂 +𝑄 𝑚 2𝜌𝜂 ) ;
(c) if 𝑠 ≥ 1 and 𝜎 ≠ 0, then for every 𝑗 ∈ {1, . . . , 𝑘},
𝜂 𝑗+𝜎 |𝐷 𝑗 𝑢 th 𝜂 -𝐷 𝑗 𝑢| 𝑊 𝜎,𝑝 (𝑄 𝑚 1+𝛾 ) ≤ sup 𝑣∈𝐵 𝑚 1 𝜂 𝑗+𝜎 |𝜏 𝜓 𝜂 𝑣 (𝐷 𝑗 𝑢) -𝐷 𝑗 𝑢| 𝑊 𝜎,𝑝 (𝑄 𝑚 1+𝛾 ) + 𝐶 22 𝑗 𝑖=1 𝜂 𝑖 ∥𝐷 𝑖 𝑢∥ 𝐿 𝑝 (𝑈 𝑚 𝜂 +𝑄 𝑚 2𝜌𝜂 ) + 𝜂 𝑖+𝜎 |𝐷 𝑖 𝑢| 𝑊 𝜎,𝑝 (𝑈 𝑚 𝜂 +𝑄 𝑚 2𝜌𝜂 ) ; (d) for every 0 < 𝑠 < +∞, ∥𝑢 th 𝜂 -𝑢∥ 𝐿 𝑝 (𝑄 𝑚 1+𝛾 ) ≤ sup 𝑣∈𝐵 𝑚 1 ∥𝜏 𝜓 𝜂 𝑣 (𝑢) -𝑢∥ 𝐿 𝑝 (𝑄 𝑚 1+𝛾 ) + 𝐶 23 ∥𝑢∥ 𝐿 𝑝 (𝑈 𝑚 𝜂 +𝑄 𝑚 2𝜌𝜂 ) .
Due to our choice of 𝜓 𝜂 , and since ℓ ≤ 𝑠𝑝 and
𝑄 𝑚 ⊂ 𝐾 𝑚 𝜂 ⊂ 𝑄 𝑚 1+𝛾 ⊂ {𝑥 ∈ 𝑄 𝑚 1+2𝛾 : dist (𝑥, 𝜕𝑄 𝑚 1+2𝛾 ) ≥ 𝜓(𝑥)},
according to estimates (5.6) and (5.7), we have
Dist 𝒩 (𝑢 th 𝜂 (𝐾 𝑚 𝜂 \ 𝑇 ℓ * 𝜂 )) ≤ max max 𝜎 𝑚 ∈𝒦 𝑚 𝜂 \ℰ 𝑚 𝜂 𝐶 1 𝜂 𝑚 𝑠𝑝 -1 ∥𝐷𝑢∥ 𝐿 𝑠𝑝 (𝜎 𝑚 +𝑄 𝑚 2𝜌𝜂 ) , sup 𝑥∈𝑈 ℓ 𝜂 +𝑄 𝑚 𝜌𝜂 𝐶 ′ ⨏ 𝑄 𝑚 𝑟 (𝑥) ⨏ 𝑄 𝑚 𝑟 (𝑥) |𝑢 op 𝜂 (𝑦) -𝑢 op 𝜂 (𝑧)| d𝑦d𝑧 (6.3) if 𝑠 ≥ 1, respectively Dist 𝒩 (𝑢 th 𝜂 (𝐾 𝑚 𝜂 \ 𝑇 ℓ * 𝜂 )) ≤ max max 𝜎 𝑚 ∈𝒦 𝑚 𝜂 \ℰ 𝑚 𝜂 𝐶 1 𝜂 𝑚 𝑝 -𝑠 |𝑢| 𝑊 𝑠,𝑝 (𝜎 𝑚 +𝑄 𝑚 2𝜌𝜂 ) , sup 𝑥∈𝑈 ℓ 𝜂 +𝑄 𝑚 𝜌𝜂 𝐶 ′ ⨏ 𝑄 𝑚 𝑟 (𝑥) ⨏ 𝑄 𝑚 𝑟 (𝑥) |𝑢 op 𝜂 (𝑦) -𝑢 op 𝜂 (𝑧)| d𝑦d𝑧 (6.4)
if 0 < 𝑠 < 1. Note here that, in defining the bad cubes in equation (6.1), respectively (6.2), we take the constant 𝐶 > 0 which shows up in estimate (6.3), respectively (6.4). Doing so, by definition of the set of bad cubes ℰ 𝑚 𝜂 , the first term in each max is smaller than the radius 𝜄 of a tubular neighborhood of 𝒩. Moreover, since ℓ ≤ 𝑠𝑝, Proposition 3.8 ensures that we may take 𝑟 > 0 so small that the second term in each max is also smaller than 𝜄. Therefore, we deduce that
Dist 𝒩 (𝑢 th 𝜂 (𝐾 𝑚 𝜂 \ 𝑇 ℓ * 𝜂 )) ≤ 𝜄.
This enables us to define 𝑢 𝜂 = Π • 𝑢 th 𝜂 , which, as we already explained at the end of Section 5, is smooth on 𝐾 𝑚 𝜂 \ 𝑇 ℓ * , and belongs to ℛ 𝑚-[𝑠𝑝]-1 (𝐾 𝑚 𝜂 ; 𝒩).
Since 𝑄 𝑚 ⊂ 𝐾 𝑚 𝜂 , to conclude, it only remains to prove that 𝑢 𝜂 → 𝑢 in 𝑊 𝑠,𝑝 (𝐾 𝑚 𝜂 ) as 𝜂 → 0. We claim that it suffices to show that 𝑢 th 𝜂 → 𝑢 in 𝑊 𝑠,𝑝 (𝐾 𝑚 𝜂 ) as 𝜂 → 0. Indeed, the map Π is smooth and has uniformly bounded derivatives, and 𝒩 is compact. Hence, the continuity of the composition operator from 𝑊 𝑠,𝑝 ∩ 𝐿 ∞ to 𝑊 𝑠,𝑝 -see for instance the monograph Sobolev maps to the circle, Chapter 15.3- ensures that, if 𝑢 th 𝜂 → 𝑢 in 𝑊 𝑠,𝑝 (𝐾 𝑚 𝜂 ), then 𝑢 𝜂 = Π • 𝑢 th 𝜂 converges in 𝑊 𝑠,𝑝 (𝐾 𝑚 𝜂 ) to Π • 𝑢 = 𝑢.
We now prove that 𝑢 th 𝜂 → 𝑢. We start by noting that the continuity of the translation operator implies that lim
𝜂→0 sup 𝑣∈𝐵 𝑚 1 ∥𝜏 𝜓 𝜂 𝑣 (𝐷 𝑗 𝑢) -𝐷 𝑗 𝑢∥ 𝐿 𝑝 (𝑄 𝑚 1+𝛾 ) = 0 and lim 𝜂→0 sup 𝑣∈𝐵 𝑚 1 |𝜏 𝜓 𝜂 𝑣 (𝐷 𝑗 𝑢) -𝐷 𝑗 𝑢| 𝑊 𝜎,𝑝 (𝑄 𝑚 1+𝛾 ) = 0
for every 𝑗 ∈ {0, . . . , 𝑘}.
We first deal with the case 𝑠 ≥ 1. By the Gagliardo-Nirenberg interpolation inequality -see for instance [Brezis, Composition and products in fractional Sobolev spaces] and [Gagliardo-Nirenberg inequalities and non-inequalities: the full story] -for every 𝑖 ∈ {1, . . . , 𝑘}, we have 𝐷 𝑖 𝑢 ∈ 𝐿 𝑠𝑝 𝑖 (𝑄 𝑚 1+2𝛾 ). Hölder's inequality ensures that
∥𝐷 𝑖 𝑢∥ 𝐿 𝑝 (𝑈 𝑚 𝜂 +𝑄 𝑚 2𝜌𝜂 ) ≤ |𝑈 𝑚 𝜂 + 𝑄 𝑚 2𝜌𝜂 | 𝑠-𝑖 𝑠𝑝 ∥𝐷 𝑖 𝑢 ∥ 𝐿 𝑠𝑝 𝑖 (𝑈 𝑚 𝜂 +𝑄 𝑚 2𝜌𝜂 )
, while Lemma 6.1 guarantees that
|𝐷 𝑖 𝑢| 𝑊 𝜎,𝑝 (𝑈 𝑚 𝜂 +𝑄 𝑚 2𝜌𝜂 ) ≤ 𝐶 24 |𝑈 𝑚 𝜂 + 𝑄 𝑚 2𝜌𝜂 | 𝑠-𝑖-𝜎 𝑠𝑝 ∥𝐷 𝑖+1 𝑢 ∥ 𝜎 𝐿 𝑠𝑝 𝑖+1 (𝑄 𝑚 1+2𝛾 ) ∥𝐷 𝑖 𝑢∥ 1-𝜎 𝐿 𝑠𝑝 𝑖 (𝑈 𝑚 𝜂 +𝑄 𝑚 2𝜌𝜂 )
.
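For the reader's convenience, we record the exponent bookkeeping behind the two estimates above; this is a routine homogeneity check and introduces nothing beyond the quantities already displayed:
\[
\frac{1}{p} - \frac{i}{sp} = \frac{s-i}{sp},
\qquad
\frac{1}{p} - \frac{(1-\sigma)\,i + \sigma\,(i+1)}{sp} = \frac{s-i-\sigma}{sp},
\]
which are precisely the powers of |𝑈 𝑚 𝜂 + 𝑄 𝑚 2𝜌𝜂 | produced by Hölder's inequality with exponent 𝑠𝑝/𝑖 and by the interpolation in Lemma 6.1.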
Here, we use the fact that 𝑄 𝑚 1+2𝛾 is convex to justify the use of Lemma 6.1. We now wish to estimate the measure of the set 𝑈 𝑚 𝜂 + 𝑄 𝑚 2𝜌𝜂 . First note that
|𝑈 𝑚 𝜂 + 𝑄 𝑚 2𝜌𝜂 | ≤ 𝐶 25 card (𝒰 𝑚 𝜂 )𝜂 𝑚 . (6.5)
Then, for every 𝜎 𝑚 ∈ 𝒰 𝑚 𝜂 , there exists 𝜏 𝑚 ∈ ℰ 𝑚 𝜂 which intersects 𝜎 𝑚 , and thus 𝜏 𝑚 +𝑄 𝑚 2𝜌𝜂 ⊂ 𝜎 𝑚 + 𝑄 𝑚 2(1+𝜌)𝜂 . If we write 𝜎 𝑚 = 𝑄 𝑚 𝜂 (𝑎), we find
𝜏 𝑚 + 𝑄 𝑚 2𝜌𝜂 ⊂ 𝑄 𝑚 𝛼𝜂 (𝑎) with 𝛼 = 3 + 2𝜌. Hence, 𝜏 𝑚 + 𝑄 𝑚 2𝜌𝜂 ⊂ 𝑄 𝑚 𝛼𝜂 (𝑎) ∩ 𝑄 𝑚 1+2𝛾 . We deduce from the definition of ℰ 𝑚 𝜂 that 𝜄 < 𝐶 1 𝜂 𝑚 𝑠𝑝 -1 ∥𝐷𝑢∥ 𝐿 𝑠𝑝 (𝜏 𝑚 +𝑄 𝑚 2𝜌𝜂 ) ≤ 𝐶 1 𝜂 𝑚 𝑠𝑝 -1 ∥𝐷𝑢∥ 𝐿 𝑠𝑝 (𝑄 𝑚 𝛼𝜂 (𝑎)∩𝑄 𝑚 1+2𝛾 ) .
Since the number of overlaps between one of the cubes 𝑄 𝑚 𝛼𝜂 (𝑎) and all the other ones is bounded from above by a number depending only on 𝑚, summing over all cubes in 𝒰 𝑚 𝜂 and using the additivity of the integral, we deduce that
card (𝒰 𝑚 𝜂 ) ≤ 𝐶 26 1 𝜂 𝑚-𝑠𝑝 𝑄 𝑚 𝜂 (𝑎)∈𝒰 𝑚 𝜂 ∫ 𝑄 𝑚 𝛼𝜂 (𝑎)∩𝑄 𝑚 1+2𝛾 |𝐷𝑢| 𝑠𝑝 ≤ 𝐶 27 1 𝜂 𝑚-𝑠𝑝 ∫ (𝑈 𝑚 𝜂 +𝑄 𝑚 2(𝜌+1)𝜂 )∩𝑄 𝑚 1+2𝛾 |𝐷𝑢| 𝑠𝑝 . (6.6)
For further use, we already note that, in the case 0 < 𝑠 < 1, the exact same reasoning leads to 𝜄 < 𝐶 1
𝜂 𝑚 𝑝 -𝑠 |𝑢| 𝑊 𝑠,𝑝 (𝑄 𝑚 𝛼𝜂 (𝑎)∩𝑄 𝑚 1+2𝛾 ) .
As for the case 𝑠 ≥ 1, replacing the additivity of the integral by the superadditivity of the Gagliardo seminorm, we obtain
card (𝒰 𝑚 𝜂 ) ≤ 𝐶 28 1 𝜂 𝑚-𝑠𝑝 ∫ (𝑈 𝑚 𝜂 +𝑄 𝑚 2(𝜌+1)𝜂 )∩𝑄 𝑚 1+2𝛾 ∫ (𝑈 𝑚 𝜂 +𝑄 𝑚 2(𝜌+1)𝜂 )∩𝑄 𝑚 1+2𝛾 |𝑢(𝑥) -𝑢(𝑦)| 𝑝 |𝑥 -𝑦| 𝑚+𝑠𝑝 d𝑥d𝑦. (6.7)
In both cases 𝑠 ≥ 1 and 0 < 𝑠 < 1, we conclude that lim
𝜂→0 |𝑈 𝑚 𝜂 + 𝑄 𝑚 2𝜌𝜂 | 𝜂 𝑠𝑝 = 0. (6.8)
Indeed, we first use estimate (6.5) along with (6.6), respectively (6.7), to deduce that
|𝑈 𝑚 𝜂 + 𝑄 𝑚 2𝜌𝜂 | 𝜂 𝑠𝑝 ≤ 𝐶 29 ∥𝐷𝑢∥ 𝑝 𝐿 𝑝 (𝑄 𝑚 1+2𝛾 ) , respectively |𝑈 𝑚 𝜂 + 𝑄 𝑚 2𝜌𝜂 | 𝜂 𝑠𝑝 ≤ 𝐶 30 |𝑢| 𝑝 𝑊 𝑠,𝑝 (𝑄 𝑚 1+2𝛾 ) .
In particular, |𝑈 𝑚 𝜂 + 𝑄 𝑚 2𝜌𝜂 | → 0. Using this information along with Lebesgue's lemma, we invoke again estimate (6.6), respectively (6.7), to deduce (6.8).
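Spelled out, the chain of inequalities used in this last step reads as follows; we record it as a sketch for the case 𝑠 ≥ 1, the case 0 < 𝑠 < 1 being identical with (6.7) in place of (6.6):
\[
\frac{|U^m_\eta + Q^m_{2\rho\eta}|}{\eta^{sp}}
\le C\,\operatorname{card}(\mathcal{U}^m_\eta)\,\eta^{m-sp}
\le C' \int_{(U^m_\eta + Q^m_{2(\rho+1)\eta}) \cap Q^m_{1+2\gamma}} |Du|^{sp},
\]
and the right-hand side tends to 0 as 𝜂 → 0 by Lebesgue's lemma, since the measure of the domain of integration tends to 0 while |𝐷𝑢| 𝑠𝑝 is summable on 𝑄 𝑚 1+2𝛾 .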
We next proceed as follows. When 𝑠 ≥ 1, we find
𝑗 𝑖=1 𝜂 𝑖-𝑗 ∥𝐷 𝑖 𝑢∥ 𝐿 𝑝 (𝑈 𝑚 𝜂 +𝑄 𝑚 2𝜌𝜂 ) ≤ 𝑗 𝑖=1 𝜂 𝑠-𝑗 |𝑈 𝑚 𝜂 + 𝑄 𝑚 2𝜌𝜂 | 𝜂 𝑠𝑝 𝑠-𝑖 𝑠𝑝 ∥𝐷 𝑖 𝑢 ∥ 𝐿 𝑠𝑝 𝑖 (𝑈 𝑚 𝜂 +𝑄 𝑚 2𝜌𝜂 )
→ 0 as 𝜂 → 0 (6.9)
and
𝑗 𝑖=1 𝜂 𝑖 ∥𝐷 𝑖 𝑢∥ 𝐿 𝑝 (𝑈 𝑚 𝜂 +𝑄 𝑚 2𝜌𝜂 ) + 𝜂 𝑖+𝜎 |𝐷 𝑖 𝑢| 𝑊 𝜎,𝑝 (𝑈 𝑚 𝜂 +𝑄 𝑚 2𝜌𝜂 ) ≤ 𝑗 𝑖=1 𝜂 𝑠-𝑗-𝜎 |𝑈 𝑚 𝜂 + 𝑄 𝑚 2𝜌𝜂 | 𝜂 𝑠𝑝 𝑠-𝑖 𝑠𝑝 ∥𝐷 𝑖 𝑢∥ 𝐿 𝑠𝑝 𝑖 (𝑈 𝑚 𝜂 +𝑄 𝑚 2𝜌𝜂 ) + 𝐶 24 𝑗-1 𝑖=1 𝜂 𝑠-𝑗-𝜎 |𝑈 𝑚 𝜂 + 𝑄 𝑚 2𝜌𝜂 | 𝜂 𝑠𝑝 𝑠-𝑖-𝜎 𝑠𝑝 ∥𝐷 𝑖 𝑢∥ 1-𝜎 𝐿 𝑠𝑝 𝑖 (𝑈 𝑚 𝜂 +𝑄 𝑚 2𝜌𝜂 ) ∥𝐷 𝑖+1 𝑢∥ 𝜎 𝐿 𝑠𝑝 𝑖+1 (𝑄 𝑚 1+2𝛾 ) + |𝐷 𝑗 𝑢| 𝑊 𝜎,𝑝 (𝑈 𝑚 𝜂 +𝑄 𝑚 2𝜌𝜂 ) → 0 as 𝜂 → 0. (6.10)
For the last term in (6.10), we use (6.8) and the Lebesgue lemma. Similarly, we have
∥𝑢 ∥ 𝐿 𝑝 (𝑈 𝑚 𝜂 +𝑄 𝑚 2𝜌𝜂
) → 0 as 𝜂 → 0. This completes the proof that 𝑢 th 𝜂 → 𝑢 in 𝑊 𝑠,𝑝 (𝑄 𝑚 1+𝛾 ) when 𝑠 ≥ 1. The case 0 < 𝑠 < 1 is concluded analogously. Note that since 𝒩 is compact, we have 𝑢 ∈ 𝐿 ∞ . Therefore, we have
|𝑢| 𝑊 𝑠,𝑝 (𝑈 𝑚 𝜂 +𝑄 𝑚 2𝜌𝜂 ) + 𝜂 -𝑠 ∥𝑢∥ 𝐿 𝑝 (𝑈 𝑚 𝜂 +𝑄 𝑚 2𝜌𝜂 ) ≤ |𝑢| 𝑊 𝑠,𝑝 (𝑈 𝑚 𝜂 +𝑄 𝑚 2𝜌𝜂 ) + 𝜂 -𝑠 𝐶 31 |𝑈 𝑚 𝜂 + 𝑄 𝑚 2𝜌𝜂 | 1 𝑝 ≤ |𝑢| 𝑊 𝑠,𝑝 (𝑈 𝑚 𝜂 +𝑄 𝑚 2𝜌𝜂 ) + 𝐶 31 |𝑈 𝑚 𝜂 + 𝑄 𝑚 2𝜌𝜂 | 𝜂 𝑠𝑝 1 𝑝 → 0 as 𝜂 → 0.
Combining this with the fact that ∥𝑢 ∥ 𝐿 𝑝 (𝑈 𝑚 𝜂 +𝑄 𝑚 2𝜌𝜂 ) → 0 as 𝜂 → 0, we deduce that 𝑢 th 𝜂 → 𝑢 in 𝑊 𝑠,𝑝 (𝑄 𝑚 1+𝛾 ) as 𝜂 → 0 when 0 < 𝑠 < 1. This completes the proof of Theorem 1.2.
□
We now explain how to deal with more general domains. The first step is to be able to implement the dilation procedure used at the beginning of the proof. The method we used adapts without any modification to domains that are starshaped with respect to one of their points. However, using a more involved technique, it is possible to work with even more general domains. The reader may consult [Sobolev maps to the circle, Lemma 15.25] for an implementation of this technique on smooth domains using the normal vector, or [Generic topological screening and approximation of Sobolev maps] for an argument on continuous bounded domains using local parametrizations. Here we show that the approach even works under the weaker segment condition.
We recall that 𝛺 satisfies the segment condition whenever, for every 𝑥 ∈ 𝜕𝛺, there exists an open set 𝑈 𝑥 ⊂ ℝ 𝑚 containing 𝑥 and a nonzero vector 𝑧 𝑥 ∈ ℝ 𝑚 such that, if 𝑦 ∈ 𝑈 𝑥 ∩ 𝛺, then 𝑦 + 𝑡𝑧 𝑥 ∈ 𝛺 for every 0 < 𝑡 < 1.
Lemma 6.2. Let 𝛺 ⊂ ℝ 𝑚 be a bounded open domain satisfying the segment condition. For every 𝛾 > 0 sufficiently small, there exists a smooth diffeomorphism Φ 𝛾 : ℝ 𝑚 → ℝ 𝑚 such that Φ 𝛾 (𝛺) ⊂ 𝛺 and 𝐷 𝑗 Φ 𝛾 → 𝐷 𝑗 id uniformly on ℝ 𝑚 for every 𝑗 ∈ ℕ as 𝛾 → 0.
Geometrically, the segment condition means that 𝛺 cannot lie on both sides of 𝜕𝛺.
A typical example of a domain 𝛺 not satisfying this assumption is given by two open cubes whose boundaries share a common face. It is known -see for instance [1, 3.17] -that there exists a 𝑊 1,𝑝 map on this domain which cannot be approximated by 𝒞 ∞ (𝛺) maps, even in the real valued case.
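To make the obstruction concrete, here is a standard instance of this example; the explicit normalization of 𝛺 and of the map below is ours and is only meant as an illustration, in the spirit of [1, 3.17]. For 𝑚 ≥ 2, take
\[
\Omega = \bigl((-1,0) \cup (0,1)\bigr) \times (0,1)^{m-1},
\qquad
u(x) =
\begin{cases}
0 & \text{if } x_1 < 0,\\
1 & \text{if } x_1 > 0.
\end{cases}
\]
Then 𝑢 ∈ 𝑊 1,𝑝 (𝛺) with 𝐷𝑢 = 0 almost everywhere. However, a map which is continuous up to the closure of 𝛺 has equal traces on the common face {𝑥 1 = 0} from both sides, whereas the traces of 𝑢 there are 0 and 1; since the trace operator from 𝑊 1,𝑝 of each cube to 𝐿 𝑝 of the face is continuous, no sequence of such maps can converge to 𝑢 in 𝑊 1,𝑝 (𝛺).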
Proof of Lemma 6.2. Let 𝐵 𝛿 = 𝐵 𝑚-1 𝛿 × (-𝛿, 𝛿) be a cylinder of radius and half-height 𝛿. Since 𝜕𝛺 is compact, there exists a finite number of points 𝑥 1 , . . . , 𝑥 𝑛 ∈ ℝ 𝑚 and associated isometries 𝑇 1 , . . . , 𝑇 𝑛 of ℝ 𝑚 mapping 0 to 𝑥 𝑖 such that
𝜕𝛺 ⊂ 𝑇 1 (𝐵 𝛿/2 ) ∪ • • • ∪ 𝑇 𝑛 (𝐵 𝛿/2 ), (6.11)
and also associated nonzero vectors 𝑧 1 , . . . , 𝑧 𝑛 ∈ ℝ 𝑚 such that, if 𝑦 ∈ 𝑇 𝑖 (𝐵 𝛿 ) ∩ 𝛺, then 𝑦 + 𝑡𝑧 𝑖 ∈ 𝛺 for every 0 < 𝑡 < 1. Let 𝜓 : ℝ 𝑚-1 → [0, 1] be a smooth map such that 𝜓(𝑥) = 1 if 𝑥 ∈ 𝐵 𝛿/2 and 𝜓(𝑥) = 0 if 𝑥 ∈ ℝ 𝑚-1 \ 𝐵 3𝛿/4 . For 0 < 𝛾 < 1, we define Φ 𝑖,𝛾 : ℝ 𝑚 → ℝ 𝑚 by Φ 𝑖,𝛾 (𝑥) = 𝑥 + 𝛾𝜓(𝑇 -1 𝑖 (𝑥))𝑧 𝑖 . If 𝛾 < (∥𝐷𝜓∥ 𝐿 ∞ |𝑧 𝑖 |) -1 , we observe that Φ 𝑖,𝛾 is a smooth diffeomorphism. Moreover, by construction of the vectors 𝑧 𝑖 , we have Φ 𝑖,𝛾 (𝛺) ⊂ 𝛺. We let
Φ 𝛾 = Φ 𝑛,𝛾 • • • • • Φ 1,𝛾 .
Observe that 𝐷 𝑗 Φ 𝛾 → 𝐷 𝑗 id uniformly on ℝ 𝑚 for every 𝑗 ∈ ℕ as 𝛾 → 0.
By (6.11) and the construction of the maps Φ 𝑖,𝛾 , for every 𝑥 ∈ 𝜕𝛺, there exists 𝑖 ∈ {1, . . . , 𝑛} such that Φ 𝑖,𝛾 (𝑥) ∈ 𝛺, and this shows that Φ 𝛾 (𝛺) ⊂ 𝛺. This proves that the family of maps Φ 𝛾 satisfies the conclusions of the lemma.
□
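For completeness, we sketch the standard perturbation argument behind the claim, used in the proof above, that each Φ 𝑖,𝛾 is a smooth diffeomorphism when 𝛾 < (∥𝐷𝜓∥ 𝐿 ∞ |𝑧 𝑖 |) -1 ; it only uses that 𝑇 𝑖 is an isometry. For 𝑥 ≠ 𝑦,
\[
|\Phi_{i,\gamma}(x) - \Phi_{i,\gamma}(y)|
\ge |x - y| - \gamma\,\bigl|\psi(T_i^{-1}(x)) - \psi(T_i^{-1}(y))\bigr|\,|z_i|
\ge \bigl(1 - \gamma \|D\psi\|_{L^\infty} |z_i|\bigr)\,|x - y| > 0,
\]
so Φ 𝑖,𝛾 is injective. Moreover, 𝐷Φ 𝑖,𝛾 (𝑥) = id + 𝛾 𝑧 𝑖 ⊗ 𝐷(𝜓 ∘ 𝑇 -1 𝑖 )(𝑥) is invertible, since the perturbation has norm at most 𝛾∥𝐷𝜓∥ 𝐿 ∞ |𝑧 𝑖 | < 1. Finally, Φ 𝑖,𝛾 coincides with the identity outside a compact set, so its image is closed as well as open (by invariance of domain), and hence equal to ℝ 𝑚 .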
Using this construction, we observe that, if 𝛺 ⊂ ℝ 𝑚 is a bounded domain satisfying the segment condition and 𝑢 ∈ 𝑊 𝑠,𝑝 (𝛺; 𝒩), the map 𝑢 𝛾 = 𝑢 •Φ 𝛾 belongs to 𝑊 𝑠,𝑝 (𝛺 𝛾 ; 𝒩), where 𝛺 𝛾 = Φ -1 𝛾 (𝛺) is an open subset of ℝ 𝑚 containing 𝛺. Moreover, 𝑢 𝛾 → 𝑢 in 𝑊 𝑠,𝑝 (𝛺; 𝒩) as 𝛾 → 0.
Therefore, we may carry out the same reasoning as in the proof of Theorem 1.2 by choosing a cubication 𝒦 𝑚 𝜂 such that 𝛺 ⊂ 𝐾 𝑚 𝜂 ⊂ 𝛺 𝛾 . The other place in the proof of Theorem 1.2 where we used a specific assumption on the domain is when we applied Lemma 6.1, because we needed convexity to justify the use of this lemma. However, this is an artifact. Indeed, since we work on a dilated domain, by dilating slightly more if necessary, we may assume that 𝑢 ∈ 𝑊 𝑠,𝑝 ( Ω) for some open set Ω ⊂ ℝ 𝑚 containing 𝛺 𝛾 . It then suffices to apply instead Lemma 6.1 to the map 𝑢𝜓 ∈ 𝑊 𝑠,𝑝 (ℝ 𝑚 ), where 𝜓 : ℝ 𝑚 → [0, 1] is a smooth map such that 𝜓 = 1 on 𝛺 𝛾 and 𝜓 = 0 on ℝ 𝑚 \ Ω.
Taking these modifications into consideration, the proof of Theorem 1.2 above can be carried out the exact same way on any bounded domain 𝛺 ⊂ ℝ 𝑚 satisfying the segment condition. This leads to the following result.
Theorem 6.3. Let 𝛺 ⊂ ℝ 𝑚 be a bounded domain satisfying the segment condition. If 𝑠𝑝 < 𝑚, then the class ℛ 𝑚-[𝑠𝑝]-1 (𝛺; 𝒩) is dense in 𝑊 𝑠,𝑝 (𝛺; 𝒩).
A second perspective of generalization for Theorem 1.2 consists in replacing 𝛺 with a smooth manifold. From now on, we assume that ℳ is a smooth, compact, connected Riemannian manifold of dimension 𝑚, isometrically embedded in ℝ ν for some ν ∈ ℕ. In the context where the domain is a smooth manifold, the suitable adaptation of the definition of the class ℛ is the following. We define the class ℛ 𝑖 (ℳ; 𝒩) as the set of maps 𝑢 : ℳ → 𝒩 which are smooth on ℳ \𝑇, where 𝑇 is a finite union of 𝑖-dimensional submanifolds of ℳ, and such that for every 𝑗 ∈ ℕ * and 𝑥 ∈ ℳ \ 𝑇,
|𝐷 𝑗 𝑢(𝑥)| ≤ 𝐶 / dist (𝑥, 𝑇) 𝑗
for some constant 𝐶 > 0 depending on 𝑢 and 𝑗.
Our next result is the following counterpart of Theorem 1.2 when the domain is a smooth manifold.
Theorem 6.4. If 𝑠𝑝 < 𝑚, then the class ℛ 𝑚-[𝑠𝑝]-1 (ℳ; 𝒩) is dense in 𝑊 𝑠,𝑝 (ℳ; 𝒩).
We first prove Theorem 6.4 when ℳ has no boundary. This allows us to rely on the nearest point projection onto ℳ. In the end, we shall briefly explain how to deduce the case with boundary from the case without boundary.
Hence, we first assume that ℳ has no boundary. Then, Theorem 6.4 can be deduced from Theorem 6.3, by extending the function we want to approximate on a tubular neighborhood of ℳ and using a slicing argument. The key observation is that, if 𝜄 > 0 is the radius of a tubular neighborhood of ℳ, then for every 𝑢 ∈ 𝑊 𝑠,𝑝 (ℳ; 𝒩), the map 𝑣 = 𝑢 • Π belongs to 𝑊 𝑠,𝑝 (ℳ + 𝐵 ν 𝜄/2 ; 𝒩). Indeed, for any summable function 𝑤 : ℳ → [0, +∞], we deduce from the coarea formula that
∫ ℳ+𝐵 ν 𝜄/2 𝑤 • Π ≤ 𝐶 1 ∫ ℳ ∫ Π -1 (𝑥)∩(ℳ+𝐵 ν 𝜄/2 ) 𝑤(Π(𝑦)) dℋ ν-𝑚 (𝑦) d𝑥 ≤ 𝐶 2 𝜄 ν-𝑚 ∫ ℳ 𝑤 < +∞.
Conclusion then follows from the theory of Fuglede maps presented in Section 3 (valid also for maps between manifolds, see [Generic topological screening and approximation of Sobolev maps]).
To implement this strategy of extension and slicing, we need the following transversality result.
Lemma 6.5. Let 𝛴 ⊂ ℝ ν be an ℓ -dimensional affine subspace. For almost every 𝑎 ∈ ℝ ν, the set ℳ ∩ (𝛴 + 𝑎) is a smooth submanifold of ℳ of dimension 𝑚 -ν + ℓ -or the empty set if ℓ < ν -𝑚. Moreover, if ℳ ∩ (𝛴 + 𝑎) ≠ ∅, then for every 𝑥 ∈ ℳ and every 𝑎 as above, we have dist (𝑥, ℳ ∩ (𝛴 + 𝑎)) ≤ 𝐶 dist (𝑥, 𝛴 + 𝑎), for some constant 𝐶 > 0 depending on ℳ, 𝛴, and 𝑎.
Taking Lemma 6.5 for granted, we prove Theorem 6.4.
Proof of Theorem 6.4 when ℳ has no boundary. Let 𝑢 ∈ 𝑊 𝑠,𝑝 (ℳ; 𝒩). Let 𝜄 > 0 be the radius of a tubular neighborhood of ℳ, and let Π : ℳ + 𝐵 ν 𝜄 → ℳ be the nearest point projection. We define 𝛺 = ℳ + 𝐵 ν 𝜄/2 , which is a smooth bounded open subset of ℝ ν. As explained above, the map 𝑣 = 𝑢 • Π belongs to 𝑊 𝑠,𝑝 (𝛺; 𝒩). Therefore, Theorem 6.3 ensures the existence of a sequence (𝑣 𝑛 ) 𝑛∈ℕ of maps in ℛ ν-[𝑠𝑝]-1 (𝛺; 𝒩) converging to 𝑣 in 𝑊 𝑠,𝑝 (𝛺) as 𝑛 → +∞. Invoking Lemma 6.5, we deduce that for almost every 𝑎 ∈ 𝐵 ν 𝜄/2 , the map 𝑢 𝑛,𝑎 = 𝜏 𝑎 (𝑣 𝑛 ) |ℳ : ℳ → 𝒩 belongs to the class ℛ 𝑚-[𝑠𝑝]-1 (ℳ; 𝒩). Here, we recall that the translation 𝜏 𝑎 (𝑣 𝑛 ) is defined by 𝜏 𝑎 (𝑣 𝑛 )(𝑥) = 𝑣 𝑛 (𝑥 + 𝑎). Indeed, the first part of Lemma 6.5 ensures that the singular set of 𝑢 𝑛,𝑎 is as in the definition of the class ℛ 𝑚-[𝑠𝑝]-1 . On the other hand, the distance estimate in Lemma 6.5 implies that the estimates on the derivatives of 𝑢 𝑛,𝑎 are satisfied.
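The dimension count implicit in this step is the following; here we use that, by the definition of the class ℛ ν-[𝑠𝑝]-1 (𝛺; 𝒩) to which the maps 𝑣 𝑛 belong (as we assume, their singular sets are contained in finitely many (ν - [𝑠𝑝] - 1)-dimensional planes), Lemma 6.5 applies with ℓ = ν - [𝑠𝑝] - 1:
\[
\dim \bigl(\mathcal{M} \cap (\Sigma + a)\bigr) = m - \nu + \ell = m - \nu + (\nu - [sp] - 1) = m - [sp] - 1.
\]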
Moreover, using a slicing argument, we find that for almost every 𝑎 ∈ 𝐵 ν 𝜄/2 , up to extraction of a subsequence, (𝑢 𝑛,𝑎 ) 𝑛∈ℕ converges in 𝑊 𝑠,𝑝 (ℳ) to the map 𝜏 𝑎 (𝑣) |ℳ . This can be seen, for instance, using the theory of Fuglede maps presented in Section 3. Indeed, consider a summable map 𝑤 : 𝛺 → [0, +∞], and let 𝑖 : ℳ → 𝛺 be the inclusion map. Observe that 𝜏 𝑎 (𝑣) |ℳ = 𝑣 • (𝑖 + 𝑎). We estimate
∫ 𝐵 ν 𝜄/2 ∫ ℳ 𝑤(𝑖(𝑥) + 𝑎) d𝑥d𝑎 = ∫ ℳ ∫ 𝐵 ν 𝜄/2 (𝑥) 𝑤(𝑎) d𝑎d𝑥 ≤ |ℳ |∥𝑤∥ 𝐿 1 (𝛺) < +∞.
Therefore, for almost every 𝑎 ∈ 𝐵 ν 𝜄/2 , 𝑤 • (𝑖 + 𝑎) is summable on ℳ. If we now choose 𝑤 to be a detector for 𝑊 𝑠,𝑝 convergence, then, up to a subsequence independent of 𝑎, we have 𝑢 𝑛,𝑎 = 𝑣 𝑛 • (𝑖 + 𝑎) → 𝑣 • (𝑖 + 𝑎) = 𝜏 𝑎 (𝑣) |ℳ in 𝑊 𝑠,𝑝 (ℳ) as 𝑛 → +∞.
On the other hand, by the continuity of translations in 𝑊 𝑠,𝑝 , we know that 𝜏 𝑎 (𝑣) |ℳ → 𝑣 |ℳ = 𝑢 in 𝑊 𝑠,𝑝 (ℳ) as 𝑎 → 0 (more precisely, we should rely on an argument in the spirit of [Generic topological screening and approximation of Sobolev maps, Proposition 2.4], since there is again a slicing involved here).
We conclude the proof by invoking a diagonal argument: choosing a suitable sequence (𝑎 𝑛 ) 𝑛∈ℕ in 𝐵 ν 𝜄/2 such that 𝑎 𝑛 → 0, the maps 𝑢 𝑛 = 𝑢 𝑛,𝑎 𝑛 belong to ℛ 𝑚-[𝑠𝑝]-1 (ℳ; 𝒩) and converge to 𝑢 in 𝑊 𝑠,𝑝 (ℳ) as 𝑛 → +∞.
□
We now prove Lemma 6.5.
Proof of Lemma 6.5. Define Ψ : ℳ × 𝛴 → ℝ ν by
Ψ(𝑥, 𝑧) = 𝑥 -𝑧.
The map Ψ is a smooth map between smooth manifolds. Therefore, Sard's lemma ensures that for almost every 𝑎 ∈ ℝ ν, the linear map 𝐷Ψ(𝑥, 𝑧) : 𝑇 𝑥 ℳ × 𝑇 𝑧 𝛴 → ℝ ν is surjective for every (𝑥, 𝑧) ∈ Ψ -1 ({𝑎}). For such 𝑎, we compute
ℝ ν = 𝐷Ψ(𝑥, 𝑧)[𝑇 𝑥 ℳ × 𝑇 𝑧 𝛴] = 𝑇 𝑥 ℳ + 𝑇 𝑧 𝛴.
Moreover we observe that (𝑥, 𝑧) ∈ Ψ -1 ({𝑎}) if and only if 𝑥 -𝑧 = 𝑎. This shows that ℝ ν = 𝑇 𝑥 ℳ + 𝑇 𝑧 𝛴 for every 𝑥 = 𝑧 + 𝑎 ∈ 𝛴 + 𝑎.
Otherwise stated, for almost every 𝑎 ∈ ℝ ν, ℳ and 𝛴 + 𝑎 are transversal, which implies that ℳ ∩ (𝛴 + 𝑎) is a smooth submanifold of ℳ of dimension 𝑚 -ν + ℓ ; see e.g. [Warner, Foundations of differentiable manifolds and Lie groups, Theorem 1.39]. This concludes the proof of the first part of the lemma.
We now turn to the distance estimate. Without loss of generality, we may restrict ourselves to prove the estimate when 𝑎 = 0. Let 𝑦 ∈ ℳ ∩ 𝛴. Since ℳ and 𝛴 intersect transversely, after a suitable rotation followed by a translation -which do not modify the distances -we may assume that 𝑦 = 0, and that there exist 𝛿 > 0 and ℎ > 0 such that 𝛴 = {0} ν-ℓ ×ℝ ℓ and ℳ ∩(𝐵 𝑚 𝛿 ×(-ℎ, ℎ) ν-𝑚 ) is the graph of a smooth map 𝜙 : 𝐵 𝑚 𝛿 → (-ℎ, ℎ) ν-𝑚 . Denote by 𝜋 2 : ℝ ν → ℝ ℓ the projection onto the ℓ last variables, which corresponds to the orthogonal projection onto 𝛴, and by 𝜋 1 : ℝ ν → ℝ 𝑚 the projection onto the 𝑚 first variables. We observe that, for 𝑥 = (𝜋 1 (𝑥), 𝜙(𝜋
1 (𝑥))) ∈ ℳ ∩ (𝐵 𝑚 𝛿 × (-ℎ, ℎ) ν-𝑚 ), we have dist (𝑥, 𝛴) = |𝑥 -(0, 𝜋 2 (𝑥))|, while dist (𝑥, ℳ ∩ 𝛴) ≤ |(𝜋 1 (𝑥), 𝜙(𝜋 1 (𝑥))) -(𝜋 1 ((0, 𝜋 2 (𝑥))), 𝜙(𝜋 1 ((0, 𝜋 2 (𝑥)))))| ≤ (1 + |𝜙| 𝒞 0,1 ) dist (𝑥, 𝛴).
We conclude by using a finite covering argument. Indeed, since ℳ ∩ 𝛴 is compact, we may cover it by a finite number of cylindrical domains as above. We obtain a neighborhood 𝑈 𝜀 = ℳ ∩ 𝛴 + 𝐵 ν 𝜀 of radius 𝜀 > 0 such that, for every 𝑥 ∈ 𝑈 𝜀 , we have dist (𝑥, ℳ ∩ 𝛴) ≤ 𝐶 1 dist (𝑥, 𝛴).
On the other hand, for points 𝑥 ∈ ℳ with 𝑥 ∉ 𝑈 𝜀 , we have dist (𝑥, 𝛴) ≥ 𝐶 2 > 0, while dist (𝑥, ℳ ∩ 𝛴) ≤ 𝐶 3 since ℳ ∩ 𝛴 ≠ ∅. This completes the proof of the lemma.
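The two cases combine into the stated estimate by an elementary step which we record for clarity: for 𝑥 ∈ ℳ with 𝑥 ∉ 𝑈 𝜀 ,
\[
\operatorname{dist}(x, \mathcal{M} \cap \Sigma) \le C_3 = \frac{C_3}{C_2}\, C_2 \le \frac{C_3}{C_2}\, \operatorname{dist}(x, \Sigma),
\]
so that the conclusion holds on all of ℳ with the constant 𝐶 = max (𝐶 1 , 𝐶 3 /𝐶 2 ).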
□
Finally, we give the proof of Theorem 6.4 in the case where ℳ has non-empty boundary.
Proof of Theorem 6.4 when ℳ has non-empty boundary. The key idea is to view ℳ, or more precisely any compact subset of the interior of ℳ, as a subset of a smooth manifold without boundary, embedded in ℝ ν × ℝ, identifying ℝ ν with ℝ ν × {0}. For this, we rely on [10, Lemma 3.4], which is a consequence of the collar neighborhood theorem.
Let 𝐾 be any compact subset in the relative interior of ℳ. From [10, Lemma 3.4], we deduce that there exists a smooth compact submanifold M of ℝ ν ×ℝ, without boundary, such that 𝐾 × {0} ⊂ M and 𝜋( M) ⊂ ℳ,
where 𝜋 : ℝ ν × ℝ → ℝ ν is the projection onto the first ν variables.
Let 𝑢 ∈ 𝑊 𝑠,𝑝 (ℳ; 𝒩). The map 𝑣 = 𝑢 • 𝜋 belongs to 𝑊 𝑠,𝑝 ( M, 𝒩). Hence, by Theorem 6.4 for manifolds without boundary, there exists a sequence (𝑣 𝐾 𝑛 ) 𝑛∈ℕ of maps in ℛ 𝑚-[𝑠𝑝]-1 ( M; 𝒩) such that 𝑣 𝐾 𝑛 → 𝑣 in 𝑊 𝑠,𝑝 ( M). In particular, (𝑣 𝐾 𝑛 ) |𝐾 → 𝑢 |𝐾 in 𝑊 𝑠,𝑝 (𝐾). Now, we observe that, for every 𝜀 > 0 sufficiently small, if we take 𝐾 = 𝐾 𝜀 such that ℳ \ 𝐾 𝜀 is contained in a uniform neighborhood of radius 𝜀 of 𝜕ℳ, then 𝑢 |𝐾 𝜀 may be dilated to a map 𝑢 𝜀 ∈ 𝑊 𝑠,𝑝 (ℳ; 𝒩). Moreover, if we denote by 𝑢 𝑛,𝜀 the corresponding dilations of the maps (𝑣 𝐾 𝜀 𝑛 ) |𝐾 𝜀 , we have both 𝑢 𝜀 → 𝑢 as 𝜀 → 0 and 𝑢 𝑛,𝜀 → 𝑢 𝜀 as 𝑛 → +∞.
We conclude using a diagonal argument.
□

7 Shrinking
This section is dedicated to the shrinking procedure. As we explained in Section 2, shrinking is actually a more involved version of a scaling argument, whose purpose is to modify a given map in order to obtain a better map whose energy is controlled. The main result of this section is the following proposition, counterpart of [9, Proposition 8.1] in the fractional setting, which provides the shrinking construction. We emphasize that, similar to thickening but unlike opening, the map Φ does not depend on the map 𝑢 ∈ 𝑊 𝑠,𝑝 it shall be composed with.
Proposition 7.1. Let ℓ ∈ {0, . . . , 𝑚 -1}, 𝜂 > 0, 0 < 𝜇 < 1 2 , 0 < 𝜏 < 1 2 , 𝒦 𝑚 be a cubication in ℝ 𝑚 of radius 𝜂, and 𝒯 ℓ * be the dual skeleton of 𝒦 ℓ . There exists a smooth map Φ : ℝ 𝑚 → ℝ 𝑚 such that (i) Φ is injective;
(ii) for every 𝜎 𝑚 ∈ 𝒦 𝑚 , Φ(𝜎 𝑚 ) ⊂ 𝜎 𝑚 ;
(iii) Supp Φ ⊂ 𝑇 ℓ * + 𝑄 𝑚 2𝜇𝜂 and Φ(𝑇 ℓ * + 𝑄 𝑚 𝜏𝜇𝜂 ) ⊃ 𝑇 ℓ * + 𝑄 𝑚 𝜇𝜂 .
If in addition ℓ + 1 > 𝑠𝑝, then for every 𝑢 ∈ 𝑊 𝑠,𝑝 (𝐾 𝑚 ; ℝ 𝜈 ) and every 𝑣 ∈ 𝑊 𝑠,𝑝 (𝐾 𝑚 ; ℝ 𝜈 ) such that 𝑢 = 𝑣 on the complement of 𝑇 ℓ * + 𝑄 𝑚 𝜇𝜂 , we have 𝑢 • Φ ∈ 𝑊 𝑠,𝑝 (𝐾 𝑚 ; ℝ 𝜈 ), and moreover, the following estimates hold: for some constant 𝐶 ′ > 0 depending on 𝑚, 𝑠, and 𝑝. Estimates (a) to (d) will be used in the proof of Theorem 1.1. This section is organized as follows. In a first time, we explain the construction of the building blocks for shrinking, and we prove their geometric properties. Then we state the analytic estimates satisfied by the composition of a 𝑊 𝑠,𝑝 map 𝑢 with those building blocks. Finally, we explain how to suitably combine the building blocks in order to obtain the global shrinking construction, along with the required properties.
(a) if 0 < 𝑠 < 1, then (𝜇𝜂) 𝑠 |𝑢 • Φ -𝑣| 𝑊 𝑠,𝑝 (𝐾 𝑚 ) ≤ 𝐶 (𝜇𝜂) 𝑠 |𝑢| 𝑊 𝑠,𝑝 (𝐾 𝑚 ∩(𝑇 ℓ * +𝑄 𝑚 2𝜇𝜂 )\(𝑇 ℓ * +𝑄 𝑚 𝜇𝜂 )) + ∥𝑢∥ 𝐿 𝑝 (𝐾 𝑚 ∩(𝑇 ℓ * +𝑄 𝑚 2𝜇𝜂 )\(𝑇 ℓ * +𝑄 𝑚 𝜇𝜂 )) + 𝐶𝜏 ℓ +1-𝑠𝑝 𝑝 (𝜇𝜂) 𝑠 |𝑢| 𝑊 𝑠,
We start with the construction of the building blocks for shrinking, which is very similar to thickening. Therefore, in this section, we shall follow an analogous path to the one in Section 5. We start by introducing some additional notation, similar to Sections 3 and 5. Let 0 < 𝜇 < 𝜇 < 𝜇 < 1 and 0 < 𝜏 < 𝜇/𝜇 be fixed. We set
𝐵 1 = 𝐵 𝑑 𝜏𝜇𝜂 × 𝑄 𝑚-𝑑 (1-𝜇)𝜂 , 𝑄 2 = 𝑄 𝑑 𝜇𝜂 × 𝑄 𝑚-𝑑 (1-𝜇)𝜂 , 𝑄 3 = 𝑄 𝑑 𝜇𝜂 × 𝑄 𝑚-𝑑 (1-𝜇)𝜂 .
Note that 𝐵 1 ⊂ 𝑄 2 ⊂ 𝑄 3 . The rectangle 𝑄 3 contains the geometric support of the building block Φ, that is, Φ = id outside of 𝑄 3 . The rectangle 𝑄 2 is shrinked into the cylinder 𝐵 1 : we have Φ(𝐵 1 ) ⊃ 𝑄 2 . As usual, the region in between serves as a transition region.
As we did for thickening, we split the construction of the building block Φ into two parts. First, we deal with the geometric properties that need to be satisfied by Φ independently of the map 𝑢, and then, we move to the Sobolev estimates satisfied by 𝑢 • Φ. We take Φ to be exactly the map given by [9, Proposition 8.3], and we therefore only recall briefly how this map is built, referring the reader to [9] for the details. Once again, the main change in our approach is that we establish the Sobolev estimates first for the building blocks, and then we glue them together in order to obtain the estimates given by Proposition 7.1.
Analogously to Section 5, we define 𝜁 : ℝ 𝑚 → ℝ by
𝜁(𝑥) = ( |𝑥 ′ | 2 + (𝜇𝜂) 2 𝜃(𝑥 ′′ /(𝜇𝜂)) + (𝜇𝜂) 2 𝜀𝜏 2 ) 1/2 (7.1)
for every 𝑥 = (𝑥 ′ , 𝑥 ′′ ) ∈ ℝ 𝑑 × ℝ 𝑚-𝑑 . Here, 𝜃 : ℝ 𝑚-𝑑 → ℝ is defined similarly as in Section 5. We choose 1 < 𝑞 < +∞ sufficiently large so that there exist 0 < 𝑟 1 < 𝑟 2 satisfying
𝑄 𝑚-𝑑 1-𝜇 𝜇 ⊂ {𝑥 ′′ ∈ ℝ 𝑚-𝑑 : |𝑥 ′′ | 𝑞 < 𝑟 1 } ⊂ {𝑥 ′′ ∈ ℝ 𝑚-𝑑 : |𝑥 ′′ | 𝑞 < 𝑟 2 } ⊂ 𝑄 𝑚-𝑑 1-𝜇 𝜇 .
Then, we pick a nondecreasing smooth map θ :
ℝ + → [0, 1] such that θ(𝑟) = 0 if 0 ≤ 𝑟 ≤ 𝑟 1 and θ(𝑟) = 1 if 𝑟 ≥ 𝑟 2 .
= 𝐵 𝑑 𝜇𝜂 × 𝑄 𝑚-𝑑 (1-𝜇)𝜂 , 𝐵 3 = 𝐵 𝑑 𝜇𝜂 × 𝑄 𝑚-𝑑 (1-𝜇)𝜂 .
It will then suffice to compose Ψ with a suitable diffeomorphism Θ : ℝ 𝑚 → ℝ 𝑚 dilating 𝐵 2 to a set containing 𝑄 2 in order to obtain the desired map Φ.
We let 𝜑 : (0, +∞) → [1, +∞) be a smooth function such that (a) for 0 < 𝑟 ≤ 𝜏
√ 1 + 𝜀, 𝜑(𝑟) = 𝜇/𝜇 𝑟 √ 1 + 𝜀 1 + 𝑏 ln 1 𝑟 ; (b) for 𝑟 ≥ 1, 𝜑(𝑟) = 1;
(c) the function 𝑟 ∈ (0, +∞) ↦ → 𝑟𝜑(𝑟) is increasing. This is possible provided that we choose 𝜀 such that
(𝜇/𝜇) √ 1 + 𝜀 < 1
and then 𝑏 > 0 such that
(𝜇/𝜇) √ 1 + 𝜀 1 + 𝑏 ln 1 (𝜇/𝜇) √ 1+𝜀 < 1.
Then, we define 𝜆 :
ℝ 𝑚 → [1, +∞) by 𝜆(𝑥) = 𝜑 𝜁(𝑥) 𝜇𝜂 ,
and finally Ψ(𝑥 ′ , 𝑥 ′′ ) = (𝜆(𝑥 ′ , 𝑥 ′′ )𝑥 ′ , 𝑥 ′′ ).
The injectivity of Ψ relies on assumption (c) on 𝜑. The fact that Supp Ψ ⊂ 𝐵 3 uses assumption (b) on 𝜑, observing that 𝜁(𝑥) ≥ 𝜇𝜂 whenever 𝑥 ∈ ℝ 𝑚 \ 𝐵 3 , and hence 𝜆(𝑥) = 1. To prove (iii), note that if 𝑥 = (𝑥 ′ , 𝑥 ′′ ) ∈ 𝐵 1 and 𝑡 ≥ 0, we have
Ψ(𝑡𝑥 ′ , 𝑥 ′′ ) = 𝑡𝜑 𝑡 2 𝑥 ′ 𝜇𝜂 2 + 𝜀𝜏 2 𝑥 ′ , 𝑥 ′′ ,
where we used the fact that 𝜃 vanishes inside of 𝑄 𝑚-𝑑 1-𝜇 𝜇
. For 𝑡 = 0, the factor in front of 𝑥 ′ vanishes, while for 𝑡 = 𝜏, it is larger than
𝜇𝜂 |𝑥 ′ | ≥ 1.
We conclude by invoking the intermediate value theorem. The proof of (iv) amounts to estimate |𝐷 𝑗 𝜆| using the Faà di Bruno formula, and then conclude using Leibniz's rule. We obtain the second estimate from the first one by noting that 𝜁 ≥ (𝜇𝜂) √ 𝜀𝜏. The proof of (v) again involves explicitly computing jac Ψ as the determinant of a perturbation of a linear map, and then estimating the obtained expression. The second estimate relies on the fact that if 𝑥 = (𝑥 ′ , 𝑥 ′′ ) ∈ 𝐵 1 , then |𝑥 ′ | ≤ (𝜇𝜂)𝜏 and 𝜃 𝑥 ′′ 𝜇𝜂 = 0, whence 𝜁(𝑥) ≤ (𝜇𝜂) √ 1 + 𝜀𝜏. We refer the reader to [9, Lemma 8.5] for the details.
We then let Θ : ℝ 𝑚 → ℝ 𝑚 be a smooth diffeomorphism also of the form Θ(𝑥) = ( λ(𝑥)𝑥 ′ , 𝑥 ′′ ), with λ : ℝ 𝑚 → [1, +∞), such that Θ is supported in 𝑄 3 , maps 𝐵 2 on a set containing 𝑄 2 , and satisfies the estimates (𝜇𝜂) 𝑗-1 |𝐷 𝑗 Θ| ≤ 𝐶 4 and 0 < 𝐶 5 ≤ jac Θ ≤ 𝐶 6 on ℝ 𝑚 ; see [9, Lemma 8.4]. Using the composition formula for the Jacobian and the Faà di Bruno formula, we conclude, as for thickening, that Φ = Θ • Ψ is the desired map. □
We now turn to the Sobolev estimates satisfied by 𝑢 • Φ. for some constant 𝐶 > 0 depending on 𝑠, 𝑚, 𝑝, 𝑐, 𝑐 ′ , 𝜇/𝜇, and 𝜇/𝜇.
We encounter again the assumption that balls centered at a point of 𝜔 significantly intersect 𝜔. We call the attention of the reader to the fact that, in the proof of Proposition 7.1, Proposition 7.3 will be applied with 𝜔 being a domain more complicated than just a rectangle. This contrasts with the situation encountered in Sections 3 and 5.
In the proof of Proposition 7.3, we need the counterpart of Lemma 5.4 for the map 𝜁 used for shrinking. The proof is the same as the proof of Lemma 5.4, since both constructions are identical up to an additive constant under the square root, and is therefore omitted. Lemma 7.4. For every 𝑥, 𝑦 ∈ ℝ 𝑚 , there exists a Lipschitz path 𝛾 :
[0, 1] → ℝ 𝑚 from 𝑥 to 𝑦 such that |𝛾| 𝒞 0,1 ([0,1]) ≤ 𝐶|𝑥 -𝑦|
for some constant 𝐶 > 0 depending only on 𝑚, and such that 𝜁 ≥ min(𝜁(𝑥), 𝜁(𝑦)) along 𝛾, where 𝜁 is the map defined in (7.1).
We are now ready to prove Proposition 7.3.
Proof of Proposition 7.3. As for thickening, the integer order estimate when 𝑠 ≥ 1 is proved exactly as [9, Corollary 8.2], but is presented here as a prelude for the more involved fractional order case. By the Faà di Bruno formula, we estimate
|𝐷 𝑗 (𝑢 • Φ)(𝑥)| 𝑝 ≤ 𝐶 7 𝑗 𝑖=1 1≤𝑡 1 ≤•••≤𝑡 𝑖 𝑡 1 +•••+𝑡 𝑖 =𝑗 |𝐷 𝑖 𝑢(Φ(𝑥))| 𝑝 |𝐷 𝑡 1 Φ(𝑥)| 𝑝 • • • |𝐷 𝑡 𝑖 Φ(𝑥)| 𝑝
for every 𝑗 ∈ {1, . . . , 𝑘} and 𝑥 ∈ Φ -1 (𝜔). Let 0 < 𝛽 < 𝑑. Using the estimates on the derivatives and the Jacobian of Φ, we find |𝐷 𝑡 𝑙 Φ| ≤ 𝐶 8 (jac Φ)
𝑡 𝑙 𝛽
(𝜇𝜂) 𝑡 𝑙 -1 , and therefore
|𝐷 𝑗 (𝑢 • Φ)(𝑥)| 𝑝 ≤ 𝐶 9 𝑗 𝑖=1 1≤𝑡 1 ≤•••≤𝑡 𝑖 𝑡 1 +•••+𝑡 𝑖 =𝑗 |𝐷 𝑖 𝑢(Φ(𝑥))| 𝑝 (jac Φ(𝑥)) 𝑡 1 𝑝 𝛽 (𝜇𝜂) (𝑡 1 -1)𝑝 • • • (jac Φ(𝑥)) 𝑡 𝑖 𝑝 𝛽 (𝜇𝜂) (𝑡 𝑖 -1)𝑝 ≤ 𝐶 10 𝑗 𝑖=1 |𝐷 𝑖 𝑢(Φ(𝑥))| 𝑝 (jac Φ(𝑥)) 𝑗𝑝 𝛽 (𝜇𝜂) (𝑗-𝑖)𝑝
.
Since 𝑗𝑝 ≤ 𝑠𝑝 < 𝑑, we may choose 𝛽 = 𝑗𝑝. Hence,
|𝐷 𝑗 (𝑢 • Φ)(𝑥)| 𝑝 ≤ 𝐶 10 𝑗 𝑖=1 |𝐷 𝑖 𝑢(Φ(𝑥))| 𝑝 jac Φ(𝑥) (𝜇𝜂) (𝑗-𝑖)𝑝
.
Since Φ is injective, the change of variable theorem ensures that
∫ Φ -1 (𝜔\𝑄 2 ) (𝜇𝜂) 𝑗𝑝 |𝐷 𝑗 (𝑢 • Φ)| 𝑝 ≤ ∫ Φ -1 (𝜔\𝑄 2 ) 𝐶 10 𝑗 𝑖=1 (𝜇𝜂) 𝑖𝑝 |𝐷 𝑖 𝑢(Φ(𝑥))| 𝑝 jac Φ(𝑥) d𝑥 ≤ 𝐶 10 𝑗 𝑖=1 ∫ 𝜔\𝑄 2 (𝜇𝜂) 𝑖𝑝 |𝐷 𝑖 𝑢| 𝑝 .
Combining inclusion (iii) and estimates (iv) and (v) in Proposition 7.2, we find
|𝐷 𝑗 (𝑢 • Φ)(𝑥)| 𝑝 ≤ 𝐶 11 𝜏 𝑑-𝑗𝑝 𝑗 𝑖=1 |𝐷 𝑖 𝑢(Φ(𝑥))| 𝑝 jac Φ(𝑥) (𝜇𝜂) (𝑗-𝑖)𝑝
for every 𝑥 ∈ Φ -1 (𝑄 2 ) ⊂ 𝐵 1 . Using again the change of variable theorem, we deduce that
∫ Φ -1 (𝑄 2 ) (𝜇𝜂) 𝑗𝑝 |𝐷 𝑗 (𝑢 • Φ)| 𝑝 ≤ ∫ Φ -1 (𝑄 2 ) 𝐶 11 𝜏 𝑑-𝑗𝑝 𝑗 𝑖=1 (𝜇𝜂) 𝑖𝑝 |𝐷 𝑖 𝑢(Φ(𝑥))| 𝑝 jac Φ(𝑥) d𝑥 ≤ 𝐶 11 𝜏 𝑑-𝑗𝑝 𝑗 𝑖=1 ∫ 𝑄 2 (𝜇𝜂) 𝑖𝑝 |𝐷 𝑖 𝑢| 𝑝 .
We conclude by additivity of the integral, combining the estimates on Φ -1 (𝜔 \ 𝑄 2 ) and on Φ -1 (𝑄 2 ). The proof of the estimate at order 0 relies on the same decomposition and change of variable, noting that in particular jac Φ ≥ 𝐶 12 to handle the region Φ -1 (𝜔 \ 𝑄 2 ).
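Explicitly, the order 0 computation just alluded to can be sketched as follows; we use that jac Φ is bounded from below by a positive constant on Φ -1 (𝜔 \ 𝑄 2 ), that jac Φ ≥ 𝐶 ′ /𝜏 𝑑 on 𝐵 1 ⊃ Φ -1 (𝑄 2 ) by Proposition 7.2, and the injectivity of Φ:
\[
\int_{\Phi^{-1}(\omega)} |u \circ \Phi|^p
= \int_{\Phi^{-1}(\omega \setminus Q_2)} |u \circ \Phi|^p + \int_{\Phi^{-1}(Q_2)} |u \circ \Phi|^p
\le C \int_{\omega \setminus Q_2} |u|^p + C\, \tau^{d} \int_{Q_2} |u|^p,
\]
which yields the estimate at order 0 after taking 𝑝-th roots.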
We now move to the fractional estimate when 0 < 𝑠 < 1. Observe that, as in (5.4), we have
|Φ(𝑥) -Φ(𝑦)| |𝑥 -𝑦| ≤ 𝐶 13 𝜇𝜂 𝜁(𝑦)
for every 𝑥, 𝑦 ∈ 𝜔.
We start by splitting, in the spirit of the proof of Proposition 5.3,
∬ Φ -1 (𝜔)×Φ -1 (𝜔) 𝜁(𝑥)≤𝜁(𝑦) |𝑢 • Φ(𝑥) -𝑢 • Φ(𝑦)| 𝑝 |𝑥 -𝑦| 𝑚+𝑠𝑝 d𝑥d𝑦 = 𝐼 1 + 𝐼 2 + 𝐼 3 + 𝐼 4 , (7.4)
where we have set
𝐼 1 = ∬ Φ -1 (𝜔\𝑄 2 )×Φ -1 (𝜔\𝑄 2 ) 𝜁(𝑥)≤𝜁(𝑦) |𝑢•Φ(𝑥)-𝑢•Φ(𝑦)| 𝑝 |𝑥-𝑦| 𝑚+𝑠𝑝 d𝑥d𝑦, 𝐼 2 = ∬ Φ -1 (𝑄 2 )×Φ -1 (𝑄 2 ) 𝜁(𝑥)≤𝜁(𝑦) |𝑢•Φ(𝑥)-𝑢•Φ(𝑦)| 𝑝 |𝑥-𝑦| 𝑚+𝑠𝑝 d𝑥d𝑦, 𝐼 3 = ∬ Φ -1 (𝜔\𝑄 2 )×Φ -1 (𝑄 2 ) 𝜁(𝑥)≤𝜁(𝑦) |𝑢•Φ(𝑥)-𝑢•Φ(𝑦)| 𝑝 |𝑥-𝑦| 𝑚+𝑠𝑝 d𝑥d𝑦, 𝐼 4 = ∬ Φ -1 (𝑄 2 )×Φ -1 (𝜔\𝑄 2 ) 𝜁(𝑥)≤𝜁(𝑦) |𝑢•Φ(𝑥)-𝑢•Φ(𝑦)| 𝑝 |𝑥-𝑦| 𝑚+𝑠𝑝 d𝑥d𝑦.
Estimating the right-hand side of (7.4) is similar to Step 2 in the case 0 < 𝑠 < 1 of Proposition 5.3. The novelty here is that we need to be more careful with the domains on which the estimates are performed. Indeed, in order to obtain (a), we need to estimate the right-hand side of (7.4) by a sum of terms that are either preceded by a suitable power of 𝜏, or involve only the energy of 𝑢 on 𝜔 \ 𝑄 2 . We begin with 𝐼 2 . We define
ℬ 𝑥,𝑦 = 𝐵 𝑚 |Φ(𝑥)-Φ(𝑦)| Φ(𝑥) + Φ(𝑦) 2 ∩ 𝑄 2 , so that 𝐼 2 ≤ 𝐶 14 ∫ Φ -1 (𝑄 2 ) ∫ Φ -1 (𝑄 2 ) ⨏ ℬ 𝑥,𝑦
∫ Φ -1 (𝑄 2 ) ∫ Φ -1 (𝑄 2 ) ⨏ ℬ 𝑥,𝑦 |𝑢 • Φ(𝑥) -𝑢(𝑧)| 𝑝 |𝑥 -𝑦| 𝑚+𝑠𝑝 d𝑧d𝑦d𝑥 ≤ 𝐶 16 ∫ Φ -1 (𝑄 2 ) ∫ 𝑄 2 ∫ 𝒴 𝑥,𝑧 |𝑢 • Φ(𝑥) -𝑢(𝑧)| 𝑝 |𝑥 -𝑦| 𝑚+𝑠𝑝 |Φ(𝑥) -𝑧| 𝑚 d𝑦d𝑧d𝑥, where 𝒴 𝑥,𝑧 = {𝑦 ∈ Φ -1 (𝑄 2 ): 𝑧 ∈ ℬ 𝑥,𝑦 } ⊂ {𝑦 ∈ ℝ 𝑚 : |Φ(𝑥) -𝑧| < 𝐶 17 𝜇𝜂 𝜁(𝑥) |𝑥 -𝑦|}. Therefore, ∫ Φ -1 (𝑄 2 ) ∫ 𝑄 2 ∫ 𝒴 𝑥,𝑧 |𝑢 • Φ(𝑥) -𝑢(𝑧)| 𝑝 |𝑥 -𝑦| 𝑚+𝑠𝑝 |Φ(𝑥) -𝑧| 𝑚 d𝑦d𝑧d𝑥 ≤ 𝐶 18 ∫ Φ -1 (𝑄 2 ) ∫ 𝑄 2 |𝑢 • Φ(𝑥) -𝑢(𝑧)| 𝑝 |Φ(𝑥) -𝑧| 𝑚+𝑠𝑝 (
Φ -1 (𝑄 2 ) ∫ 𝑄 2 |𝑢 • Φ(𝑥) -𝑢(𝑧)| 𝑝 |Φ(𝑥) -𝑧| 𝑚+𝑠𝑝 (𝜇𝜂) 𝑠𝑝 𝜁(𝑥) 𝑠𝑝 d𝑧d𝑥 ≤ 𝐶 20 𝜏 𝑑-𝑠𝑝 ∫ 𝑄 2 ∫ 𝑄 2 |𝑢(𝑥) -𝑢(𝑦)| 𝑝 |𝑥 -𝑦| 𝑚+𝑠𝑝 d𝑥d𝑦.
The three other terms are handled similarly, so we only point out the required changes. We define instead
ℬ 𝑥,𝑦 = 𝐵 𝑚 |Φ(𝑥)-Φ(𝑦)| Φ(𝑥) + Φ(𝑦) 2 ∩ (𝜔 \ 𝑄 2 ).
For 𝐼 3 , we split
𝐼 3 ≤ 𝐶 21 ∫ Φ -1 (𝜔\𝑄 2 ) ∫ Φ -1 (𝑄 2 ) ⨏ ℬ 𝑥,𝑦 |𝑢 • Φ(𝑥) -𝑢(𝑧)| 𝑝 |𝑥 -𝑦| 𝑚+𝑠𝑝 d𝑧d𝑦d𝑥 + ∫ Φ -1 (𝜔\𝑄 2 ) ∫ Φ -1 (𝑄 2 ) ⨏ ℬ 𝑥,𝑦 |𝑢 • Φ(𝑦) -𝑢(𝑧)| 𝑝 |𝑥 -𝑦| 𝑚+𝑠𝑝 d𝑧d𝑦d𝑥 .
Note that we still have |ℬ 𝑥,𝑦 | ≥ 𝐶 22 |Φ(𝑥) -Φ(𝑦)| 𝑚 , using this time the assumption on the volume of balls centered in 𝜔 \ 𝑄 2 . We then pursue as for the second term in the right-hand side of (7.4): we use Tonelli's theorem, and after that, we integrate with respect to 𝑦. Similar to (7.5), we deduce that
𝐼 3 ≤ 𝐶 23 ∫ Φ -1 (𝜔\𝑄 2 ) ∫ 𝜔\𝑄 2 |𝑢 • Φ(𝑥) -𝑢(𝑧)| 𝑝 |Φ(𝑥) -𝑧| 𝑚+𝑠𝑝 (𝜇𝜂) 𝑠𝑝 𝜁(𝑥) 𝑠𝑝 d𝑧d𝑥 ∫ Φ -1 (𝑄 2 ) ∫ 𝜔\𝑄 2 |𝑢 • Φ(𝑥) -𝑢(𝑧)| 𝑝 |Φ(𝑥) -𝑧| 𝑚+𝑠𝑝 (𝜇𝜂) 𝑠𝑝 𝜁(𝑥) 𝑠𝑝 d𝑧d𝑥 .
Invoking the change of variable theorem, using the first estimate on jac Φ with 𝛽 = 𝑠𝑝 for the first term and the second estimate on jac Φ for the second term, we conclude that
𝐼 3 ≤ ∫ 𝜔\𝑄 2 ∫ 𝜔\𝑄 2 |𝑢(𝑥) -𝑢(𝑦)| 𝑝 |𝑥 -𝑦| 𝑚+𝑠𝑝 d𝑥d𝑦 + 𝜏 𝑑-𝑠𝑝 ∫ 𝜔 ∫ 𝜔 |𝑢(𝑥) -𝑢(𝑦)| 𝑝 |𝑥 -𝑦| 𝑚+𝑠𝑝 d𝑥d𝑦 .
By the exact same reasoning,
𝐼 4 ≤ ∫ 𝜔\𝑄 2 ∫ 𝜔\𝑄 2 |𝑢(𝑥) -𝑢(𝑦)| 𝑝 |𝑥 -𝑦| 𝑚+𝑠𝑝 d𝑥d𝑦 + 𝜏 𝑑-𝑠𝑝 ∫ 𝜔 ∫ 𝜔 |𝑢(𝑥) -𝑢(𝑦)| 𝑝 |𝑥 -𝑦| 𝑚+𝑠𝑝 d𝑥d𝑦 , while 𝐼 1 ≤ 𝐶 24 ∫ 𝜔\𝑄 2 ∫ 𝜔\𝑄 2 |𝑢(𝑥) -𝑢(𝑦)| 𝑝 |𝑥 -𝑦| 𝑚+𝑠𝑝 d𝑥d𝑦.
Collecting the estimates for the right-hand side of (7.4), we arrive at estimate (a) of Proposition 7.3. We finish with the estimate for the Gagliardo seminorm in the case 𝑠 ≥ 1. Consider 𝑥, 𝑦 ∈ Φ -1 (𝜔) such that, without loss of generality, 𝜁(𝑥) ≤ 𝜁(𝑦). As usual, using the Faà di Bruno formula, the multilinearity of the differential and the estimates on the derivatives of Φ, we write
|𝐷 𝑗 (𝑢 • Φ)(𝑥) -𝐷 𝑗 (𝑢 • Φ)(𝑦)| ≤ 𝐶 25 𝑗 𝑖=1 |𝐷 𝑖 𝑢 • Φ(𝑥) -𝐷 𝑖 𝑢 • Φ(𝑦)| (𝜇𝜂) 𝑖 𝜁(𝑦) 𝑗 + 𝑗 𝑡=1 |𝐷 𝑖 𝑢 • Φ(𝑥)||𝐷 𝑡 Φ(𝑥) -𝐷 𝑡 Φ(𝑦)| (𝜇𝜂) 𝑖-1 𝜁(𝑥) 𝑗-𝑡 . (7.6)
For the second term in (7.6), we proceed once again by splitting the integral over 𝐵 𝑚 𝑟 (𝑥) and ℝ 𝑚 \ 𝐵 𝑚 𝑟 (𝑥) with 𝑟 = 𝜁(𝑥) to arrive at
∫ Φ -1 (𝜔) 𝜁(𝑥)≤𝜁(𝑦) |𝐷 𝑡 Φ(𝑥) -𝐷 𝑡 Φ(𝑦)| 𝑝 |𝑥 -𝑦| 𝑚+𝜎𝑝 d𝑦 ≤ 𝐶 26 (𝜇𝜂) 𝑝 𝜁(𝑥) (𝑡+𝜎)𝑝 . Hence, ∬ Φ -1 (𝜔)×Φ -1 (𝜔) 𝜁(𝑥)≤𝜁(𝑦) |𝐷 𝑖 𝑢 • Φ(𝑥)| 𝑝 |𝐷 𝑡 Φ(𝑥) -𝐷 𝑡 Φ(𝑦)| 𝑝 |𝑥 -𝑦| 𝑚+𝜎𝑝 (𝜇𝜂) (𝑖-1)𝑝 𝜁(𝑥) (𝑗-𝑡)𝑝 d𝑥d𝑦 ≤ 𝐶 27 ∫ Φ -1 (𝜔) |𝐷 𝑖 𝑢 • Φ(𝑥)| 𝑝 (𝜇𝜂) 𝑖𝑝 𝜁(𝑥) (𝑗+𝜎)𝑝 d𝑥. (7.7)
We then argue as for the integer order term. We split the integral in the right-hand side of (7.7) over the regions 𝜔 \ 𝑄 2 and 𝑄 2 . Owing to the change of variable theorem, using the first estimate on jac Φ with 𝛽 = (𝑗 + 𝜎)𝑝 over 𝜔 \ 𝑄 2 and the second estimate on jac Φ over 𝑄 2 , we obtain
∫ Φ -1 (𝜔) |𝐷 𝑖 𝑢 • Φ(𝑥)| 𝑝 1 𝜁(𝑥) (𝑗+𝜎)𝑝 d𝑥 ≤ 𝐶 28 (𝜇𝜂) 𝑖𝑝-(𝑗+𝜎)𝑝 ∫ 𝜔\𝑄 2 |𝐷 𝑖 𝑢| 𝑝 + 𝐶 29 𝜏 𝑑-(𝑗+𝜎)𝑝 (𝜇𝜂) 𝑖𝑝-(𝑗+𝜎)𝑝 ∫ 𝑄 2 |𝐷 𝑖 𝑢| 𝑝 .
As for thickening, the first term in (7.6) is handled exactly as in the case 0 < 𝑠 < 1, taking into account the presence of the factor (𝜇𝜂) 𝑖𝑝 𝜁(𝑦) 𝑗𝑝 : we use the same splitting as in (7.4), and then the usual averaging argument. Doing so, we deduce that the first term in (7.6) is bounded from above by a constant multiple of
∫ Φ -1 (𝜔\𝑄 2 ) ∫ 𝜔\𝑄 2 |𝐷 𝑖 𝑢 • Φ(𝑥) -𝐷 𝑖 𝑢(𝑧)| 𝑝 |Φ(𝑥) -𝑧| 𝑚+𝜎𝑝 (𝜇𝜂) (𝑖+𝜎)𝑝 𝜁(𝑥) (𝑗+𝜎)𝑝 d𝑧d𝑥 + ∫ Φ -1 (𝑄 2 ) ∫ 𝜔 |𝐷 𝑖 𝑢 • Φ(𝑥) -𝐷 𝑖 𝑢(𝑧)| 𝑝 |Φ(𝑥) -𝑧| 𝑚+𝜎𝑝 (𝜇𝜂) (𝑖+𝜎)𝑝 𝜁(𝑥) (𝑗+𝜎)𝑝 d𝑧d𝑥. (7.8)
An additional use of the change of variable theorem shows that (7.8) is estimated, up to a constant factor, by
(𝜇𝜂) (𝑖-𝑗)𝑝 ∫ 𝜔\𝑄 2 ∫ 𝜔\𝑄 2 |𝐷 𝑖 𝑢(𝑥) -𝐷 𝑖 𝑢(𝑦)| 𝑝 |𝑥 -𝑦| 𝑚+𝜎𝑝 d𝑥d𝑦 + 𝜏 𝑑-(𝑗+𝜎)𝑝 (𝜇𝜂) (𝑖-𝑗)𝑝 ∫ 𝜔 ∫ 𝜔 |𝐷 𝑖 𝑢(𝑥) -𝐷 𝑖 𝑢(𝑦)| 𝑝 |𝑥 -𝑦| 𝑚+𝜎𝑝 d𝑧d𝑥.
Gathering the estimates for both terms in (7.6), we obtain the desired conclusion, hence finishing the proof of Proposition 7.3.
□
Now that we have at our disposal the building blocks for the shrinking procedure, we are ready to prove Proposition 7.1. As usual, for the convenience of the reader, we start with an informal presentation of the construction.
We first apply shrinking around the vertices of the dual skeleton 𝒯 ℓ * , with parameters 0 < 𝜇 𝑚-1 < 𝜈 𝑚 < 𝜇 𝑚 and 𝜏𝜇 𝜈 𝑚 , where 𝜇 𝑚-1 ≥ 𝜇 and 𝜇 𝑚 ≤ 2𝜇. This shrinks a neighborhood of size 𝜇 𝑚-1 𝜂 of these vertices into a neighborhood of size 𝜏𝜇𝜂. We then apply shrinking around the edges of 𝒯 ℓ * with parameters 0 < 𝜇 𝑚-2 < 𝜈 𝑚-1 < 𝜇 𝑚-1 and 𝜏𝜇 𝜈 𝑚-1 , where 𝜇 𝑚-2 ≥ 𝜇. This shrinks the part of a neighborhood of size 𝜇 𝑚-2 𝜂 of the edges of 𝒯 ℓ * lying at distance at most 𝜇 𝑚-1 𝜂 of the (𝑚 -1)-faces of 𝒦 𝑚 into a neighborhood of size 𝜏𝜇𝜂 of those edges. But since the part of the neighborhood of size 𝜇 𝑚-2 𝜂 lying at distance more than 𝜇 𝑚-1 𝜂 of the (𝑚 -1)-faces of 𝒦 𝑚 has already been shrinked during the previous step, we conclude that the whole neighborhood of size 𝜇 𝑚-2 𝜂 of 𝑇 1 is shrinked into a neighborhood of size 𝜏𝜇𝜂. We continue this procedure by downward induction until we reach the dimension ℓ * , which produces the desired map Φ.
We illustrate this induction procedure in Figures 7.1, 7.2, and 7.3. Here, we take 𝑚 = 2 and ℓ = 0. In Figure 7.1, which corresponds to the first step of the induction, the values in the gray region around the center of the cube in the left part of the figure are shrinked into the much smaller gray region on the right. During the next step, depicted in Figure 7.2, the values in gray around the edges of the cube on the left are shrinked into the much smaller gray region around the dual skeleton on the right. The combination of both steps is shown in Figure 7.3. The values in the region in gray on the left are shrinked into the small neighborhood of the dual skeleton in gray on the right. As we will see, the induction procedure is more involved than in the case of thickening, and relies on Proposition 7.3 applied with domains more general than rectangles.
Proof of Proposition 7.1. The map Φ is constructed by downward induction. We consider finite sequences (𝜇 𝑖 ) ℓ ≤𝑖≤𝑚 and (𝜈 𝑖 ) ℓ ≤𝑖≤𝑚 such that
0 < 𝜇 = 𝜇 ℓ < 𝜈 ℓ +1 < 𝜇 ℓ +1 < • • • < 𝜇 𝑚-1 < 𝜈 𝑚 < 𝜇 𝑚 ≤ 2𝜇.
We first define Φ 𝑚 = id. Then, assuming that Φ 𝑑 has been defined for some 𝑑 ∈ {ℓ + 1, . . . , 𝑚}, we identify any 𝜎 𝑑 ∈ 𝒦 𝑑 with 𝑄 𝑑 𝜂 × {0} 𝑚-𝑑 , and we let Φ 𝜎 𝑑 be the map where 𝑇 𝜎 𝑑 is an isometry of ℝ 𝑚 mapping 𝑄 𝑑 𝜂 × {0} 𝑚-𝑑 to 𝜎 𝑑 . We then let Φ 𝑑-1 = Ψ 𝑑 • Φ 𝑑 . The required map is given by Φ = Φ ℓ .
Properties (i) to (iii) are already contained in [9, Proposition 8.1], so it only remains to prove the Sobolev estimates. The argument is similar to the one used in the proof of Proposition 5.1. We proceed by induction. One of the issues is how to remove inductively neighborhoods of dual skeletons. We let 𝑄 4 = 𝑄 𝑑 2𝜇𝜂 × 𝑄 𝑚-𝑑 (1-𝜇)𝜂 , so that 𝑄 3 ⊂ 𝑄 4 for every 𝑑 ∈ {ℓ + 1, . . . , 𝑚}. First, note that invoking Proposition 7.3 with 𝜔 = 𝑄 4 \ 𝑇 -1 𝜎 𝑑 (𝑇 𝑚-𝑑-1 + 𝑄 𝑚 𝜇 𝑑 𝜂 ) ensures that (a) if 0 < 𝑠 < 1, then (𝜇𝜂) 𝑖 ∥𝐷 𝑖 𝑢∥ 𝐿 𝑝 (𝑇 𝜎 𝑑 (𝑄 4 ))) + (𝜇𝜂) 𝑖+𝜎 |𝐷 𝑖 𝑢| 𝑊 𝜎,𝑝 (𝑇 𝜎 𝑑 (𝑄 4 )) ;
(d) for every 0 < 𝑠 < +∞, ∥𝑢 • Φ 𝜎 𝑑 ∥ 𝐿 𝑝 (𝑇 𝜎 𝑑 (𝑄 4 )\(𝑇 𝑚-𝑑-1 +𝑄 𝑚 𝜇 𝑑 𝜂 )) ≤ 𝐶 7 ∥𝑢∥ 𝐿 𝑝 (𝑇 𝜎 𝑑 (𝑄 4 )\(𝑇 𝑚-𝑑 +𝑄 𝑚 𝜇 𝑑-1 𝜂 )) + 𝐶 8 𝜏 𝑑 ∥𝑢∥ 𝐿 𝑝 (𝑇 𝜎 𝑑 (𝑄 4 )) .
Indeed, we have: (i) (𝑇 𝜎 𝑑 (𝑄 4 ) \ (𝑇 𝑚-𝑑-1 + 𝑄 𝑚 𝜇 𝑑 𝜂 )) \ 𝑇 𝜎 𝑑 (𝑄 2 ) ⊂ 𝑇 𝜎 𝑑 (𝑄 4 ) \ (𝑇 𝑚-𝑑 + 𝑄 𝑚 𝜇 𝑑-1 𝜂 ), (ii) 𝑄 4 \ 𝑇 -1 𝜎 𝑑 (𝑇 𝑚-𝑑-1 + 𝑄 𝑚 𝜇 𝑑 𝜂 ) ⊂ Φ -1 (𝜔), and (iii) 𝜔 satisfies the condition on the volume of balls required to apply Proposition 7.3. Affirmation (ii) is a consequence of the fact that Φ has the specific form Φ(𝑥) = (𝜆(𝑥)𝑥 ′ , 𝑥 ′′ ) with 𝜆 : ℝ 𝑚 → [1, +∞). Affirmation (iii) follows from the fact that 𝜔 \ 𝑄 2 is actually a rectangle to which other rectangles have been removed. Note that, for convenience of notation, we let 𝑇 -1 = ∅.
Using the additivity of the integral or Lemma 2.1 combined with the usual finite number of overlaps argument, we deduce that (a) if 0 < 𝑠 < 1, then .
Using the compactness of 𝒩 and the fact that 𝑢 ∈ 𝑊 𝑠,𝑝 (𝐾 𝑚 ), we deduce from the Gagliardo-Nirenberg inequality that 𝐷 𝑖 𝑢 ∈ 𝐿 𝑠𝑝 𝑖 (𝐾 𝑚 ). Therefore, by Hölder's inequality,
∥𝐷 𝑖 𝑢 ∥ 𝐿 𝑝 (𝐾 𝑚 ∩(𝑇 ℓ * +𝑄 𝑚 2𝜇𝜂 )) ≤ |𝐾 𝑚 ∩ (𝑇 ℓ * + 𝑄 𝑚 2𝜇𝜂 )| 𝑠-𝑖 𝑠𝑝 ∥𝐷 𝑖 𝑢∥ 𝐿 𝑠𝑝 𝑖 (𝐾 𝑚 ∩(𝑇 ℓ * +𝑄 𝑚 2𝜇𝜂 )).
Similarly, using Lemma 6.1, for every 𝑖 ∈ {1, . . . , 𝑘 -1}, we find that
|𝐷 𝑖 𝑢| 𝑊 𝜎,𝑝 (𝐾 𝑚 ∩(𝑇 ℓ * +𝑄 𝑚 2𝜇𝜂 )) ≤ 𝐶 5 |𝐾 𝑚 ∩ (𝑇 ℓ * + 𝑄 𝑚 2𝜇𝜂 )| 𝑠-𝑖-𝜎 𝑠𝑝 ∥𝐷 𝑖 𝑢 ∥ 1-𝜎 𝐿 𝑠𝑝 𝑖 (𝐾 𝑚 ∩(𝑇 ℓ * +𝑄 𝑚 2𝜇𝜂 )) ∥𝐷 𝑖+1 𝑢∥ 𝜎 𝐿 𝑠𝑝 𝑖+1 (𝐾 𝑚 ∩(𝑇 ℓ * +𝑄 𝑚 2𝜇𝜂 )).
(Strictly speaking, Lemma 6.1 requires the domain to be convex. However, we already saw that this assumption is an artifact, which may easily be bypassed. Here, this can be done, for instance, relying on the existence of a continuous extension operator from 𝑊 𝑠,𝑝 (𝐾 𝑚 ; ℝ 𝜈 ) to 𝑊 𝑠,𝑝 (ℝ 𝑚 ; ℝ 𝜈 ).) Moreover, using the fact that 𝑢 ∈ 𝐿 ∞ (𝐾 𝑚 ) since 𝒩 is compact, we have
∥𝑢∥ 𝐿 𝑝 (𝐾 𝑚 ∩(𝑇 ℓ * +𝑄 𝑚 2𝜇𝜂 )) ≤ ∥𝑢∥ 𝐿 ∞ (𝐾 𝑚 ) |𝐾 𝑚 ∩ (𝑇 ℓ * + 𝑄 𝑚 2𝜇𝜂 )| 1 𝑝 .
Since 𝑠𝑝 < ℓ + 1, we observe that all the powers on 𝜇𝜂 above are positive. Moreover, since |𝐾 𝑚 ∩ (𝑇 ℓ * + 𝑄 𝑚 2𝜇𝜂 )| → 0 as 𝜇 → 0, we deduce from Lebesgue's lemma that all Lebesgue norms and Gagliardo seminorms above tend to 0 when 𝜇 → 0.
This shows that 𝑢 sh 𝜏 𝜇 ,𝜇 → 𝑢 in 𝑊 𝑠,𝑝 (𝐾 𝑚 ), and since 𝑢 sh 𝜏 𝜇 ,𝜇 ∈ 𝒞 ∞ (𝐾 𝑚 ; 𝒩), the proof is complete.
□
Theorem 1.1 follows from Proposition 8.2 by using the fact that a cube has the extension property with respect to any manifold. This was already present in [Hang, Topology of Sobolev mappings]. For a proof, the reader may also consult [9, Proposition 7.3].
Proof of Theorem 1.1. From the proof of Theorem 1.2, for every map 𝑢 ∈ 𝑊 𝑠,𝑝 (𝑄 𝑚 ; 𝒩) and every number 𝜀 > 0 there exists 𝑣 ∈ 𝒞 ∞ (𝐾 𝑚 \ 𝑇 ℓ * ; 𝒩) ∩ 𝑊 𝑠,𝑝 (𝐾 𝑚 ; 𝒩) such that ∥𝑢 -𝑣 ∥ 𝑊 𝑠,𝑝 (𝑄 𝑚 ) ≤ 𝜀, where ℓ = [𝑠𝑝] and 𝒦 𝑚 is a cubication (depending on 𝑣) in ℝ 𝑚 slightly larger than 𝑄 𝑚 . Removing cubes that do not intersect 𝑄 𝑚 if necessary, we may assume that 𝐾 𝑚 is also a cube. Doing so, 𝐾 𝑚 has the ℓ -extension property with respect to 𝒩. Hence, 𝒞 ∞ (𝐾 𝑚 ; 𝒩) is dense in 𝒞 ∞ (𝐾 𝑚 \ 𝑇 ℓ * ; 𝒩) ∩ 𝑊 𝑠,𝑝 (𝐾 𝑚 ; 𝒩) with respect to the 𝑊 𝑠,𝑝 distance. This implies that 𝒞 ∞ (𝑄 𝑚 ; 𝒩) is dense in 𝑊 𝑠,𝑝 (𝑄 𝑚 ; 𝒩), and finishes the proof of Theorem 1.1.
□
We now turn to the case of more general domains. Replacing Theorem 1.2 by Theorem 6.3 in the above proof, we obtain the following counterpart of Theorem 1.1: If 𝛺 satisfies the segment condition, if 𝜋 [𝑠𝑝] (𝒩) = {0}, and if we may find a sequence (𝜂 𝑛 ) 𝑛∈ℕ of positive real numbers such that 𝜂 𝑛 → 0 and such that for every 𝑛 ∈ ℕ, the cubication 𝒦 𝑚 𝜂 𝑛 used in the proof of Theorem 6.3 satisfies the [𝑠𝑝]-extension property with respect to 𝒩, then 𝒞 ∞ (𝛺; 𝒩) is dense in 𝑊 𝑠,𝑝 (𝛺; 𝒩).
Under an assumption as weak as the segment condition, it is not clear how to link the topology of the cubications 𝐾 𝑚 𝜂 𝑛 containing 𝛺 to the topology of 𝛺 itself. However, in the case where 𝛺 is a smooth domain, the topological assumption above can be clarified. Indeed, in this case, using a retraction along the normal vector to 𝜕𝛺, one may show that, if 𝒦 𝑚 𝜂 is a cubication of radius 𝜂 > 0 in ℝ 𝑚 , with 𝜂 sufficiently small, such that 𝒦 𝑚 𝜂 is made only of cubes that intersect 𝛺, then 𝐾 𝑚 𝜂 is homotopic to 𝛺. This implies that, if we endow 𝛺 with a structure of CW-complex, then the ℓ -extension property of 𝒦 𝑚 𝜂 is equivalent to the ℓ -extension property of 𝛺, and this does not depend on the choice of CW-complex structure on 𝛺; see e.g. [Hang, Topology of Sobolev mappings, Section 2]. Here, analogously to the definition on a cubication, we say that 𝛺 has the ℓ -extension property with respect to 𝒩 whenever for any map 𝑓 ∈ 𝒞 0 (𝛺 ℓ +1 ; 𝒩), 𝑓 |𝛺 ℓ has an extension 𝑓 ∈ 𝒞 0 (𝛺; 𝒩), where 𝛺 ℓ denotes the ℓ -skeleton of the CW-complex structure on 𝛺.
This leads to the following theorem. As for Theorem 1.2, a last perspective of generalisation for Theorem 1.1 consists in allowing the domain to be a smooth compact, connected Riemannian manifold ℳ of dimension 𝑚, and isometrically embedded in ℝ ν. As we did for Theorem 6.4, we may restrict to the case where ℳ has empty boundary, since the general case reduces to this special case by embedding into a larger manifold without boundary.
In this setting, a tubular neighborhood of ℳ is homotopic to ℳ through the nearest point projection, and therefore has the ℓ -extension property if and only if ℳ has the ℓ -extension property. We may thus proceed as for Theorem 6.4 to deduce the following result.
Theorem 8.4. If 𝑠𝑝 < 𝑚, if 𝜋 [𝑠𝑝] (𝒩) = {0}, and if ℳ has the [𝑠𝑝]-extension property with respect to 𝒩, then 𝒞 ∞ (ℳ; 𝒩) is dense in 𝑊 𝑠,𝑝 (ℳ; 𝒩).
Proof. First assume that ℳ has empty boundary. Let 𝜄 > 0 be the radius of a tubular neighborhood of ℳ, let Π denote the nearest point projection onto ℳ, and let 𝛺 = ℳ + 𝐵 ν 𝜄/2 . Given 𝑢 ∈ 𝑊 𝑠,𝑝 (ℳ; 𝒩), as explained before the proof of Theorem 6.4, the map 𝑣 = 𝑢 • Π belongs to 𝑊 𝑠,𝑝 (𝛺; 𝒩). By the observation above, 𝛺 has the [𝑠𝑝]-extension property. Therefore, by Theorem 8.3, there exists a sequence (𝑣 𝑛 ) 𝑛∈ℕ in 𝒞 ∞ (𝛺; 𝒩) which converges to 𝑣 in 𝑊 𝑠,𝑝 (𝛺). We conclude as in the proof of Theorem 6.4, using a slicing argument to find a sequence (𝑎 𝑛 ) 𝑛∈ℕ in 𝐵 ν 𝜄/2 such that 𝑎 𝑛 → 0 as 𝑛 → +∞ satisfying 𝑢 𝑛 = 𝜏 𝑎 𝑛 (𝑣 𝑛 ) |ℳ → 𝑢 in 𝑊 𝑠,𝑝 (ℳ).
The case where ℳ is allowed to have non-empty boundary is deduced from the empty boundary case exactly as for the proof of Theorem 6.4, and we therefore omit the proof.
□
Theorem 1.1. If 𝑠𝑝 < 𝑚 and 𝜋 [𝑠𝑝] (𝒩) = {0}, then 𝒞 ∞ (𝑄 𝑚 ; 𝒩) is dense in 𝑊 𝑠,𝑝 (𝑄 𝑚 ; 𝒩).
𝜂 , is constant on the (𝑚 -[𝑠𝑝])-dimensional cubes orthogonal to cubes in 𝒰 [𝑠𝑝] 𝜂 . Therefore, on this neighborhood, the map 𝑢 op 𝜂 behaves locally as a function of [𝑠𝑝]-variables. But since 𝑠𝑝 ≥ [𝑠𝑝], this means that, on this region, 𝑢 op 𝜂 is actually a VMO function. The map 𝑢 op 𝜂 is obtained by modifying 𝑢 on a slightly larger neighborhood of 𝑈 [𝑠𝑝]
Figure 2.2: Opening around the 1-skeleton of bad cubes
Figure 2.3: Thickening around the centers of bad cubes
Figure 2.4: Shrinking around the centers of bad cubes
Proof.
We start by writing |𝑢| 𝑝 𝑊 𝜎,𝑝 (𝛺) ≤ |𝑢| 𝑝 𝑊 𝜎,𝑝 (𝛺∩𝑄) + |𝑢| 𝑝 𝑊 𝜎,𝑝 (𝛺\𝜆𝑄) + 2 ∫ 𝛺∩𝜆𝑄 ∫ 𝛺\𝑄 |𝑢(𝑥) -𝑢(𝑦)| 𝑝 |𝑥 -𝑦| 𝑚+𝜎𝑝 d𝑦d𝑥. Now we use the average estimate
Figure 3.1: Opening for 𝑚 = 2 and ℓ = 1
Lemma 3.5. Let 𝛺 ⊂ ℝ 𝑚 be an open set and 𝑢 ∈ 𝑊 𝜎,𝑝 (𝛺). Define 𝑤 : 𝛺 → [0, +∞] by
1 𝜓
1 ) if 𝑠 ≥ 1, respectively dist (𝑢 sm 𝜂 (𝑥), 𝐹) ≤ 𝐶 5
2 and 4.1, relying on an optimization argument. We split the integral over 𝐵 𝑚 𝑟 (𝑥) and ℝ 𝑚 \ 𝐵 𝑚 𝑟 (𝑥) and we insert 𝑟 = 𝜁(𝑥) to arrive at ∫ 𝜔\𝑇 𝜁(𝑥)≤𝜁(𝑦) |𝐷 𝑡 Φ(𝑥) -𝐷 𝑡 Φ(𝑦)| 𝑝 |𝑥 -𝑦| 𝑚+𝜎𝑝 d𝑦 ≤ 𝐶 20 𝜂 𝑝 𝜁(𝑥) (𝑡+𝜎)𝑝.
Using the inclusion𝒴 𝑥,𝑧 ⊂ ℝ 𝑚 \ 𝐵 𝑚 𝑟 (𝑥),where𝑟 = 𝑟(𝑥, 𝑧) = 𝐶 25 |Φ(𝑥) -𝑧|𝜁(𝑥) 𝜂 ,we conclude that∬ (𝜔\𝑇)×(𝜔\𝑇) 𝜁(𝑥)≤𝜁(𝑦) |𝐷 𝑖 𝑢 • Φ(𝑥) -𝐷 𝑖 𝑢 • Φ(𝑦)| 𝑝 |𝑥 -𝑦| 𝑚+𝜎𝑝 𝜂 𝑖𝑝 𝜁(𝑦) 𝑗𝑝 d𝑥d𝑦 ≤ 𝐶 26 ∫ 𝜔\𝑇 ∫ 𝜔 |𝐷 𝑖 𝑢 • Φ(𝑥) -𝐷 𝑖 𝑢(𝑧)| 𝑝 |Φ(𝑥) -𝑧| 𝑚+𝜎𝑝 𝜂 𝜎𝑝 𝜁(𝑥) 𝜎𝑝 𝜂 𝑖𝑝 𝜁(𝑥) 𝑗𝑝 d𝑧d𝑥.Step 3: Estimate of the first term in (5.5): change of variable. As previously, we use estimate (v) of Proposition 5.2 with 𝛽 = 𝑠𝑝 and the change of variable theorem to conclude that ∬ (𝜔\𝑇)×(𝜔\𝑇) 𝜁(𝑥)≤𝜁(𝑦)
Figures 5.1, 5.2, and 5.3 provide an illustration of this procedure on one cube when 𝑚 = 2 and ℓ = 0. This allows us to see the combination of two steps of the induction procedure. Figure 5.1 shows thickening around the vertices of the dual skeleton, which correspond to the centers of the cubes of 𝒰 𝑚 . The values of 𝑢 in the blue region on the left part of the figure are propagated into the blue region on the right part of the figure. This creates a point singularity in the center of each cube, depicted in red. Figure 5.2 illustrates thickening around the edges of the dual skeleton. The values of 𝑢 in the dark blue region on the left part of the figure are propagated into the dark blue region on the right part of the figure, which creates line singularities in red. The map 𝑢 is left unchanged on the white region, the part in light blue serving as a transition. The boundaries of the regions in Figure 5.1 are shown in light colors, to illustrate how all the different regions involved in the construction combine together. The combination of both steps inside the square is shown in Figure 5.3. The values in the blue regions on the corners are propagated inside of the whole square, which creates line singularities in red, forming a cross.
Figure 5.1: Thickening around vertices
Figure 5.2: Thickening around edges
Figure 5.3: Final thickening at order 1
𝜇𝜂) 𝑠𝑝 𝜁(𝑥) 𝑠𝑝 d𝑧d𝑥. (7.5) Now we use: (i) the fact that 𝜁(𝑥) ≥ 𝐶 19 𝜇𝜂𝜏, (ii) the second estimate on jac Φ -valid on Φ -1 (𝑄 2 ) ⊂ 𝐵 1 -and (iii) the change of variable theorem to get
∫
Figure 7.1: Shrinking around vertices
Figure 7.2: Shrinking around edges
Figure 7.3
Section 4], is explained in Section 5, and we apply it to modify the map 𝑢 sm 𝜂 on 𝑈 𝑚 𝜂 to a map 𝑢 th 𝜂 whose values on 𝑈 𝑚 𝜂 only depend on the values of 𝑢 sm 𝜂 near 𝑈 𝜂 , which makes possible to project it back onto 𝒩 relying on the nearest point projection Π. Since the map 𝑢 sm 𝜂 is smooth, the map 𝑢 th 𝜂 is smooth on 𝑄 𝑚 \ 𝑇
smooth map. Working with the neighborhood 𝑈 [𝑠𝑝] 𝜂 [𝑠𝑝] 𝜂 𝛿 instead of the skeleton 𝑈 + 𝑄 𝑚 𝛿 is a + 𝑄 𝑚
[𝑠𝑝]
and this map is, in general, discontinuous on 𝑇
[𝑠𝑝] * 𝜂 . The map 𝑤 may be written as
𝑤 = 𝑣 • Φ he ,
where Φ he : 𝑈 𝑚 𝜂 \ 𝑇 [𝑠𝑝] * 𝜂 → 𝑈 [𝑠𝑝] 𝜂 is a Lipschitz map. Instead, the thickening procedure associates with a map 𝑣 : 𝑈 [𝑠𝑝] 𝜂 + 𝑄 𝑚 𝛿 → ℝ 𝜈 (for some 𝛿 > 0 sufficiently small) a map 𝑤 : 𝑈 𝑚 𝜂 \ 𝑇 [𝑠𝑝] * 𝜂 → ℝ 𝜈 , which, again, is in general singular on the set 𝑇 [𝑠𝑝] * 𝜂 . The map 𝑤 is obtained from 𝑣 as 𝑤 = 𝑣 • Φ th , where Φ th : 𝑈 𝑚 𝜂 \ 𝑇 [𝑠𝑝] * 𝜂 → 𝑈 [𝑠𝑝] 𝜂
is the key idea to avoid working with slices of Sobolev maps, and more importantly, to be able to choose Φ th smooth, which, in turn, is crucial to ensure that composition with Φ th preserves higher order Sobolev regularity. The detailed construction, devised in [9, 𝜂 , while not increasing too much the energy of the map on 𝑈 𝑚 𝜂 . Therefore, the map 𝑢 th 𝜂 is close to 𝒩 on the whole 𝑄 𝑚 \ 𝑇
[𝑠𝑝] *
[𝑠𝑝] * 𝜂
if 0 < 𝑠 < 1, then 𝜂 𝑠 |𝑢 • Φ| 𝑊 𝑠,𝑝 (𝜔) ≤ 𝐶 𝜂 𝑠 |𝑢| 𝑊 𝑠,𝑝 (𝜔) + ∥𝑢∥ 𝐿 𝑝 (𝜔) ;
9, Proposition 2.1], which contains the opening construction. Recall that we write 𝑠 = 𝑘 + 𝜎, with 𝑘 ∈ ℕ and 𝜎 ∈ [0, 1). Note carefully that the map Φ constructed below depends on the map 𝑢 ∈ 𝑊 𝑠,𝑝 it is composed with. Let 𝛺 ⊂ ℝ 𝑚 be open, ℓ ∈ {0, . . . , 𝑚 -1}, 𝜂 > 0, 0 < 𝜌 <1 2 , and 𝒰 ℓ be a subskeleton of ℝ 𝑚 of radius 𝜂 such that 𝑈 ℓ + 𝑄 𝑚 2𝜌𝜂 ⊂ 𝛺. For every 𝑢 ∈ 𝑊 𝑠,𝑝 (𝛺; ℝ 𝜈 ), there exists a smooth map Φ : ℝ 𝑚 → ℝ 𝑚 such that (i) for every 𝑑 ∈ {0, . . . , ℓ } and for every 𝜎 𝑑 ∈ 𝒰 𝑑 , Φ is constant on the (𝑚 -𝑑)-dimensional cubes of radius 𝜌𝜂 which are orthogonal to 𝜎 𝑑 ; ∥𝐷 𝑖 𝑢∥ 𝐿 𝑝 (𝜔) + 𝜂 𝑖+𝜎 |𝐷 𝑖 𝑢| 𝑊 𝜎,𝑝 (𝜔) ;(d) for every 0 < 𝑠 < +∞, ∥𝑢 • Φ∥ 𝐿 𝑝 (𝜔) ≤ 𝐶 ∥𝑢 ∥ 𝐿 𝑝 (𝜔) ;(iv) for every 𝜔 ⊂ 𝛺 such that 𝑈 ℓ + 𝑄 𝑚 2𝜌𝜂 ⊂ 𝜔, the following estimates hold: (a) if 0 < 𝑠 < 1, then𝜂 𝑠 |𝑢 • Φ -𝑢| 𝑊𝑠,𝑝 (𝜔) ≤ 𝐶 𝜂 𝑠 |𝑢| 𝑊 𝑠,𝑝 (𝑈 ℓ +𝑄 𝑚 2𝜌𝜂 ) + ∥𝑢∥ 𝐿 𝑝 (𝑈 ℓ +𝑄 𝑚 2𝜌𝜂 ) ; 𝜂 𝑖+𝜎 |𝐷 𝑖 𝑢| 𝑊 𝜎,𝑝 (𝑈 ℓ +𝑄 𝑚 2𝜌𝜂 )
Proposition 3.1. (ii) Supp Φ ⊂ 𝑈 ℓ + 𝑄 𝑚 2𝜌𝜂 and Φ(𝑈 ℓ + 𝑄 𝑚 2𝜌𝜂 ) ⊂ 𝑈 ℓ + 𝑄 𝑚 2𝜌𝜂 ; (iii) 𝑢 • Φ ∈ 𝑊 𝑠,𝑝 (𝛺; ℝ 𝜈 ), and moreover, for every 𝜔 ⊂ 𝛺 such that 𝑈 ℓ + 𝑄 𝑚 2𝜌𝜂 ⊂ 𝜔, the following estimates hold: (a) (b) if 𝑠 ≥ 1, then for every 𝑗 ∈ {1, . . . , 𝑘}, 𝜂 𝑗 ∥𝐷 𝑗 (𝑢 • Φ)∥ 𝐿 𝑝 (𝜔) ≤ 𝐶 𝑗 𝑖=1 𝜂 𝑖 ∥𝐷 𝑖 𝑢∥ 𝐿 𝑝 (𝜔) ; (c) if 𝑠 ≥ 1 and 𝜎 ≠ 0, then for every 𝑗 ∈ {1, . . . , 𝑘}, 𝜂 𝑗+𝜎 |𝐷 𝑗 (𝑢 • Φ)| 𝑊 𝜎,𝑝 (𝜔) ≤ 𝐶 𝑗 𝜂 𝑗 ∥𝐷 𝑗 (𝑢 • Φ) -𝐷 𝑗 𝑢∥ 𝐿 𝑝 (𝜔) ≤ 𝐶 𝑗 𝑖=1 𝜂 𝑖 ∥𝐷 𝑖 𝑢 ∥ 𝐿 𝑝 (𝑈 ℓ +𝑄 𝑚 2𝜌𝜂 ) ; (c) if 𝑠 ≥ 1 and 𝜎 ≠ 0, then for every 𝑗 ∈ {1, . . . , 𝑘}, 𝜂 𝑗+𝜎 |𝐷 𝑗 (𝑢 • Φ) -𝐷 𝑗 𝑢| 𝑊 𝜎,𝑝 (𝜔) ≤ 𝐶 𝑗 𝑖=1 𝜂 𝑖 ∥𝐷 𝑖 𝑢∥ 𝐿 𝑝 (𝑈 ℓ +𝑄 𝑚 2𝜌𝜂 ) 𝜂 𝑖 (b) if 𝑠 ≥ 1, then for every 𝑗 ∈ {1, . . . , 𝑘}, +
𝑖=1
then for every 𝑗 ∈ {1, . . . , 𝑘}, 𝜂 𝑗 ∥𝐷 𝑗 (𝑢 • Φ)∥ 𝐿 𝑝 (𝑄 3 ) ≤ 𝐶 ∥𝐷 𝑖 𝑢∥ 𝐿 𝑝 (𝑄 4 ) + 𝜂 𝑖+𝜎 |𝐷 𝑖 𝑢| 𝑊 𝜎,𝑝 (𝑄 4 ) ; Φ∥ 𝐿 𝑝 (𝑄 3 ) ≤ 𝐶 ∥𝑢∥ 𝐿 𝑝 (𝑄 4 ) ;
𝑗
𝜂 𝑖 ∥𝐷 𝑖 𝑢∥ 𝐿 𝑝 (𝑄 4 ) ;
𝑖=1
c) if 𝑠 ≥ 1 and 𝜎 ≠ 0, then for every 𝑗 ∈ {1, . . . , 𝑘},
𝑗
𝜂 𝑗+𝜎 |𝐷 𝑗 (𝑢 • Φ)| 𝑊 𝜎,𝑝 (𝑄 3 ) ≤ 𝐶 𝜂 𝑖 d) for every 0 < 𝑠 < +∞, 𝑖=1
∥𝑢 •
we have |𝑢 𝑛 𝑖 | 𝑝 ≤ |𝑢| + |𝑢 𝑛 𝑖 -𝑢| Hence, if 𝛾 : 𝑌 → 𝑋 satisfies 𝑤 • 𝛾 ∈ 𝐿 1 (𝑌, 𝜆), we find that 𝑢 𝑛 𝑖 • 𝛾 ∈ 𝐿 𝑝 (𝑌, 𝜆), and moreover, we have Letting 𝑖 → +∞ allows us to conclude that 𝑢 𝑛 𝑖 • 𝛾 → 𝑢 • 𝛾 in 𝐿 𝑝 (𝑌, 𝜆). Furthermore, since |𝑢 • 𝛾| 𝑝 ≤ 𝑤 • 𝛾, we obtain
∫ 𝑌 |𝑢 𝑛 𝑖 • 𝛾 -𝑢 • 𝛾| 𝑝 d𝜆 ≤ 1 𝑖 𝜅 𝑝 ∫ 𝑌 𝑤 • 𝛾 d𝜆.
𝑝
≤ 𝑤.
𝐷 𝑗 𝑢(𝑥) -𝐷 𝑗 (𝜑 𝜓 * 𝑢)(𝑦) + 𝐷 𝑗 𝑢(𝑦)| 𝑝
Gathering the estimates obtained in Cases 1, 2, and 3, we deduce that
∫ 1 𝑝
|𝑥 -𝑦| 𝑚+𝜎𝑝 d𝑥d𝑦
≤ 𝐶 13 ∫ 𝐵 𝑚 1 𝜑(𝑧) ∫ 𝜔 ∫ 𝜔 |𝑥 -𝑦| 𝑚+𝜎𝑝 |𝐷 𝑗 𝑢(𝑥 + 𝜓(𝑥)𝑧) -𝐷 𝑗 𝑢(𝑥) -𝐷 𝑗 𝑢(𝑦 + 𝜓(𝑦)𝑧) + 𝐷 𝑗 𝑢(𝑦)| 𝑝 d𝑥d𝑦 1 𝑝
+ 𝑗 𝑖=1 𝜂 𝑖-𝑗 ∫ 𝜔∩supp 𝐷𝜓 ∫ 𝜔∩supp 𝐷𝜓 |𝑥 -𝑦| 𝑚+𝜎𝑝 |𝐷 𝑖 𝑢(𝑥 + 𝜓(𝑥)𝑧) -𝐷 𝑖 𝑢(𝑦 + 𝜓(𝑦)𝑧)| 𝑝 d𝑥d𝑦 1 𝑝
𝑗 ∫ 1 𝑝
+ 𝜂 𝑖-𝑗-𝜎 |𝐷 𝑖 𝑢(𝑥 + 𝜓(𝑥)𝑧)| 𝑝 d𝑥
𝑖=1 𝜔∩supp 𝐷𝜓
Hence, unlike in the previous cases, estimate (4.5) is not needed, and a simple application
of Minkowski's inequality yields
∫ 𝜔\supp 𝐷𝜓 ∫ 𝜔\supp 𝐷𝜓 |𝐷 |𝑥 -𝑦| 𝑚+𝜎𝑝 d𝑥d𝑦 1 𝑝
≤ ∫ 𝐵 𝑚 1 𝜑(𝑧) ∫ 𝜔\supp 𝐷𝜓 ∫ 𝜔\supp 𝐷𝜓 |𝐷 𝑗 𝑢(𝑥 + 𝜓(𝑥)𝑧) -𝐷 𝑗 𝑢(𝑥) -𝐷 𝑗 𝑢(𝑦 + 𝜓(𝑦)𝑧) + 𝐷 𝑗 𝑢(𝑦)| 𝑝 |𝑥 -𝑦| 𝑚+𝜎𝑝 d𝑥d𝑦
𝑖-𝑗-𝜎 ∫ 𝜔∩supp 𝐷𝜓 |𝐷 𝑖 𝑢(𝑥 + 𝜓(𝑥)𝑧)| 𝑝 d𝑥 1 𝑝 d𝑧. Case 3: 𝑥, 𝑦 ∉ supp 𝐷𝜓. In this case, for 𝑖 < 𝑗, we observe that |𝐷 𝑖 𝑢(𝑥 + 𝜓(𝑥)𝑧)[𝐿 𝑖,𝑙,𝑗 (𝑥, 𝑧)] -𝐷 𝑖 𝑢(𝑦 + 𝜓(𝑦)𝑧)[𝐿 𝑖,𝑙,𝑗 (𝑦, 𝑧)]| = 0. Moreover, |𝐷 𝑗 𝑢(𝑥 +𝜓(𝑥)𝑧)[(id +𝐷𝜓(𝑥)⊗ 𝑧) 𝑗 ]-𝐷 𝑗 𝑢(𝑥)-𝐷 𝑗 𝑢(𝑦 +𝜓(𝑦)𝑧)[(id +𝐷𝜓(𝑦)⊗ 𝑧) 𝑗 ]+𝐷 𝑗 𝑢(𝑦)| = |𝐷 𝑗 𝑢(𝑥 + 𝜓(𝑥)𝑧) -𝐷 𝑗 𝑢(𝑥) -𝐷 𝑗 𝑢(𝑦 + 𝜓(𝑦)𝑧) + 𝐷 𝑗 𝑢(𝑦)|. 𝑗 (𝜑 𝜓 * 𝑢)(𝑥) -𝐷 𝑗 𝑢(𝑥) -𝐷 𝑗 (𝜑 𝜓 * 𝑢)(𝑦) + 𝐷 𝑗 𝑢(𝑦)| 𝑝 𝜔 ∫ 𝜔 |𝐷 𝑗 (𝜑 𝜓 * 𝑢)(𝑥) -
7), respectively (4.8). Since 𝑢 op 𝜂 takes its values into 𝐹, for almost every 𝑧 ∈ 𝑄 𝑚 𝜓 𝜂 (𝑥) (𝑥), we have
dist (𝑢 sm 𝜂 (𝑥), 𝐹) ≤ |𝑢 sm 𝜂 (𝑥) -𝑢 𝜂 (𝑧)|. op
Averaging over 𝑄 𝑚 𝜓 𝜂 (𝑥) (𝑥), we find ⨏
dist (𝑢 sm 𝜂 (𝑥), 𝐹) ≤ 𝑄 𝑚 𝜓𝜂 (𝑥) (𝑥) |𝑢 sm 𝜂 (𝑥) -𝑢 𝜂 (𝑧)| d𝑧. op
Using the rewriting (4.1), we deduce that, for every 𝑥 ∈ 𝜔,
dist (𝑢 sm 𝜂 (𝑥), 𝐹) ≤ ⨏ 𝑄 𝑚 𝜓𝜂 (𝑥) (𝑥) ⨏ 𝑄 𝑚 𝜓𝜂 (𝑥) (𝑥) 𝜑 𝑦 -𝑥 𝜓(𝑥) ⨏ |𝑢 𝜂 (𝑦) -𝑢 op 𝜂 (𝑧)| d𝑦d𝑧 op ⨏
≤ 𝐶 3 |𝑢 𝜂 (𝑦) -𝑢 op
𝑄 𝑚 𝜓𝜂 (𝑥) (𝑥) 𝑄 𝑚 𝜓𝜂 (𝑥) (𝑥)
op 𝜂 (𝑧)| d𝑦d𝑧. (4.9)
𝑠 < 1. Again, this inequality is only useful on good cubes, but now it requires an upper bound on 𝜓 𝜂 instead if we take ℓ ≤ 𝑠𝑝.Hence, we proceed with the following construction. Assumption (4.6) ensures that we have enough room for the transition region for 𝜓 𝜂 : we have dist (𝐸 𝑚 𝜂 , 𝐾 𝑚 𝜂 \ 𝑈 𝑚 𝜂 ) ≥ 𝜂. Therefore, we may find 𝜁 𝜂 ∈ 𝒞 ∞ (𝛺) such that (a) 0 ≤ 𝜁 𝜂 ≤ 1 in 𝛺; 𝜂 𝑗 ∥𝐷 𝑗 𝜁 𝜂 ∥ 𝐿 ∞ ≤ 𝐶 8
if 𝑠 ≥ 1, respectively
dist (𝑢 sm 𝜂 (𝑥), 𝐹) ≤ 𝐶 7 𝑠-ℓ 𝑝 𝑚-ℓ 𝜓 𝜂 (𝑥) 𝜂 𝑝 |𝑢| 𝑊 𝑠,𝑝 (𝜎 𝑚 +𝑄 𝑚 2𝜌𝜂 ) (4.13)
if 0 < (b) 𝜁 𝜂 = 1 in 𝐾 𝑚 𝜂 \ 𝑈 𝑚 𝜂 ;
(c) 𝜁 𝜂 = 0 in 𝐸 𝑚 𝜂 ;
𝑚
𝜂
and 𝑄 𝑚 𝜓 𝜂 (𝑥) ⊂ 𝑈 ℓ 𝜂 + 𝑄 𝑚 𝜌𝜂 , we have
dist (𝑢 sm 𝜂 (𝑥), 𝐹) ≤ 𝐶 6 1-ℓ 𝑠𝑝 𝜓 𝜂 (𝑥) 𝜂 𝑚-ℓ 𝑠𝑝 ∥𝐷𝑢∥ 𝐿 𝑠𝑝 (𝜎 𝑚 +𝑄 𝑚 2𝜌𝜂 ) (4.12)
(d) for every 𝑗 ∈ {1, . . . , 𝑘 + 1},
𝑄 𝑚 𝜌𝜂 . Estimate (4.7), respectively (4.8), is a straightforward consequence of estimate (4.9) for 𝑥 ∈ 𝐸 𝑚 𝜂 ∩ (𝑈 ℓ + 𝑄 𝑚 𝜌𝜂 ), estimate (4.10), respectively (4.11), for 𝑥 ∈ 𝐾 𝑚 𝜂 \ 𝑈 𝑚 𝜂 , and estimate (4.12), respectively (4.13), for 𝑥 ∈ (𝑈 𝑚 𝜂 \ 𝐸 𝑚 𝜂 ) ∩ (𝑈 ℓ 𝜂 + 𝑄 𝑚 𝜌𝜂 ). Before closing this section, we summarize what we have obtained so far. Given a map 𝑢 ∈ 𝑊 𝑠,𝑝 (𝛺; 𝐹), we have constructed a smooth map 𝑢 sm
𝜅𝜂 for every 𝑗 ∈ {1, . . . , 𝑘 + 1}, which ensures that the assumptions of Proposition 4.1 are satisfied. Moreover, we have 0 < 𝜓 𝜂 ≤ (𝜌 -𝜌)𝜂, which implies that, if 𝑥 ∈ 𝑈 ℓ 𝜂 + 𝑄 𝑚 𝜌𝜂 , then 𝑄 𝑚 𝜓 𝜂 (𝑥) (𝑥) ⊂ 𝑈 ℓ 𝜂 +
Proposition 5.3. Let
𝑑 > 𝑠𝑝. Let Φ be as in Proposition 5.2. Let 𝜔 ⊂ ℝ 𝑚 be such that𝑄 3 ⊂ 𝜔 ⊂ 𝐵 𝑚 𝜂 𝑗 ∥𝐷 𝑗 (𝑢 • Φ)∥ 𝐿 𝑝 (𝜔) ≤ 𝐶 𝜂 𝑗+𝜎 |𝐷 𝑗 (𝑢 • Φ)| 𝑊 𝜎,𝑝 (𝜔) ≤ 𝐶 𝑗 𝑖=1𝜂 𝑖 ∥𝐷 𝑖 𝑢∥ 𝐿 𝑝 (𝜔) + 𝜂 𝑖+𝜎 |𝐷 𝑖 𝑢| 𝑊 𝜎,𝑝 (𝜔) ;(d) for every 0 < 𝑠 < +∞, ∥𝑢 • Φ∥ 𝐿 𝑝 (𝜔) ≤ 𝐶 ∥𝑢 ∥ 𝐿 𝑝 (𝜔) ;
1 2 diam 𝜔. (5.3)
𝑐𝜂 for some 𝑐 > 0, and assume that there exists 𝑐 ′ > 0 such that
|𝐵 𝑚 𝜆 (𝑧) ∩ 𝜔| ≥ 𝑐 ′ 𝜆
𝑚 for every 𝑧 ∈ 𝜔 and 0 < 𝜆 ≤ For every 𝑢 ∈ 𝑊 𝑠,𝑝 (𝜔; ℝ 𝜈 ), we have 𝑢 •Φ ∈ 𝑊 𝑠,𝑝 (𝜔; ℝ 𝜈 ), and moreover, the following estimates hold: (a) if 0 < 𝑠 < 1, then |𝑢 • Φ| 𝑊 𝑠,𝑝 (𝜔) ≤ 𝐶|𝑢| 𝑊 𝑠,𝑝 (𝜔) ; (b) if 𝑠 ≥ 1, then for every 𝑗 ∈ {1, . . . , 𝑘}, 𝑗 𝑖=1 𝜂 𝑖 ∥𝐷 𝑖 𝑢∥ 𝐿 𝑝 (𝜔) ; (c) if 𝑠 ≥ 1 and 𝜎 ≠ 0, then for every 𝑗 ∈ {1, . . . , 𝑘},
then for every 𝑗 ∈ {1, . . . , 𝑘}, 𝜂 𝑗 ∥𝐷 𝑗 (𝑢 • Φ)∥ 𝐿 𝑝 (𝑈 𝑚 +𝑄 𝑚 𝜌𝜂 ) ≤ 𝐶 16 𝑗 𝑖=1 𝜂 𝑖 ∥𝐷 𝑖 𝑢 ∥ 𝐿 𝑝 (𝑈 𝑚 +𝑄 𝑚 𝜌𝜂 ) ; (c) if 𝑠 ≥ 1 and 𝜎 ≠ 0, then for every 𝑗 ∈ {1, . . . , 𝑘}, 𝜂 𝑗+𝜎 |𝐷 𝑗 (𝑢 • Φ)| 𝑊 𝜎,𝑝 (𝑈 𝑚 +𝑄 𝑚 𝜌𝜂 ) ≤ 𝐶 17 𝑗 𝑖=1 𝜂 𝑖 ∥𝐷 𝑖 𝑢∥ 𝐿 𝑝 (𝑈 𝑚 +𝑄 𝑚 𝜌𝜂 ) + 𝜂 𝑖+𝜎 |𝐷 𝑖 𝑢| 𝑊 𝜎,𝑝 (𝑈 𝑚 +𝑄 𝑚 𝜌𝜂 ) ; (d) for every 0 < 𝑠 < +∞, ∥𝑢 • Φ∥ 𝐿 𝑝 (𝑈 𝑚 +𝑄 𝑚 𝜌𝜂 ) ≤ 𝐶 18 ∥𝑢 ∥ 𝐿 𝑝 (𝑈 𝑚 +𝑄 𝑚 𝜌𝜂 ) . Conclusion follows by an additional application of the additivity of the integral or Lemma 2.1, by noting that actually Supp Φ ⊂ 𝑈 𝑚 + 𝑄 𝑚 𝜏 ℓ 𝜂 . We close this section with a discussion about how the thickening technique that we investigated inserts itself in the proof of Theorem 1.2. At the end of Section 4, we obtained an estimate on Dist 𝐹 (𝑢 sm 𝜂 ((𝐾 𝑚 \ 𝑈 𝑚 𝜂 ) ∪ (𝑈 ℓ 𝜂 + 𝑄 𝑚 𝜌𝜂 )), where we recall that 𝑢 sm 𝜂 is the map obtained by successively opening and smoothing a map 𝑢 ∈ 𝑊 𝑠,𝑝 (𝛺; 𝐹), with 𝐹 ⊂ ℝ 𝜈 being an arbitrary closed set. Informally, we were able to control the distance between 𝑢 sm 𝜂 and 𝐹 except on the cubes in 𝒰 𝑚 𝜂 , far from the ℓ -skeleton. We apply now thickening to the map 𝑢 sm 𝜂 . Let Φ th 𝜂 be the map provided by Proposition 5.1 applied to 𝒰 𝑚 𝜂 with 𝒮 𝑚 = 𝒦 𝑚 𝜂 and using parameter 𝜌. We set 𝑢 th 𝜂 = 𝑢 sm 𝜂 • Φ th 𝜂 . To have 𝑢 ∈ 𝑊 𝑠,𝑝 along with the estimates provided by Proposition 5.1, we need to take ℓ + 1 > 𝑠𝑝. Since we already required ℓ ≤ 𝑠𝑝 in Section 4, this invites us to work with ℓ = [𝑠𝑝]. 𝐾 𝑚 𝜂 \ 𝑈 𝑚 𝜂 . On the other hand, by inclusion (iii) in Proposition 5.1, we have Φ th 𝜂 (𝑈 𝑚 𝜂 \ 𝑇 ℓ * 𝑠 < 1. Moreover, 𝑢 sm 𝜂 being smooth, the map 𝑢 th 𝜂 is smooth on 𝐾 𝑚 𝜂 \ 𝑇 ℓ * 𝜂 . To summarize, we have obtained a map 𝑢 th 𝜂 which is smooth on 𝐾 𝑚 𝜂 \ 𝑇 ℓ * 𝜂 , and whose distance from 𝐹 is controlled on the whole 𝐾 𝑚 𝜂 \ 𝑇 ℓ * 𝜂 . Now let us get back to the case we are interested in, that is, where 𝐹 = 𝒩. In this case, it is well-known that there exists 𝜄 > 0 such that the nearest point projection Π : 𝒩 + 𝐵 𝑚 𝜄 → 𝒩 is well-defined and smooth. The open set 𝒩 + 𝐵 𝑚 𝜄 is called a tubular neighborhood of 𝒩. Assume that the right-hand side of (5.6) or (5.7) is less than 𝜄. Note that this requires both to take 𝑟 sufficiently small and to choose ℰ 𝑚 𝜂 such that, for every 𝜎 𝑚 ∈ 𝒦 𝑚 𝜂 \ ℰ 𝑚 𝜂 , ∥𝐷𝑢∥ 𝐿 𝑠𝑝 (𝜎 𝑚 +𝑄 𝑚 2𝜌𝜂 ) ≤ Under this assumption, the map 𝑢 𝜂 = Π • 𝑢 th 𝜂 is well-defined and smooth on 𝐾 𝑚 𝜂 \ 𝑇 ℓ * 𝜂 , and takes its values into 𝒩.
if 0 < 𝜂 𝑚 𝑠𝑝 -1 𝐶 𝜄, respectively |𝑢| 𝑊 𝑠,𝑝 (𝜎 𝑚 +𝑄 𝑚 2𝜌𝜂 ) ≤ 𝜂 𝑚 𝑝 -𝑠 𝐶 𝜄. (5.8)
𝜂 ) ⊂ 𝑈 ℓ 𝜂 + 𝑄 𝑚 𝜌𝜂 .
Therefore, Φ th 𝜂 (𝐾 𝑚 𝜂 \ 𝑇 ℓ * 𝜂 ) ⊂ (𝐾 𝑚 𝜂 \ 𝑈 𝑚 𝜂 ) ∪ (𝑈 ℓ 𝜂 + 𝑄 𝑚 𝜌𝜂 ).
Combining this observation with estimate (4.7), respectively (4.8), we deduce that
Dist 𝐹 (𝑢 th 𝜂 (𝐾 𝑚 𝜂 \ 𝑇 ℓ * 𝜂 )) ≤ max max 𝜎 𝑚 ∈𝒦 𝑚 𝜂 \ℰ 𝑚 𝜂 𝐶 𝜂 1 𝑚 𝑠𝑝 -1 ⨏ ∥𝐷𝑢∥ 𝐿 𝑠𝑝 (𝜎 𝑚 +𝑄 𝑚 2𝜌𝜂 ) , ⨏
sup 𝐶 ′ |𝑢 𝜂 (𝑦) -𝑢 op 𝜂 (𝑧)| d𝑦d𝑧 op (5.6)
𝑥∈𝑈 ℓ 𝜂 +𝑄 𝑚 𝜌𝜂 𝑄 𝑚 𝑟 (𝑥) 𝑄 𝑚 𝑟 (𝑥)
if 𝑠 ≥ 1, respectively
Dist 𝐹 (𝑢 th 𝜂 (𝐾 𝑚 𝜂 \ 𝑇 ℓ * 𝜂 )) ≤ max max 𝜎 𝑚 ∈𝒦 𝑚 𝜂 \ℰ 𝑚 𝜂 𝐶 𝜂 1 𝑚 𝑝 -𝑠 ⨏ |𝑢| 𝑊 𝑠,𝑝 (𝜎 𝑚 +𝑄 𝑚 2𝜌𝜂 ) , ⨏
sup 𝐶 ′ |𝑢 𝜂 (𝑦) -𝑢 op 𝜂 (𝑧)| d𝑦d𝑧 op (5.7)
𝑥∈𝑈 ℓ 𝜂 +𝑄 𝑚 𝜌𝜂 𝑄 𝑚 𝑟 (𝑥) 𝑄 𝑚 𝑟 (𝑥)
□ By inclusion (ii) in Proposition 5.1, we have
Φ th 𝜂 (𝐾 𝑚 𝜂 \ (𝑇 ℓ * 𝜂 ∪ 𝑈 𝑚 𝜂 )) ⊂
Int 𝑈 𝑚 𝜂 , we may define 𝜓 𝜂 as at the end of Section 4. Namely, we let 𝜓 𝜂 = 𝑡𝜁 𝜂 + 𝑟(1 -𝜁 𝜂 ), where 𝜁 𝜂 satisfies assumptions (a) to (d) page 41 and 0 < 𝑟 < 𝑡 with 𝑡 defined by (4.14). With this choice, 𝜓 𝜂 satisfies the assumptions of Proposition 4.1, and moreover 0 < 𝜓 𝜂 ≤ 𝜌𝜂. This implies that 𝑄 𝑚 1+𝛾 ⊂ {𝑥 ∈ 𝑄 𝑚 1+2𝛾 : dist (𝑥, 𝜕𝑄 𝑚 1+2𝛾 ) ≥ 𝜓(𝑥)}, and hence 𝑢 sm 𝜂 = 𝜑 𝜓 𝜂 * 𝑢 𝑠 ≥ 1, then for every 𝑗 ∈ {1, . . . , 𝑘}, 𝜂 𝑗 ∥𝐷 𝑗 𝑢 sm 𝜂 -𝐷 𝑗 𝑢 𝜂 𝑗 ∥𝜏 𝜓 𝜂 𝑣 (𝐷 𝑗 𝑢
(b) if op 𝜂 ∥ 𝐿 𝑝 (𝑄 𝑚 1+𝛾 ) ≤ sup 1 𝑣∈𝐵 𝑚 op 𝜂 ) -𝐷 𝑗 𝑢 𝜂 ∥ 𝐿 𝑝 (𝑄 𝑚 op 1+𝛾 )
𝑗
+ 𝐶 5 𝜂 𝑖 ∥𝐷 𝑖 𝑢
𝑖=1
op 𝜂 is well-defined and smooth on 𝑄 𝑚 1+𝛾 . Moreover, Proposition 4.1 and
equation (4.2) for the zero order case applied with 𝜔 = 𝑄 𝑚 1+𝛾 ensure that
(a) if 0 < 𝑠 < 1, then
|𝑢 sm 𝜂 -𝑢 𝜂 | 𝑊 𝑠,𝑝 (𝑄 𝑚 op 1+𝛾 ) ≤ sup 1 𝑣∈𝐵 𝑚 |𝜏 𝜓 𝜂 𝑣 (𝑢 𝜂 ) -𝑢 op 𝜂 | 𝑊 𝑠,𝑝 (𝑄 𝑚 op 1+𝛾 ) ;
op 𝜂 with 𝛺 = 𝑄 𝑚 1+2𝛾 . Let 𝜑 ∈ 𝐵 𝑚 1 be a fixed mollifier. Since 𝐸 𝑚 𝜂 ⊂ op 𝜂 ∥ 𝐿 𝑝 (𝐴) ;
𝑝 (𝐾 𝑚 ∩(𝑇 ℓ * +𝑄 𝑚 2𝜇𝜂 )) + ∥𝑢 ∥ 𝐿 𝑝 (𝐾 𝑚 ∩(𝑇 ℓ * +𝑄 𝑚 𝑠 |𝑣| 𝑊 𝑠,𝑝 (𝐾 𝑚 ∩(𝑇 ℓ * +𝑄 𝑚 2𝜇𝜂 )) + 𝐶 ∥𝑣∥ 𝐿 𝑝 (𝐾 𝑚 ∩(𝑇 ℓ * +𝑄 𝑚 2𝜇𝜂 )) ; (b) if 𝑠 ≥ 1,then for every 𝑗 ∈ {1, . . . , 𝑘}, (𝜇𝜂) 𝑗 ∥𝐷 𝑗 (𝑢 • Φ) -𝐷 𝑗 𝑣 ∥ 𝐿 𝑝 (𝐾 𝑚 ) ≤ 𝐶 ′ 𝑖 ∥𝐷 𝑖 𝑣∥ 𝐿 𝑝 (𝐾 𝑚 ∩(𝑇 ℓ * +𝑄 𝑚 2𝜇𝜂 )) ; (c) if 𝑠 ≥ 1 and 𝜎 ≠ 0, then for every 𝑗 ∈ {1, . . . , 𝑘}, (𝜇𝜂) 𝑗+𝜎 |𝐷 𝑗 (𝑢 • Φ) -𝐷 𝑗 𝑣| 𝑊 𝜎,𝑝 (𝐾 𝑚 ) 𝑖 ∥𝐷 𝑖 𝑣∥ 𝐿 𝑝 (𝐾 𝑚 ∩(𝑇 ℓ * +𝑄 𝑚 2𝜇𝜂 )) + (𝜇𝜂) 𝑖+𝜎 |𝐷 𝑖 𝑣| 𝑊 𝜎,𝑝 (𝐾 𝑚 ∩(𝑇 ℓ * +𝑄 𝑚 2𝜇𝜂 )) ; (d) for every 0 < 𝑠 < +∞, ∥𝑢 • Φ -𝑣∥ 𝐿 𝑝 (𝐾 𝑚 ) ≤ 𝐶 ′ ∥𝑣 ∥ 𝐿 𝑝 (𝐾 𝑚 ∩(𝑇 ℓ * +𝑄 𝑚 2𝜇𝜂 )) ;
𝑖=1 𝑗 (𝜇𝜂) ≤ 𝐶 ′ + 𝐶(𝜇𝜂) 𝑗 𝑖=1 (𝜇𝜂) 2𝜇𝜂 ))
Finally, we let 𝜃(𝑥 ′′ ) = θ(|𝑥 ′′ | 𝑞 ). With this definition, the map 𝜃 is smooth and satisfies 𝜃(𝑥 ′′ ) = 0 if 𝑥 ′′ ∈ 𝑄 𝑚-𝑑 1-𝜇 𝜇 and 𝜃(𝑥 ′′ ) = 1 if 𝑥 ′′ ∈ ℝ 𝑚-𝑑 \ 𝑄 𝑚-𝑑The number 𝜀 > 0 is to be determined later on, depending only on 𝜇/𝜇. As we will see in the course of the proof, the extra term involving 𝜏, which was not present in Section 5, serves to obtain a desingularized construction.We are now ready to state the geometric properties of Φ, which are the purpose of Proposition 7.2 below. Let 𝑑 ∈ {1, . . . , 𝑚}, 𝜂 > 0, 0 < 𝜇 < 𝜇 < 𝜇 < 1, and 0 < 𝜏 < 𝜇/𝜇. There exists a smooth function Φ : ℝ 𝑚 → ℝ 𝑚 of the form Φ(𝑥) = (𝜆(𝑥)𝑥 ′ , 𝑥 ′′ ), with 𝜆 : ℝ 𝑚 → [1, +∞), and such that 𝜏 𝑑 , for some constant 𝐶 ′ > 0 depending on 𝛽, 𝑗, 𝑚, 𝜇/𝜇 and 𝜇/𝜇.Proof. As we announced, we use the same construction as in[START_REF]Strong density for higher order Sobolev spaces into compact manifolds[END_REF] Proposition 8.3]. Similar to thickening, we start by constructing an intermediate map Ψ : ℝ 𝑚 → ℝ 𝑚 which satisfies the conclusion of Proposition 7.2 with the rectangles 𝑄 𝑖 replaced by the cylinders 𝐵 𝑖 defined as 𝐵 2
1-𝜇 .
𝜇
Proposition 7.2. (i) Φ is injective;
(ii) Supp Φ ⊂ 𝑄 3 ;
(iii) Φ(𝐵 1 ) ⊃ 𝑄 2 ;
(iv) for every 𝑥 ∈ 𝑄 3 , |𝐷 𝑗 Φ(𝑥)| ≤ 𝐶 𝜇𝜂 𝜁 𝑗 (𝑥) for every 𝑗 ∈ ℕ
* , and for every 𝑥 ∈ ℝ 𝑚 ,
|𝐷 𝑗 Φ(𝑥)| ≤ 𝐶 (𝜇𝜂)
1-𝑗 𝜏 𝑗 for every 𝑗 ∈ ℕ * , for some constant 𝐶 > 0 depending on 𝑗, 𝑚, 𝜇/𝜇 and 𝜇/𝜇; (v) for every 𝑥 ∈ ℝ 𝑚 , jac Φ(𝑥) ≥ 𝐶 ′ (𝜇𝜂) 𝛽 𝜁 𝛽 (𝑥) for every 0 < 𝛽 < 𝑑, and for every 𝑥 ∈ 𝐵 1 , jac Φ(𝑥) ≥ 𝐶 ′ 1
Proposition 7.3. Let d > sp. Let Φ be as in Proposition 7.2. Let ω ⊂ ℝ^m be such that Q_2 ⊂ ω ⊂ B^m_{cμη} for some c > 0, and assume that there exists c′ > 0 such that
|B^m_λ(z) ∩ (ω \ Q_2)| ≥ c′ λ^m for every z ∈ ω \ Q_2 and 0 < λ ≤ ½ diam ω.    (7.2)
For every u ∈ W^{s,p}(ω; ℝ^ν), we have u∘Φ ∈ W^{s,p}(Φ^{−1}(ω); ℝ^ν), and moreover, the following estimates hold:
(a) if 0 < s < 1, then |u∘Φ|_{W^{s,p}(Φ^{−1}(ω))} ≤ C |u|_{W^{s,p}(ω\Q_2)} + C τ^{(d−sp)/p} |u|_{W^{s,p}(ω)};
(b) if s ≥ 1, then for every j ∈ {1, …, k}, (μη)^j ‖D^j(u∘Φ)‖_{L^p(Φ^{−1}(ω))} ≤ C Σ_{i=1}^{j} (μη)^i ‖D^i u‖_{L^p(ω\Q_2)} + C τ^{(d−jp)/p} Σ_{i=1}^{j} (μη)^i ‖D^i u‖_{L^p(ω)};
(c) if s ≥ 1 and σ ≠ 0, then for every j ∈ {1, …, k}, (μη)^{j+σ} |D^j(u∘Φ)|_{W^{σ,p}(Φ^{−1}(ω))} ≤ C Σ_{i=1}^{j} [(μη)^i ‖D^i u‖_{L^p(ω\Q_2)} + (μη)^{i+σ} |D^i u|_{W^{σ,p}(ω\Q_2)}] + C τ^{(d−(j+σ)p)/p} Σ_{i=1}^{j} [(μη)^i ‖D^i u‖_{L^p(ω)} + (μη)^{i+σ} |D^i u|_{W^{σ,p}(ω)}];
(d) for every 0 < s < +∞, ‖u∘Φ‖_{L^p(Φ^{−1}(ω))} ≤ C ‖u‖_{L^p(ω\Q_2)} + C τ^{d/p} ‖u‖_{L^p(ω)}.
… ∫∫∫ |u∘Φ(x) − u(z)|^p / |x − y|^{m+sp} dz dy dx. Observe that |ℬ_{x,y}| ≥ C_{15} |Φ(x) − Φ(y)|^m due to the fact that Q_2 is a rectangle with comparable sidelengths. Moreover, |Φ(x) − z| ≤ (3/2) |Φ(x) − Φ(y)|. Hence, using Tonelli's theorem, we find
(a) if 0 < s < 1, then
|u∘Φ^{σ_d}|_{W^{s,p}(T^{σ_d}(Q_4)\(T^{m−d−1}+Q^m_{μ_d η}))} ≤ C_1 |u|_{W^{s,p}(T^{σ_d}(Q_4)\(T^{m−d}+Q^m_{μ_{d−1} η}))} + C_2 τ^{(d−sp)/p} |u|_{W^{s,p}(T^{σ_d}(Q_4))};
(b) if s ≥ 1, then for every j ∈ {1, …, k},
(μη)^j ‖D^j(u∘Φ^{σ_d})‖_{L^p(T^{σ_d}(Q_4)\(T^{m−d−1}+Q^m_{μ_d η}))} ≤ C_3 Σ_{i=1}^{j} (μη)^i ‖D^i u‖_{L^p(T^{σ_d}(Q_4)\(T^{m−d}+Q^m_{μ_{d−1} η}))} + C_4 τ^{(d−jp)/p} Σ_{i=1}^{j} (μη)^i ‖D^i u‖_{L^p(T^{σ_d}(Q_4))};
(c) if s ≥ 1 and σ ≠ 0, then for every j ∈ {1, …, k},
(μη)^{j+σ} |D^j(u∘Φ^{σ_d})|_{W^{σ,p}(Q_4\(T^{m−d−1}+Q^m_{μ_d η}))} ≤ C_5 Σ_{i=1}^{j} [(μη)^i ‖D^i u‖_{L^p(T^{σ_d}(Q_4)\(T^{m−d}+Q^m_{μ_{d−1} η}))} + (μη)^{i+σ} |D^i u|_{W^{σ,p}(T^{σ_d}(Q_4)\(T^{m−d}+Q^m_{μ_{d−1} η}))}] + C_6 τ^{(d−(j+σ)p)/p} Σ_{i=1}^{j} [(μη)^i ‖D^i u‖_{L^p(T^{σ_d}(Q_4))} + (μη)^{i+σ} |D^i u|_{W^{σ,p}(T^{σ_d}(Q_4))}].
For the map Ψ_d, the analogous estimates read:
(a) if 0 < s < 1, then (μη)^s |u∘Ψ_d|_{W^{s,p}(K^m∩(T^{ℓ*}+Q^m_{2μη})\(T^{m−d−1}+Q^m_{μ_d η}))} ≤ C_9 [(μη)^s |u|_{W^{s,p}(K^m∩(T^{ℓ*}+Q^m_{2μη})\(T^{m−d}+Q^m_{μ_{d−1} η}))} + ‖u‖_{L^p(K^m∩(T^{ℓ*}+Q^m_{2μη})\(T^{m−d}+Q^m_{μ_{d−1} η}))}] + C_10 τ^{(ℓ+1−sp)/p} [(μη)^s |u|_{W^{s,p}(K^m∩(T^{ℓ*}+Q^m_{2μη}))} + ‖u‖_{L^p(K^m∩(T^{ℓ*}+Q^m_{2μη}))}];
(b) if s ≥ 1, then for every j ∈ {1, …, k}, (μη)^j ‖D^j(u∘Ψ_d)‖_{L^p(K^m∩(T^{ℓ*}+Q^m_{2μη})\(T^{m−d−1}+Q^m_{μ_d η}))} ≤ C_11 Σ_{i=1}^{j} (μη)^i ‖D^i u‖_{L^p(K^m∩(T^{ℓ*}+Q^m_{2μη})\(T^{m−d}+Q^m_{μ_{d−1} η}))} + C_12 τ^{(ℓ+1−sp)/p} Σ_{i=1}^{j} (μη)^i ‖D^i u‖_{L^p(K^m∩(T^{ℓ*}+Q^m_{2μη}))};
(c) if s ≥ 1 and σ ≠ 0, then for every j ∈ {1, …, k}, (μη)^{j+σ} |D^j(u∘Ψ_d)|_{W^{σ,p}(K^m∩(T^{ℓ*}+Q^m_{2μη})\(T^{m−d−1}+Q^m_{μ_d η}))} ≤ C_13 Σ_{i=1}^{j} [(μη)^i ‖D^i u‖_{L^p(K^m∩(T^{ℓ*}+Q^m_{2μη})\(T^{m−d}+Q^m_{μ_{d−1} η}))} + (μη)^{i+σ} |D^i u|_{W^{σ,p}(K^m∩(T^{ℓ*}+Q^m_{2μη})\(T^{m−d}+Q^m_{μ_{d−1} η}))}] + C_14 τ^{(ℓ+1−sp)/p} Σ_{i=1}^{j} [(μη)^i ‖D^i u‖_{L^p(K^m∩(T^{ℓ*}+Q^m_{2μη}))} + (μη)^{i+σ} |D^i u|_{W^{σ,p}(K^m∩(T^{ℓ*}+Q^m_{2μη}))}];
(d) for every 0 < s < +∞, ‖u∘Ψ_d‖_{L^p(K^m∩(T^{ℓ*}+Q^m_{2μη})\(T^{m−d−1}+Q^m_{μ_d η}))} ≤ C_15 ‖u‖_{L^p(K^m∩(T^{ℓ*}+Q^m_{2μη})\(T^{m−d}+Q^m_{μ_{d−1} η}))} + C_16 τ^{(ℓ+1−sp)/p} ‖u‖_{L^p(K^m∩(T^{ℓ*}+Q^m_{2μη}))}.
In particular, since τ < 1, another application of Proposition 7.3 yields the following simpler estimates:
(a) if 0 < s < 1, then (μη)^s |u∘Ψ_d|_{W^{s,p}(K^m∩(T^{ℓ*}+Q^m_{2μη}))} ≤ C_17 [(μη)^s |u|_{W^{s,p}(K^m∩(T^{ℓ*}+Q^m_{2μη}))} + ‖u‖_{L^p(K^m∩(T^{ℓ*}+Q^m_{2μη}))}];
(b) if s ≥ 1, then for every j ∈ {1, …, k}, (μη)^j ‖D^j(u∘Ψ_d)‖_{L^p(K^m∩(T^{ℓ*}+Q^m_{2μη}))} ≤ C_18 Σ_{i=1}^{j} (μη)^i ‖D^i u‖_{L^p(K^m∩(T^{ℓ*}+Q^m_{2μη}))};
(c) if s ≥ 1 and σ ≠ 0, then for every j ∈ {1, …, k}, (μη)^{j+σ} |D^j(u∘Ψ_d)|_{W^{σ,p}(K^m∩(T^{ℓ*}+Q^m_{2μη}))} ≤ C_19 Σ_{i=1}^{j} [(μη)^i ‖D^i u‖_{L^p(K^m∩(T^{ℓ*}+Q^m_{2μη}))} + (μη)^{i+σ} |D^i u|_{W^{σ,p}(K^m∩(T^{ℓ*}+Q^m_{2μη}))}];
(d) for every 0 < s < +∞, ‖u∘Ψ_d‖_{L^p(K^m∩(T^{ℓ*}+Q^m_{2μη}))} ≤ C_20 ‖u‖_{L^p(K^m∩(T^{ℓ*}+Q^m_{2μη}))}.
Combining both these sets of estimates through a downward induction procedure on d, we arrive at the following estimates:
(a) if 0 < s < 1, then (μη)^s |u∘Φ|_{W^{s,p}(K^m∩(T^{ℓ*}+Q^m_{2μη}))} ≤ C_21 [(μη)^s |u|_{W^{s,p}(K^m∩(T^{ℓ*}+Q^m_{2μη})\(T^{ℓ*}+Q^m_{μη}))} + ‖u‖_{L^p(K^m∩(T^{ℓ*}+Q^m_{2μη})\(T^{ℓ*}+Q^m_{μη}))}] + C_22 τ^{(ℓ+1−sp)/p} [(μη)^s |u|_{W^{s,p}(K^m∩(T^{ℓ*}+Q^m_{2μη}))} + ‖u‖_{L^p(K^m∩(T^{ℓ*}+Q^m_{2μη}))}];
(b) if s ≥ 1, then for every j ∈ {1, …, k}, (μη)^j ‖D^j(u∘Φ)‖_{L^p(K^m∩(T^{ℓ*}+Q^m_{2μη}))} ≤ C_23 Σ_{i=1}^{j} (μη)^i ‖D^i u‖_{L^p(K^m∩(T^{ℓ*}+Q^m_{2μη})\(T^{ℓ*}+Q^m_{μη}))} + C_24 τ^{(ℓ+1−sp)/p} Σ_{i=1}^{j} (μη)^i ‖D^i u‖_{L^p(K^m∩(T^{ℓ*}+Q^m_{2μη}))};
(c) if s ≥ 1 and σ ≠ 0, then for every j ∈ {1, …, k}, (μη)^{j+σ} |D^j(u∘Φ)|_{W^{σ,p}(K^m∩(T^{ℓ*}+Q^m_{2μη}))} ≤ C_25 Σ_{i=1}^{j} [(μη)^i ‖D^i u‖_{L^p(K^m∩(T^{ℓ*}+Q^m_{2μη})\(T^{ℓ*}+Q^m_{μη}))} + (μη)^{i+σ} |D^i u|_{W^{σ,p}(K^m∩(T^{ℓ*}+Q^m_{2μη})\(T^{ℓ*}+Q^m_{μη}))}] + C_26 τ^{(ℓ+1−sp)/p} Σ_{i=1}^{j} [(μη)^i ‖D^i u‖_{L^p(K^m∩(T^{ℓ*}+Q^m_{2μη}))} + (μη)^{i+σ} |D^i u|_{W^{σ,p}(K^m∩(T^{ℓ*}+Q^m_{2μη}))}];
(d) for every 0 < s < +∞, ‖u∘Φ‖_{L^p(K^m∩(T^{ℓ*}+Q^m_{2μη}))} ≤ C_27 ‖u‖_{L^p(K^m∩(T^{ℓ*}+Q^m_{2μη})\(T^{ℓ*}+Q^m_{μη}))} + C_28 τ^{(ℓ+1−sp)/p} ‖u‖_{L^p(K^m∩(T^{ℓ*}+Q^m_{2μη}))}.
Consequently:
(a) if 0 < s < 1, then (μη)^s |u^sh_{τ_μ,μ} − u|_{W^{s,p}(K^m)} ≤ C_1 [(μη)^s |u|_{W^{s,p}(K^m∩(T^{ℓ*}+Q^m_{2μη}))} + ‖u‖_{L^p(K^m∩(T^{ℓ*}+Q^m_{2μη}))}];
(b) if s ≥ 1, then for every j ∈ {1, …, k}, (μη)^j ‖D^j u^sh_{τ_μ,μ} − D^j u‖_{L^p(K^m)} ≤ C_2 Σ_{i=1}^{j} (μη)^i ‖D^i u‖_{L^p(K^m∩(T^{ℓ*}+Q^m_{2μη}))};
(c) if s ≥ 1 and σ ≠ 0, then for every j ∈ {1, …, k}, (μη)^{j+σ} |D^j u^sh_{τ_μ,μ} − D^j u|_{W^{σ,p}(K^m)} ≤ C_3 Σ_{i=1}^{j} [(μη)^i ‖D^i u‖_{L^p(K^m∩(T^{ℓ*}+Q^m_{2μη}))} + (μη)^{i+σ} |D^i u|_{W^{σ,p}(K^m∩(T^{ℓ*}+Q^m_{2μη}))}];
(d) for every 0 < s < +∞, ‖u^sh_{τ_μ,μ} − u‖_{L^p(K^m)} ≤ C_4 ‖u‖_{L^p(K^m∩(T^{ℓ*}+Q^m_{2μη}))}.
Moreover, ‖u‖_{L^p(K^m∩(T^{ℓ*}+Q^m_{2μη}))} ≤ C_6 |K^m ∩ (T^{ℓ*}+Q^m_{2μη})|^{1/p} …. On the other hand, we observe that |K^m ∩ (T^{ℓ*}+Q^m_{2μη})| ≤ C_7 (μη)^{ℓ+1}. Therefore,
(a) if 0 < s < 1, then |u^sh_{τ_μ,μ} − u|_{W^{s,p}(K^m)} ≤ C_8 |u|_{W^{s,p}(K^m∩(T^{ℓ*}+Q^m_{2μη}))} + C_9 (μη)^{(ℓ+1−sp)/p};
(b) if s ≥ 1, then for every j ∈ {1, …, k}, ‖D^j u^sh_{τ_μ,μ} − D^j u‖_{L^p(K^m)} ≤ C_10 Σ_{i=1}^{j} (μη)^{i−j+(s−i)(ℓ+1)/(sp)} ‖D^i u‖_{L^{sp/i}(K^m∩(T^{ℓ*}+Q^m_{2μη}))};
(c) if s ≥ 1 and σ ≠ 0, then for every j ∈ {1, …, k}, |D^j u^sh_{τ_μ,μ} − D^j u|_{W^{σ,p}(K^m)} ≤ C_11 Σ_{i=1}^{j} (μη)^{i−j−σ+(s−i)(ℓ+1)/(sp)} ‖D^i u‖_{L^{sp/i}(K^m∩(T^{ℓ*}+Q^m_{2μη}))} + C_12 Σ_{i=1}^{k−1} (μη)^{i−j+(s−i−σ)(ℓ+1)/(sp)} ‖D^i u‖^{1−σ}_{L^{sp/i}(K^m∩(T^{ℓ*}+Q^m_{2μη}))} ‖D^{i+1} u‖^{σ}_{L^{sp/(i+1)}(K^m∩(T^{ℓ*}+Q^m_{2μη}))} + C_13 |D^j u|_{W^{σ,p}(K^m∩(T^{ℓ*}+Q^m_{2μη}))};
(d) for every 0 < s < +∞, ‖u^sh_{τ_μ,μ} − u‖_{L^p(K^m)} ≤ C_14 (μη)^{(ℓ+1)/p}.
Theorem 8.3. Let 𝛺 ⊂ ℝ 𝑚 be a smooth bounded open domain. If 𝑠𝑝 < 𝑚, if 𝜋 [𝑠𝑝] (𝒩) = {0}, and if 𝛺 has the [𝑠𝑝]-extension property with respect to 𝒩, then 𝒞 ∞ (𝛺; 𝒩) is dense in 𝑊 𝑠,𝑝 (𝛺; 𝒩).
Acknowledgements
I am deeply grateful to Petru Mironescu and Augusto Ponce for introducing me to this beautiful topic, for their constant support and many helpful suggestions to improve the exposition. I especially thank Petru Mironescu for long discussions concerning the paper, and Augusto Ponce for sharing and discussing with me the preprint [11].
… ≤ C′ Σ_{i=1}^{j} (μη)^i ‖D^i u‖_{L^p(K^m∩(T^{ℓ*}+Q^m_{2μη}))} + C (μη)^j ‖D^j v‖_{L^p(K^m∩(T^{ℓ*}+Q^m_{2μη}))};
(c) if s ≥ 1 and σ ≠ 0, then for every j ∈ {1, …, k}, …;
for some constant C > 0 depending on m, s, and p.
For integer order estimates, we could avoid mentioning the map 𝑣 in the statement of Proposition 7.1 and only establish energy estimates for 𝑢 • Φ alone on 𝐾 𝑚 ∩ (𝑇 ℓ * + 𝑄 𝑚 2𝜇𝜌 ), as in [START_REF]Strong density for higher order Sobolev spaces into compact manifolds[END_REF], as the estimates above then follow from the assumption 𝑢 = 𝑣 outside of 𝑇 ℓ * + 𝑄 𝑚 2𝜇𝜌 using the additivity of the integral. However, for fractional order estimates, we face the usual problem linked to the lack of additivity of the Gagliardo seminorm.
We pause here to explain how Proposition 7.1 will be used in the proof of Theorem 1.1. Given 𝑢 and 𝑣 as above, Proposition 7.1 allows us to control, via a suitable choice of 𝜏 > 0, the energy of 𝑢 • Φ in terms of the energy of 𝑣 alone. Indeed, given 𝜇 > 0, if we choose 𝜏 𝜇 sufficiently small -depending on 𝑣 -then, using the fact that 𝑢 = 𝑣 outside of 𝑇 ℓ * + 𝑄 𝑚 𝜇𝜂 , we find
Conclusion follows from the fact that 𝑢 = 𝑣 outside of 𝑇 ℓ * + 𝑄 𝑚 𝜇𝜂 , by noting that actually Supp Φ ⊂ 𝑇 ℓ * + 𝑄 𝑚 𝜈 𝑚 𝜂 , and using once again the additivity of the integral or Lemma 2.1.
□
8 Density of smooth maps
In view of Theorem 1.2, in order to prove Theorem 1.1, it suffices to show that maps of the class ℛ may be approximated by smooth maps with values into 𝒩. As we already announced, the basic idea to do so is to remove the singularities of maps in the class ℛ by filling them with a smooth map. The key tool in this direction is the following lemma, which relies on the fact that 𝐾 ℓ is a homotopy retract of the complement 𝐾 𝑚 \ 𝑇 ℓ * of the dual skeleton 𝑇 ℓ * . The statement we present is from [9, Proposition 7.1], but similar ideas were already used, e.g., in [34, Section 1], [21, Section 2], or [START_REF] Hang | Topology of Sobolev mappings[END_REF]Section 6]. Lemma 8.1. Let 𝒦 𝑚 be a cubication in ℝ 𝑚 of radius 𝜂 > 0, ℓ ∈ {0, . . . , 𝑚 -1}, 𝒯 ℓ * the dual skeleton of 𝒦 ℓ , and 𝑢 ∈ 𝒞 ∞ (𝐾 𝑚 \ 𝑇 ℓ * ; 𝒩). If there exists 𝑓 ∈ 𝒞 0 (𝐾 𝑚 ; 𝒩) such that 𝑓 |𝐾 ℓ = 𝑢 |𝐾 ℓ , then for every 0 < 𝜇 < 1, there exists 𝑣 ∈ 𝒞 ∞ (𝐾 𝑚 ; 𝒩) such that 𝑣 = 𝑢 on 𝐾 𝑚 \ (𝑇 ℓ * + 𝑄 𝑚 𝜇𝜂 ).
In order to apply Lemma 8.1, it is useful to know when a continuous map from 𝐾 ℓ to 𝒩 may be extended to a continuous map from 𝐾 𝑚 to 𝒩. Following Hang and Lin [START_REF] Hang | Topology of Sobolev mappings[END_REF], we introduce the notion of extension property.
Let 𝒦 𝑚 be a cubication in ℝ 𝑚 and ℓ ∈ {0, . . . , 𝑚-1}. We say that 𝒦 𝑚 has the ℓ -extension property with respect to 𝒩 whenever, for every continuous map 𝑓 : 𝐾 ℓ +1 → 𝒩, 𝑓 |𝐾 ℓ has an extension 𝑔 ∈ 𝒞 0 (𝐾 𝑚 ; 𝒩). The identification of the key role played by the extension property in the strong density problem was one of the major contributions of [START_REF] Hang | Topology of Sobolev mappings[END_REF]. In this respect, we start with the following proposition, which provides an approximation results for maps in the class ℛ as the ones used in the proof of Theorem 1.2. All the other results in this section, starting with Theorem 1.1, will be deduced from this proposition. Proposition 8.2. Let 𝒦 𝑚 be a cubication in ℝ 𝑚 . Let ℓ ∈ {0, . . . , 𝑚 -1} be such that ℓ = [𝑠𝑝], and 𝒯 ℓ * the dual skeleton of 𝒦 ℓ . If 𝜋 ℓ (𝒩) = {0} and if 𝐾 𝑚 has the ℓ -extension property with respect to 𝒩, then 𝒞 ∞ (𝐾 𝑚 ; 𝒩) is dense in 𝒞 ∞ (𝐾 𝑚 \ 𝑇 ℓ * ; 𝒩) ∩ 𝑊 𝑠,𝑝 (𝐾 𝑚 ; 𝒩) with respect to the 𝑊 𝑠,𝑝 distance.
Proof. Let 𝑢 ∈ 𝒞 ∞ (𝐾 𝑚 \𝑇 ℓ * ; 𝒩)∩𝑊 𝑠,𝑝 (𝐾 𝑚 ; 𝒩). We denote by 𝜂 the radius of the cubication 𝒦 𝑚 . Using the assumption 𝜋 ℓ (𝒩) = {0}, we may extend 𝑢 |𝐾 ℓ to a continuous map from 𝐾 ℓ +1 to 𝒩. The ℓ -extension property of 𝒦 𝑚 with respect to 𝒩 then ensures that 𝑢 |𝐾 ℓ extends to a continuous map from 𝐾 𝑚 to 𝒩. Therefore, Lemma 8.1 implies that, for every 0 < 𝜇 < 1, there exists a map 𝑢 ex 𝜇 ∈ 𝒞 ∞ (𝐾 𝑚 ; 𝒩) such that 𝑢 ex 𝜇 = 𝑢 on 𝐾 𝑚 \ (𝑇 ℓ * + 𝑄 𝑚 𝜇𝜂 ). We now apply shrinking to this map 𝑢 ex 𝜇 . More precisely, we assume that 𝜇 < 1 2 , we take 0 < 𝜏 < 1 2 and we define 𝑢 sh 𝜏,𝜇 = 𝑢 ex 𝜇 •Φ sh 𝜏,𝜇 , where Φ sh 𝜏,𝜇 is provided by Proposition 7.1. By Proposition 7.1 and the remark below, choosing 𝜏 = 𝜏 𝜇 sufficiently small, we deduce that |
00410200 | en | [
"spi.nrj"
] | 2024/03/04 16:41:20 | 2009 | https://hal.science/hal-00410200/file/Beroual_IMETI2009.pdf | Y N Jaffré
T Aka-Ngnui
A Beroual
Non-Thermal Plasmas for NOx Treatment
Keywords: Corona Discharges, Energy, Non-Thermal Plasmas, Gas Treatments
This work is devoted to the determination of corona ignition threshold for non-thermal plasma generation and to the optimization of various kinds of plasma reactor geometries for exhaust gas treatment applications. The tested plasma reactor geometries were cylindrical. Some reactors have dielectric barrier made of glass or quartz in order to observe the discharges. First, the distributions of electric field and energy in reactors have been simulated. Then, current and voltage waveforms have been measured to check the discharge appearance. Geometrical singularities have been detected by both luminescence and current measurements to avoid unwanted current distortions. The experimental analyze shows the evolution of resistive and reactive components of current associated to corona discharges when increasing the voltage.
INTRODUCTION
Nitrogen oxides (NOx) are among the most important and critical pollutants that need to be reduced in exhaust gases. The present Selective Catalytic Reduction (SCR) processes can be improved for NOx reduction by a Non-Thermal Plasma (NTP) treatment. Plasmas are ionized gases, whose active species are generated by applying high electric fields in a plasma reactor. NTP alone does not reduce NO generated by a thermal engine to N2 and O2, but provides an oxidation of NO to NO 2 upstream of a traditional Selective Catalytic Reduction. The efficiency of this system depends on two main parameters: power supply and geometry. Plasmas have been studied for gas treatment since 1980. First applications were dedicated to power plants, for example in Riace, Italy, for ENEL's coal fired electrical generation plant (Civitano et al., 1986). Research for automotive transportation exhaust gas treatment started in the middle of the 1990's. Nowadays, the purpose is to assist the newest catalyst processes with non-thermal plasmas (NTP) to optimize the energetic consumptions and to improve the technology by mastering the discharge. This kind of treatment is of great importance since transportation exhaust gas regulations are becoming ever more stringent. European norm (standard specification) EURO 6 im-poses a reduction of 50% on automotive NOx emissions within 2015. This paper aims at the generation and optimization of plasmas for pollution control on automotive thermal engines. In the first part, we recall some fundamental features of discharge mechanisms. The second part lights on the influence of the electric field for NTP generation and link it to the electron energies. The third part shows results of external electric field simulations. And the last part presents respectively the experimental set-up and the measurements on NTP reactor.
DISCHARGE MECHANISMS
The current versus voltage characteristic is plotted on figure 1 for DC voltage supply with an interelectrode gap of 1 cm, plane to plane electrode geometry and standard air as dielectric. Three main regions can be observed and each region corresponds to a given mode. The first mode (dark zone) is characterized by the absence of luminous phenomena and by very weak currents. The applied voltage and thence the electric field induce elastic collisions and possible ionizations due to natural or artificial radiations. Since the discharge cannot be maintained by itself, it is called "non-autonomous". The second mode is characterized by a glow, a luminous activity defined on Figure 1 as the corona discharges. The latter starts when U exceeds U 0 ; thence the electric field brings sufficient energies to the electrons that ionize or dissociate molecules. Each electron newly generated contributes to electronic avalanches generation. Since the discharge propagates, the current increases, and the voltage falls down. Arc discharges involve high currents. The electrical energy consumption is the highest where arc discharges occur. The arc regime can be linked up with short circuits. Note that for the above conditions, the dielectric strength is of about 30kV/cm. Dark and Arc discharges have no interest for NTP generation. Dark discharges do not lead to oxidation or reduction processes. Arc discharges induce thermal mechanisms, high electrical energy consumptions and device destructions.
NTP PHENOMENA KNOWLEDGE
Our purpose is pollution control using corona discharges. These discharges are generated when the applied voltage (hence the electric field) exceeds a threshold value higher than the ionization and dissociation voltages of the considered gas. The electrons and ions The electrons move toward the anode and the ions (positive carriers) toward the cathode. By considering the NTP conditions, neutral molecules and ions are motionless compared to the electrons. The electron mobility can be approximated from the Chapman-Enskog theory of diffusion and simplified to:
µ = q / (m.υ)    (1)
where µ is the mobility, q the elementary charge (electron charge), m the mass of the electron and υ is the momentum transfer collision frequency. The mass of an electron is at least 2000 times less than any molecule. For an NTP, the external electric field mainly contributes to enhance electron energies. The electron drift velocities depend upon the magnitude, frequency and direction of the electric field:
ν = µ.E (2)
ν is the drift velocity and E is the external electric field.
The electrons are submitted to the Lorenz force which is:
F = q. E (3)
As concerns the current density j, it is given by the following relationship:
j = n e .q.µ.E (4)
where n e is the electron density. The kinetic energy of electrons is reliable to the enthalpy such as:
(1/2).m.ν² = (3/2).k.T e    (5)
k is the Boltzmann constant and T e is the electron temperature expressed in eV (1eV=1.60.10 -19 J=11605K).
The statistical distribution of velocities of species is given by the Maxwell-Boltzmann statistical distribution equation. The electron energy distribution function can be defined as Maxwellian. Only a part of electrons can compete for dissociation and ionization processes. The equations above link the electron energy with the applied external electric field. By considering air at standard conditions of temperature and pressure, the physical mechanisms can be described by a series of complex equations (space charges, species interactions, transport, diffusion...). However, the external electric field remains a major parameter for NTP processes. Collisions occur according to parameters such as the energies, density of the species or collision cross sections. The mean free path of an electron is the drift distance between 2 electronic collisions:
λ = 1 / (n g .σ)    (6)        λ = ν / υ    (7)
λ is the mean free path, n g is the molecular density of the surrounding gas and s is the cross section. For standard conditions in air, λ ≈1.10 -6 m; this can lead to either slowing electrons (elastic collisions) or generating electronic avalanche (inelastic collisions). The latter suggests that a satisfying electric field can produce adequate energetic electrons. More specifically, inelastic collisions lead to internal modifications of involved species. Some examples among numerous possible reactions are listed below:
e + O 2 → 2e + O + O +    (8)
e + O 2 → 2e + O 2 +    (9)
e + H 2 O → 2e + OH + H +    (10)
e + H 2 O → H - + OH    (11)
O - + NO → NO 2 + e    (12)
When considering a pure NO gas, the dissociation into N and O through inelastic collisions easily arises. The dissociation energy of NO is fixed at 6.5eV ([Pen93]); this value is obtained for a rough estimate electric field E=5.8MV.m -1 with λ=1.4.10 -6 m. The exhaust gases are mainly composed of N 2 and O 2 . Therefore, NTP does not generate a reduction but an oxidation. The oxidation takes place for energy levels above 5eV. The corona discharge ignition in the air can be calculated for cylindrical geometries from Peek formula
E c = 31.δ.(1 + 0.308/√(δ.r))    (13)
with δ = 3.92 . p / T g    (14)
E c is the peak value of the corona field in (kV.cm -1 ), r is the radius of the conductor in (cm), p is the pressure in (cm.Hg) and Tg is the temperature in (Kelvin). The electric field distribution depends upon the applied voltage and the geometrical parameters. As the considered geometry is coaxial, the electric field E can be calculated using Poisson's equation
∆V = -(q/ε 0 ).(n i - n e - n n )    (15)
E = -∇V    (16)
E(r, t) = U(t) / (r.ln(R out /R in ))    (17)
Where V is the voltage potential, n i and n n are ion and neutral densities, U is the differential applied potential, r is the distance parameter from internal electrode, R out is the external radius and R in is the internal radius. Both radicals O and OH are involved in oxidation processes of NO to NO 2 . These radicals are mainly produced by corona discharges in high electric field regions inducing energetic electrons according to the Maxwellian distribution of energy. But the oxidation itself takes place in low field regions during relaxing times of voltage applications or downstream of interelectrode gaps. In addition, some oxidations can appear upstream of interelectrode gaps resulting from discharge shockwaves.
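As a quick numerical illustration of equations (13) to (17), the short Python sketch below evaluates the Peek ignition field and the coaxial field profile. The numerical inputs (inner radius 0.35 mm, outer radius 15 mm, U = 16 kV, standard air) are example values only, taken from the simulation section further below; this is a sketch, not the authors' simulation code.

```python
import numpy as np

def peek_field(r_cm, p_cmHg=76.0, T_K=293.0):
    """Corona ignition field E_c in kV/cm (Peek formula, eqs. 13-14)."""
    delta = 3.92 * p_cmHg / T_K                      # relative air density (eq. 14)
    return 31.0 * delta * (1.0 + 0.308 / np.sqrt(delta * r_cm))

def coaxial_field(r_m, U_V, R_in_m, R_out_m):
    """Electric field E(r) in V/m for a coaxial geometry (eq. 17)."""
    return U_V / (r_m * np.log(R_out_m / R_in_m))

# Example geometry and voltage (assumed): R_in = 0.35 mm, R_out = 15 mm, U = 16 kV
R_in, R_out, U = 0.35e-3, 15e-3, 16e3
E_c = peek_field(R_in * 100.0) * 1e5                 # kV/cm converted to V/m
r = np.linspace(R_in, R_out, 200)
E = coaxial_field(r, U, R_in, R_out)
print(f"Peek threshold  : {E_c/1e6:.1f} MV/m")
print(f"E at inner wire : {E[0]/1e6:.1f} MV/m (corona ignition: {E[0] > E_c})")
```

With these example values the threshold comes out close to the 8.2 kV/mm quoted later, and the field at the inner electrode exceeds it, which is the condition for corona ignition.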
CORONA DISCHARGE SIMULATIONS
Discharge on electrode geometries, magnitude and frequency of the voltage, electrical space charges, materials and gas parameters.
Geometry
In order to initiate the most efficient corona discharges for NTP generation, the geometry is highly divergent.
According to equation 16 it allows high electric field while voltage magnitudes remain below 25kV. The figure 3 shows a simulation of the internal radius influence on the electric field distribution for U=16kV and external radius of 15mm. Corona discharge ignition is calculated from equation 13. The curves show that a small radius enables to get a maximum value for E substantially higher than the corona ignition, but decreasing when increasing the gap from inner electrode. For a 15mm external radius, the internal radius is ranged between 0.2 and 4.2mm. While inner radius was set as variable on Figure 3, voltage U has been set as variable on Figure 3 where E was plotted for R in =0.35mm and R out =15mm. The corona ignition threshold is calculated for these geometric parameters: E corona =8.2kV/mm.
Power supply and arc limitation
The voltage has to be sustained below the disruptive voltage (U d ). However, the arc will not occur if the voltage application time over U d is too narrow. Streamers propagate with a non infinite velocity which means that the critical electric field can be reached by a pulse voltage during few nanoseconds but no arc. Care must be given on pulse repetition frequency, relaxing, rising and falling times. When relaxing time is too short and repetition frequency is too high, space charges can not be completely
Dielectric barrier
Another method to limit arc discharges consists of adding a dielectric barrier on one or both electrodes. Glass, quartz or any translucent materials are convenient for discharge observations as shown on figure 4. DC power supply cannot be used with Dielectric Barrier Discharges (DBD) because charges settle on barrier surfaces and cancel the electric field.
Materials and gas parameters
The NTP for NOx treatment has an oxidative leading function. Materials used for the electrodes have to remain unaffected by oxidation. Tungsten or stainless steel is preferred for building reactors. Exhaust gases can have rather high temperatures according to experimentations, with a maximum temperature of 550K. The insulating materials have to be matched to these temperature and corrosive requirements. The exhaust gas is simulated using an air compressor or generated using a 4strokes/1-cylinder fuel engine. Engine gas flow is ranged between 10l.min -1 and 150l.min -1 .
EXPERIMENTAL SETUP
The investigated system consists of a voltage source, a test reactor and an electrical measurement system. The voltage is measured thanks to a voltage divider, Tektronik 6015A (dividing ratio of 1000), with a bandwidth of 20MHz. The current is measured using 4 devices: 2 inductives, 1 resistive and 1 digital multimeter. The inductive devices are a transducer, Stangenes 0.5-0.1W, transforming current into voltage (ratio of 0.1V/A) with a bandwidth of 20MHz on one hand, a Rogowski coil associated to an integer (ratio of 0.1V/A) with a bandwidth range from 100kHz to 1GHz on the other hand. The resistive sensor is configured to obtain accurate measurement of the current on a 50Ω impedance. DMM device (Digital MultiMeter) is a NI PXI-4071 measuring current with ranges from 1pA up to 3A. This is mainly used to obtain the DC component and thus complete information on complex currents generated by discharges. In order to limit the current, a protective resistance is added to the circuit as shown on figure 5. Two types of voltage sources are used: (1) DC high voltage 60kV max; and
(2) AC high voltage 25kV (rms).
CORONA THRESHOLD MEASUREMENT
The measurements of high voltage currents are very sensitive. There are actual risks of arc discharges and thus the destruction of measurement devices. To avoid these shows almost a sin wave of current in phase with the voltage. Note that during experimentations we used a high speed CCD camera to observe discharges and eventual singularities (hot spot) that promote current peaks and arcs. The corona ignition threshold observation is in agreement with the previous calculation with an acceptable error of 5%. Luminance generated by the corona discharges appears for U=10.5kV (see figure 1). This would mean that singularities were avoided.
PROSPECTS AND CONCLUSIONS
The selective catalytic reduction (SCR) leads to NOx reduction. The products used for assisting the reactions are based on urea or hydrocarbons. The requirements to assist the reduction are the temperature and an oxidation stage converting a part of NO to NO 2 . The fastest reactions with NH 3 can take place for NO and NO 2 on stoichiometric ratios of 1:1. NH 3 is obtained by the urea hydrolysis. But, below 200 o C and during transient phases, the oxidative level doesn't work efficiently. The purpose is to create positive conditions for NOx reduction by SCR through the association with non-thermal plasmas whatever engine phases. This work contributes to discharge analysis for NTP generation. Corona discharges are well fitted for NTP reactors, but not for any voltage or geometry. In mastering electrical engineering, the combination of non-thermal plasmas and SCR has many additional advantages and promises to be a valuable economic way to treat pollutants in near future. Our researches lead to plasma technology experimentations with real exhaust gases using a thermal engine mounted on a test bench since we assumed that transient phases are not well transcripted by simulated gases.
Figure 1: I-U characteristic for a DC discharge
Figure 2: E vs. inner radius for U=16kV and corona ignition value
Figure 3: (caption not recovered)
Figure 4: Corona Discharges with Glass (DBD)
Figure 5: Equivalent Circuit of the Experimental Setup
Figure 6: Current and Voltage Characteristics of AC Corona Discharges for a Wire/Cylinder Geometry, No Dielectric Barrier. Umax ranges: (a) 9.5kV, (b) 11kV, (c) 14kV, (d) 16.5kV |
01736984 | en | [
"info.info-mo"
] | 2024/03/04 16:41:20 | 2017 | https://hal.science/hal-01736984/file/ESAFORM_2017_paper.pdf | N Bur
email: [email protected]
P Joyot
email: [email protected]
On the Use of PGD for Optimal Control Applied to Automated Fibre Placement
Automated Fibre Placement (AFP) is an incipient manufacturing process for composite structures. Despite its conceptual simplicity it involves many complexities related to the necessity of melting the thermoplastic at the interface tape-substrate, ensuring the consolidation that needs the diffusion of molecules and control the residual stresses installation responsible of the residual deformations of the formed parts.
The optimisation of the process and the determination of the process window cannot be achieved in a traditional way since it requires a plethora of trials/errors or numerical simulations, because there are many parameters involved in the characterisation of the material and the process.
Using reduced order modelling such as the so called Proper Generalised Decomposition method, allows the construction of multi-parametric solution taking into account many parameters. This leads to virtual charts that can be explored on-line in real time in order to perform process optimisation or on-line simulation-based control. Thus, for a given set of parameters, determining the power leading to an optimal temperature becomes easy.
However, instead of controlling the power knowing the temperature field by particularizing an abacus, we propose here an approach based on optimal control: we solve by PGD a dual problem from heat equation and optimality criteria. To circumvent numerical issue due to ill-conditioned system, we propose an algorithm based on Uzawa's method. That way, we are able to solve the dual problem, setting the desired state as an extra-coordinate in the PGD framework. In a single computation, we get both the temperature field and the required heat flux to reach a parametric optimal temperature on a given zone.
INTRODUCTION
Automated Fibre Placement (AFP) is one the main technologies employed today to manufacture advanced composite laminates from unidirectional prepregs. This technique consists in laying and welding tapes of prepregs, building a laminate having the geometry and the desired mechanical characteristics.
This process has been widely studied [1,2,3,4] with numerical models becoming more accurate, robust and complex. New techniques of computation [5,6], based on variables' separation, have enabled models' enrichment by adding parameters as extra-coordinates [7]. Thus, the Proper Generalised Decomposition (PGD) method leads to multi-parametric virtual charts providing a whole set of solutions for each combination of the considered parameters [8,9,10,11,12].
Then, the computational vademecum can be exploited on-line for process control or process optimisation purposes. Indeed, within the AFP we want to efficiently control the heating power: tapes have to be heated enough to ensure the melting of the matrix coating the fibres and the cohesion with the previously laid tapes, while not exceeding a threshold from which material burns.
Therefore, we took advantage of the PGD to build off-line virtual charts in order to determine the best power associated to a draping velocity profile [13].
However, in those simulations, solutions were computed from equations provided by underlying physics of the studied phenomena, the optimisation being carried out as post-process.
We propose here to compute directly the solution of an optimisation problem in order to get the separated representations of both the field and the control to obtain it. That is to say the optimisation is made directly off-line, reducing the cost of the post-process and improving the real-time control of the AFP.
Within the next section, we present the equations governing the phenomenon under consideration. In order to have a reference solution the system is solved by standard finite element method (FEM). Thereafter, section b) focuses on the writing and solving of the optimal system. We improve the obtained results by applying Uzawa's method in section b). Lastly section b) addresses few conclusions and perspectives.
PROCESS MODELLING
AFP process can be modelled with an heat equation associated with the next boundary conditions. Heat source applies on boundary Γ P (see Figure 1); considering a wide enough domain Ω, we set a Dirichlet's condition taking into account a fixed temperature on Γ R ; then, for the sake of simplicity, an homogeneous Neumann's condition is applied on others boundaries.
FIGURE 1. Domain of study Ω, with boundaries Γ B , Γ L , Γ T , Γ R and the heated boundary Γ P where the flux Φ is applied.
This leads to the following advection-diffusion equation 1
-div (K∇u) + ρC p V·∇u = 0   in Ω;
u = 0   on Γ R ;
-K ∂ n u = 0   on Γ B ∪ Γ L ∪ Γ T ;
-K ∂ n u = -Φ   on Γ P .    (1)
with
K = ( k   0 ; 0   k ⊥ ) and V = (v, 0) T .
This system can obviously be solved in a classical way with standard FEM. However, since we want a reference solution to latter compare with results from other methods, we have to implement comparable systems. Consequently, in order to compare solution in separated representation, we take advantage of the tensor product method introduced by R. E. Lynche, J. R. Rice and D. H. Thomas [14] [15]. The writing of discrete form of Equation ( 1) separating the two space coordinates and using tensor forms of operators allows the handling of shape functions in the separated representation, the system remaining solved with a global, non separated FEM.
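To make the tensor-product idea concrete, the minimal Python sketch below assembles a 2-D diffusion operator from 1-D matrices with Kronecker products. The 1-D matrices, sizes and conductivities are illustrative assumptions, not the operators actually used in the paper, and a real PGD solver would keep the factors separated instead of forming the full Kronecker product.

```python
import numpy as np

def stiffness_1d(n, h):
    """1-D second-difference matrix (Dirichlet ends), standing in for a 1-D FEM stiffness matrix."""
    return (np.diag(2.0 * np.ones(n))
            - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2

nx, nz, hx, hz = 20, 15, 0.01, 0.01
Kx, Kz = stiffness_1d(nx, hx), stiffness_1d(nz, hz)
Mx, Mz = np.eye(nx) * hx, np.eye(nz) * hz        # lumped 1-D mass matrices (illustrative)

# Separated form of the anisotropic diffusion operator: k_par * Kx x Mz + k_perp * Mx x Kz
k_par, k_perp = 1.0, 0.1
A = k_par * np.kron(Kx, Mz) + k_perp * np.kron(Mx, Kz)
print(A.shape)   # (nx*nz, nx*nz); the PGD works with the 1-D factors and never builds A explicitly
```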
We can thus obtain the temperature field from the input Φ modelling the laser heat flux. For some couples power/speed we compute the corresponding reference solutions.
From these results we extract the temperature profiles on boundary Γ P where the source is applied. Indeed, we want to control the temperature on this same boundary during the process, to reach an optimum, providing the ideal heat flux. We compute also P S ray , the power seen by Γ P , as the integrand of the flux on this boundary multiplied by the width of the shone surface S ray .
We gather in Table 1 some key values for these four reference solutions, for the purpose of latter comparison. Due to velocity values, we increase the number of nodes to contain the Péclet number, avoiding the requirement of a stabilisation technique
TABLE 1 (residue). Reference solutions: v = 0.001 m·s⁻¹ (Pw = 600 W), v = 0.01 m·s⁻¹ (Pw = 1897 W), v = 0.1 m·s⁻¹ (Pw = 6000 W), v = 1 m·s⁻¹ (Pw = …).
SETTING UP THE OPTIMAL SYSTEM
As announced in the Introduction, The PGD method provides multi-parametric virtual charts that can be used to control the AFP process [13].
Another way to go on is to use the optimal control theory. Within the AFP process, a heat flux provided by a laser melt the thermoplastic. The difficulty consists in determining the best power of the laser to reach an optimal temperature to melt the thermoplastic enough but not too much.
Thus we consider the following cost-function to be minimised, since the flux is applied only on Γ P , part of the boundary
J(u, Φ) = (1/2) ∫_{Γ P} (u - u d )² + α Φ² dγ,    (2)
subject to the state advection-diffusion equation ( 1), with α cost parameter of the command. That way Φ is used as control and we want to reach u d on boundary Γ P . The domain of study is depicted on Figure 1.
The corresponding Lagrangian writes
L(u, Φ, p) = J(u, Φ) + ∫_Ω p ( -div (K∇u) + ρC p V·∇u ) dx.    (3)
To find a stationary point of the Lagrangian we set
∇_u L(u, Φ, p) = 0;   ∇_Φ L(u, Φ, p) = 0;   ∇_p L(u, Φ, p) = 0.    (4)
Expanding these equations leads to the non-linear optimality system (see [START_REF] Lions | Optimal Control of Systems Governed by Partial Differential Equations[END_REF]), whose weak form writes, with test functions u* and p*,
∫_Ω K∇u·∇u* + ∫_Ω ρC p (V·∇u) u* - ∫_{Γ P} (1/α) p u* dγ = 0   ∀u*;
∫_Ω K∇p·∇p* + ∫_Ω ρC p (V·∇p*) p + ∫_{Γ P} u p* dγ = ∫_{Γ P} u d p* dγ   ∀p*.    (5)
In Equation ( 5), fields u and p are coupled. The computation of such a problem can be achieved using mixed formulation, within a standard FEM as well as in the PGD framework as described thereafter.
The discrete form of the state variable u (x, z) and the adjoint parameter p (x, z) are expressed in tensor product form
U = Σ_{i=1}^{∞} U_x^i ⊗ U_z^i   and   P = Σ_{i=1}^{∞} P_x^i ⊗ P_z^i.    (6)
The vector Ψ which brings together nodal values of u and p takes the form
Ψ = [U; P] = Σ_{i=1}^{∞} [U_x^i ⊗ U_z^i ; P_x^i ⊗ P_z^i].    (7)
The discretised weak form of Equation ( 5) is written
Ψ*ᵀ A Ψ = Ψ*ᵀ B, where A = Σ_{j=1}^{8} A_x^j ⊗ A_z^j and B = B_x ⊗ B_z will be expressed in a tensor form, and with the test function defined by
Ψ* = [U*_x ⊗ U_z + U_x ⊗ U*_z ; P*_x ⊗ P_z + P_x ⊗ P*_z].    (8)
This discretised problem can then be solved with the PGD method, for different values of the velocity, the desired state u d coming from the corresponding reference solution.
Since the goal of optimal control is to minimise the distance between the unknown u and the desired state u d on the boundary Γ P , we retrieve the wanted temperature profile on this boundary. However, to reach it, the flux, computed by PGD, to be applied take substantially away compared to the FEM input, as speed increases (see Figure 2).
(Figure 2 legend: curves for v = 0.001, 0.01, 0.1 and 1 m·s⁻¹.)
UN-MIXING THE OPTIMALITY SYSTEM
Applying the PGD framework on the optimality system (5) can produce, in our case, weird results. Instead of searching a workaround within numerical stabilisation techniques, we propose to solve this system without mixed elements. For this purpose, we use Uzawa technique [17] [18].
Given an initial guess p (0) for p, Uzawa's method consists in our case of the following coupled iteration:
A_u u^{(k+1)} = A_pu p^{(k)},        p^{(k+1)} = p^{(k)} + ω ( A_pu u^{(k+1)} - u d + A_p p^{(k)} )    (9)
where ω > 0 is a relaxation parameter. Tensors are defined by 2 summarises the previous simulations, collecting some key-values. Thus, solving the optimality system with the PGD algorithm, but without mixed elements using Uzawa method leads to consistent results, since both temperature and heat flux remain similar to those expected (using standard FEM).
A_u = k K_x^u ⊗ M_z^u + M_x^u ⊗ k ⊥ K_z^u + ρC p v H_x^u ⊗ M_z^u,    A_pu = -(1/α) M_x^pu ⊗ M_z^pu,
A_p = k K_x^p ⊗ M_z^p + M_x^p ⊗ k ⊥ K_z^p - ρC p v H_x^p ⊗ M_z^p,    A_up = M_x^up ⊗ M_z^up.
FIGURE 2. Flux on Γ P for different speeds.
This coupled iteration is computed within a fixed-point loop, each field u and p being solved by PGD. As previously, Figure 3 shows the computed flux to reach the desired temperature u d on the boundary Γ P.
FIGURE 3. Flux on Γ P with Uzawa's method.
Table 2 summarises the previous simulations, collecting some key-values. Thus, solving the optimality system with the PGD algorithm, but without mixed elements, using Uzawa's method leads to consistent results, since both temperature and heat flux remain similar to those expected (using standard FEM).
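A minimal Python sketch of the coupled iteration (9) is given below, with small dense matrices standing in for the assembled operators and an assumed stopping test. It only illustrates the fixed-point structure of the Uzawa step; in the paper each field is obtained by a PGD solve, not by a dense linear solve, and the toy data below are not taken from the paper. In this square toy example the distinction between A_pu and A_up in the update is immaterial.

```python
import numpy as np

def uzawa(A_u, A_pu, A_up, A_p, u_d, omega=0.5, tol=1e-10, max_iter=500):
    """Coupled Uzawa-type iteration of eq. (9): solve for u, then relax the adjoint p."""
    u = np.zeros(A_u.shape[0])
    p = np.zeros(A_p.shape[0])
    for _ in range(max_iter):
        u = np.linalg.solve(A_u, A_pu @ p)                # A_u u^{k+1} = A_pu p^k
        p_new = p + omega * (A_up @ u - u_d + A_p @ p)    # relaxation step on the adjoint
        if np.linalg.norm(p_new - p) < tol * (1.0 + np.linalg.norm(p)):
            p = p_new
            break
        p = p_new
    return u, p

# Tiny illustrative system (assumed data, chosen so that the iteration contracts)
rng = np.random.default_rng(0)
n = 8
A_u = np.eye(n) + 0.05 * rng.standard_normal((n, n))
A_p = -np.eye(n)
A_pu = 0.2 * np.eye(n)
A_up = np.eye(n)
u_d = rng.standard_normal(n)
u, p = uzawa(A_u, A_pu, A_up, A_p, u_d)
```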
TABLE 1. Key values for reference solutions.
(Figure 1 sketch labels: x, z, l 1 , l 2 , l 3 , h, Γ R .)
CONCLUSION
In order to control the AFP process, we proposed a method based on the PGD technique allowing the inclusion of parameters as extra-coordinates. Due to numerical instabilities, we transformed a mixed formulation to a coupled problem, improving significantly the results.
Next step consists in considering the desired state u d as a new coordinate. Thus we will be able to build directly the process control as a multi-parametric virtual chart. |
04102154 | en | [
"phys.meca.acou"
] | 2024/03/04 16:41:20 | 2023 | https://hal.science/hal-04102154/file/inversion.pdf | S Maugeais
email: [email protected]
J Gilbert
Brass player's mask parameters obtained by inverse method
An optimization method is proposed to find mask parameters of a brass player coming from a one degree of freedom lip model, with only constant mouth pressure and periodic mouthpiece pressure as input data, and a cost function relying on the waveform and the frequency of the signal. It delivers a set of parameters called C -admissible, which is a subset of all mask parameters that allow the inverse problem to be well defined up to an acceptable precision. Values for the mask parameters are found that give a good aproximation of real signals, with an error on the playing frequency of less than 5 cents for some notes. The evolution of the mask parameters is assessed during recordings with real musicians playing bend notes and their effects on the playing frequency are compared to the theoretical change on a model.
Introduction
Models of brass instruments have been used for a long time to understand the physics (cf. [START_REF] Campbell | The Science of Brass Instruments[END_REF]) and synthesize their sound (cf. [START_REF] Harrison-Harsley | Physical Modelling of Brass Instruments using Finite-Difference Time-Domain Methods[END_REF]). However, one remaining difficulty is the calibration of these models as many of the constants appearing in the mathematical equations are difficult to measure, in particular when they involve human body parts such as the lips of a brass player. A good calibration of these parameters is useful not only for sound synthesis, where a realistic sound is the main goal, but also for theoretical purposes, as the equations governing models of brass instruments are nonlinear, and are therefore very sensitive to changes in the constants of the model.
In the literature, many models exist for the lips, with different degrees of complexity, either as a one degree of freedom oscillator (which is the most common), or a two degrees of freedom oscillator taking into account different polarities (cf. [START_REF] Boutin | Trombone lip mechanics with inertive and compliant loads ("lipping up and down")[END_REF]), and models trying to come closer to the geometry of the opening section of vibrating lips (cf [START_REF] Campbell | The Science of Brass Instruments[END_REF], section 5.1.2). However, the more complex the model, the more constants there are that need to be calibrated, and the higher the uncertainty. The question of calibration of the parameters is not restricted to lip models of brass players, and a related problem is that of the embouchure of reed instruments which may seem easier as measures can be undertaken directly on the reed. The study of the reed parameters is the source of a large literature (see for example [START_REF] Avanzini | Modelling the Mechanical Response of the Reed-Mouthpiece-Lip System of a Clarinet. Part I. A One-Dimensional Distributed Model[END_REF][START_REF] Van Walstijn | Modelling the Mechanical Response of the Reed-Mouthpiece-Lip System of a Clarinet. Part II: A Lumped Model Approximation[END_REF][START_REF] Muñoz Arancón | Estimation of saxophone reed parameters during playing[END_REF][START_REF] Chatziioannou | Estimation of Clarinet Reed Parameters by Inverse Modelling[END_REF][START_REF] Helie | Inversion of a physical model of a trumpet[END_REF][START_REF] Chatziioannou | Investigating Clarinet Articulation Using a Physical Model and an Artificial Blowing Machine[END_REF][START_REF] Smyth | Toward an estimation of the clarinet reed pulse from instrument performance[END_REF]) which can be used as a source for methods dedicated to the lips.
Concerning brass instruments, the body of literature is more reduced, although many articles deserve to be cited and will serve as reference for the present work (see [START_REF] Velut | How Well Can Linear Stability Analysis Predict the Behaviour of an Outward-Striking Valve Brass Instrument Model?[END_REF] for a table summarizing known values). Most notably, [START_REF] Elliott | Regeneration in brass wind instruments[END_REF] who was one of the first to give a complete set of parameters, [START_REF] Brigitte D'andréa | Asymptotic State Observers for a Simplified Brass Instrument Model[END_REF] who built an asymptotic state observer, [START_REF] Vergez | The BRASS Project, from Physical Models to Virtual Musical Instruments: Playability Issues[END_REF] who used simulated annealing with a cost function depending on playing frequency only, [START_REF] Boutin | Trombone lip mechanics with inertive and compliant loads ("lipping up and down")[END_REF] with data coming from high speed camera and [START_REF] Fréour | Parameter of a physical model of brass instruments by constrained continuation[END_REF] using bifurcation diagrams. The present article aims to provide a new method to identify embouchure parameters that can be used with very little apparatus on actual musicians, and that can follow their evolution while playing. It introduces a new cost function which is a combination of [START_REF] Vergez | The BRASS Project, from Physical Models to Virtual Musical Instruments: Playability Issues[END_REF] (for the frequency part) and [START_REF] Chatziioannou | Investigating Clarinet Articulation Using a Physical Model and an Artificial Blowing Machine[END_REF] (without the displacement), together with some penalization (see section 2.4). An optimization algorithm is used to minimize this cost function on recordings with actual musicians and the results are discussed (cf. section 3).
Method
Model
To reduce the amount of parameters that must be calibrated, the model chosen in the present article is the simplest one described by (2.1), with only three equations (cf. [START_REF] Mattéoli | Minimal blowing pressure allowing periodic oscillations in a model of bass brass instruments[END_REF]), that proved to replicate many of the properties of brass instruments (see for example [START_REF] Mattéoli | Diversity of ghost notes in tubas, euphoniums and saxhorns[END_REF]). It relates the mouth pressure p m to the mouthpiece pressure p and the opening h through a spring-mass-dashpot equation describing the lips, a valve effect computing the flow u through the lip from the difference of pressure, and the expression of the input impedance:
ḧ + (ω ℓ /Q ℓ ) ḣ + ω ℓ ² (h - H) = (p m - p)/µ,
ṗ n = Z c C n u + s n p n ,
p = 2ℜ( Σ_{n=1}^{N} p n ),
u = w h⁺ sgn(p m - p) √(2|p m - p|/ρ).    (2.1)
where h + = max(h, 0), w is the width of the lips, and Z c the characteristic impedance.
Here, the impedance is decomposed using modal analysis as a sum of simple fractions
Z(ω) = Z c Σ_n [ C n /(jω - s n ) + C n */(jω - s n *) ]    (2.2)
and p n being complex valued. The variables p n form a decomposition of the mouthpiece pressure using the special form of the impedance. They are not really modal coordinates, as they are not naturally orthogonal for some scalar product, but give a convenient way to solve the problem. The input impedance is measured through an impedance bridge and is therefore known. It is decomposed into formula (2.2) using the Rational Fraction Polynomial method (see [START_REF] Ewins | Modal Testing: Theory, Practice and Application[END_REF], section 4.4.3), giving the values for s n and C n which are therefore fixed for a given instrument. The value of C n and s n computed for the Bb trombone in first position and used in the measures (basse trombone Courtois and mouthpiece Holton) are given in table 5. The mask parameters are the remaining constants appearing in equations (2.1), and are the lip angular resonance frequency ω ℓ = 2πF ℓ , the quality factor Q ℓ , its surface density µ and its opening at rest H. The set of equations 2.1 can be rewritten as a real valued ordinary differential equation (cf. [START_REF] Mattéoli | Minimal blowing pressure allowing periodic oscillations in a model of bass brass instruments[END_REF]) ẏ = f (y) of order 1 in dimension 2N + 2 with a state variable
y = [h, ḣ, ℜ(p 1 ), ℑ(p 1 ), …, ℜ(p N ), ℑ(p N )].    (2.3)
Such a formulation is very convenient both for time numerical simulation, and for the study of bifurcation diagrams, as computed by the software auto-07p [START_REF] Doedel | Auto-07p, continuation and bifurcation software for ordinary differential equations[END_REF] (see section 2.9). The time simulations performed in this project are based on the Runge-Kutta 4 algorithm and are coded in C language as a large number of them are performed. The sampling rate is set to 44100Hz as it gives a sufficient precision for the results (comparable to measured data), ensures the stability and convergence of the numerical scheme, and is sufficiently fast (0.18s for a one second signal). The time simulations have to be performed on a time range long enough so that stationary regime is attained. In practice, this means signals of up to 4 seconds have to be simulated as transient regime can be quite long for some mask parameters.
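As an illustration of the time-domain step described above, here is a minimal Python RK4 integrator for a first-order system ẏ = f(y) at 44.1 kHz. It is only a sketch: the right-hand side below is a placeholder damped oscillator, not the full lip/impedance model of (2.1), and the constants are example values.

```python
import numpy as np

FS = 44100.0          # sampling rate (Hz), as in the text
DT = 1.0 / FS

def rk4_step(f, y, dt):
    """One classical Runge-Kutta 4 step for y' = f(y)."""
    k1 = f(y)
    k2 = f(y + 0.5 * dt * k1)
    k3 = f(y + 0.5 * dt * k2)
    k4 = f(y + dt * k3)
    return y + dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0

# Placeholder right-hand side: a damped oscillator standing in for the lip equation
omega_l, Q_l = 2.0 * np.pi * 350.0, 5.0
def f(y):
    h, v = y
    return np.array([v, -(omega_l / Q_l) * v - omega_l**2 * h + 1.0])

y = np.zeros(2)
for _ in range(int(4.0 * FS)):        # up to 4 s of signal, as mentioned above
    y = rk4_step(f, y, DT)
```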
For a set of mask parameters
M = (ω ℓ , Q ℓ , H, µ), (2.4)
we write p M for the mouthpiece pressure obtained by solving the system (2.1) with zero initial value
y(0) = [0, ⋯, 0] (cf. equation 2.3). All the other parameters of the model, including p m , are fixed and constant during one optimization.
Experimental protocol
The goal of the project is to get access to as many mask parameters as possible with as little apparatus as necessary so as not to hinder the musician's playing. We therefore focused on only two piezoresistive pressure sensors (Endevco 8507C-5): one in the mouth of the musician (or artificial mouth), and one in the mouthpiece. Simultaneous recordings give access at each time step to the quantities p m and p. Both signals are sampled at 44100Hz. As only one period of p is needed by the algorithm below, the signal can be broken into pieces to see the evolution of parameters (see section 2.4). Once a period is chosen, p m is averaged on the same time period to have a constant value to feed the numerical simulation.
Three sets of measures were performed with two experienced amateur trombone players, one of them being recorded twice, labeled A1, A2 and B in the rest of this article. For each session, the musician was asked to play 6 notes on a Bb bass trombone in first position (Bb2, F3, Bb3, D4, F4, Bb4), together with a bend on F4 (first down, then up), and a crescendo on F4.
Extraction of a signal's period
For a given sampled signal p, either simulated (as in equation (2.1)) or measured (see section 2.2), the determination of a period is critical for the method and the first step to perform it is the identification of the periodic regime and its frequency. This is done using a python implementation [START_REF] Guyot | Fast python implementation of the yin algorithm[END_REF] of the Yin algorithm (see [START_REF] De | YIN, a fundamental frequency estimator for speech and music[END_REF]). The Yin algorithm produces an estimator called the harmonic rate, which is a real number between 0 and 1, that gives a quantification of how periodic the signal is: A harmonic rate very small (ideally 0) meaning that the signal is close to being periodic. In practice, we consider the signal to be periodic when the harmonic rate is lower than 10 -3 , and extract the part of the periodic regime with the smallest harmonic rate.
Yin also gives an estimation of the instantaneous frequency F at each point. From this, it is already possible to extract a waveform p† of duration exactly one period in the periodic regime. However, as this waveform has to be compared to another one coming from a reference signal, a phase condition has to be fixed. We therefore demand that all the waveforms
• begin by crossing 0,
• in an increasing way.    (2.5)
This is always possible in practice as p has a mean value equal to 0. The normalization is achieved by considering the waveform p† and shifting it to the left until the first point satisfies the phase condition, giving rise to a new waveform p̂ which is used as a reference.
It should be noted that on a general signal, the phase condition may not be sufficient to uniquely determine p. However, for the signals obtained either numerically or experimentally, this condition proved to be sufficient.
For a set of mask parameters M, we also write F M and p M for the frequency and normalized waveform of the signal p M obtained in section 2.1.
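The phase condition (2.5) amounts to rolling the one-period waveform so that it starts at an upward zero crossing. A possible implementation is sketched below; the function name is ours, and the one-period waveform is assumed to come from a pitch estimator such as Yin, as described above.

```python
import numpy as np

def normalize_phase(waveform):
    """Shift a one-period waveform so that it starts by crossing 0 in an increasing way (eq. 2.5)."""
    w = np.asarray(waveform, dtype=float)
    w = w - w.mean()                                   # the waveform is assumed zero-mean
    up = np.where((w[:-1] <= 0.0) & (w[1:] > 0.0))[0]  # indices of upward zero crossings
    if len(up) == 0:
        raise ValueError("no upward zero crossing found")
    return np.roll(w, -up[0])

# Example: one period of a synthetic waveform with an arbitrary initial phase
t = np.linspace(0.0, 1.0, 256, endpoint=False)
p_hat = normalize_phase(np.sin(2 * np.pi * t + 1.3) + 0.3 * np.sin(4 * np.pi * t + 0.7))
assert p_hat[0] <= 0.0 < p_hat[1]
```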
Definition of the cost function
The goal of the cost function is to try to compare two periodic signals p ref and p, from which frequencies F ref , F and waveforms p ref , p are extracted. The signal p ref can be either a recorded signal, or a simulated signal obtained from known mask parameters (for test purposes), and is the reference against which the model outputs are compared. Although in theory it should be enough to compare p ref and p, it puts too much emphasis on the waveform itself, and too little on the frequencies. As both timbre and intonation are important for the applications, it is necessary to add an extra weight to the difference in frequencies. The preliminary cost function C • is therefore
C•(p_ref, p) = (1/‖p̂_ref‖²₂) ∫₀^{min(1/F_ref, 1/F)} ( p̂_ref(t) - p̂(t) )² dt + α_F (1200 log₂(F_ref/F))²    (2.6)
with ‖p̂_ref‖²₂ = ∫₀^{1/F_ref} p̂_ref(t)² dt.
The first term of the sum is the square of the relative RMS difference, and the second one the square of the relative frequency difference.
The choice of the constant α F guides the optimization procedure either toward a better approximation of the waveform (small α F ) or toward a better approximation of the frequency (big α F ).
In our case, the choice of α_F = 0.02, obtained by trial and error, leads to good results during optimization for trombone sounds, in that intonation (errors around 10 cents) and waveforms (errors around 30%) are respected. In particular it means that the difference in cents between two signals is at most √(C•(p_ref, p)/α_F).
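For illustration, a discrete version of the cost (2.6), together with the penalized cost (2.7) introduced in the next subsection, could be written as follows. The handling of the two period lengths by truncating both waveforms to the shorter period is our own simplification, and the default penalization weights are the values quoted later in the text; this is a sketch, not the authors' code.

```python
import numpy as np

def cost_C0(p_ref, F_ref, p_sim, F_sim, fs=44100.0, alpha_F=0.02):
    """Preliminary cost C* of eq. (2.6): relative RMS term plus weighted frequency term (in cents)."""
    n = min(len(p_ref), len(p_sim), int(fs / max(F_ref, F_sim)))
    diff = p_ref[:n] - p_sim[:n]
    rms_term = np.sum(diff**2) / np.sum(np.asarray(p_ref, dtype=float)**2)
    cents = 1200.0 * np.log2(F_ref / F_sim)
    return rms_term + alpha_F * cents**2

def cost_C(p_ref, F_ref, p_M, F_M, Q_l, H, beta_Q=5e-3, beta_H=3e7):
    """Complete cost of eq. (2.7) with the Tikhonov-like penalization on Q_l and H."""
    return cost_C0(p_ref, F_ref, p_M, F_M) + beta_Q * Q_l**2 + beta_H * H**2
```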
Penalization
As is already well known (cf. [START_REF] Helie | Inversion of a physical model of a trumpet[END_REF]) the inversion problem is not well defined and it is actually easy to find multiple sets of mask parameters which give signals with very similar waveforms (see table [START_REF] Adachi | Time-domain simulation of sound production in the brass instrument[END_REF]) This means in particular that the cost function lacks convexity. One typical solution to remedy this problem is to convexify the cost function using Tikhonov regularization, which amounts to adding quadratic terms with respect to some mask parameters. More precisely, we define the complete cost function C
C(p_ref, M) = C•(p_ref, p_M) + β_Q Q_ℓ² + β_H H²    (2.7)
The choice of the specific penalization has been made on Q ℓ and H because it proved to be
• sufficient to have a well-defined solution up to a sufficiently good precision (cf. table 2),
• necessary to remove very different solutions (cf. section 2.8).
It should be noted that this particular choice of penalization, instead of a more general form like (H -H 0 ) 2 for a reference H 0 which should be fixed for the whole optimization, implies that the optimization procedure will favor solutions with the smallest quality factor and lip opening. This was chosen for lack of a good candidate for H 0 .
This particular choice of penalization proved to give results close to those in the literature, except for H (cf. section 3). The method for fixing the values of β Q and β H is done so that a typical value of the penalization should be of the same magnitude as C • (p ref , p). As we expect C • (p ref , p) to be about 0.3 (cf. section 3), that Q ∼ = 7 and H ∼ = 10 -4 m (chosen among the known values in [START_REF] Velut | How Well Can Linear Stability Analysis Predict the Behaviour of an Outward-Striking Valve Brass Instrument Model?[END_REF]), we took
β Q = 5 × 10 -3 and β H = 3 × 10 7 m -2 .
Continuity, optimization algorithm
The algorithm chosen to find the minimum of the cost function is the dual annealing optimization (cf. [START_REF] Xiang | Generalized simulated annealing algorithm and its application to the Thomson model[END_REF]) which is a stochastic algorithm that requires neither the cost function to be regular, nor the minimum to be unique.
Indeed, the cost function in this article is not continuous: a small variation of the mask parameters can lead to completely dissimilar solutions. For example the trombone player can obtain the different notes by only varying its lip resonance frequency: the variation of playing frequency with lip resonance frequency clearly has jumps (cf. [START_REF] Mattéoli | Minimal blowing pressure allowing periodic oscillations in a model of bass brass instruments[END_REF] figure 4).
Another problem of the cost function is that it has many local minima. Although we do not have a mathematical or musical reason for this, it is clearly seen during the optimization process using dual annealing as it performs local searches before jumping to other locations (see table 3 where each line represents a local minimum).
Limitations of dual annealing
The dual annealing algorithm is known to give a solution for some very slow (logarithmically) decreasing temperatures (cf. [START_REF] Xiang | Generalized simulated annealing algorithm and its application to the Thomson model[END_REF]), but that kind of evolution of the temperature implies a very slow convergence. In practice, a faster decreasing temperature is used, but the convergence is not assured.
Moreover, as this algorithm is of stochastic nature, there is no simple criterion to stop it, and an arbitrary condition has to be chosen: the choice was made to bound by 1000 the number of calls to the cost function. With this choice, a typical run lasts about 1 hour on a desktop computer.
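In practice, a minimisation of this kind can be run with SciPy's dual annealing, bounding the number of cost evaluations as described above. In the sketch below the search bounds and the cost are placeholders (the true cost would run the time simulation for M, extract F_M and the waveform, and evaluate C), so it only shows the calling pattern, not the authors' setup.

```python
import numpy as np
from scipy.optimize import dual_annealing

# Mask parameters M = (F_l, Q_l, H, mu); example bounds, to be replaced by the real search space
bounds = [(50.0, 600.0),    # F_l (Hz)
          (0.5, 15.0),      # Q_l
          (1e-6, 2e-3),     # H (m)
          (0.1, 10.0)]      # mu (kg/m^2)

def total_cost(x):
    F_l, Q_l, H, mu = x
    # Placeholder cost: a real implementation would simulate (2.1) for M and return C(p_ref, M)
    return (F_l - 350.0)**2 / 1e5 + 5e-3 * Q_l**2 + 3e7 * H**2 + (mu - 2.0)**2

result = dual_annealing(total_cost, bounds, maxfun=1000, seed=0)  # ~1000 cost calls, as in the text
print(result.x, result.fun)
```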
The probabilistic nature of this algorithm also means that two runs of the same algorithm, with the same initialization data (except for the seed of the random generator) give different solutions that can be quite different, as the global minimum might not have been reached in one run. It is therefore often necessary to launch the algorithm iteratively until a new run does not produce a solution with lower cost function. We say that a set of mask parameters satisfying this hypothesis is C -admissible. A random set of mask parameters is not C -admissible in general . Indeed, two sets M 1 and M 2 can give very similar waveforms (cf. table 1), so that C
•(p_ref, p_{M_1}) = C•(p_ref, p_{M_2}), but they have different quality factors or lip openings, implying for example C(p_ref, M_1) < C(p_ref, M_2). In that case, M_2 cannot be C-admissible.
Precision of the algorithm
Moreover, this definition highly depends on the choices made for the definition of C , be it α F or the choice of penalizations. A set of mask parameters may be C -minimal for one choice, but no longer for another one!
The assumption in this article is that for every "realistic" signal (i.e. coming from the recording of a trombone), there is only one mask parameter that is C -admissible. It is not at all clear that this is true, and this is even known to be false if the penalizations are not added (see section 2.5). Taking this assumption for granted, a set of mask parameters obtained by minimization of the cost function is automatically C -admissible.
To find a suitable C-admissible set of mask parameters while staying close to an actual trombone signal, so that the robustness of the optimization procedure can be tested, we applied the optimization procedure to a reference signal, a recorded D4 (cf. section 2.2), cf. figure 1. The obtained values, denoted by M_ref, are given in the second column of table 2. The set of mask parameters used for the initialization of the dual annealing algorithm is given in the third column, and the result of the optimization algorithm is in the last one. The search space is given in its caption.
The resulting waveforms for the two sets of mask parameters are indistinguishable, with a relative RMS error of only 3%, and an error on the frequencies of about 6 cents. The results of optimization are acceptable (much lower than the dispersion of the values found in the literature) and representative of the errors we found on other simulations (cf. section 3), except for the opening at rest H, but its value is so small that it is hard to give a physical interpretation (see section 3.2). The difference between M_ref and M_optim may be explained by the fact that we had to stop the algorithm at one point (cf. section 2.7) or because C is insufficiently convexified.
Continuation
During the analysis of the bend (cf. section 3.4), the continuation software auto-07p ( [START_REF] Doedel | Auto-07p, continuation and bifurcation software for ordinary differential equations[END_REF]) is used to follow the evolution of the playing frequency with different parameters. The continuation is first initialized for a reference set of mask parameters M 1 which is chosen for each musician to be the point which minimizes the cost function C • among all the optimized values of mask parameters, so as to be as close as possible to the actual recording.
The bifurcation diagram is built using the dependency on p m up to the recorded value of the reference signal, so that auto-07p is now precisely set to the signal p M1 .
Then a continuation curve along one of the mask parameters (either Q ℓ , F ℓ , H or µ) is computed, all other physical variables being fixed, and the playing frequency is drawn.
Results
The results obtained for the different musicians are presented in the following subsections, but first it is interesting to look at the mouth pressure as a function of the playing frequency, cf figure 2. Indeed, although all these data are directly recorded, and not optimized, we can clearly see differences between the two recording sessions of musician A, where the second session has a larger mouth pressure, which could translate into perceptible differences within the optimized data of a single musician.
Errors on sustained notes
Both RMS and frequency errors obtained at the end of optimization are presented in figures 3 and 5. The RMS error can be quite large for some notes (up to 40%), which is not surprising as the model is one of the simplest and many physical details are neglected. As the optimization looks for a best fit among all mask parameters, this means the model should be made more complex, to take into account more of the physics of the instrument, if precision on timbre and playing frequency is to be maintained, provided the dual annealing algorithm gives results close enough to the global minimum. For reference, a typical waveform is shown in figure 4, where the reference signal is given in green, the reconstructed signal is in orange and the difference between them is in dotted red. The relative RMS error for this particular signal is 0.28. Although many properties are well approximated, the higher harmonics of the signal are clearly not in agreement with the experimental signal. This gives a typical value that can be expected for the RMS error. Concerning repeatability, estimations of the mask parameters of musician A are coherent and give almost the same results for both the RMS error and the frequency error (except for the playing frequency of the note Bb4). However, the errors differ largely between both players, player B generally getting the lowest error. This may suggest that the two musicians use different techniques, and that player B is closer to the simple model (2.1).
Note in particular the difference between errors for the note F3, where musician A has the largest RMS error of all (cf. figure 4), and musician B has one of the lowest.
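The two error measures used throughout this section can be computed as follows (a sketch, assuming p_ref and p_opt are the reference and reconstructed waveforms sampled on the same time grid, and f_ref, f_opt the corresponding playing frequencies):

```python
import numpy as np

def relative_rms_error(p_ref, p_opt):
    # relative RMS error between reconstructed and measured waveforms
    return np.linalg.norm(p_opt - p_ref) / np.linalg.norm(p_ref)

def frequency_error_cents(f_ref, f_opt):
    # frequency error expressed in cents (1200 cents per octave)
    return 1200.0 * np.log2(f_opt / f_ref)
```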
Discussion on sustained notes
Lip resonance frequency
The lip resonance frequency as a function of the playing frequency is shown in figure 6 for all three musicians. As for any outward model, the frequency of the lips is lower than the playing frequency (cf. [START_REF] Campbell | Brass Instruments As We Know Them Today[END_REF]), which is clearly seen in this figure as the circles are below the line F play = F ℓ . Note that for a given frequency, there is little dispersion from player A (A1 or A2) to player B. Moreover the regression line
F ℓ = 0.9366F play -31.57 (3.1)
gives a good fit with R 2 = 0.996 and could be used as a first estimation of the playing frequency using only the lip frequency.
Quality factor
The estimated quality factors for all three players are displayed in figure 7. As in the case of lip frequency (cf. 3.2) results are very close for all three players, and also for all notes, being between 2 and 5. Compared to the literature, they are however smaller than the measured values of [START_REF] Cullen | Brass Instruments: Linear Stability Analysis and Experiments with an Artificial Mouth[END_REF] (between 9 and 10.5) but comparable to the estimation of [START_REF] Lopez | Physical Modeling of Buzzing Artificial Lips: The Effect of Acoustical Feedback[END_REF] (around 5), [START_REF] James | Experimental Mechanical and Fluid Mechanical Investigations of the Brass Instrument Lip-reed and the Human Vocal Folds[END_REF] (between 1.2 and 1.8), [START_REF] Richards | Investigation of the lip reed using computational modelling and experimental studies with an artificial mouth[END_REF] (around 3.7), [START_REF] Rodet | Physical models of trumpetlike instruments. detailed behavior and model improvements[END_REF] (around 2.88) and [START_REF] Adachi | Time-domain simulation of sound production in the brass instrument[END_REF] (between 0.5 and 3). Except for the first reference, this justifies the penalization on Q ℓ , which tends to favor the smallest possible values.
The values obtained for the two recordings of player A are always very close, which may mean that the quality factor depends very little on the loudness.
Surface density
The optimized values of µ -1 are given in figure 8. Except for the 4 highest values, they are comparable to [START_REF] Rodet | Physical models of trumpetlike instruments. detailed behavior and model improvements[END_REF]. However, they are overestimated compared to other values found in the literature ( [START_REF] Elliott | Regeneration in brass wind instruments[END_REF][START_REF] Cullen | Brass Instruments: Linear Stability Analysis and Experiments with an Artificial Mouth[END_REF][START_REF] Lopez | Physical Modeling of Buzzing Artificial Lips: The Effect of Acoustical Feedback[END_REF][START_REF] Richards | Investigation of the lip reed using computational modelling and experimental studies with an artificial mouth[END_REF]) where they are between 0.03 and 0.2m 2 .kg -1 .
We can see that the data for A1 is systematically higher than that of A2, indicating a possible dependency on the mouth pressure and the loudness.
Opening at rest
The values of opening at rest obtained by optimization are given in figure 9. They seem very small compared to what was obtained by other authors, up to a factor 10: a typical value obtained by optimization is around 2 × 10⁻⁵ m (see figure 9), whereas [START_REF] Cullen | Brass Instruments: Linear Stability Analysis and Experiments with an Artificial Mouth[END_REF] and [START_REF] Richards | Investigation of the lip reed using computational modelling and experimental studies with an artificial mouth[END_REF] report typical values of the order of 10⁻⁴ m. However, when shifting from opening at rest to mean opening (cf. figure 10) using formula (A.3), which also takes into account the mouth pressure, the lip frequency and the lip surface density, the results are comparable to those of [START_REF] Bromage | Open Areas of Vibrating Lips in Trombone Playing[END_REF], which are between 0.6 mm and 2 mm. It should be noted that in this case, the value of the opening at rest is negligible in formula (A.3) in appendix A.
Just as for µ -1 there appears to be a correlation between loudness and mean opening. It is not only expected, but actually obvious from the formula (A.3) where the mouth pressure appears.
Optimization for bent notes
The musicians were instructed to perform pitch bends on F4: without moving the slide, the player used embouchure adjustments to vary the pitch, first below its normal value, then above, then below, and then back to F4. For each recording of approximately 10 seconds, the signals are cut into chunks of 0.2 seconds, and the optimization procedure is applied independently on each chunk. The note F4 was chosen because it is one of the most comfortable for the musician to bend. The RMS and frequency errors can be found in figures 11 and 12. The RMS error is around 0.25, except for a group of notes played by musician B at low frequencies, which may be related to a particular technique used by this musician. The frequency error is quite low except for the lowest notes, and is in agreement with the very low frequency error found for F4 in figure 5. This suggests that the model is not able to predict precisely what the musician is doing for the lowest frequencies of the bend. Indeed, there seem to be two regions for the errors, with a change at around 352 Hz, as if the optimization process could not find parameters that fit the playing frequency below that value. In practice, musicians often use special techniques to bend to very low notes, such as using the vocal tract. This technique is clearly not taken into account in the model, and it seems the optimization reveals the limits of the model in that region.
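A sketch of the chunked processing described above (optimize_chunk is a placeholder for the dual-annealing procedure of section 2 applied to a single 0.2 s excerpt):

```python
import numpy as np

def optimize_bend(signal, fs, optimize_chunk, chunk_duration=0.2):
    """Cut a recording into consecutive chunks and optimize each one independently."""
    n = int(chunk_duration * fs)                   # samples per 0.2 s chunk
    results = []
    for start in range(0, len(signal) - n + 1, n):
        chunk = signal[start:start + n]
        results.append(optimize_chunk(chunk, fs))  # mask parameters for this chunk
    return results
```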
Discussion on bent notes
During bending, the musician varies many parameters. This makes it quite difficult to see the influence of any of them. In the following diagrams, the evolution of playing frequency is shown with respect to the mask parameters. To put it into perspective, the theoretical evolution with respect to only the considered parameter (the other parameters being kept constant) is also computed using auto-07p (see section 2.9). The mask parameters used to initialize the continuation are those with smallest cost function among all the optimized values for this recording, to ensure that the model is as close to the measures as possible.
Quality factor Q ℓ
The results of the optimization for bent notes are presented in figure 13 for the quality factor. The values obtained are within the same range as in figure 7. One striking feature is the proximity between the measures of musician B and the results of continuation obtained by auto-07p: it seems as if the playing frequency is completely predicted by the evolution of the quality factor. However, the precision of the fit must be put into perspective with the rather large errors in the optimization (see figures 11 and 12). The fit is not so good with musician A, although the results of the continuation go in the right direction.
Figure 13: Variation of Q ℓ as a function of F play . Crosses represent the measures for all three musicians, circles the initialization parameter for continuation (see section 2.9), and lines the continuation obtained with auto-07p. The dashed line represents the average frequency of the actual played F4 with no bend.
Lip resonance frequency F ℓ
The results of optimization for the lip resonance frequency are given in figure 14, and are quite difficult to interpret. Even more than in figure 12, there seem to be two regions, one before 352Hz, and one after.
Above 352Hz, the estimation of lip frequency does not give a clear tendency. Although we could expect the lip frequency to increase with playing frequency, just as in figure 6, this is not what appears in the figure. This suggests that the lip frequency is only a coarse tuner, and the quality factor is actually the fine tuner.
Lip surface density µ
The results of the optimization for bent notes are presented in figure 15 for the lip surface density. The values are compatible with those in figure 8, and the evolution of the playing frequency with respect to µ is compatible with the theoretical one obtained by continuation.
Opening at rest H
The results of the optimization for bent notes are presented in figure 16 for the opening at rest. The values obtained are within the same range as in figure 9.
As explained in section 3.2, the values obtained for H are very small, and therefore not very well defined (see error term in table 2). The mean opening obtained from other optimized values and formula (A.3) is given in figure 17, and a clear tendency can be observed above 350Hz: the mean opening increases with the playing frequency for all musicians.
Below 350 Hz the tendency is not so clear. Moreover, one must be careful with the interpretation as there may be other phenomena involved than those directly modeled (cf. section 3.3).
Optimization for a crescendo
The same procedure as in section 3.3 was used for the recordings of a crescendo on F4 for all three musicians.
The relative RMS error is presented in figure 18, and indicates that the higher the mouth pressure, the higher the RMS error. This shows that the simple model (2.1) is good at reproducing the timbre for low pressure, but not so much for higher pressures. This may be due to the nonlinear propagation along the length of the trombone. The error in frequency is presented in figure 19 and is compatible with that in figure 5. It proves that the model (2.1) is actually quite good at reproducing the frequency, whatever the dynamics of the playing. This indicates that the limits of the model are not so much on the frequency, but more on the timbre. The waveforms for two different dynamics are shown in figure 20, both for the measured signal and for the signal reconstructed from optimized mask parameters. The difference in timbre is clearly seen for the forte recording. The details of the figures obtained through optimization during a crescendo, together with comparisons of measured vs. simulated sounds with these parameters, can be found at http://perso.univ-lemans.fr/~smauge/mask.
Conclusion
In this article, a new method is proposed to estimate the mask parameters of a brass musician within a set of acceptable parameters (so called C -admissible). This approach is used on recordings of actual musicians during playing, and is able to deliver a coherent set of parameters (except maybe for the opening at rest), in that they are not too far from existing results in the literature, and their values evolve in a way that is compatible with theory during the playing. The values obtained prove that a simple model is already capable of reproducing a playing frequency close to that played by an actual musician, with a dynamic and waveform that are similar to the measured ones. This may prove useful for instrument making, although more research should be done to assess the robustness of the method, and investigate the variability of the mask parameters from player to player. Moreover, it can give new leads to a better understanding of intonation, such as the almost linear relation between playing and lip frequencies, or the role of quality factor as a fine tuner. Furthermore, the system seems to be able to detect when a particular technique is used, for example on the lowest part of bent notes where the vocal tract is used by the musician, as the difference between the techniques is clearly seen on the cost function.
Figure 1: Block diagram explaining how to get a C-admissible set of mask parameters from a recorded signal and use it to assess the precision of the algorithm.
Figure 2: Measured mouth pressure P_m averaged over one period as a function of the playing frequency for all three musicians and the six notes.
Figure 3: RMS error of the signals obtained from optimized mask parameters, relative to the measured signal, for 6 different notes and the three musicians.
Figure 4: Typical waveform for the recorded signal (green) and for the signal obtained from optimized mask parameters (orange) as played by musician A2 on a Bb3. The difference between signals is drawn in dashed red. The relative RMS error is 0.28.
Figure 5: Frequency error in cents of the signals obtained from optimized mask parameters, relative to the recorded signal, for 6 different notes and the three musicians.
Figure 6: Variation of F_ℓ as a function of F_play for all three musicians in circles, together with the diagonal F_play = F_ℓ in dashed black, and the regression line in dashed red.
Figure 7: Variation of Q_ℓ as a function of F_play for all three musicians in circles.
Figure 8: Variation of µ⁻¹ as a function of F_play for all three musicians.
Figure 9: Variation of H as a function of F_play for the three musicians.
Figure 10: Mean opening of the lips as a function of F_play for the three musicians.
Figure 11: RMS error for bent note on F4 for all three musicians. The dashed line represents the average frequency of the actual played F4 with no bend.
Figure 12: Frequency error for bent note on F4 for all three musicians. The dashed line represents the average frequency of the actual played F4 with no bend.
Figure 14: Variation of F_ℓ as a function of F_play. Crosses represent the measures for all three musicians, circles the initialization parameter for continuation (see section 2.9), and lines the continuation obtained with auto-07p. The dashed line represents the average frequency of the actual played F4 with no bend.
Figure 15: Variation of µ⁻¹ as a function of F_play. Crosses represent the measures for all three musicians, circles the initialization parameter for continuation (see section 2.9), and lines the continuation obtained with auto-07p. The dashed line represents the average frequency of the actual played F4 with no bend.
Figure 16: Variation of H as a function of F_play. Crosses represent the measures for all three musicians, circles the initialization parameter for continuation (see section 2.9), and lines the continuation obtained with auto-07p. The dashed line represents the average frequency of the actual played F4 with no bend.
Figure 17: Variation of H_mean as a function of F_play. Crosses represent the measures for all three musicians, circles the initialization parameter for continuation (see section 2.9), and lines the continuation obtained with auto-07p. The dashed line represents the average frequency of the actual played F4 with no bend.
Figure 18: Variation of RMS error as a function of P_m for all three players.
Figure 19: Variation of frequency error as a function of P_m for all three players.
Figure 20: Comparison of waveforms for measured signal vs. optimized signal for musician B. Top: piano (relative RMS error 20%, playing frequency error 1.5 cents); bottom: forte (relative RMS error 34%, playing frequency error 0.7 cents).
Table 1: Values of two different sets of mask parameters giving almost identical signals: relative RMS difference is 0.9% and difference in frequencies is 1.41 cents. Mouth pressure is equal for both simulations and fixed at 2500 Pa.

Parameter | Set M_1 | Set M_2 | Relative difference
F_ℓ (Hz) | 177.70 | 184.05 | 60 cents
Q_ℓ | 3.82 | 4.79 | 20%
µ (kg.m⁻²) | 1.28 | 1.42 | 10.9%
H (m) | 1.2 × 10⁻⁵ | 1.9 × 10⁻⁴ | 1483%
Table 2: Columns M_ref, Init., M_optim (error). Data for optimization: mouth pressure p_m = 1656 Pa and width of the lips w = 12 × 10⁻³ m. Search space: F_ℓ ∈ [150, 200], Q_ℓ ∈ [0.1, 6], µ ∈ [0.1, 3], H ∈ [10⁻⁵, 10⁻³]. Error on playing frequency: 6 cents.
Table 3: Optimization steps. Each line represents the result of a gradient descent performed by the dual annealing algorithm.

F_ℓ (Hz) | Q_ℓ | µ (kg.m⁻²) | H (mm) | Cost | Frequency (Hz)
184.38090 | 4.27669 | 1.27669 | 0.09 | 0.11377 | 233.18
184.33001 | 4.54898 | 1.04458 | 0.08 | 0.10951 | 233.18
180.35897 | 4.34470 | 1.14597 | 0.11 | 0.10177 | 233.04
178.16236 | 4.37235 | 1.17461 | 0.15 | 0.09876 | 232.95
176.77990 | 4.35483 | 1.02964 | 0.09 | 0.09696 | 232.98
176.77899 | 4.33623 | 1.03454 | 0.11 | 0.09614 | 233.01
175.21228 | 4.10074 | 1.10552 | 0.11 | 0.09348 | 233.04
175.21124 | 4.10265 | 1.09939 | 0.13 | 0.09318 | 233.08
175.21124 | 4.10265 | 1.09939 | 0.03 | 0.09000 | 232.93
175.21816 | 4.08641 | 1.05548 | 0.01 | 0.08922 | 232.97
174.56073 | 4.13190 | 1.07071 | 0.03 | 0.08912 | 232.91
174.56073 | 4.13190 | 1.07071 | 0.03 | 0.08909 | 232.91
174.56073 | 4.15607 | 1.07071 | 0.03 | 0.08718 | 232.71
174.56169 | 4.15607 | 1.07071 | 0.03 | 0.08613 | 232.47
174.56169 | 4.15610 | 1.07071 | 0.03 | 0.08608 | 232.43
174.56160 | 4.15610 | 1.07071 | 0.03 | 0.08608 | 232.40
174.56160 | 4.15610 | 1.07015 | 0.03 | 0.08600 | 232.56
174.56098 | 4.15610 | 1.07015 | 0.03 | 0.08593 | 232.52
174.55610 | 4.15610 | 1.07015 | 0.03 | 0.08587 | 232.47
174.55610 | 4.15539 | 1.07015 | 0.03 | 0.08548 | 232.49

Table 4: Values obtained by optimization for different notes played by musician B (mezzo forte) for a width w = 0.012 m. Comparison of measured vs. simulated sounds with these parameters can be found at http://perso.univ-lemans.fr/~smauge/mask/#sounds

Note | F_ℓ (Hz) | Q_ℓ | µ (kg.m⁻²) | H (mm) | P_m (Pa) | F_play (Hz)
Bb2 | 78.78 | 3.18 | 1.33 | 0.32 | 896 | 116.12
Bb3 | 167.71 | 3.54 | 0.57 | 0.0114 | 1433 | 234.26
Bb4 | 392.90 | 3.42 | 0.62 | 0.452 | 5134 | 469.69
D4 | 232.11 | 4.56 | 1.32 | 0.38 | 2488 | 295.62
F3 | 123.77 | 2.40 | 1.22 | 0.0139 | 1350 | 175.45
F4 | 277.53 | 3.79 | 1.68 | 0.0132 | 3716 | 351.19
Table 5: Values of the coefficients in the equation for the impedance decomposition for the Bb trombone in first position (cf. section 2.2).

s_n | C_n
-12.24 + 237.80i | 237.83 + 12.23i
-16.01 + 700.45i | 210.28 + 4.81i
-21.08 + 1063.24i | 222.43 + 4.41i
-24.09 + 1438.12i | 210.97 + 3.53i
-26.14 + 1838.74i | 237.99 + 3.38i
-30.00 + 2176.32i | 268.08 + 3.70i
-32.85 + 2532.56i | 159.47 + 2.07i
-35.80 + 2920.16i | 150.19 + 1.84i
-40.08 + 3307.88i | 175.93 + 2.13i
-45.82 + 3707.10i | 99.70 + 1.23i
-276.05 + 4886.60i | 1036.27 + 58.54i
Acknowledgment
The authors would like to thank Christophe Vergez for many discussions and very helpful comments on the manuscript.
In memoriam
This article is dedicated to the memory of Joël Gilbert , who was at the origin of this project and a constant source of ideas.
A Mean opening
Suppose that p and h are of period T, and write ⟨·⟩ for the mean value of any T-periodic function over one period. Applying it to the lip equation of model (2.1), using that h is T-periodic, so that ⟨h′⟩ = ⟨h′′⟩ = 0, and that ⟨p⟩ = 0, gives the mean opening
\[ H_{\mathrm{mean}} = \langle h \rangle = H + \frac{p_m}{\mu\,(2\pi F_\ell)^2}. \tag{A.3} \]
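For instance, with the values of table 1 (p_m = 2500 Pa, µ = 1.28 kg.m⁻², F_ℓ ≈ 178 Hz), formula (A.3) gives
\[ H_{\mathrm{mean}} \approx 10^{-5} + \frac{2500}{1.28\,(2\pi\times 178)^2} \approx 1.6\ \mathrm{mm}, \]
within the 0.6 mm–2 mm range of [START_REF] Bromage | Open Areas of Vibrating Lips in Trombone Playing[END_REF], the contribution of the opening at rest H being indeed negligible.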
B Tables |
04102160 | en | [
"phys.meca.acou"
] | 2024/03/04 16:41:20 | 2023 | https://hal.science/hal-04102160/file/syahi.pdf | Sylvain Maugeais
Non parametric harmonization of a tabla's membrane
Introduction
The tabla, along with other Indian percussion instruments like the pakhawaj and the mridangam, is known amongst membranophones for its harmonic quality (cf. [START_REF] Raman | Indian musical drums[END_REF]) and resonance (cf. [START_REF] Roda | Resounding Objects: Musical Materialities and the Making of Banarasi Tablas[END_REF], chapter 6.3). These specificities are mainly due to the syahi, an extra mass added at the center of the membrane. Although it used to be made of clay that had to be applied each time the instrument was used (cf. [START_REF] Bharata-Muni | The Natyasastra. Bibliotheca Indica[END_REF] chapter 33, verse 25-26), it is now made either of single-use dough (bass side of the pakhawaj or mridangam) or, for tablas, of syahi: "a paste of starch, gum, iron oxide, charcoal, or other materials" (cf. [START_REF] Fletcher | The physics of musical instruments[END_REF] chapter 18.5). The process of applying either the dough (cf. [START_REF] Bharata-Muni | The Natyasastra. Bibliotheca Indica[END_REF]) or the syahi (cf. [START_REF] Roda | Resounding Objects: Musical Materialities and the Making of Banarasi Tablas[END_REF]) is quite complex. The latter requires the application of several layers of syahi paste, drying after each step and rubbing with a stone. These different steps ensure that a reticulum of cracks forms in the syahi (see fig. 1), which reduces the stiffness of the added mass (cf. [START_REF] Roda | Resounding Objects: Musical Materialities and the Making of Banarasi Tablas[END_REF], chapter 5.11). The shape of the syahi has to be very precise to achieve harmonicity. Leaving aside the sustain, various authors have tried to describe its geometry by numerically optimising the inharmonicity: from [START_REF] Ramakrishna | Vibration of indian musical drums regarded as composite membranes[END_REF] (refined by [START_REF] Gaudet | The evolution of harmonic indian musical drums: A mathematical perspective[END_REF]) viewing it as a composite membrane, [START_REF] Sathej | the eigenspectra of indian muscial drums[END_REF] and [START_REF] Samejima | Vibration analysis of a musical drum head under nonuniform density and tension using a spectral method[END_REF] in a similar way but with a smooth approximation of the composite density, to [START_REF] Maugeais | How to apply a plaster a drum make it harmonic[END_REF] and [START_REF] Antunes | Harmonic configurations of nonhomogeneous membranes[END_REF] who used non parametric optimisation. All these articles found inhomogeneous membranes whose first in vacuo eigenfrequencies are almost harmonic, and whose shape is very similar to that of an actual physical tabla. On the other hand, following [START_REF] Christian | Effects of air loading on timpani membrane vibrations[END_REF] who studied air loading on the timpani membrane, [START_REF] Tiwari | Effects of air loading on the acoustics of an indian musical drum[END_REF] applied the same optimisation procedure as [START_REF] Ramakrishna | Vibration of indian musical drums regarded as composite membranes[END_REF] on composite membranes but taking the air loading into account. They too obtained an almost harmonic set of frequencies. As a byproduct, they also obtained decay times.
Unfortunately, these decay times are sometimes unrealistic, mainly because their model does not take into account the viscoelastic losses, which can be incorporated along the lines of [START_REF] Gallardo | Sound model of an orchestral kettledrum considering viscoelastic effects[END_REF] who studied them for the kettledrum. The present article expands these results by applying a non parametric optimisation (like [START_REF] Antunes | Harmonic configurations of nonhomogeneous membranes[END_REF]), taking into account air loading (like [START_REF] Tiwari | Effects of air loading on the acoustics of an indian musical drum[END_REF]) and viscoelastic losses (like [START_REF] Gallardo | Sound model of an orchestral kettledrum considering viscoelastic effects[END_REF]), and imposing conditions on the density ensuring the playability of the tabla (like [START_REF] Maugeais | How to apply a plaster a drum make it harmonic[END_REF]).
Following [START_REF] Henrique | Optimal design and physical modelling of mallet percussion instruments[END_REF] in their work on the optimisation of musical instruments, the finite element method has been chosen to compute the eigenfrequencies for general densities. Together with the boundary element method and optimisation, these techniques form the technical core of the present work. The remainder of the article is organised as follows: in section 2, the model used for the tabla and the air loading is described and the modes in vacuo are computed using the finite element method (FEM). In section 3, the matrices involved in the boundary element method (BEM) are computed, first outside the kettle, and then inside. In section 4, the solutions of the complete system, membrane together with air loading, are computed and compared with results from the literature. Finally, in section 5, the actual optimisation is carried out and discussed. The article concludes with open questions and future directions.
Problem formulation
Throughout the text, the only concern is the right-hand tabla. The left-hand one, although it has also been studied in other works (e.g. [START_REF] Tiwari | Effects of air loading on the acoustics of an indian musical drum[END_REF]), is much more difficult to model as the player's wrist usually rests at different positions and applies different pressures while the fingers strike the membrane (cf. [START_REF] Kippen | The tabla of Lucknow : a cultural analysis of a musical tradition[END_REF]).
Model
Following [START_REF] Christian | Effects of air loading on timpani membrane vibrations[END_REF], the tabla is modeled as a rigid cylinder of radius a and height L, capped at the top by a membrane Σ and at the bottom by a rigid disk. An infinite rigid plane flange enclosing the membrane in the plane of the membrane is assumed (see ibid. concerning the validity of this assumptions). Let us write x = (r, θ, z) for the cylindrical coordinates, the membrane Σ being positioned in the z = 0 plane and the center of the membrane at r = 0.
Following [START_REF] Christian | Effects of air loading on timpani membrane vibrations[END_REF], the tabla is modeled as a rigid cylinder of radius a and height L, capped at the top by a membrane Σ and at the bottom by a rigid disk. An infinite rigid plane flange enclosing the membrane in the plane of the membrane is assumed (see ibid. concerning the validity of these assumptions). Let us write x = (r, θ, z) for the cylindrical coordinates, the membrane Σ being positioned in the z = 0 plane and the center of the membrane at r = 0.
σ ∂ 2 η ∂t 2 = T ∆ η + ν ∂η ∂t + p in z=0 -p out z=0 (1)
where σ(x) is the (non uniform) density of the membrane at the point x, T is the membrane tension per unit length, ν is the viscoelastic damping coefficient, and p_in and p_out are the acoustic pressure fields inside and outside the kettle (i.e. above the plane z = 0). Assuming the membrane is fixed at the rim yields
\[ \eta(x, t) = 0 \quad \text{for all } t \text{ and } x \in \partial\Sigma. \tag{2} \]
As the kettle acts as a baffle, a Neumann condition is used on the fixed borders of the kettle:
\[ \left.\frac{\partial p_{\mathrm{in}}}{\partial \rho}\right|_{\rho=a} = \left.\frac{\partial p_{\mathrm{in}}}{\partial z}\right|_{z=-L} = 0 \quad \text{for all } t. \tag{3} \]
In the same way, the assumption that the membrane is surrounded by an infinite plane flange implies
\[ \left.\frac{\partial p_{\mathrm{out}}}{\partial \rho}\right|_{\rho>a,\,z=0} = 0 \quad \text{for all } t. \tag{4} \]
Denote by \(\hat\eta\) (resp. \(\hat p_{\mathrm{in}}\), \(\hat p_{\mathrm{out}}\)) the temporal Fourier transform of η (resp. p_in, p_out) with variable ω. The global acoustic pressure field \(\hat p\), composed of \(\hat p_{\mathrm{in}}\) and \(\hat p_{\mathrm{out}}\), satisfies the wave equation
\[ \left(\Delta + \frac{\omega^2}{c_a^2}\right)\hat p = 0 \tag{5} \]
where c_a denotes the speed of sound in air.
Denote by \(G^\omega_{\mathrm{in}}\) and \(G^\omega_{\mathrm{out}}\) the Green functions of equation (5) for the inside and outside domains. Then the pressure fields can be computed (cf. [START_REF] Christian | Effects of air loading on timpani membrane vibrations[END_REF], eqs. (21) and (22))
\[ \hat p_{\mathrm{in}}(x) = \int_\Sigma G^\omega_{\mathrm{in}}(x|x')\, \left.\frac{\partial \hat p_{\mathrm{in}}}{\partial z'}\right|_{z'=0} \mathrm{d}x' \tag{6} \]
\[ \hat p_{\mathrm{out}}(x) = \int_\Sigma G^\omega_{\mathrm{out}}(x|x')\, \left.\frac{\partial \hat p_{\mathrm{out}}}{\partial z'}\right|_{z'=0} \mathrm{d}x'. \tag{7} \]
The compatibility of the speeds of air and membrane at z = 0, together with linearized fluid dynamics, imply
\[ \left.\frac{\partial \hat p_{\mathrm{in}}}{\partial z}\right|_{z=0} = \left.\frac{\partial \hat p_{\mathrm{out}}}{\partial z}\right|_{z=0} = \omega^2 \rho_0\, \hat\eta, \tag{8} \]
with ρ 0 denoting the density of air at equilibrium. Finally the equation verified by η is given by
\[ -\omega^2 \sigma\, \hat\eta = T\,\Delta(\hat\eta - j\omega\nu\,\hat\eta) + \omega^2 \rho_0 \left( \int_\Sigma G^\omega_{\mathrm{in}}(x|x')\big|_{z'=0}\, \hat\eta(x')\,\mathrm{d}x' - \int_\Sigma G^\omega_{\mathrm{out}}(x|x')\big|_{z'=0}\, \hat\eta(x')\,\mathrm{d}x' \right). \tag{9} \]
Description of the elements and solutions in vacuo
The membrane is meshed uniformly with triangles and a spectral element method of order n is used. Let S be the set of shape functions defined on the triangulation using Legendre-Gauss-Lobatto points that are not on the boundary ∂Σ. Define a hermitian product on the Sobolev space \(H^1_0(\Sigma)\) by
\[ \langle f, g\rangle = \int_\Sigma f(x)\,g(x)\,\mathrm{d}x. \tag{10} \]
For the density function σ on Σ and tension parameter T , define the matrices
\[ A_\sigma = \big(\langle \sigma\varphi, \psi\rangle\big)_{\varphi,\psi\in S} \tag{11} \]
\[ B = T\,\big(\langle \operatorname{grad}\varphi, \operatorname{grad}\psi\rangle\big)_{\varphi,\psi\in S}. \tag{12} \]
The scalar products needed for the matrices A_σ and B are computed using the collocation method. The lossless in vacuo equation implied by (9) is given by
\[ -\omega^2 A_\sigma \Phi = B\Phi \tag{13} \]
and its solutions ω, Φ are obtained through the computation of eigen-pairs, and are Galerkin weak approximations of in vacuo frequencies and modes of the membrane with density σ.
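As an illustration, once the matrices A_σ and B are assembled as dense symmetric arrays, the eigen-pairs of (13) can be obtained with a generalized symmetric eigensolver (a sketch; the absolute value below simply absorbs the sign convention of (13)):

```python
import numpy as np
from scipy.linalg import eigh

def in_vacuo_modes(A_sigma, B, n_modes=120):
    # generalized symmetric eigenvalue problem B @ Phi = lam * A_sigma @ Phi;
    # scipy.linalg.eigh returns the eigenvalues in increasing order
    lam, Phi = eigh(B, A_sigma)
    lam, Phi = lam[:n_modes], Phi[:, :n_modes]
    omega = np.sqrt(np.abs(lam))   # angular eigenfrequencies of the in vacuo membrane
    return omega, Phi
```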
According to [START_REF] Babuška | Eigenvalue problems[END_REF] (see also [START_REF] Marburg | Six boundary elements per wavelength: is that enough?[END_REF] for a general discussion on the size of the mesh), the error term for ω or Φ can be bounded for small enough eigenvalues by \(h^n \omega^2\), where h denotes the mesh-size, which is the maximum diameter of the triangles of the mesh.
The ω's are counted with multiplicities and sorted by magnitude as a family \((\omega_i)_{i\in I}\), where I denotes the set of eigen-pairs of (13), and the corresponding eigenvector is denoted by Φ_i.
The error on the frequencies in cents for different meshes and different orders is given on figure 3 for the first 120 modes (an extra logarithmic scale on the cents is used to allow the comparison between high and low orders). This number of modes is necessary to ensure a good precision during the next step of the computation (cf. section 4.1). As expected, the error increases with frequency (cf. [START_REF] Babuška | Eigenvalue problems[END_REF]) and should be minimal for the first ones. The poor precision on the first eigenvalues is explained by the approximation of the geometry: a smaller number of points in the mesh implies a worse approximation of the circle.
As the number of modes is important for the precision required in section 4.1, and the speed of computation is not relevant, order 3 and 1096 points are used in the rest of the article.
The numerical method was implemented in Python from scratch on a desktop computer without using any finite element package.
Boundary element method
Contrary to [START_REF] Christian | Effects of air loading on timpani membrane vibrations[END_REF], [START_REF] Tiwari | Effects of air loading on the acoustics of an indian musical drum[END_REF] and [START_REF] Gallardo | Sound model of an orchestral kettledrum considering viscoelastic effects[END_REF], the integrals in (9) are not computed "explicitly" because the density of the membrane is not constant, so the projection onto the eigenfunctions of the radial component of the membrane equation cannot be readily applied. Instead, the Boundary Element Method (BEM) is used (cf. for example [START_REF] Hall | The Boundary Element Method[END_REF] or [START_REF] Kirkup | The boundary element method in acoustics: A survey[END_REF]), together with a spectral element method (SEM) of order 3. All the integrals involving only regular functions are computed using collocation methods.
Computation of the outside Green function
According to [START_REF] Christian | Effects of air loading on timpani membrane vibrations[END_REF], equation (20), \(G^\omega_{\mathrm{out}}\) can be expressed as
\[ G^\omega_{\mathrm{out}}(x|x') = -\frac{1}{4\pi}\left(\frac{e^{j\frac{\omega}{c_a}\|x-x'\|}}{\|x-x'\|} + \frac{e^{j\frac{\omega}{c_a}\|x^*-x'\|}}{\|x^*-x'\|}\right) \tag{14} \]
with x * = (r, θ, -z) if x = (r, θ, z). In particular, when r, r ′ ∈ Σ we get
\[ G^\omega_{\mathrm{out}}(r|r') = -\frac{2}{4\pi}\,\frac{e^{j\frac{\omega}{c_a}\|r-r'\|}}{\|r-r'\|}. \tag{15} \]
Define the matrix G out (ω) whose entries are
\[ (G^\omega_{\mathrm{out}})_{\varphi,\psi} = -\frac{1}{2\pi} \int_\Sigma \varphi(r) \int_\Sigma \psi(r')\, \frac{e^{j\frac{\omega}{c_a}\|r-r'\|}}{\|r-r'\|}\, \mathrm{d}r'\, \mathrm{d}r \tag{16} \]
for any two φ, ψ ∈ S shape functions.
Denote by
\[ \kappa_\psi(r) = \int_\Sigma \psi(r')\, \frac{e^{j\frac{\omega}{c_a}\|r-r'\|}}{\|r-r'\|}\, \mathrm{d}r' \quad\text{so that}\quad (G^\omega_{\mathrm{out}})_{\varphi,\psi} = -\frac{1}{2\pi} \int_\Sigma \varphi(r)\, \kappa_\psi(r)\, \mathrm{d}r. \tag{17} \]
For each s ∈ S, denote by r_s its node point and w_s its weight in the Lobatto quadrature. As κ_ψ is regular, \((G^\omega_{\mathrm{out}})_{\varphi,\psi}\) is approximated by
\[ (G^\omega_{\mathrm{out}})_{\varphi,\psi} \cong -\frac{1}{2\pi} \sum_{s\in S} w_s\, \varphi(r_s)\, \kappa_\psi(r_s) = -\frac{1}{2\pi}\, w_\varphi\, \kappa_\psi(r_\varphi) \tag{18} \]
To compute κ ψ , the usual technique is used (cf. [START_REF] Costabel | Principles of boundary element methods[END_REF], 6.1): separate the singular part and compute it analytically, the rest of the integral is computed numerically. For this
write
\[ \frac{e^{j\frac{\omega}{c_a}\|r_\varphi-r'\|}}{\|r_\varphi-r'\|} = \frac{1}{\|r_\varphi-r'\|} + E(\|r_\varphi-r'\|, \omega), \]
with E a regular function. Then
\[ \kappa_\psi(r_\varphi) = \underbrace{\int_\Sigma \psi(r')\, \frac{1}{\|r_\varphi-r'\|}\, \mathrm{d}r'}_{\kappa^*_{\varphi,\psi}} + \int_\Sigma \psi(r')\, E(\|r_\varphi-r'\|, \omega)\, \mathrm{d}r'.\]
The first integral κ * φ,ψ can be computed explicitly using a polar parametrization of triangles, together with antiderivatives of trigonometric polynomials. As for the second, E being regular, it can be approximated using the collocation method
\[ \int_\Sigma \psi(r')\, E(\|r_\varphi - r'\|, \omega)\, \mathrm{d}r' \cong w_\psi\, E(\|r_\varphi - r_\psi\|, \omega). \]
Finally
\[ (G^\omega_{\mathrm{out}})_{\varphi,\psi} \cong -\frac{1}{2\pi}\, w_\varphi \left( \kappa^*_{\varphi,\psi} + w_\psi\, E(\|r_\varphi - r_\psi\|, \omega) \right) \]
which is an approximation of order n thanks to Lobatto's quadrature.
Finally, define the matrices H and E(ω) by
\[ H_{\varphi,\psi} = w_\varphi\, \kappa^*_{\varphi,\psi} \tag{19} \]
\[ E_{\varphi,\psi}(\omega) = w_\varphi\, w_\psi\, E(\|r_\varphi - r_\psi\|, \omega) \tag{20} \]
so that
\[ G^\omega_{\mathrm{out}} = -\frac{1}{2\pi}\,\big(H + E(\omega)\big). \tag{21} \]
Computation of the Green function inside the shell
Let (Ψ_n)_{n∈N} be a complete system of eigenvectors of the Laplace operator on Σ with Neumann boundary conditions, and denote by ϖ_n the corresponding eigenvalue. These data can be computed using the finite element method as above by changing the boundary condition, the weak approximation now taking place in the Sobolev space \(H^1(\Sigma)\) instead of \(H^1_0(\Sigma)\). Figure 4 gives the error in the computations of the ϖ_n for different meshes and different orders. As for the Dirichlet condition, order 3 with 1096 points gives very good results. A computation following [START_REF] Christian | Effects of air loading on timpani membrane vibrations[END_REF], equation (19), gives an explicit form for the Green function inside the shell:
\[ G^\omega_{\mathrm{in}}(r|r') = -\sum_n \Psi_n(r)\,\Psi_n(r')\, \frac{\operatorname{cotan}(\mu_n L)}{\mu_n} \tag{22} \]
with
\[ \mu_n = \sqrt{\varpi_n^2 - \frac{\omega^2}{c_a^2}}. \]
To compute the weak approximation matrix of \(G^\omega_{\mathrm{in}}\) at entries φ, ψ ∈ S, remark that
\[ (G^\omega_{\mathrm{in}})_{\varphi,\psi} = \int_\Sigma \int_\Sigma G^\omega_{\mathrm{in}}(r|r')\, \varphi(r)\, \psi(r')\, \mathrm{d}r\, \mathrm{d}r', \tag{23} \]
so that
\[ (G^\omega_{\mathrm{in}})_{\varphi,\psi} = -\sum_n \frac{\operatorname{cotan}(\mu_n L)}{\mu_n}\, \langle \Psi_n, \varphi\rangle\, \langle \Psi_n, \psi\rangle. \tag{24} \]
Projection onto the space of in vacuo modes
To keep track of the right symmetries when looking for eigenmodes, as well as reducing the dimension of the search space, it is convenient to use a projection of the equations onto the space of eigenmodes in vacuo
\[ H_{\mathrm{vac}} = \operatorname{vect}\big((\Phi_i)_{0\le i\le \ell}\big). \]
It is a good candidate for the approximation of the exact solution u thanks to Sturm-Liouville's theory. Restricting to this space is the equivalent of the truncation of the infinite series done in [START_REF] Christian | Effects of air loading on timpani membrane vibrations[END_REF] for equations ( 38) and (46). For any matrix N among A, B, G ω in , G ω out , define its restriction to the space H vac by
\[ N_{i,j} = {}^t\Phi_i\, N\, \Phi_j. \tag{25} \]
for all 0 ≤ i, j ≤ ℓ.
The projection of the fundamental equation ( 9) can then be written as
\[ -\omega^2 A_\sigma M = (1 - j\omega\nu)\, B M + \omega^2 \rho_0 \left( G^\omega_{\mathrm{in}} - G^\omega_{\mathrm{out}} \right) M \tag{26} \]
In general, the results depend on the value of ℓ, a bigger value of ℓ giving more precise results. We found that a typical value of ℓ = 80 is a good choice for the computations to ensure a good precision for the first 15 modes (cf. section 5). This high value of ℓ is the main reason why order 3 methods were chosen (see figures 3 and4).
Solutions of the complete system
The last step to find the eigenmodes of the loaded non uniform membrane is to solve equation (26), which is non polynomial in ω because of the presence of this variable in the matrices \(G^\omega_{\mathrm{in}}\) and \(G^\omega_{\mathrm{out}}\). This is achieved following the same "fixed point" strategy as in [START_REF] Christian | Effects of air loading on timpani membrane vibrations[END_REF]. It should be noted however that this method does not always give a solution (see ibid. section IV) and is also confronted with the well known problem of spurious eigenvalues and nonuniqueness of solutions (see [START_REF] Chen | Recent development of dual bem in acoustic problems[END_REF]). Although this is a real problem for bigger drums, we found that all the solutions given by this method for the tabla are of the same order as those of a physical tabla, owing to the relatively small size of the membrane and the reduced air loading compared to that of the timpani.
Fixed point algorithm
The method chosen here follows only the "simple case" of [START_REF] Christian | Effects of air loading on timpani membrane vibrations[END_REF] section IV: start with an eigensolution in vacuo (ω_0, M_0), where M_0 denotes the coordinates of the solution in the basis of the Φ_i's. In particular, M_0 is part of the canonical basis:
\[ M_0 = {}^t(0, \cdots, 0, 1, 0, \cdots, 0). \]
Denote by m the index of the non-zero coordinate. Then solve recursively the eigenvalue problem
\[ -\zeta^2 A_\sigma \Lambda = \Big[\underbrace{(1 - j\omega_n\nu)\, B + 2\omega_n^2 \rho_0 \left( G^{\omega_n}_{\mathrm{in}} - G^{\omega_n}_{\mathrm{out}} \right)}_{\text{depends only on } \omega_n}\Big]\, \Lambda \tag{27} \]
with indeterminates ζ and Λ, where the previous guess ω n has been replaced in the Green functions and the viscoelastic losses.
In general, this system has ℓ + 1 eigensolutions. To ensure that the chosen solution has the same kind of symmetries as the initial guess M_0 in terms of nodal circles, the one that has the largest relative m-th coordinate (hence closest to M_0) is picked. This is the crucial step and requires that the perturbation is not too strong ([START_REF] Christian | Effects of air loading on timpani membrane vibrations[END_REF] section IV). This hypothesis not only allows the fixed point method to converge, but also gives realistic results for the tabla. The fixed point method is described precisely below (cf. algorithm 1).
Algorithm 1: Fixed point algorithm
Data: m < ℓ
Result: solution of (26)
ϖ_0 ← ω_m
do
    S ← {(ζ_s, M_s)} = set of normalized eigen-solutions of (27)
    i ← argmax{ m-th coordinate of |M_s|, (ζ_s, M_s) ∈ S }
    ϖ_{n+1} ← ζ_i
    n ← n + 1
while |ϖ_{n+1} − ϖ_n| > ε
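A compact rendition of algorithm 1 (a sketch: A is the projected A_σ matrix, and rhs(omega) is assumed to return the right-hand-side matrix of (27) evaluated at a fixed ω):

```python
import numpy as np
from scipy.linalg import eig

def fixed_point_mode(A, rhs, omega_seed, m, tol=1e-6, max_iter=100):
    omega = omega_seed                       # in vacuo eigenfrequency used as seed
    for _ in range(max_iter):
        lam, V = eig(rhs(omega), A)          # rhs(omega) @ V = lam * A @ V, i.e. lam = -zeta^2
        zeta = np.sqrt(-lam.astype(complex)) # principal branch
        V = V / np.linalg.norm(V, axis=0)    # normalized eigen-solutions
        i = int(np.argmax(np.abs(V[m, :])))  # largest relative m-th coordinate
        if abs(zeta[i] - omega) < tol:
            return zeta[i], V[:, i]
        omega = zeta[i]
    return omega, V[:, i]
```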
The output of this algorithm is compared on figure 5, for the uniform membrane of a timpani, to the results of [START_REF] Christian | Effects of air loading on timpani membrane vibrations[END_REF] table I, for different meshes and different orders. The modes (0, 1) and (0, 2) were removed because the algorithm does not converge in that case (or sometimes converges to a spurious mode), which is in agreement with the discussion of [START_REF] Christian | Effects of air loading on timpani membrane vibrations[END_REF], section IV. Indeed, these two modes are the most perturbed ones, and the real solutions are far from the modes in vacuo. However, for the other modes, the results are within the margin of error as the results in op. cit. are given with a precision of only 10 cents. The results obtained in this figure confirm that a high order scheme is necessary, together with a fine mesh.
Figure 5: Relative error in cents for the timpani compared to [START_REF] Christian | Effects of air loading on timpani membrane vibrations[END_REF], table I, for different uniform meshes and different orders. Modes (0, 1) and (0, 2) were removed because of convergence problems.
Comparison with known results
Case of tabla
The results obtained by the fixed point algorithm for different uniform meshes and different orders are compared to those of [START_REF] Tiwari | Effects of air loading on the acoustics of an indian musical drum[END_REF], table II for the in vacuo modes (see figure 6) and for modes loaded by air (see figure 7). As for the uniform membrane detailed on figure 5, the results are within the margin of error, as the results in [START_REF] Tiwari | Effects of air loading on the acoustics of an indian musical drum[END_REF] Table II are given only with a precision of 10 cents. However, the effect of high order is less clear on these figures than for figure 5. This is due to the discontinuity of the composite membranes which induces less regularity on the FEM problem and a slower convergence. The modes in vacuo and loaded are well approximated at order 2 and 3 for a uniform mesh of 1096 points.
Regularisation of the composite membrane
To get a clearer result, the method is tested on a regularisation of the composite density and compared to the computation with the same algorithm but with degree 3 and finer mesh (1096 points). The results can be seen on figure 8 for different mesh sizes and are compatible with convergence of the method.
On the necessity of the viscoelastic losses
Because the only dissipative mechanism used in the models of [START_REF] Christian | Effects of air loading on timpani membrane vibrations[END_REF] and [START_REF] Tiwari | Effects of air loading on the acoustics of an indian musical drum[END_REF] is the sound radiation, both articles give unrealistic values for τ 60 's of some modes (cf. [START_REF] Christian | Effects of air loading on timpani membrane vibrations[END_REF], section IV and [START_REF] Tiwari | Effects of air loading on the acoustics of an indian musical drum[END_REF], section III.C). To get more realistic decay times, a straightforward improvement is to add viscoelastic losses, as in [START_REF] Gallardo | Sound model of an orchestral kettledrum considering viscoelastic effects[END_REF] who studied the timpani model of [START_REF] Christian | Effects of air loading on timpani membrane vibrations[END_REF] together with such an added term. From here onwards, we consider the same value as [START_REF] Gallardo | Sound model of an orchestral kettledrum considering viscoelastic effects[END_REF] for the viscoelastic coefficient: ν = 0.6 × 10 -6 s.
Figure 6: Relative error in cents for inharmonicities of the modes in vacuo relative to mode (0, 2), compared to [START_REF] Tiwari | Effects of air loading on the acoustics of an indian musical drum[END_REF], table II, for different uniform meshes and different orders. Mode (0, 2) is removed as it is taken as reference.
Figure 7: Relative error in cents for inharmonicities of the modes in vacuo relative to mode (0, 2) compared to [START_REF] Tiwari | Effects of air loading on the acoustics of an indian musical drum[END_REF], table II, for four different uniform meshes and different orders. Mode (0, 2) is removed as it is taken as reference.
Figure 8: Relative error in cents for the eigenfrequencies of the loaded modes for a regularisation of the composite density of [START_REF] Tiwari | Effects of air loading on the acoustics of an indian musical drum[END_REF], table II, for four different uniform meshes. The comparison is made with order 3 and 1096 points. The top right corner shows the regularisation used.
Figure 9: Comparison between the τ_60 obtained by [START_REF] Christian | Effects of air loading on timpani membrane vibrations[END_REF], table III (black curve for the computation, blue curve for the experiment) and the FEM/BEM method (green curve).
The comparison of the τ_60's obtained from the FEM/BEM method exposed in section 3, taking the viscoelastic losses into account, is presented on figure 9. As pointed out in [START_REF] Gallardo | Sound model of an orchestral kettledrum considering viscoelastic effects[END_REF], the addition of the viscoelastic losses gives more realistic values for all the modes, and the remaining differences could come from the model of the room acoustics (cf. [START_REF] Tiwari | Effects of air loading on the acoustics of an indian musical drum[END_REF]).
Optimisation
The goal of this section is to give the optimisation tools necessary to find a solution to the tuning process. The method chosen here relies on the gradient algorithm as a "natural" way to move toward a minimum. This method is suitable not only because of the high dimension of the space in which the optimisation takes place (the dimension is given by the number of shape functions in the FEM method, which is here about 10 000) but also because it gives a natural, local direction in which to search for the minimum.
Computation of gradients
The goal of this section is to study the dependence of solutions of the system
\[ -\sigma\omega^2\hat\eta = (1 - j\nu\omega)\, T\,\Delta\hat\eta + \rho_0\omega^2\, G^\omega * \hat\eta \tag{28} \]
under small deformations of the functional parameter σ. For this, consider the perturbed system
\[ -(\sigma + \varepsilon\tilde\sigma)(\omega + \varepsilon\tilde\omega)^2 (\hat\eta + \varepsilon\tilde\eta) = \big(1 - j\nu(\omega + \varepsilon\tilde\omega)\big)\, T\,\Delta(\hat\eta + \varepsilon\tilde\eta) + \rho_0 (\omega + \varepsilon\tilde\omega)^2\, G^{\omega+\varepsilon\tilde\omega} * (\hat\eta + \varepsilon\tilde\eta) \tag{29} \]
where ε denotes a "small" parameter.
Taking the Taylor expansion in ε and keeping only the first-order part in ε gives
\[ -\tilde\sigma\omega^2\hat\eta + 2\sigma\omega\tilde\omega\,\hat\eta - j\nu\tilde\omega\, T\,\Delta\hat\eta + \rho_0\tilde\omega \left( 2\omega\, G^\omega + \omega^2\, \frac{\partial G^\omega}{\partial\omega} \right) * \hat\eta = \sigma\omega^2\tilde\eta + T(1 - j\nu\omega)\,\Delta\tilde\eta + \rho_0\omega^2\, G^\omega * \tilde\eta. \tag{30} \]
To simplify this equation, take the scalar product \(\langle \cdot, \overline{\hat\eta}\rangle\) of the whole equation with the complex conjugate \(\overline{\hat\eta}\) of \(\hat\eta\), and use the hermitian symmetries of the operators Δ, \(G^\omega *\) and \(\frac{\partial G^\omega}{\partial\omega} *\). The fact that (ω, \(\hat\eta\)) is a solution of (9) yields
\[ \left\langle \tilde\sigma\omega^2\hat\eta + 2\sigma\omega\tilde\omega\,\hat\eta - j\nu\tilde\omega\, T\,\Delta\hat\eta + \rho_0\tilde\omega \left( 2\omega\, G^\omega + \omega^2\, \frac{\partial G^\omega}{\partial\omega} \right) * \hat\eta,\ \overline{\hat\eta} \right\rangle = 0. \tag{31} \]
Thereby giving
\[ \operatorname{grad}_\sigma \omega = \frac{-\omega^2\, |\hat\eta|^2}{\,2\omega\,\langle\sigma\hat\eta, \overline{\hat\eta}\rangle - j\nu T\,\langle\Delta\hat\eta, \overline{\hat\eta}\rangle + \rho_0\omega \left\langle \left( 2 G^\omega + \omega\, \frac{\partial G^\omega}{\partial\omega} \right) * \hat\eta,\ \overline{\hat\eta} \right\rangle\,} \tag{32} \]
Any perturbation of σ along a direction \(\varepsilon\tilde\sigma\) can then be computed as
\[ \omega(\sigma + \varepsilon\tilde\sigma) = \omega(\sigma) + \varepsilon\,\langle \tilde\sigma, \operatorname{grad}_\sigma \omega\rangle. \tag{33} \]
Note that this is true for both the real and imaginary parts, so that it is possible to see the tendency of both frequency and decay time in a particular direction. This is what is done on figures 10 (for the frequency) and 11 (for the decay time), taking for σ the uniform density σ_U = 0.245, for \(\tilde\sigma\) a regular function that vanishes on the rim,
\[ \tilde\sigma(\rho, \theta) = \sigma_U\, (1 - \rho/a)^2, \]
and a not so small parameter δ ∈ [-0.3, 0.3], so that the evolution along \(\delta\tilde\sigma\) can be compared with the derivative computed using the gradient. Note that δ is also chosen so that the density remains positive. For both figures, the colored curves represent the frequency or decay time of the different modes computed with the FEM/BEM method (using a 1096-point mesh and order 3, together with ℓ = 80) and the dashed lines represent the derivatives computed with the gradient formulas (32) and (33).
The evolution of both frequencies and decay times is very well predicted by the gradient alone.
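The comparison carried out in figures 10 and 11 amounts to the following check (a sketch: solve_mode(σ) is assumed to return the complex eigenfrequency of one mode for the density σ, and inner(f, g) the discrete product ⟨f, g⟩ on the mesh):

```python
def gradient_check(sigma, sigma_tilde, grad_sigma_omega, solve_mode, inner, delta=0.1):
    omega0 = solve_mode(sigma)
    predicted = omega0 + delta * inner(sigma_tilde, grad_sigma_omega)  # first order, formula (33)
    recomputed = solve_mode(sigma + delta * sigma_tilde)               # full FEM/BEM recomputation
    return predicted, recomputed
```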
Definition of inharmonicity
As was already noticed by Raman in [START_REF] Raman | Indian musical drums[END_REF], the first nine modes of a tabla are in an almost harmonic progression. The different modes, defined by their nodes, are described on figure 12 together with the approximate ratio to the mode (1, 1), which serves as a reference. It should be noted that this description of the modes hides the fact that most of them are actually degenerate: for symmetry reasons, each mode (α, β) is actually doubled if α > 0. However, among the twin modes, only the mode shapes differ (by a rotation) and not the frequency. The inharmonicity function is therefore defined as
\[ I(\sigma) = \sum_{m\in\mathrm{modes}} \gamma_m \left( \frac{2\,\Re(\omega_m(\sigma))}{p_m\, \Re(\omega_{(1,1)}(\sigma))} - 1 \right)^2 \tag{34} \]
where γ_m designates a weight assigned to mode m. The choice of these weights is actually crucial as it determines which modes are optimised first. For example, [START_REF] Sathej | the eigenspectra of indian muscial drums[END_REF] chose γ_m = p_m², which gives a relatively strong weight to high frequency modes. In our case, this choice prevents the gradient algorithm from converging.
Another interesting choice is the one by [START_REF] Tiwari | Effects of air loading on the acoustics of an indian musical drum[END_REF], equation (24), which is closely related to the uniform case but taking into account psychoacoustic phenomena. The present article follows [START_REF] Gaudet | The evolution of harmonic indian musical drums: A mathematical perspective[END_REF] (with a different choice of target ratios p (0,3) ) and [START_REF] Maugeais | How to apply a plaster a drum make it harmonic[END_REF], giving the same weight to all modes γ m = 1. This choice makes the gradient algorithm converge toward a physically realistic solution (cf. figure 17).
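With the choice γ_m = 1, formula (34) can be evaluated as follows (a sketch: omega maps each mode label to its complex eigenfrequency and p to its target harmonic ratio, e.g. p[(0, 1)] = 1 and p[(1, 1)] = 2):

```python
import numpy as np

def inharmonicity(omega, p, ref=(1, 1)):
    f_ref = np.real(omega[ref])
    # sum of squared deviations from the harmonic targets, gamma_m = 1 for all modes
    return sum((2.0 * np.real(omega[m]) / (p[m] * f_ref) - 1.0) ** 2 for m in p)
```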
Monotonification
As can be seen on figure 13, the gradient of the inharmonicity is neither of constant sign, nor monotonic (as a function of r), and does not even have a precise circular symmetry. All these characteristics are however expected from the density of the syahi. The non constancy of the sign is problematic for building the syahi, as it would require removing some matter, maybe even to the point of having negative density at some places: the syahi is usually built by adding small layers of mass. The non circularity of the gradient comes from the fact that only a finite set of modes is taken into account for the calculation of the inharmonicity, together with the choice of the limits in the definition of inharmonicity and the precision of the computations. This "problem" is more of a numerical nature, but it is handled quite naturally by the "monotonification" process described hereafter.
Finally, even for densities with circular symmetry, an optimal solution could have non playable shapes (for example those proposed in [START_REF] Antunes | Harmonic configurations of nonhomogeneous membranes[END_REF] Figure 7) which could be harmful to the player. Moreover, owing to the current building process of the syahi (cf. [START_REF] Roda | Resounding Objects: Musical Materialities and the Making of Banarasi Tablas[END_REF] or [START_REF] Courtney | Manufacture and repair of tabla[END_REF]), which is done by adding small concentric layers of paste of decreasing radius, letting them dry and rubbing them with a stone to produce the characteristic cracks (cf. figure 1), one hypothesis that seems reasonable, and which is found in actual physical tablas, is to suppose that the density is monotonic as a function of r.
Figure 14: Function σ_0 − δ grad_σ I(σ_0) as a multivalued function of r (in black) together with its monotonification (σ_0 − δ grad_σ I(σ_0))^♮ (in red).
A simple way to achieve all three hypotheses is to use a process called here "monotonification" which is done in three steps: for any real valued function λ on Σ
• transform λ into a non negative function,
• take the maximum of λ for every fixed radius r, thus ensuring the circular symmetry,
• take the cumulative maximum, thus ensuring the monotonicity in r.
These steps can be summarized in the formula
\[ \lambda^\natural(r) = \max_{r' \ge r}\ \max_{|\mathbf{r}| = r'}\ \max(\lambda(\mathbf{r}), 0) \tag{35} \]
giving a new function which depends only on r, is non negative and decreasing. On figure 14, the monotonified version \(\big(\sigma_0 - \delta\,\operatorname{grad}_\sigma I(\sigma_0)\big)^\natural\) is given for a uniform density σ_0 = 0.245 and a parameter δ given by the first step of the gradient algorithm (see algorithm 2). It should be noted that σ_0 − δ grad_σ I(σ_0) is drawn only as a function of r, which is why it appears as a multivalued function.
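On a polar sampling of the membrane, the three steps of (35) reduce to a few array operations (a sketch: lam[i, j] holds the values of λ at radius r_i, increasing with i, and angle θ_j):

```python
import numpy as np

def monotonify(lam):
    lam_pos = np.maximum(lam, 0.0)                            # step 1: non negative
    lam_circ = lam_pos.max(axis=1)                            # step 2: max over theta at fixed radius
    lam_flat = np.maximum.accumulate(lam_circ[::-1])[::-1]    # step 3: cumulative max from the rim inwards
    return lam_flat    # radial profile: non negative and decreasing in r
```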
Gradient algorithm
The algorithm used in this article is the usual gradient descent algorithm. Although there are more effective algorithms to find a global minimum, this one is very suitable for our application as the dimension of the search space is quite big (about 10 000) and it gives a result in only a few steps (about 30 steps for the order 3 FEM/BEM method with a mesh of 1096 points). The algorithm used for optimisation is detailed hereafter in algorithm 2.
Algorithm 2: Gradient algorithm with monotonification of σ
Data: σ
Result: locally optimal density
n ← 0
σ_0 ← σ
I_0 ← inharmonicity(σ_0)
do
    grad ← gradient of inharmonicity(σ_n)
    δ ← −I_n / ⟨grad, grad⟩
    σ_{n+1} ← (σ_n + δ·grad)^♮
    I_{n+1} ← inharmonicity(σ_{n+1})
    n ← n + 1
while I_n < I_{n−1}
The only significant difference from the usual gradient algorithm is the use of monotonification in the step σ_{n+1} ← (σ_n + δ·grad)^♮.
It should be noted that another choice is possible: applying the monotonification to the gradient rather than to the density. This latter choice has the effect of producing a series of increasing densities, σ_{n+1} ≥ σ_n, which may be closer to the reality of construction of the syahi, but gives results that are further from the optimum.
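Putting the pieces together, algorithm 2 can be sketched as follows (inharmonicity, gradient, inner and monotonify are assumed to wrap the FEM/BEM machinery, formulas (32)-(34) and (35)):

```python
def optimise_density(sigma, inharmonicity, gradient, inner, monotonify, max_steps=100):
    I_prev = inharmonicity(sigma)
    for _ in range(max_steps):
        grad = gradient(sigma)                        # gradient of the inharmonicity, formulas (32)-(33)
        delta = -I_prev / inner(grad, grad)           # step size of algorithm 2
        sigma_new = monotonify(sigma + delta * grad)  # monotonification applied to the density
        I_new = inharmonicity(sigma_new)
        if I_new >= I_prev:
            break                                     # stop when the inharmonicity no longer decreases
        sigma, I_prev = sigma_new, I_new
    return sigma
```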
Results of optimisation
The results of the gradient algorithm are given on figures 15 to 18 for an initial uniform density σ_0 = 0.245 kg/m², as in [START_REF] Tiwari | Effects of air loading on the acoustics of an indian musical drum[END_REF], Table II. The parameters used for the tabla are therefore T = 1822 N/m, a = 0.05 m and L = 0.1 m. Denoting by σ_T the optimal density obtained in loc. cit., we get I(σ_T) = 0.00229 with formula (34), which differs from the inharmonicity computed in [START_REF] Tiwari | Effects of air loading on the acoustics of an indian musical drum[END_REF] because of the choice of weights. It should be noted that, although σ_T was obtained for another choice of weights, it is very close to the optimal inharmonicity, emphasizing the robustness of the composite density.

The decrease of the inharmonicity is clearly seen on figure 15, and the algorithm stops after 36 steps with an optimal density σ_opt for which I(σ_opt) = 0.00019. The fact that I(σ_T) > I(σ_opt) is not a surprise, as σ_T satisfies all the hypotheses (symmetry, monotonicity) that are imposed upon the search space, which therefore includes that of [START_REF] Tiwari | Effects of air loading on the acoustics of an indian musical drum[END_REF].

The evolution of the frequency of each mode, normalised by the frequency of mode (1, 1), can be seen on figure 16. It seems that the higher the frequency, the slower the convergence. This is readily explained by the choice of weights, which favours the low frequencies.

The evolution of the density is displayed in figure 17, together with the composite optimal σ_T in red for reference. The proximity of σ_T and σ_opt is striking, emphasizing the fact that seeing the syahi as a composite membrane is a very good approximation. It should be noted that the rightmost point where σ_n > σ_0 shifts to the left from the very beginning of the optimisation. This is due to the choice of taking the monotonification on σ rather than on grad_σ I; it allows the optimisation process to obtain a smaller support for the syahi. A third ramp can be seen on the last obtained densities around r = 0.01 m. This suggests that, to improve the composite membrane of [START_REF] Tiwari | Effects of air loading on the acoustics of an indian musical drum[END_REF] even further, another ring should be added (see [START_REF] Vautour | Quasiharmonic circular membranes with a central disc or an isolated ring of different density[END_REF] for a study in this direction without the air loading).

The evolution of the decay time of each mode can be seen on figure 18. During the whole optimisation process, the decay times increase, giving for the optimal density a sustain that is up to 10 times greater for σ_opt than for σ_0, in particular for the low-frequency modes. This is compatible with the knowledge of tabla makers, for whom adding a smaller central circle adds to the resonance (cf. [START_REF] Roda | Resounding Objects: Musical Materialities and the Making of Banarasi Tablas[END_REF], section 6.3).
Sounds obtained at each step of the simulation are available online at http://perso.univ-lemans.fr/ ~smauge/tabla/
Conclusion
A non-parametric optimisation procedure is proposed to produce a non-homogeneous membrane, loaded by air, whose first partials are harmonic. Together with mild hypotheses ensuring the playability of the membrane, this procedure gives a realistic density that is close to that of an actual physical tabla. As a byproduct, it gives an explicit formula for the changes in the frequencies and decay times of the eigenmodes with respect to an infinitesimal variation of the density. Moreover, being based on finite element and boundary element methods, it can readily be applied to non-circular geometries.

The method developed here gives a density close to those obtained in previous studies, highlighting the good approximation given by a composite membrane. The main novelty of the present article lies in the numerical scheme, which uses a non-parametric optimisation together with mild hypotheses that are inherent to the manufacture of the syahi: circular symmetry and radial monotonicity. As such, it could be used in more complex settings where no obvious solution, such as the composite one, is known.

As in the previously developed methods, the optimisation is done only for the first five frequencies (equivalent to 15 eigenmodes, some of them being degenerate). This is justified as these are the only prevalent frequencies observed in the spectrum of the tabla. It was conjectured by Raman in [START_REF] Raman | Indian musical drums[END_REF] that the reason for this limited spectrum is the presence of a second, annular-shaped membrane that rests on the main one. We hope that the tools developed in the present article will help investigate this conjecture. One of the main drawbacks of the method is the assumption that the stiffness is negligible. This would require a more precise investigation, as the added mass is important and the material appears to be quite rigid. As stated in the introduction, the usual explanation for this low stiffness is the reticulum of cracks, which, to our knowledge, has not been studied in depth and would deserve more attention.
Figure 1: Close up of the syahi. The reticulum of cracks and different layers can be seen.
Table 1: List of symbols
Σ: Surface of the drumhead
F: Infinite flange
a: Radius of the drumhead (in m) = 0.05 m for a tabla
L: Height of the kettledrum (in m) = 0.1 m for a tabla
x = (ρ, θ, z): Point in space, cylindrical coordinates
r = (ρ, θ): Point in the membrane, polar coordinates
σ: Surface density of the drumhead (in kg/m²) = 0.245 kg/m² for a uniform tabla membrane
η: Transverse displacement of the drumhead (in m)
T: Membrane tension (in N/m) = 1822 N/m for a tabla
ν: Viscoelastic coefficient of the drumhead (in s) = 0.6 × 10⁻⁶ s
p_in: Pressure inside the kettledrum
p_out: Pressure above the flange F
ρ_0: Air density = 1.20 kg/m³
c_a: Speed of sound = 344 m/s
Figure 2: Model of the tabla.
Figure 3: Relative error in cents for the modes in vacuo for a uniform circular membrane compared to theoretical values, for different uniform meshes and different orders.
Figure 4: Relative error in cents for the eigenvalues of the Laplace operator with zero Neumann boundary condition, for different uniform meshes and different orders.
Figure 10: Evolution of frequencies along the line σ_U + δσ, together with the derivative computed with the real part of formula (32).
Figure 11: Evolution of τ_60 along the line σ_U + δσ, together with the derivative computed with the imaginary part of formula (32).
Figure 13: Gradient of inharmonicity computed for a uniform density.
Figure 15: Evolution of inharmonicity during optimisation (blue curve). Inharmonicity of the optimal obtained in [START_REF] Tiwari | Effects of air loading on the acoustics of an indian musical drum[END_REF] is printed for reference (dotted red curve).
Figure 16: Evolution of the frequency of each mode, normalised by the frequency of mode (1, 1), during optimisation.
Figure 17: Evolution of the density during optimisation, together with the composite optimal σ_T for reference.
Figure 18: Evolution of τ_60 of each mode during optimisation.
00410223 | en | [
"shs.edu"
] | 2024/03/04 16:41:20 | 2009 | https://edutice.hal.science/edutice-00410223/file/Reffay_Betbeder_CSCL_WS15.pdf | Christophe Reffay
Marie-Laure Betbeder
CSCL -Workshop
Extending validation of tools and analyses in CSCL situations: How to collaborate on interaction analysis?
Christophe Reffay, Marie-Laure Betbeder
Introduction
The Mulce Project aims at developing a server that would allow researchers to share Learning and Teaching Corpora (Letec). Our main goals are, first, to facilitate the work of researchers in the CSCL field by reusing existing corpora instead of creating new experiments and collecting and organizing new data, and, secondly, to connect these corpora to shared analyses and visualization tools. We also hope that this process of sharing will deepen and widen the validity of research tools and analyses. The limited space of our publications generally does not allow authors to include their data or the detailed context of their source experiments, which means that their results cannot be reused or reproduced.
In the first part of this paper, we present some of our work related to the workshop topics, in particular visualization and analysis tools for synchronous and asynchronous interactions. In the second part, we discuss the lack of validity of indicators, analysis tools and results in this particular domain. The last part presents the Mulce project, which is an attempt to address this lack of validity.
Our work on Analysis and visualization tools
In previous research, presented at CSCL 2003, we built an automatable computing process based on Social Network Analysis to evaluate the cohesion of a learning group using e-mail and forum messages [START_REF] Reffay | How social network analysis can help to measure cohesion in collaborative distance-learning?[END_REF]. Another work, mainly conducted during the PhD thesis of A. Mbala [START_REF] Mbala | Analyse, Conception, spécification et développement d'un système multi-agents pour le soutien des activités en formation à distance[END_REF] and presented at ITS 2002 (Intelligent Tutoring Systems) [START_REF] Mbala | Integration of automatic tools for displaying interaction data in computer environments for distance learning[END_REF], is a multi-agent system that tries to detect learners whose participation is steadily decreasing, in order to prevent them from dropping out. We have also worked on the analysis of data resulting from a tailorable framework to support collective activities in a learning context [START_REF] Betbeder | Symba: a tailorable framework to support collective activities in an learning context[END_REF]. We analysed the contents of the acts and worked out the proportion of acts related to the following four categories: activity achievement, group organization, environment tailoring and socialization.
In collaboration with a research team in linguistics, we are also involved in the field of multimodal interaction analysis [START_REF] Betbeder | Interactions multimodales synchrones issues de formations en ligne : problématiques, méthodologie et analyses[END_REF], combining quantitative analysis (macroscopic level) and content analysis (microscopic level) of the micro-actions that each participant can perform in an audiographic synchronous collaborative environment (audio talking turns, chat acts, votes, paragraph production in a shared text document, objects in a shared concept map or whiteboard). Our contributions in this field deal with (1) pattern discovery in sequences of such actions [START_REF] Betbeder | Recherche de patterns dans un corpus d'actions multimodales[END_REF] and (2) a visualization tool [START_REF] Betbeder | Interactions multimodales synchrones issues de formations en ligne : problématiques, méthodologie et analyses[END_REF] which emphasizes the intertwinement of the actors' synchronous acts. In such an environment, actors can interact using different modalities at the same time. Due to the importance of time and of the sequence of acts, such phenomena are hardly visible through a database representation.
We are currently working on navigation through corpora. This includes the selection of corpora and the visualization of archived interaction acts. Once a corpus has been chosen, a query system allows the selection of parts of the corpus according to, for example, a time restriction, a set of communication tools, an author or a group, or a string search in the interaction contents. The resulting set of acts can be visualized in the appropriate form, according to the communication tool from which it was recorded. This heterogeneous set of acts is organised by our XSD schema, specified by the Mulce project, presented in (Reffay et al., 2008) and available on the Mulce project site1.
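As a purely illustrative sketch, such a query step could look like the following; the act fields (date, tool, author, text) are hypothetical simplifications and do not reproduce the actual elements of the Mulce schema.

    from datetime import datetime

    def select_acts(acts, start=None, end=None, tools=None, author=None, contains=None):
        # Filter interaction acts on time window, communication tool, author and content.
        selected = []
        for act in acts:  # each act is a dict with hypothetical keys: date, tool, author, text
            if start and act["date"] < start:
                continue
            if end and act["date"] > end:
                continue
            if tools and act["tool"] not in tools:
                continue
            if author and act["author"] != author:
                continue
            if contains and contains.lower() not in act["text"].lower():
                continue
            selected.append(act)
        return selected

    acts = [
        {"date": datetime(2001, 5, 14, 10, 2), "tool": "chat", "author": "L3", "text": "hello everyone"},
        {"date": datetime(2001, 5, 20, 16, 45), "tool": "forum", "author": "T1", "text": "about the corpus structure"},
    ]
    found = select_acts(acts, start=datetime(2001, 5, 1), tools={"chat", "forum"}, contains="corpus")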
The need for validation
In the CSCL domain, it seems to us that the lack of validity of results and tools is becoming more and more of a problem. We can state our view on this problem more precisely in the following terms:
-Lack of transparency and availability (accessibility for other researchers) of interaction data resulting from an online learning situation. These data are unusable for others. Furthermore, the pedagogical and scientific contexts of these learning situations are rarely published.
- Too many analysis tools are developed by a single team and only tested in a given (and often unspecified) context. Since interaction and research data are unreachable for the rest of the community, the scientific results cannot be replicated or reproduced [START_REF] Rourke | Methodological Issues in the Content Analysis of Computer Conference Transcripts[END_REF][START_REF] Garrison | Revisting methodological issues in the analysis of transcripts: Negotiated coding and reliability[END_REF]. As a consequence, no comparison is possible between methods or results obtained on one set of interaction data and another.
Studying collaborative online learning, in order to understand this specific type of situated human learning, to evaluate scenarios and associated devices, or to improve technological environments, requires access to the interaction data collected from the various actors (e.g. learners, teachers, tutors) that participate in learning situations. Recent international research publications and scientific events are related to this topic. But the interdisciplinary communities involved in this research have not yet been able to characterise a sharable scientific object according to a comprehensible methodology.
On the one hand, we have partial data, not contextualised with the pedagogical and technological elements of the learning situation, or else raw data that are inextricably tangled in specific software using proprietary formats. A simple collection of students' online interaction data is not a scientific object for the research community focused on online learning. In the language learning field, this idea is emphasised by Kern, Ware and Warschauer as follows:
Researchers must carefully document the relationships among media choice, language usage, and communicative purpose, but they must also attend to the increasingly blurry line separating linguistic interaction and extra linguistic variables.
[…] Studies of linguistic interaction will likely need to account for a host of independent variable: the instructor's role as mediator, facilitator, or teacher; cross-cultural differences in communicative purpose and rhetorical structure; institutional convergence or divergence on defining course goals; and the affective responses of students involved in online language learning projects. [START_REF] Kern | Crossing frontiers: new directions in online pedagogy and research[END_REF]

Our research domain is not only concerned with learning, but more widely with all pedagogical aspects and viewpoints, and particularly with teaching. Thus, some studies aim to "[…] gather evidence about the effects of instructional conditions of instruction" (Chapelle, 2004: 594). Success in such studies requires gathering the context elements, in particular those characterising the pedagogical situation. From a methodological point of view, this leads to linking up the various data sources in order to create a scientific object worthy of analysis. This idea is emphasised by the following excerpt concerning interaction in discussion boards:
Research in computer mediated communication in education aims at describing complex phenomena by using content analysis methods that flavour partially only some aspects of the communication. The method should be able to consider the discourse as a situated verbal interaction, in its various dimensions: linguistic, situational (in the universe of reference and interaction situation) and hierarchical constraints of the discourse. [START_REF] Henri | L'analyse des forums de discussion Pour sortir de l'impasse[END_REF]

On the other hand, the inaccessibility of research data, which is in fact the reality in almost the whole international community, hinders online learning situations from being considered as a scientific object: it impedes verification, controversy, replication, refinement, multiple analyses, etc. Even if multiple analyses are strongly encouraged by research leaders, they are still exceptions, which we can illustrate with the studies of Kramsch and Thorne [START_REF] Kramsch | Foreign language learning as global communicative practice[END_REF] and [START_REF] Thorne | Artifacts and cultures-of-use in intercultural communication[END_REF]. Analysing some interaction excerpts extracted from a learning situation informally described in [START_REF] Kern | Literacy and Language Teaching[END_REF], they gave a different interpretation to explain the failure of this interaction between learners of different mother tongues (Kern et al., 2004: p251). Reanalysis can be motivated by various factors, such as verification (as in the previous example), the use of alternative content analysis methods (cf. the special issue of Computers & Education (Valke & Martens, 2006)), comparison of results provided by distinct disciplinary approaches [START_REF] Corbel | A method for capitalizing upon and synthesizing analyses of human interactions[END_REF], etc.

However, beyond this contrastive or alternative view, one can consider that the analysis approach is by nature a cumulative process, built by distinct research teams and supported by previous analyses, each of them contributing its own set of annotations. In this sense, the transcription process of audio-oral or multimodal interaction (Vetter & Chanier, 2006; Ciekanski & Chanier, 2008) is the necessary first step before any other analysis. In the same way, we can consider that chat sessions or forum sets have to be prepared in a target format from the raw data tangled in the proprietary structures of the various platforms, in order to make explicit the speech turns or messages, the speaker/scripter, etc. This first step, giving the content in a textual and structured form, can be followed by a first level of annotation coming from a conversational analysis, and by a second one concerning a discourse analysis. Such research practices are already well rooted in the NLP (Natural Language Processing) research domain, where, starting from corpus extracts, distinct researchers can cumulate different levels of description/annotation: morphologic, syntactic, anaphoric, etc. [START_REF] Salmon-Alt | Un modèle générique d'organisation des corpus en ligne[END_REF].
A research community becomes mature by sharing (contextualised) resources, tools and practices. Resources and tools are, alongside our publications, components of the scientific contribution that our research institutions ask us to put in open access (Berlin, 2003; Chanier, 2004: 121). A necessary condition for sharing such resources is open access, which should be supported by a set of techniques and protocols (standardisation, interoperability, metadata, etc.). We also need to solve the problems of rights in our research domain; since learning and teaching studies involve human beings in social experiments, the major ethical question has been stated by Chapelle in the following terms:
Any discussion of technology in second language research would not be complete without raising the ethical challenges that researchers face in SLA [Second Language Acquisition] research in general and particularly in research involving the collection and archiving of personal performance data that reveal personal attributes (Chapelle, 2004: 599).
Mulce project
In order to widen and strengthen the scientific approach in the online learning domain, and specifically the analysis of online interaction in learning situations, we launched the MULCE (MULtimodal Corpus Exchange) project [START_REF] Mulce | English version of the MULCE Project homepage[END_REF]. The first step of this project has been to define (and exemplify) the notion of "Learning and Teaching Corpus". Its main objectives are:
- to define the data structure of a "Learning and Teaching Corpus";
- to specify and develop a technical support to effectively share such corpora, integrating OLAC specifications (Open Language Archives Community) and OAI-PMH (Open Archives Initiative's Protocol for Metadata Harvesting);
- to rebuild two of our global corpora (Simuligne and Copeas) so that they are accessible through this platform and documented according to the specifications of a "Learning and Teaching Corpus".
We propose (1) a formalism to describe learning and teaching corpora and (2) a platform to share them among the research community. The formalism defines the information which can be contained in a corpus and the structure of the data. Through the platform, researchers can share their corpora with the community and access the data shared by other members of the community. To share a corpus, a researcher has to provide metadata describing the corpus components and upload a file describing each component. When accessing a corpus, an identified researcher is provided with a variety of tools to browse the corpus components, to navigate through the contextualized interaction data, and to visualize and analyze them.
Our major goal is to develop efficient tools and technical environments to help the wide variety of actors involved in online teaching and learning.
Mulce data structure available here: http://mulce.univ-fcomte.fr/metadata/mce-schemas/mce_sid.xsd |
00410225 | en | [
"shs.edu"
] | 2024/03/04 16:41:20 | 2009 | https://edutice.hal.science/edutice-00410225/file/Reffay_Betbeder_CSCL_WS4.pdf | Christophe Reffay
Marie-Laure Betbeder
Sharing Corpora, Analysis and Tools for CSCL Interaction Analysis
Sharing Corpora, Analysis and Tools for CSCL Interaction Analysis
Christophe Reffay, Marie-Laure Betbeder, Computer Science Laboratory of the Franche-Comté University, France.
Interaction analysis
In previous research, presented at CSCL 2003, we built an automatable computing process based on Social Network Analysis to evaluate the cohesion of a learning group using e-mail and forum messages [START_REF] Reffay | How social network analysis can help to measure cohesion in collaborative distance-learning? In proceeding of CSCL[END_REF]. Another work, mainly conducted during the PhD thesis of A. Mbala [START_REF] Mbala | Analyse, Conception, spécification et développement d'un système multi-agents pour le soutien des activités en formation à distance[END_REF] and presented at ITS 2002 (Intelligent Tutoring Systems) [START_REF] Mbala | Integration of automatic tools for displaying interaction data in computer environments for distance learning[END_REF], is a multi-agent system that tries to detect learners whose participation is steadily decreasing, in order to prevent them from dropping out. We have also worked on the analysis of data resulting from a tailorable framework to support collective activities in a learning context [START_REF] Betbeder | Symba: a tailorable framework to support collective activities in an learning context[END_REF]. We analysed the content of the acts and worked out the proportion of acts related to the following four categories: activity achievement, group organization, environment tailoring and socialization.
In collaboration with a research team in linguistics, we are also involved in the field of multimodal interaction analysis [START_REF] Betbeder | Interactions multimodales synchrones issues de formations en ligne : problématiques, méthodologie et analyses[END_REF], combining quantitative (macroscopic level) and content analysis (microscopic level) on micro actions that each participant can realize in an audiographic synchronous collaborative environment (audio talking turns, chat acts, votes, paragraph production in a shared text document, objects in a shared concept map or whiteboard). Our contributions in this field deal with (1) pattern discovering in sequences of such actions [START_REF] Betbeder | Recherche de patterns dans un corpus d'actions multimodales[END_REF], (2) a visualization tool [START_REF] Betbeder | Interactions multimodales synchrones issues de formations en ligne : problématiques, méthodologie et analyses[END_REF] which emphasizes the intertwinement of the actors' synchronous acts. In such an environment, actors can interact by using different modalities at the same time. Due to the importance of time and sequence of acts, such phenomena are hardly visible through a database representation.
We are currently working on navigation through corpora. This includes the selection of corpora and visualization of archived interaction acts and the context of the learning situation. Once the corpus has been chosen, a system of requests provides selection of parts of the corpus by considering for example: time restriction, selected communication tools, author or group or string search in interaction contents. The resulting set of acts can be visualized in the appropriate form, according to the communication tool they are recorded from. This heterogeneous set of acts is organised by our XSD schema, specified by the Mulce project, presented in (Reffay & al., 2008) and available on the Mulce project site1 .
To facilitate analysis on the platform, we also want to propose tools originating from our research team or from partnerships. For example, we have two running collaborations: the Calico2 project and Tatiana [START_REF] Dyke | Analysing face to face computer-mediated interactions[END_REF]. The Calico project aims at proposing different visualization and analysis tools, available as servlets on their platform, specialized in discussion forums. Tatiana is a client application that combines and synchronizes various data sources (e.g. videos, logs, etc.). It includes a replayer and an annotator, as well as various visualization facilities typically useful for synchronous interactions (face-to-face, chat, synchronously shared document editors).
Mulce project
The Mulce3 Project aims at developing a server that would allow researchers to share Learning and Teaching Corpora (Letec). Our main goals are: first, to facilitate the work of researchers in the CSCL field by reusing existing corpora instead of creating new experiments, collecting and organizing data, and secondly, to connect these corpora to shared analyses and visualization tools. We also hope that this process of sharing would deepen and widen the validity of research tools and analyses.
Its main objectives are (1) to define the data structure of a "Learning and Teaching Corpus", (2) to specify and develop a technical support to effectively share such corpora, integrating OLAC specifications (Open Language Archives Community) and OAI-PMH (Open Archives Initiative's Protocol for Metadata Harvesting) and (3) to rebuild two of our global corpora (Simuligne and Copeas) so that they are accessible through this platform and documented according to the specifications of a "Learning and Teaching Corpus".
We propose a formalism to describe learning and teaching corpora and a platform to share them among the research community. The formalism defines the information which can be contained in a corpus and the structure of the data. Through the platform, researchers can share their corpora with the community and access the data shared by other members of the community. To share a corpus, a researcher has to provide metadata describing the corpus' components and upload a file describing each component. While accessing a corpus, an identified researcher is provided with a variety of tools to browse the corpus components, to navigate through the contextualized interaction data, to visualize and to analyze them. Our major goal is to develop efficient tools and technical environments to help the wide variety of actors involved in online teaching and learning.
Mulce data structure available here: http://mulce.univ-fcomte.fr/metadata/mce-schemas/mce_sid.xsd
French national research project coordinated by E. Bruillard (ERTÉ: Technical Research Team in Education) http://calico.inrp.fr/
Multimodal Learning Corpus Exchange (2007-2009). http://mulce.univ-fcomte.fr/axescient.htm#eng |
04102260 | en | [
"info.info-lg",
"info.info-dc"
] | 2024/03/04 16:41:20 | 2023 | https://hal.science/hal-04102260/file/document.pdf | Ousmane Touat
Towards Robust and Bias-free Federated Learning
Keywords: Federated Learning, Bias Mitigation, Robustness, Byzantine Behavior
CCS Concepts: • Computing methodologies → Distributed artificial intelligence; • Social and professional topics → User characteristics; • Computer systems organization → Reliability
INTRODUCTION
Federated Learning (FL) is a distributed learning paradigm that allows multiple entities, denoted FL clients, to collaborate to learn a model with the help of a central node called the server, with privacy in mind, as the entities do not share their training data [START_REF] Mcmahan | Communication-Efficient Learning of Deep Networks from Decentralized Data[END_REF]. The deployment of FL in high-risk sectors such as healthcare has sparked concerns, as the models may exhibit bias due to underrepresented minorities in data, leading to poorer performance for specific groups [START_REF] Yahya | FairFed: Enabling Group Fairness in Federated Learning[END_REF]. For example, there have been reports of image classification models used for diagnosing skin cancer showing less accurate results for people of color [START_REF] Roxana | Disparities in dermatology AI performance on a diverse, curated clinical image set[END_REF]. Furthermore, some FL clients, denoted Byzantine clients, can significantly impact the FL model training by sending arbitrary updates to the server [START_REF] Blanchard Peva | Machine Learning with Adversaries: Byzantine Tolerant Gradient Descent[END_REF]. Addressing model bias and Byzantine clients in FL systems is critical, and several existing works address these issues separately. For instance, FL bias mitigation methods formulate an optimization approach with bias constraints, requiring FL clients to send additional statistics on their local data [START_REF] Cui | Addressing Algorithmic Disparity and Performance Inconsistency in Federated Learning[END_REF][START_REF] Yahya | FairFed: Enabling Group Fairness in Federated Learning[END_REF][START_REF] Wei | Fairness-Aware Agnostic Federated Learning[END_REF]. On the other hand, robust FL systems usually rely on passive outlier detection mechanisms using robust aggregation or gradient clipping [START_REF] Blanchard Peva | Machine Learning with Adversaries: Byzantine Tolerant Gradient Descent[END_REF][START_REF] Pillutla | Robust Aggregation for Federated Learning[END_REF][START_REF] Yin | Byzantine-Robust Distributed Learning: Towards Optimal Statistical Rates[END_REF]. However, incorporating both goals into a single FL system is challenging, as the current objectives are contradictory.
FL bias mitigation requires considering outlier clients for the added representativity of their local data, while robust FL systems may eliminate outlier clients, which can worsen model bias [START_REF] Sai | Byzantine-Robust Learning on Heterogeneous Datasets via Bucketing[END_REF][START_REF] Wang | Attack of the Tails: Yes, You Really Can Backdoor Federated Learning[END_REF]. Therefore, finding a balance between these two goals in a single FL system is complex and requires careful consideration to avoid compromising one for the other (see Figure 1 and Figure 2). This paper investigates the challenges of building a FL system that combines bias constraints and robustness guarantees. We empirically showcase where classical robust aggregation degrades model bias and hinders bias mitigation. We then discuss some research directions to build a FL system combining robustness and bias mitigation goals.
Fig. 1. Applying a robustness mechanism before bias mitigation: the FL robustness mechanism may also filter out the update of client 3 ("honest but minority"), losing data that is valuable for the FL bias mitigation mechanism.
Fig. 2. Applying bias mitigation first in a Byzantine setting does not work: the FL bias mitigation mechanism is directly exposed to the Byzantine influence.
BACKGROUND AND SYSTEM MODEL
In this section, we recall the principles of FL, demographic bias and the standard Byzantine attack model.
Federated Learning
In Federated Learning (FL), we have n clients that collectively aim to optimize a shared objective function, coordinated by a central server. The training process progresses through communication rounds, in which the central server sends the current global ML model, with parameters θ, to the FL clients. In this setup, the i-th client has its local data, denoted D_i, and trains a local model, denoted θ_i. We denote by D the union of the clients' local data. At the end of the round, the clients send their updated models to the server, which computes the new global model by aggregating the clients' updates. The classical approach for FL aggregation is called FedAvg [START_REF] Mcmahan | Communication-Efficient Learning of Deep Networks from Decentralized Data[END_REF] and consists in averaging the clients' model parameters as follows:
θ = Σ_{k=1}^{N} ( |D_k| / |D| ) · θ_k        (1)
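As an illustration, formula (1) amounts to the following sketch, where client models are represented as flat parameter vectors (an assumption made for brevity):

    import numpy as np

    def fedavg(client_models, client_sizes):
        # FedAvg aggregation of formula (1): data-size weighted average of client parameters.
        total = sum(client_sizes)
        return sum((size / total) * theta for size, theta in zip(client_sizes, client_models))

    # Example: three clients holding 100, 50 and 10 samples respectively.
    thetas = [np.array([0.2, 1.0]), np.array([0.4, 0.8]), np.array([5.0, -3.0])]
    global_theta = fedavg(thetas, [100, 50, 10])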
Bias in Federated Learning
Bias refers to the property of a classifier model to treat different groups disparately based on a sensitive attribute, denoted S. A sensitive attribute is binary, as it partitions the dataset into two groups. One widely used metric for measuring bias with respect to a sensitive attribute is the statistical parity difference (SPD) [START_REF] Dwork | Fairness Through Awareness[END_REF].
With y the prediction value from the FL model and S the value of the sensitive attribute, we define SPD_S as follows:

SPD_S = | Pr(y = 1 | S = 1) - Pr(y = 1 | S = 0) |        (2)
In this formulation, a value of zero for SPD_S indicates that the algorithm is statistically fair with respect to the sensitive attribute, as it predicts positive outcomes with the same probability for both groups. Minimizing the value of such a metric is relevant in cases where the classifier output must be independent of the sensitive attribute (e.g., algorithms with a societal impact that face issues of demographic discrimination).
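For illustration, SPD can be estimated from a sample of predictions as in the following sketch (the variable names are illustrative):

    import numpy as np

    def statistical_parity_difference(y_pred, sensitive):
        # SPD of formula (2): |P(y=1 | S=1) - P(y=1 | S=0)| estimated on a sample.
        y_pred = np.asarray(y_pred)
        sensitive = np.asarray(sensitive)
        rate_s1 = y_pred[sensitive == 1].mean()
        rate_s0 = y_pred[sensitive == 0].mean()
        return abs(rate_s1 - rate_s0)

    # A statistically fair predictor on this toy sample: both groups get positives at rate 0.5.
    spd = statistical_parity_difference([1, 0, 1, 0], [1, 1, 0, 0])  # -> 0.0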
Byzantine Attack Model in Federated Learning
The presence of malicious parties, known as Byzantine clients and denoted B, can threaten the integrity of the FL process. The fraction of Byzantine clients is given by ρ, and the number of Byzantine clients is denoted by q, such that q ≤ ρn. Byzantine clients can deviate from the standard protocol and send arbitrary updates to the server; they may even collude and have knowledge of the states of all other clients. The remaining n - q clients who follow the protocol, denoted as honest clients, are represented by the set H. FL systems such as FedAvg are directly vulnerable to Byzantine attacks [START_REF] Blanchard Peva | Machine Learning with Adversaries: Byzantine Tolerant Gradient Descent[END_REF]. For example, consider a FL system of n clients including one Byzantine client, and suppose that each honest client sends an update θ_i, for i ∈ {1, . . . , n - 1}. The Byzantine client can craft an update

θ_byz = ( |D| · θ_wt - ( |D_1| · θ_1 + · · · + |D_{n-1}| · θ_{n-1} ) ) / |D_n|

to make the server compute any desired global model θ_wt. This happens because the server, unaware of the presence of the Byzantine client, aggregates all received updates with formula (1), producing exactly the global model θ_wt that the Byzantine client wanted. Such behavior harms the FL system greatly, making it unable to converge reliably, and motivates the research on robust FL methods.
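The attack can be made concrete in a few lines of code. The sketch below reuses the fedavg helper given above and assumes, for simplicity, that all clients report the same data size:

    import numpy as np

    n = 4
    target = np.array([10.0, -10.0])   # the model the attacker wants the server to adopt
    honest = [np.array([0.2, 1.0]), np.array([0.3, 0.9]), np.array([0.1, 1.1])]

    # With equal data sizes, FedAvg computes the plain mean, so one crafted update suffices:
    byzantine = n * target - sum(honest)
    aggregated = fedavg(honest + [byzantine], [1] * n)   # equals target exactly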
RELATED WORK
Robustness to Byzantine behavior in Federated Learning
The Byzantine fault tolerance problem is well-known in distributed systems, where malicious nodes may provide incorrect information, leading to incorrect results. In FL, this problem can arise when some participating clients behave maliciously and manipulate the aggregated model parameters. Several Byzantine resilient FL methods have been proposed in the literature using the statistical properties of the client updates.
Krum [START_REF] Blanchard Peva | Machine Learning with Adversaries: Byzantine Tolerant Gradient Descent[END_REF], and by extension Multi-Krum, orders the clients' updates using a score that states how close each update is to its n - f - 2 nearest neighbors in Euclidean distance. RFA [START_REF] Pillutla | Robust Aggregation for Federated Learning[END_REF] computes an approximate value of the geometric median of all client updates using Weiszfeld's algorithm. TrimmedMeans [START_REF] Yin | Byzantine-Robust Distributed Learning: Towards Optimal Statistical Rates[END_REF] computes the global model by taking the mean value of each coordinate of the client updates after truncating the extreme values. For model poisoning attack settings, NDC [START_REF] Sun | Can You Really Backdoor Federated Learning?[END_REF] adopts a norm-thresholding policy that limits the norm of the model parameters by clipping it when it exceeds a fixed threshold. However, such robust aggregation techniques fail when used on non-IID data and cannot withstand attacks that degrade the global model performance [START_REF] Sai | Byzantine-Robust Learning on Heterogeneous Datasets via Bucketing[END_REF]. Recently, new defense mechanisms have tackled the impact of Byzantine behavior in the non-IID FL context. FedInv [START_REF] Zhao | FedInv: Byzantine-Robust Federated Learning by Inversing Local Model Updates[END_REF] performs model inversion on the clients' updates to generate dummy datasets and compares clients through the Wasserstein distance between the dummy dataset distributions, empirically showing robustness in the non-IID FL context. Karimireddy et al. recently introduced a bucketing technique in which the server randomly partitions client updates into "buckets", providing some guarantees when this technique is used alongside the previously mentioned robust aggregation techniques in non-IID and Byzantine FL contexts [START_REF] Sai | Byzantine-Robust Learning on Heterogeneous Datasets via Bucketing[END_REF]. Furthermore, Byzantine resilience can also be addressed by ensuring the integrity of the participating nodes using enclaves such as TEEs [START_REF] Hanieh | Byzantine-Robust and Privacy-Preserving Framework for FedML[END_REF].
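To make the flavour of these defences concrete, the following sketch gives simplified versions of the coordinate-wise trimmed mean and of the Krum score; it follows the published descriptions but omits the implementation details of the original papers.

    import numpy as np

    def trimmed_mean(updates, trim):
        # Coordinate-wise mean after dropping the trim largest and smallest values per coordinate.
        stacked = np.sort(np.stack(updates), axis=0)
        return stacked[trim: len(updates) - trim].mean(axis=0)

    def krum_scores(updates, f):
        # Krum score: sum of squared distances to the n - f - 2 closest other updates (lower is better).
        n = len(updates)
        dists = np.array([[np.sum((u - v) ** 2) for v in updates] for u in updates])
        scores = []
        for i in range(n):
            closest = np.sort(np.delete(dists[i], i))[: n - f - 2]
            scores.append(closest.sum())
        return np.array(scores)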
Bias Mitigation in Federated Learning
The issue of model bias is well known in machine learning [START_REF] Ninareh | A Survey on Bias and Fairness in Machine Learning[END_REF] and has also received prime attention in FL. In FL, we distinguish between approaches that mitigate the discrepancy in model performance between clients [START_REF] Li | Ditto: Fair and Robust Federated Learning Through Personalization[END_REF][START_REF] Song | Profit Allocation for Federated Learning[END_REF] and approaches that address model bias between demographic groups in FL, which is where our work is positioned. Two families of techniques tackle this second topic, namely client-side techniques and server-side techniques. Client-side techniques apply a local debiasing mechanism, such as data reweighting [START_REF] Abay | Mitigating Bias in Federated Learning[END_REF]. However, such techniques may fail on the global bias objective, especially when the FL setup is highly non-IID [START_REF] Yahya | FairFed: Enabling Group Fairness in Federated Learning[END_REF]. Among server-side techniques, most FL methods seek to solve a globally bias-constrained objective, requiring local clients to share information about the local statistics of sensitive attributes [START_REF] Cui | Addressing Algorithmic Disparity and Performance Inconsistency in Federated Learning[END_REF][START_REF] Yahya | FairFed: Enabling Group Fairness in Federated Learning[END_REF][START_REF] Wei | Fairness-Aware Agnostic Federated Learning[END_REF][START_REF] Daniel Yue Zhang | FairFL: A Fair Federated Learning Approach to Reducing Demographic Bias in Privacy-Sensitive Classification Models[END_REF]. AgnosticFair [START_REF] Wei | Fairness-Aware Agnostic Federated Learning[END_REF] uses local information from the clients to optimize the data reweighting coefficients sent back to the clients, in order to solve its globally constrained objective. FairFL [START_REF] Daniel Yue Zhang | FairFL: A Fair Federated Learning Approach to Reducing Demographic Bias in Privacy-Sensitive Classification Models[END_REF] trains a client-selection policy using multi-agent reinforcement learning to maximize a gain function based on how well bias mitigation is performed on the global model. FairFed [START_REF] Yahya | FairFed: Enabling Group Fairness in Federated Learning[END_REF] linearly weights the clients' local model parameters based on the discrepancy between their local bias and the global bias metric. FCFL [START_REF] Cui | Addressing Algorithmic Disparity and Performance Inconsistency in Federated Learning[END_REF] also linearly weights the gradient updates from the clients, to optimize both a bias constraint and consistency in the clients' model performance.
Federated Learning with Bias and Robustness Guarantees
While the issues of demographic bias and robustness have been extensively explored in the literature, combining these two notions within a single system still needs to be examined. Ditto [START_REF] Li | Ditto: Fair and Robust Federated Learning Through Personalization[END_REF] confronts the notion of robustness with that of client-level fairness, showing that these two notions are competing, and implements a FL system that combines these properties using client model personalization. Recently, Singh et al. [START_REF] Singh | Fair detection of poisoning attacks in federated learning on non-i.i.d. data[END_REF] built a robustness mechanism that minimizes the number of honest "minority" clients detected as malicious. Assuming each client represents only one group, the server employs a decentralized micro-aggregation algorithm to cluster the clients based on their published sensitive attribute, identifying as malicious the clients whose updates are too far from the cluster centroids.
PROBLEM ILLUSTRATION
While the robustness of FL systems is essential, using robust aggregators in FL may worsen model bias, particularly in non-IID FL settings. The robustness methods may eliminate outlier clients with different data distributions, suppressing their contribution to the global model. In this section, we further illustrate this problem and explore its implications for model bias in FL.
Impact of FL Robustness Methods on Bias
We compared the behavior of four robust aggregators, namely Multi-Krum [START_REF] Blanchard Peva | Machine Learning with Adversaries: Byzantine Tolerant Gradient Descent[END_REF], RFA [START_REF] Pillutla | Robust Aggregation for Federated Learning[END_REF], TrimmedMeans [START_REF] Yin | Byzantine-Robust Distributed Learning: Towards Optimal Statistical Rates[END_REF] and NDC [START_REF] Sun | Can You Really Backdoor Federated Learning?[END_REF], on two tabular datasets, MEPS [START_REF] Cohen | Design Strategies and Innovations in the Medical Expenditure Panel Survey[END_REF] and Adult [START_REF] Kohavi | Scaling Up the Accuracy of Naive-Bayes Classifiers: A Decision-Tree Hybrid[END_REF], in non-IID FL settings. We also added FedAvg [START_REF] Yin | Byzantine-Robust Distributed Learning: Towards Optimal Statistical Rates[END_REF] as a FL baseline without any robustness mechanism, for comparison. On MEPS, a hospital expenditure dataset, the target attribute for the binary classification task is the utilization of medical facilities, with race and gender as sensitive attributes. The MEPS dataset is split into a 4-client FL setup in which one client shows the opposite trend of the three other clients, which share a similar data distribution: for that client, the minority race becomes the majority. In the Adult dataset, the target attribute is income, and the considered sensitive attributes are gender and age. We use a 10-client FL setup generated with a Dirichlet function [START_REF] Hsu | Measuring the Effects of Non-Identical Data Distribution for Federated Visual Classification[END_REF] on the joint distribution of the sensitive attribute and the target feature, to ensure heterogeneity between FL clients.
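A common way to build such a heterogeneous split, and the one assumed in the sketch below, is to draw per-client proportions of each (sensitive attribute, label) group from a Dirichlet distribution; the helper and its parameters are illustrative and may differ from the exact partitioning procedure used in our experiments.

    import numpy as np

    def dirichlet_partition(group_ids, n_clients, alpha, seed=0):
        # Split sample indices into n_clients shards, drawing group proportions from Dirichlet(alpha).
        rng = np.random.default_rng(seed)
        shards = [[] for _ in range(n_clients)]
        for g in np.unique(group_ids):
            members = np.flatnonzero(group_ids == g)
            rng.shuffle(members)
            proportions = rng.dirichlet(alpha * np.ones(n_clients))
            cuts = (np.cumsum(proportions)[:-1] * len(members)).astype(int)
            for client, part in enumerate(np.split(members, cuts)):
                shards[client].extend(part.tolist())
        return shards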
We report the model bias for the baseline FedAvg method and the robust aggregators in Figure 3. We observe that the robust aggregators, including Multi-Krum, RFA, and NDC, increase the model bias compared to the FL baseline on both MEPS (Figure 3(c)) and Adult (Figures 3(a), 3(b)). The behavior of the robust aggregators in these experiments is concerning, as it demonstrates a degradation of the model bias in a setup without any Byzantine attack. We attribute this behavior to the fact that robust aggregators tend to exclude clients with different data distributions. We observe, for example, that in the 4-client setup of the MEPS experiment, the client that shows the reverse bias trend is systematically excluded, or has its influence reduced, by the robustness techniques. In our case, these are precisely the clients showing opposite bias dispositions, which are necessary to offset the biases of the other FL clients; this shows the difficulty robust aggregators have in a heterogeneous context when demographic bias is considered.
Impact of FL Robustness Methods on FL Bias Mitigation
We study the behavior of two robust aggregators, Multi-Krum and NDC, when used with a state-of-the-art bias mitigation method called FCFL [START_REF] Cui | Addressing Algorithmic Disparity and Performance Inconsistency in Federated Learning[END_REF]. We add the robust method as a first step after receiving the clients' updates, before sending the updates to the FCFL server solver that performs the optimization task. Using NDC with FCFL means that the clients' updates are clipped if their norm exceeds a fixed threshold and then sent to the server solver, while for Multi-Krum it means that only the clients with the lowest Krum scores are selected to be sent to the server solver. The solver then works with fewer updates to compute the global model. For all experiments, we used the same hyper-parameters for FCFL, specifically the SPD threshold and the learning rate. As explained earlier, we perform these experiments on the same datasets and sensitive attributes. We report the model bias of both added robust mechanisms against using FCFL alone and against the baseline FL method in Figure 4. We observe how the robust mechanisms influence the behavior of FCFL. Specifically, on both MEPS (Figure 4(c)) and Adult (Figures 4(a), 4(b)), we observed that NDC disturbed the bias mitigation process, yielding a higher SPD value than the typical behavior of the method. For Multi-Krum, its influence on FCFL is most visible on Adult (Figure 4(a)) and MEPS (Figure 4(c)). We observe that using a classical robustness mechanism in conjunction with FCFL worsens the bias mitigation process, further motivating the need for a bias-aware FL robustness mechanism.

FL Robustness to Byzantine Behavior. The FL system must be robust against the attack of f = ρn Byzantine clients, which is achieved when, for any ε > 0, the server can output a global model θ such that the gradient of the loss function f(θ) can be bounded by a constant ε. Ideally, the system can distinguish the honest FL clients and select only them for aggregation. We call such an aggregation
θ̄ = (1/|H|) Σ_{j ∈ H} θ_j the true average. A robust aggregator must output a model θ̂ close to the true average. Assuming that we can bound the heterogeneity of the honest client models, ‖θ_j - θ_i‖² < δ² for all i, j ∈ H, the output θ̂ of the robust Byzantine aggregator must respect the following constraint [START_REF] Sai | Byzantine-Robust Learning on Heterogeneous Datasets via Bucketing[END_REF]:

E ‖ θ̂ - θ̄ ‖² < k ρ δ²        (3)
FL Bias Mitigation. The goal is to learn the optimal value of the global model parameters 𝜃 that solve the bias-constrained optimization on the 𝑆𝑃𝐷 𝑆 value, where 𝜖 is the 𝑆𝑃𝐷 𝑆 constraint and 𝑓 𝑘 (𝜃 ) is the loss of the given model 𝜃 over local data of the honest client 𝑘 :
min_θ f(θ) = min_θ (1/|H|) Σ_{k ∈ H} f_k(θ)        (4a)
s.t.  |SPD_S(θ)| ≤ ε        (4b)
WHY HANDLING BIAS AND BYZANTINE BEHAVIOR IN FL IS DIFFICULT
This section shows the challenges of building a FL system combining robustness against Byzantine behavior and model bias mitigation. We cover the following scenarios: applying classical robust aggregators first and then the FL bias mitigation process (section 6.1); applying the FL bias mitigation process first and then the robustness mechanism (section 6.2); and, finally, combining both robustness and bias mitigation objectives at once (section 6.3). We then provide interesting research directions to address these challenges.
Why Applying a Classical FL Robustness Mechanism Followed by Classical FL Bias Mitigation Does Not Work
Suppose we first apply classical robust aggregation techniques and subsequently perform bias mitigation.
The resulting system may fail to operate effectively in a non-IID environment. Consider a FL system of n clients in which one honest client has a very different data distribution, with a better representation of the minority group; its contribution to the FL system would then help bring down the global model bias. However, we must expect our FL system to be robust to Byzantine clients, and therefore assume their presence within our system. Let us also suppose the presence of one Byzantine client in our system. Using a classical robust aggregator such as Multi-Krum would mean that one client gets eliminated based on how far its model parameters are from those of all the other clients. In section 4, we illustrated how robust aggregators can eliminate the client with a different data representativity. Unfortunately, this can lead to the exclusion of honest clients that offer demographic minority representation (see Figure 1), losing crucial data representation that could be used by bias mitigation methods to help reduce the bias between demographic groups.
Observation 1:
Using classical robust aggregators may eliminate honest clients, affecting the normal behavior of FL bias mitigation.
6.2 Why Applying Classical FL Bias Mitigation Followed by a Classical FL Robustness Mechanism Does Not Work
In a FL setup where we suppose the presence of Byzantine clients, we must keep the Byzantine clients from being considered by the FL bias mitigation method, since that method has to be able to trust the updates it uses. Suppose a FL system with honest clients and Byzantine clients. At the end of the FL round, each honest client sends a regular update, while the Byzantine clients send arbitrary information. In this scenario, the FL bias mitigation component is applied before any robustness mechanism. The classical bias mitigation method, happening server-side, would then be exposed to the Byzantine clients, which can directly harm the model utility (see Figure 2). A tradeoff with model utility is often expected with bias mitigation methods [START_REF] Cui | Addressing Algorithmic Disparity and Performance Inconsistency in Federated Learning[END_REF][START_REF] Yahya | FairFed: Enabling Group Fairness in Federated Learning[END_REF][START_REF] Wei | Fairness-Aware Agnostic Federated Learning[END_REF]. Adding Byzantine clients would further increase this tradeoff, making bias-constrained optimization tasks such as AgnosticFair [START_REF] Wei | Fairness-Aware Agnostic Federated Learning[END_REF] more difficult. Furthermore, as mentioned in section 3.2, some bias mitigation methods that intervene server-side, such as FairFed [START_REF] Yahya | FairFed: Enabling Group Fairness in Federated Learning[END_REF], require clients to send information on their local bias metrics in order to compute the global model. Byzantines can exploit this mechanism by sending falsified local bias information, which can cause the system to weight their harmful model updates even more favorably.
Observation 2:
Using the classical FL bias mitigation method before any robustness mechanism exposes the bias mitigation method to the influence of the Byzantine clients.
On the Impossibility to Handle Bias and Byzantine Behavior at Once in Classical Approaches
During the FL process, each client sends its locally trained model update to the server for aggregation. In our case, we assume that we want to maximize the global model utility on a data distribution representing the union of the honest clients' data, under a bias constraint. At the same time, we want our model to have robustness guarantees against the Byzantine activity in the system. In order to solve the bias-constrained objective on the global data distribution, we must optimize our objective based on the set of model updates coming from the honest clients, and nothing else, as we do not trust the updates coming from the Byzantine clients, as mentioned in subsection 6.2. However, such a set of honest client updates is heterogeneous due to outlier clients, which we assume carry additional information on minorities compared to what is found in most updates. In [START_REF] Sai | Byzantine-Robust Learning on Heterogeneous Datasets via Bucketing[END_REF], Karimireddy et al. argue that one can construct a set of model updates, including all the Byzantine ones, that would pass the robust aggregation step. Our system may then consider those Byzantine updates in the server-side optimization, which brings an error component into the global optimization process that we cannot ignore. The output of such a system combining bias mitigation and robustness is therefore not guaranteed to satisfy the bias constraint under the global data distribution.
Observation 3:
Satisfying both objectives is impossible in heterogeneous settings using only model updates as information, as the server cannot distinguish between outlier clients and Byzantine ones.
Discussion and Research Directions
Despite the challenges of combining bias mitigation and robustness to Byzantine behavior in FL, interesting research directions remain to be explored. We have seen in subsection 6.2 the necessity of providing robustness before any bias mitigation operation at the server level. This requires a new method for detecting Byzantine clients in non-IID environments. First, such a FL system must guarantee the selection of outlier clients when there is no Byzantine attack on the system. Also, the FL system must reduce the number of clients falsely predicted as Byzantine, as these clients probably hold critical data representativity.
The server can try to spot such "honest and minority" clients among the clients deemed Byzantine by the robust aggregators. To distinguish such clients from the Byzantine ones, we could rely on reputation mechanisms to precisely identify Byzantine clients [START_REF] Xu | A Reputation Mechanism Is All You Need: Collaborative Fairness and Adversarial Robustness in Federated Learning[END_REF]. Furthermore, we saw that most FL bias mitigation approaches require the clients to send additional information on their local bias (see subsection 3.2). A new solution could therefore have the server also ask its clients to send additional information on their local data distribution, and privately compare this information with the model updates they send. The system could then analyze inconsistencies between the information on the data distribution and what can be extracted from the client model update (e.g., using model inversion [START_REF] Zhao | FedInv: Byzantine-Robust Federated Learning by Inversing Local Model Updates[END_REF]), revealing the Byzantine nature of the client.
The selection of the "honest and minority" clients could then be based on how valuable the data representativity they add for minorities is.
CONCLUSION
FL model bias and robustness against Byzantines have been extensively studied over the last few years.
Addressing both problems within a single FL system, while critically needed, has remained elusive. This study investigated the challenges of constructing a FL system with both robustness to Byzantine behavior and bias guarantees. By design, a robust FL system degrades inclusiveness, particularly for outlier model behavior, which may have the side effect of setting aside data representing minorities, data that is necessary for the proper functioning of FL bias mitigation techniques. Furthermore, we discussed the limitations of combining classical bias mitigation methods with existing robustness mechanisms. Finally, we formulated possible research directions for building robust, bias-free FL.
Fig. 3. Impact of FL robustness methods on Bias
Fig. 4. Interaction between FL bias mitigation and FL robustness mechanisms
ACKNOWLEDGMENTS
This work was partially supported by the ANR French National Research Agency, through SIBIL-Lab LabCom (ANR-17-LCV2-0014) and ByBlos project (ANR-20-CE25-0002-01). Thanks to Paul Wawszczyk, who has also contributed to this work during his internship. |
04102289 | en | [
"phys"
] | 2024/03/04 16:41:20 | 2023 | https://hal.science/hal-04102289/file/ocnn1c_ER2.pdf | Young-Gu Ju
email: [email protected]
Scalable optical convolutional neural network based on free-space optics using lens arrays and a spatial light modulator
Keywords: optical neural network, convolutional neural network, free-space optics, optical computer, smart pixels
A scalable optical convolutional neural network (SOCNN) based on free-space optics and Koehler illumination was proposed to address the limitations of the previous 4f correlator system. Unlike Abbe illumination, Koehler illumination provides more uniform illumination and reduces crosstalk. SOCNN allows for scaling up of the input array and the use of incoherent light sources. Hence, the problems associated with 4f correlator systems can be avoided. We analyzed the limitations in scaling the kernel size and parallel throughput and found that SOCNN can offer a multilayer convolutional neural network with massive optical parallelism.
Introduction
In recent times, the advent of artificial neural networks with deep learning algorithms has led to considerable advances in applications such as image and speech recognition and natural language processing [START_REF] Hinton | Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups[END_REF][START_REF] Lecun | Deep learning[END_REF]. The convolutional neural network (CNN) is a type of deep learning algorithm that is particularly effective in image and video analysis [START_REF] Lecun | Gradient-based learning applied to document recognition[END_REF]. CNNs are specifically designed to automatically detect and extract features such as edges, corners, and textures from images; these features can be used to classify the images into different categories. These applications involve processing an input image by applying convolution operations using kernels of different sizes. The results of these convolutions are then pooled, passed through a nonlinear activation function, and sent to the next layer of convolutional operations. Although CNNs are excellent at solving classification and recognition problems, they require a massive amount of computation, especially when dealing with large images and kernels. When an input image with n × n pixels is convolved with a kernel of size k × k, the amount of computation is proportional to (n 2 × k 2 ). The computational requirement grows further with an increasing number of layers, resulting in high latency and large power consumption in the case of forward inference in the pretrained network. Although the use of graphics processing units can alleviate the issue of latency, real-time inference may still remain a challenge [START_REF] Chetlur | cuDNN: efficient primitives for deep learning[END_REF].
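As a rough, illustrative back-of-the-envelope (numbers chosen arbitrarily), the multiply-accumulate (MAC) count of a single convolution layer scales as described above:

def conv_mac_count(n, k, layers=1):
    # MACs for an n x n input convolved with one k x k kernel ("same" output
    # size), repeated over the given number of layers.
    return n * n * k * k * layers

print(conv_mac_count(n=1024, k=5))             # one layer: about 2.6e7 MACs
print(conv_mac_count(n=1024, k=5, layers=10))  # ten layers: about 2.6e8 MACs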
Currently, researchers are exploring the use of free-space optics to implement CNNs in an optical form owing to the high parallelism and energy efficiency of these optics [START_REF] Colburn | Optical frontend for a convolutional neural network[END_REF][START_REF] Chang | Hybrid optical-electronic convolutional neural networks with optimized diffractive optics for image classification[END_REF][START_REF] Lin | All-optical machine learning using diffractive deep neural networks[END_REF][START_REF] Sui | A review of optical neural networks[END_REF]. Optical convolutional neural networks (OCNNs) based on free-space optics traditionally use the well-known 4f correlator system to exploit the Fourier transform property. Although these types of OCNNs have some advantages, they cause several inherent problems because of the use of Fourier optics. The first issue is the limitation in the scalability of the input image array; a lens is used for Fourier transformation, and the lens has a finite space-bandwidth product (SBP) owing to its geometric aberration. The second issue is the latency caused by the time taken to generate the input array. In a Fourier transform-based system, a laser and a spatial light modulator (SLM) are required to generate a coherent input. However, the currently available SLMs are mostly slow and serially addressable, thereby causing considerable latency. This latency diminishes the advantages of the massive parallelism of the optical neural network. Additionally, this latency makes it challenging to build a cascaded system for a multilayer neural network. The third problem is the difficulty in reconfiguring the kernels. The mask pattern in the 4f system is the Fourier transform of the kernel; obtaining this Fourier transform requires computation and can lead to significant delays when the kernels are updated.
To address these issues, we propose a scalable optical convolutional neural network (SOCNN) based on free-space optics and Koehler illumination, which uses lens arrays and a SLM. The SOCNNs proposed herein are a variation of the previously reported optical neural network. They accommodate the CNN architecture in the context of a linear combination optical engine (LCOE) [START_REF] Ju | A scalable optical computer based on free-space optics using lens arrays and a spatial light modulator[END_REF]. The goal of the LCOE was full interconnection; in contrast, the goal of the SOCNNs was partial connection with an unlimited input array size.
Theory
In a typical 4f correlator system, a mask is located at the focal plane of two lenses, as shown in Fig. 1. Lens1 performs the Fourier transform, while Lens2 performs inverse Fourier transform. The mask represents the complex-valued Fourier transform of a kernel, which is multiplied by the Fourier transform of an input pattern. Thus, the output plane displays the convolution of the input array and kernel.
The SBP of the 4f imaging system is approximately
(D²/(λf))²,
where D, λ, and f are the diameter of the lens, wavelength of the light source, and focal length of the lens, respectively. The SBP can be expressed as (D/(λ·(f/#)))² using the f-number (f/#) of the lens. If a fixed wavelength and f/# are used, the SBP can increase indefinitely with D. However, this SBP arises from the diffraction limit of the lens used in the system. As D increases, it becomes difficult for the system to reach the diffraction limit. A larger system requires less geometric aberration, more elements, and tighter alignment tolerance to reach the diffraction limit. The SBP of the system is about
(D/(2fδ))² = (1/(2(f/#)δ))²,
where δ is the angular aberration of the lens. When a triplet lens has an f-number of 2, the angular aberration is approximately 3 mrad and the SBP is about 83 × 83. Since a lens system can worsen alignment problems during assembly with an increase in the number of elements, the practical scaling limit of the 4f system can be about 250 × 250 if the angular aberration is 1 mrad.
Fig. 1 Example of a 4f correlator system that uses Fourier transform to implement an existing optical convolutional neural network (OCNN). The mask represents the Fourier transform of the kernel used in the CNN.
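The aberration-limited SBP figures quoted above can be reproduced with a short check (a sketch using the values given in the text):

f_number = 2.0
delta_triplet = 3e-3   # angular aberration of an f/2 triplet lens, in rad
delta_better = 1e-3    # angular aberration of a better-corrected system, in rad

# Aberration-limited SBP per axis: 1 / (2 * (f/#) * delta)
print(round(1.0 / (2.0 * f_number * delta_triplet)))  # ~83  -> 83 x 83
print(round(1.0 / (2.0 * f_number * delta_better)))   # 250  -> 250 x 250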
Fig. 2 Example of a simple OCNN with corresponding mathematical formula; a_i^(l) represents the i-th input or output node in the l-th layer; w_ij indicates the weight connecting the j-th input node and the i-th output node; b_i is the i-th bias; N_m is the number of weights connected to an input/output, i.e., the size of a kernel; σ is a sigmoid function
To understand the architecture of the proposed SOCNN, it is essential to grasp the concept of CNN. An example of a CNN is shown in Figure 2; it shows four input nodes, four output nodes, and their synaptic connections along with mathematical representations. The CNN operates by receiving input signals through the input nodes, which are then transmitted through synaptic connections to the output nodes for processing; thus, the output is obtained. The strengths of the synaptic connections are modeled using mathematical representations that assign weights to each connection. In contrast to the full connection optical neural network such as the LCOE, in the CNN, each input or output node has local or partial connections whose weights are called kernels.
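As an illustrative one-dimensional sketch (made-up numbers, not the actual hardware mapping), the partially connected layer of Fig. 2 amounts to a local weighted sum followed by a sigmoid:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def local_layer(a, kernel, bias):
    # Each output node sums N_m neighbouring inputs weighted by the kernel,
    # adds a bias, and applies the sigmoid (zero padding at the edges).
    n_m = len(kernel)
    padded = np.pad(a, n_m // 2)
    out = np.array([np.dot(kernel, padded[i:i + n_m]) for i in range(len(a))])
    return sigmoid(out + bias)

a_in = np.array([0.1, 0.4, 0.3, 0.9])
print(local_layer(a_in, kernel=np.array([0.2, 0.5, 0.2]), bias=0.0))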
The concept of the SOCNN proposed herein is illustrated in Fig. 3(a). The CNN shown in Fig. 2 was transformed into a hardware schematic, containing laser diodes (LDs), lenses, a liquid crystal display (LCD), detectors, and electronics. The input node was replaced by an LD that sent three rays to lens array 1. The LD used in this architecture can be a multimode laser diode or a light-emitting diode (LED) because, unlike the traditional 4f correlator system, this system accommodates incoherent light sources.
Lens array 1 collimates the rays and sends them to the LCD, where each pixel transmits the corresponding ray according to a pretrained kernel in the CNN. The rays from the LCD pass through lens array 2, which focuses the rays and generates different ray angles depending on the distance of the LCD pixel from the optical axis of the individual lenses in the array.
A detector collects or adds the optical power of the rays arriving at different angles from different neighboring LDs or inputs with preset weights. In this scheme, the summed light is mathematically a convolution of the inputs and the kernel specified by the weights. These SOCNN can perform the calculations in parallel and, most importantly, in one step with the speed of light if the weights and inputs are preset. This type of calculation is called "inference" in the neural network community. Although the SOCNNs based on an LCD are reconfigurable, they are more suitable for inference applications owing to the low switching speed.
Detailed examination of the optical system, as shown in Fig. 3-(a), indicates that lens2 and lens3 form a relay imaging system in which the LCD is typically positioned at the focal plane of lens2, while the detector is placed at the focal plane of lens3. This arrangement ensures that the LCD and detector planes are conjugated-in other words, each pixel on the LCD forms an image on the detector plane. By establishing this conjugate condition, the illumination area of each ray is more clearly defined, and the crosstalk between channels is reduced.
Additionally, if an LD is placed at the focal plane of lens1, lens1 and lens2 form a relay system, and an image of the LD is formed at lens3. Overall, the lens configuration shown in Fig. 3-(a) constitutes a Koehler illumination system [START_REF] Arecchi | Field Guide to Illumination[END_REF][START_REF] Greivenkamp | Field Guide to Geometrical Optics[END_REF]. Figure 3(a) shows dotted magenta lines that represent the chief rays from the perspective of the condenser system and marginal rays from the perspective of the projection system. The red dotted rectangular block located at lens3 represents the image of the light source. Lens2 and lens3 work together to form a projection lens in the Koehler illumination system. In contrast, the rays originating from the LD point of emission spread out over the detector plane, providing uniform illumination. Koehler illumination-based optical computers have several advantages over previously reported architectures based on Abbe illumination in terms of uniformity of the illumination and control of the beam divergence of light sources [START_REF] Ju | A scalable optical computer based on free-space optics using lens arrays and a spatial light modulator[END_REF].
The key difference between this SOCNN and the previous LCOE is that each input of the SOCNN has a relatively small number of connections to the output array, whereas the LCOE has a full interconnection. The feature of partial connection greatly relieves the constraint on the size of the input array. In fact, unlike the traditional 4f correlator system, this SOCNN does not have any theoretical limit on the input array size. Only the size of the kernel array is limited, since the SLM pixels used for the kernel are imaged through lenses that impose a constraint on SBP; this topic will be explained in detail in the discussion section.
In the example shown in Fig. 3, the number of LCD pixels belonging to each input node is equal to the number of output nodes to which the input is connected. The LCD pixels belonging to each input node can be called a subarray of the SLM. The size of the subarray is the same as that of the receptive field from the viewpoint of the output node. In the case shown in Fig. 3-(b), the subarray comprises a 3 × 3 array where the spacing between the pixels is d. Given that the spacing between the detectors is a, the magnification of the projection system should match the size of the SLM subarray. The magnification of the projection system in SOCNN is written as f3 / f2 using the notations shown in Fig. 3-(b).
If an 8 × 8 pixel area on the LCD is assigned to a single kernel, it can be connected to 64 output nodes. For instance, an LCD with 3840 × 2160 resolution can accommodate up to 480 × 270 input nodes, which translates to 129,600 inputs. Considering the parallelism of the SOCNN, its performance is equal to the number of pixels in the LCD. If the system has N × N inputs and an M × M kernel, it can perform N² × M² multiplications and N² × (M² − 1) additions in a single step. If the SOCNN takes full advantage of the LCD resolution, (N × M)² equals the total number of pixels in the LCD, and this number is immensely large in modern devices. This LCD can be replaced by other types of SLM arrays for achieving high speeds if a fast refresh rate of weights is required. Furthermore, since the transmission of SLM pixels in the SOCNN is proportional to the weight of the kernel, extra calculations for the Fourier transform are not required, unlike in the 4f correlator system; this is another advantage of SOCNNs for use as reconfigurable OCNNs in the future. After the optical process, the detector converts the light into current, and the remaining steps such as signal amplification, bias addition, and application of nonlinear functions (e.g., sigmoid, rectified linear units, local response normalization, and max-pooling) are performed electronically. These nonlinear functions are better handled by electronics than by optics because of their inherent properties. However, when electronics are used, interconnections between far-neighboring electronics should be minimized to avoid traffic congestion. As long as the electronics employed are local and distributed, the optical parallelism of the system remains unaffected. The electronic part, including the detectors, is similar to the concept of smart pixels [START_REF] Seitz | Smart Pixels[END_REF].
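A quick check of the pixel budget quoted above (3840 × 2160 SLM, 8 × 8 subarray per input node); the numbers are the ones given in the text:

slm_w, slm_h = 3840, 2160
kernel_side = 8
inputs_w, inputs_h = slm_w // kernel_side, slm_h // kernel_side
print(inputs_w, inputs_h, inputs_w * inputs_h)  # 480 270 129600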
The proposed system is a cascading system, and it can be extended in the direction of beam propagation. The signal from the output node is directly connected to the corresponding input of the next layer, allowing a detector, its corresponding electronics, and an LD in the next layer to form a synaptic node in an artificial neural network. If the system has L layers, N 2 × M 2 × L calculations can be performed in parallel in a single step; this ability greatly increases the SOCNN throughput for continuous input flow.
In fact, the addition of incoherent light by a detector and an LCD cannot represent the negative weight of a kernel in a CNN. If coherent light and interference effects are used, the system can represent subtraction between inputs. However, the use of coherent light may complicate the system and increase noise. The previous OCNN based on a 4f correlator system used a coherent light source and an SLM to generate an input array. As mentioned in the introduction, this coherent source entails many problems such as latency, noncascadability, and noise. Handling negative weights with incoherent light sources in this study can be solved by using the "difference mode" as described in previous references [START_REF] Ju | A scalable optical computer based on free-space optics using lens arrays and a spatial light modulator[END_REF][START_REF] Glaser | Lenslet array processors[END_REF].
To implement the difference mode in SOCNN, two detectors are required for each output node, or lens3 with two separate channels indicated by a red dotted circle, as shown in Fig. 4, is required. The two optical channels separate the inputs with positive weights from those with negative weights. Each channel adds input values multiplied by their respective weights using optical means. Subsequently, subtraction between the two channels is performed electronically through the communication between neighboring electronics. Note that the weight in the negative channel should be zero when the corresponding positive weight is used and vice versa. For example, if w_20 and w_02 are positive, and w_11 is negative, as shown in Figure 3-(a), then Eq. 1 and Eq. 2 are used to represent the positive and negative weight calculations, respectively, as shown in Fig. 4.
w_200 = w_20, w_002 = w_02, w_101 = 0   (Eq. 1)
w_210 = 0 = w_012, w_111 = -w_11   (Eq. 2)
This subtraction scheme simplifies the structure but requires an additional channel.
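A minimal numerical sketch of the difference mode (illustrative values only): the kernel is split into non-negative transmissions per channel, each channel is summed, and the two channel sums are subtracted electronically.

import numpy as np

def split_kernel(w):
    # Non-negative transmissions for the positive and negative channels.
    return np.where(w > 0, w, 0.0), np.where(w < 0, -w, 0.0)

def difference_mode_output(inputs, w):
    w_pos, w_neg = split_kernel(w)
    # Each channel is summed optically; the subtraction is done electronically.
    return np.dot(w_pos, inputs) - np.dot(w_neg, inputs)

w = np.array([0.3, -0.7, 0.2])   # e.g. w20 > 0, w11 < 0, w02 > 0
x = np.array([1.0, 0.5, 2.0])
print(difference_mode_output(x, w))  # equals np.dot(w, x)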
To implement other functions such as multiple kernels for the same input array, more than two detectors can be used for a single lens3. The number of detectors corresponding to each lens is denoted as Np in Fig. 4. For easy reference, the detectors associated with each lens 3 can be referred to as a "page of detectors." Since Fig. 4 represents a one-dimensional configuration, Nm and Np can be generalized into M × M and P × P, respectively, in a two-dimensional configuration. In this case, the array size of the subarray corresponding to one input node becomes (M × P) 2 . Suppose that the spacing between pixels and that between the subarrays are denoted as d and a, respectively. The side length of the subarray is MPd. Since P detectors are arranged along length a, the detector spacing d2 is equal to Md. This means that the magnification of the projection system consisting of lens2 and lens3 should be M. Thus, a page of detectors can be implemented for difference mode or multiple kernels. Fig. 4 Difference mode configuration of the SOCNN; this mode can also be used for calculating multiple kernels for a single input array; a generalized mathematical formula is given, where Np represents the number of detectors corresponding to a single lens3.
Discussion
Although theoretically, the SOCNN has no limit on the input array size, scaling the size of the kernel is limited due to lens3. The limit on the scaling can be analyzed using the method described in a previous report [START_REF] Ju | A scalable optical computer based on free-space optics using lens arrays and a spatial light modulator[END_REF]. This analysis involves calculating the image spreading of the SLM pixel through the projection system in terms of geometric imaging, diffraction, and geometric aberration. The overlap between the images of neighboring pixels and the required alignment tolerance can be estimated from the calculated image size. The analysis begins by examining an example system with an SOCNN architecture, and this is followed by an exploration of the factors that limit the system scale-up and how diffraction and geometric aberration affect this scaling.
To simplify the analysis, we investigated the architecture shown in Fig. 3-(a) instead of that shown in Fig. 4. The scaling analysis can be easily generalized into a page detector scheme. The proposed SOCNN is a system with two-dimensional (2D) input and output and a four-dimensional (4D) kernel such that the numbers of pixels are N 2 , N 2 M 2 , and N 2 for the input array, SLM array, and output array, respectively, where N and M are the number of rows in the square input array and the kernel array, respectively.
If all the three array components have the same size, the densest part is the SLM array. Therefore, it is better to initially design an array of SLM or LCD pixels for an example system. For the example system, we assumed a 5 × 5 array for the kernel. We assumed that the SLM had 5 μm square pixels, which were placed with a period of 20 μm in a rectangular array. According to the notations of Fig. 3 (c), ε and d are 5 and 20 μm, respectively.
The 5 × 5 SLM subarray accepts light from a single light source through a single lens. The diameter of each lens in the lens array and the side length of the SLM subarray are both 100 μm, and it is denoted by a in Fig. 3-(b) and(c). The distance a is also the pitch of lens array 1, lens array 2, and the detector. Lens2 is supposed to have an f/# of 2. Because the SLM pixel was at the front focus of lens2 and the detector was at the back focus of lens3, the image of the SLM pixel was formed at the detector plane. Since detector pitch a = 5d, the magnification of the projection system should be 5. In general, if the subarray size of the kernel is M × M, a is equal to Md, and the magnification should be M.
The magnification of this projection relay system was the ratio of the focal length (f3) of lens3 to the focal length (f2) of lens2, i.e., f3/f2 = M. Therefore, the geometric image size of a pixel without aberrations and diffraction was Mε. Because the pitch of the SLM pixel array was magnified to the pitch of the detector array, the duty cycle of the image of the SLM pixel was ε/d or 25% of the detector pitch, which is the same as in the SLM pixel pitch. Therefore, the duty cycle of the geometric image in the detector pitch remains constant regardless of M, which is the kernel size.
The real image of one SLM pixel was enlarged by diffraction and aberration in addition to the geometric image size. The beam diameter that determined the diffraction limit was the image size of the light source for the condenser system composed of lens1 and lens2 according to Koehler illumination concept. However, because the image size of the light source could be as large as the diameter of lens1 or lens2, the beam diameter of the relay system comprising these lenses should be assumed to be the diameter of lens2 (D = a). The spot diameter attributed to diffraction was 2 λ f3/D = 2 λ M f2/D = 2 λ M f2/#, which was approximately 10 μm for a wavelength of 0.5 μm. The duty cycle of the diffraction spread in the detector pitch can be obtained by dividing the size of the diffraction spot by the detector pitch Md. The duty cycle of the diffraction spread corresponds to (2 λ f2/#)/d or 10% in terms of either the SLM pixel pitch or the detector pitch. The general formula for the duty cycle of diffraction spread is independent of M. Hence, the duty cycle of the diffraction origin is kept constant for a fixed f2/# when scaling up.
The effect of geometric aberration on the image spread can be investigated by assuming that f2/# is fixed during scaling. Since f3/# = M f2/#, f3/# increases with the scaling factor M. The spherical aberration, coma, astigmatism, and field curvature are proportional to the third power, second power, first power, and first power of 1/(f/#), respectively [START_REF] Greivenkamp | Field Guide to Geometrical Optics[END_REF][START_REF] Geary | Introduction to lens design: with practical ZEMAX examples[END_REF]. In other words, the angular aberration δ3 of lens3 decreases with scaling. However, because f2/# remains constant, the angular aberration due to lens2 becomes dominant. If lens2 has an f-number of 2 and comprises three elements, the angular aberration δ2 is about 3 mrad. The image spread due to the geometric aberration is f3·δ2 = M f2 δ2 = M D (f2/#) δ2 = M² d (f2/#) δ2. Since the detector pitch is M d, the duty cycle of the image spread in the detector pitch due to geometric aberration is M (f2/#) δ2. This value increases with scaling. When M and f2/# are 5 and 2, respectively, the geometric aberration of lens2 with 3 elements accounts for 3% of the duty cycle. If the maximum duty cycle of the image spread is 40%, the maximum M is about 66. In this case, the alignment tolerance is a duty cycle of 25% because the geometric image size and the diffraction spread are 25% and 10%, respectively. The duty cycle of 25% corresponds to 5 μm in the SLM plane; this value is usually feasible to achieve in terms of optomechanics.
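The duty-cycle budget discussed above can be summarized numerically; the values are the ones quoted in the text, and the 40% cap is the assumption made there:

f2_number = 2.0
delta2 = 3e-3          # angular aberration of a 3-element f/2 lens2, rad
geometric = 0.25       # epsilon / d
diffraction = 0.10     # 2 * lambda * (f2/#) / d, with lambda = 0.5 um, d = 20 um

def aberration_duty(M):
    # Grows linearly with the kernel size M.
    return M * f2_number * delta2

print(round(aberration_duty(5), 3))                 # 0.03 -> 3 % for M = 5
print(int(0.40 / (f2_number * delta2)))             # 66: M at a 40 % aberration budget
print(round(1.0 - 0.40 - geometric - diffraction, 2))  # 0.25 -> alignment tolerance (5 um)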
As more elements are used for lens3, the angular aberration can be reduced, and the maximum M can be increased. However, for a larger number of elements, tighter alignment tolerance and higher difficulty for the assembly of the lens unit are necessary. From the viewpoint of a full connection optical neural network such as the LCOE, an M value of 66 may be small. However, from the viewpoint of OCNN, which aims for partial connection, an M value of 66 is very large.
In addition, if M is a relatively small number, as is usually the case for kernels in practice, the burden of optics and their alignment can be drastically reduced. For instance, if M = 5, a simple planoconvex lens can be used for lens2. For a planoconvex lens with an f-number of 8, the angular aberration is only 3 mrad, which is the same as that of an f/2 lens with three elements. In this case, when using the abovementioned SLM pixels, lens2 and lens3 have focal lengths of 800 μm and 4.0 mm, respectively, with a diameter of 100 μm. Generally, a larger M value results in a substantial increase in the focal length of lens3 since f3 = M 2 (f2/#) d. However, for small M values, f3 is within a reasonable length and may simplify the optics.
Further, the tangent to the half-field angle of lens3 can be obtained by dividing a half-field size M a/2 by f3. Since f3 = M f2, the tangent of the half-field angle is a/(2 f2) = 1/(2 f2/#); thus, it is independent of scaling factor M. When f2/# = 2, the half-field angle is about 14°, which is within a reasonable range. The half-field angle decreases as f2/# increases, implying less aberration and less of a burden for optics related to the field angle.
The concepts used in the proposed SOCNN architecture are related to those of lenslet array processors (LAP) [START_REF] Glaser | Lenslet array processors[END_REF]. A pseudoconvolution LAP in direct configuration was reported in [START_REF] Glaser | Lenslet array processors[END_REF]. The primary distinction between LAP and SOCNN is that SOCNN uses three layers of lens arrays with more emphasis on distributed electronics and neural network applications, while LAP uses only a single-layer lens array. From the viewpoint of illumination, the SOCNN is based on Koehler illumination, whereas the LAP is based on Abbe illumination [START_REF] Arecchi | Field Guide to Illumination[END_REF][START_REF] Greivenkamp | Field Guide to Geometrical Optics[END_REF]. Koehler illumination provides better uniformity in the detector area than Abbe illumination; this uniformity is especially beneficial when dealing with nonuniform sources such as the LED and multimode LDs. Unfortunately, a detailed description or design of the illumination scheme is not provided in [START_REF] Glaser | Lenslet array processors[END_REF]; such a description or design is critical to the convolution performance. The divergence of the light sources and their control are not specified for the input array in the pseudoconvolution LAP scheme, though they determine the coverage of convolution or the size of the kernel.
The parallel throughput of the SOCNN depends on the size of the SLM, as mentioned in the theory. The SLM array can be divided into subarrays, depending on the size of the kernel array. For a given size of the SLM array, a smaller kernel size leads to a larger input array size. Additionally, the SLM can be divided to accommodate multiple kernels by copying the input array into multiple sections. In fact, each section of the SLM can handle different kernels and perform convolution in parallel [START_REF] Colburn | Optical frontend for a convolutional neural network[END_REF]. Therefore, the number of calculations per instruction cycle is equal to the number of SLM pixels. If the SLM has a resolution of 3840 × 2160, the total number of connections is approximately 8.3 × 10⁶, which is also the number of multiply and accumulate (MAC) operations in one instruction cycle. If electronic processing is assumed to be the main source of delay, with a delay time of 10 ns, the proposed optical computer in this study can achieve a throughput of 8.3 × 10¹⁴ MAC/s. This throughput can be further increased by using multiple layers. Although multiple layers may cause delay in data processing, all layers perform calculations simultaneously, similar to the pipelining technique used in digital computers. As the number of layers increases, the total throughput can also increase. Therefore, the proposed optical computer can achieve a throughput of 8.3 × 10¹⁵ MAC/s when 10 layers are used.
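The throughput figures above follow directly from the pixel count and the assumed 10 ns electronic delay; a short check:

slm_pixels = 3840 * 2160      # MAC operations per optical pass
cycle_time = 10e-9            # assumed electronic processing delay, in seconds
layers = 10

per_layer = slm_pixels / cycle_time
print(f"{per_layer:.1e} MAC/s")            # ~8.3e14
print(f"{per_layer * layers:.1e} MAC/s")   # ~8.3e15 with 10 pipelined layers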
To achieve massive parallel throughput, it is crucial to input 2D data in parallel at every instruction cycle. If the 2D input is generated by serially reconfiguring individual pixels of the SLM, the parallelism of the optical computer is considerably reduced, similar to the case of the 4f correlator system. The 4f correlator system suffers from serialization between layers when used in a cascading configuration because the input of the next layer can be generated by a single coherent source and SLM array. However, in the SOCNN architecture, this serialization problem occurs only in the first layer, not between the layers, as it can use incoherent, independent light sources for input.
One way to solve the issue of serialization is to start the first layer with a detector array and generate the 2D input using a real-time optical image, as suggested in a previous study [START_REF] Ju | A scalable optical computer based on free-space optics using lens arrays and a spatial light modulator[END_REF]. For instance, imaging optics can form images of a moving object on the detector array. These images serve as the first layer of an optical computer. Thus, serialization or deserialization of input data is unnecessary throughout the entire system because all the inputs in the following layers are fed from local electronics. This approach is similar to that of the human eye and brain, where the eye forms an image of the object on the retina, which is the first layer of the neural network that is connected to the brain. Since the SOCNN is used for the front-end of the optical neural network, this approach appears very reasonable.
Conclusions
Traditionally, 4f correlator systems have been used in optical computing to perform convolution by performing Fourier transforms using two lenses. However, these systems have limitations with regard to the scaling up of the input array because of geometric aberrations, and a single coherent light source together with an SLM is required to generate the input. This adds complexity and latency to the implementation of the multilayer OCNN with cascading configurations. In addition, the Fourier-transformed kernel used in the mask between two lenses requires extra calculation time and can cause latency in future systems that require higher refresh rates.
To address these issues, the SOCNN architecture was proposed. This architecture takes advantage of the Koehler illumination and comprises three lens arrays that form images of SLM pixels on the detector plane. The Koehler illumination scheme offers advantages over the previous Abbe illumination-based LAP architecture by providing more uniform illumination and lower crosstalk between the detectors.
The key advantages of the SOCNN are the scalability of the input array and the use of an incoherent light source. These advantages help avoid many problems inherent to the use of coherent sources, as in case of the 4f system. As a partial connection version of LCOE, the SOCNN inherits many advantages of LCOE, which is also based on free-space optics and Koehler illumination. Compared with the LCOE, the SOCNN has a smaller coverage of connection to the output; in the SOCNN, the use of the last lens limits only the kernel size, not the size of the input array. This is the major advantage of SOCNN over the existing 4f systems in terms of scaling up and parallel throughput of the system. Another advantage of the SOCNN is that the weights of the kernel are directly set by the proportional transmission of SLM pixels, unlike the 4f system, which requires Fourier transform and causes latency in the future reconfigurable system.
Although the SOCNN has an extensively scalable input array, there is a limit to scaling the kernel size because the kernel information spreads out through a lens. The scaling limit of the kernel array was estimated by observing the effect of changes in geometric image size, diffraction, and geometric aberration on the final image size. As M, i.e., the number of rows in the kernel array, increases, the duty cycles of the geometric image size and diffraction spread in the detector pitch remain constant for a fixed f-number of lens2. In contrast, the duty cycle of image spread due to geometric aberration is proportional to M. When M is about 66, the duty cycle due to geometric aberration is equal to 40%, and the alignment tolerance has a duty cycle of 25%, which corresponds to 5 μm in the SLM plane. Usually, convolution does not require such a large array size, and hence, M = 66 seems sufficiently large for practical applications.
To estimate the parallel throughput of the SOCNN architecture, an example system was considered. The number of calculations per instruction cycle is equal to the number of SLM pixels. Assuming that electronic processing requires 10 ns and the system has 10 convolution layers, SLMs with a resolution of 3840 × 2160 can achieve a parallel throughput of 8.3 × 10¹⁵ MAC/s.
In summary, a SOCNN based on free-space optics and Koehler illumination was proposed to overcome the challenges in the previous 4f correlator system. The results reported herein imply that the SOCNN can offer a multilayer CNN with massive optical parallelism.
Fig. 3 Scalable optical convolutional neural network (SOCNN) based on Koehler illumination and free-space optics using lens arrays and a spatial light modulator: (a) schematics and the corresponding mathematical formula; (b) three-dimensional (3D) view of an example of the system with 3 × 3 inputs and 3 × 3 outputs; (c) the structural parameters of the SLM pixels and its subarrays
Declarations
The authors declare that no funds, grants, or other support were received during the preparation of this manuscript.
The authors have no relevant financial or non-financial interests to disclose.
The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request. |
00354473 | en | [
"math.math-ap"
] | 2024/03/04 16:41:20 | 2009 | https://hal.science/hal-00354473v2/file/AM.pdf | Thomas Alazard
Guy Métivier
Paralinearization of the Dirichlet to Neumann operator, and regularity of three-dimensional water waves
This paper is concerned with a priori C^∞ regularity for three-dimensional doubly periodic travelling gravity waves whose fundamental domain is a symmetric diamond. The existence of such waves was a long standing open problem solved recently by Iooss and Plotnikov. The main difficulty is that, unlike conventional free boundary problems, the reduced boundary system is not elliptic for three-dimensional pure gravity waves, which leads to small divisors problems. Our main result asserts that sufficiently smooth diamond waves which satisfy a diophantine condition are automatically C^∞. In particular, we prove that the solutions defined by Iooss and Plotnikov are C^∞. Two notable technical aspects are that (i) no smallness condition is required and (ii) we obtain an exact paralinearization formula for the Dirichlet to Neumann operator.
After some standard changes of unknowns which are recalled below in §2.1, for a wave travelling in the direction Ox 1 , we are led to a system of two scalar equations which reads
G(σ)ψ − ∂_{x_1}σ = 0,
µσ + ∂_{x_1}ψ + (1/2)|∇ψ|² − (∇σ·∇ψ + ∂_{x_1}σ)² / (2(1 + |∇σ|²)) = 0,   (1.1)
where the unknowns are σ, ψ : R 2 → R, µ is a given positive constant and G(σ) is the Dirichlet to Neumann operator, which is defined by
G(σ)ψ(x) = √(1 + |∇σ|²) ∂_n φ|_{y=σ(x)} = (∂_y φ)(x, σ(x)) − ∇σ(x)·(∇φ)(x, σ(x)),
where φ = φ(x, y) is the solution of the Laplace equation
∆ x,y φ = 0 in Ω := { (x, y) ∈ R 2 × R | y < σ(x) }, (1.2)
with boundary conditions φ(x, σ(x)) = ψ(x), ∇_{x,y}φ(x, y) → 0 as y → −∞.   (1.3)
Diamond waves are the simplest solutions of (1.1) one can think of. These 3D waves come from the nonlinear interaction of two simple oblique waves with the same amplitude. Henceforth, by definition, Diamond waves are solutions (σ, ψ) of System (1.1) such that: (i) σ, ψ are doubly-periodic with period 2π in x_1 and period 2πℓ in x_2 for some fixed ℓ > 0 and (ii) σ is even in x_1 and even in x_2; ψ is odd in x_1 and even in x_2 (cf Definition 2.2). It was proved by H. Lewy [START_REF] Lewy | A note on harmonic functions and a hydrodynamical application[END_REF] in the fifties that, in the two-dimensional case, if the free boundary is a C¹ curve, then it is a C^ω curve (see also the independent papers of Gerber [START_REF] Gerber | Sur une condition de prolongement analytique des fonctions harmoniques[END_REF][START_REF] Gerber | Sur les solutions exactes des équations du mouvement avec surface libre d'un liquide pesant[END_REF][START_REF] Gerber | Sur un lemme de représentation conforme[END_REF]). Craig and Matei obtained an analogous result for three-dimensional (i.e. for a 2D surface) capillary gravity waves in [START_REF] Craig | Sur la régularité des ondes progressives à la surface de l'eau[END_REF][START_REF] Craig | On the regularity of the Neumann problem for free surfaces with surface tension[END_REF]. For the study of pure gravity waves the main difficulty is that System (1.1) is not elliptic. Indeed, it is well known that G(0) = |D_x| (cf §2.5). This implies that the determinant of the symbol of the linearized system at the trivial solution (σ, ψ) = (0, 0) is µ|ξ| − ξ_1², so that the characteristic variety {ξ ∈ R² : µ|ξ| − ξ_1² = 0} is unbounded. This observation contains the key dichotomy between two-dimensional waves and three-dimensional waves. Also, it explains why the problem is much more intricate for pure gravity waves (cf §7.2 where we prove a priori regularity for capillary waves by using the ellipticity given by surface tension). More importantly, it suggests that the main technical issue is that small divisors enter into the analysis of three-dimensional waves, as observed by Plotnikov in [START_REF] Plotnikov | Solvability of the problem of spatial gravitational waves on the surface of an ideal liquid[END_REF] and Craig and Nicholls in [START_REF] Craig | Travelling two and three dimensional capillary gravity water waves[END_REF].
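For convenience, here is a brief sketch of this computation, under the normalization used in this paper (c = (1, 0)) and with G(0) = |D_x|; the row ordering of the symbol is a matter of convention.

% Linearizing (1.1) at (\sigma,\psi)=(0,0) and using G(0)=|D_x| gives
%   \mu\sigma + \partial_{x_1}\psi = 0, \qquad |D_x|\psi - \partial_{x_1}\sigma = 0 .
% In the Fourier variable \xi=(\xi_1,\xi_2) the symbol of this system is
\begin{pmatrix} \mu & i\xi_1 \\ -\,i\xi_1 & |\xi| \end{pmatrix},
\qquad \det = \mu|\xi| - \xi_1^2 .
% On the characteristic variety \mu|\xi| = \xi_1^2 one has |\xi_2| \sim \xi_1^2/\mu
% as \xi_1 \to \infty, so the variety is unbounded.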
In [START_REF] Iooss | Small divisor problem in the theory of three-dimensional water gravity waves[END_REF], Iooss and Plotnikov give a bound for the inverse of the symbol of the linearized system at a non trivial point under a diophantine condition, which is the key ingredient to prove the existence of non trivial solutions to (1.1) by means of a Nash-Moser scheme. Our main result, which is Theorem 2.5, asserts that sufficiently smooth diamond waves which satisfy a refined variant of their diophantine condition are automatically C^∞. We shall prove that there are three functions ν, κ_0, κ_1 defined on the set of H^{12} diamond waves such that, if for some 0 ≤ δ < 1 there holds
|k_2 − ν(µ, σ, ψ)k_1² + κ_0(µ, σ, ψ) + κ_1(µ, σ, ψ)/k_1²| ≥ 1/k_1^{2+δ},
for all but finitely many (k 1 , k 2 ) ∈ N 2 , then (σ, ψ) ∈ C ∞ . Two interesting features of this result are that, firstly no smallness condition is required, and secondly this diophantine condition is weaker than the one which ensures that the solutions of Iooss and Plotnikov exist.
The main corollary of this theorem states that the solutions of Iooss and Plotnikov are C^∞. Namely, consider the family of solutions whose existence was established in [START_REF] Iooss | Small divisor problem in the theory of three-dimensional water gravity waves[END_REF]. These diamond waves are of the form
σ_ε(x) = εσ_1(x) + ε²σ_2(x) + ε³σ_3(x) + O(ε⁴),
ψ_ε(x) = εψ_1(x) + ε²ψ_2(x) + ε³ψ_3(x) + O(ε⁴),   (1.4)
µ_ε = µ_c + ε²µ_1 + O(ε⁴),
where ε ∈ [0, ε_0] is a small parameter and
µ_c := ℓ/√(1 + ℓ²),   σ_1(x) := −(1/µ_c) cos x_1 cos(x_2/ℓ),   ψ_1(x) := sin x_1 cos(x_2/ℓ),
so that (σ_1, ψ_1) ∈ C^∞(T²) solves the linearized system around the trivial solution (0, 0). Then it follows from the small divisors analysis in [START_REF] Iooss | Small divisor problem in the theory of three-dimensional water gravity waves[END_REF] and Theorem 2.5 below that (σ_ε, ψ_ε) ∈ C^∞. The main novelty is to perform a full paralinearization of System (1.1). A notable technical aspect is that we obtain exact identities with remainders having optimal regularity. This approach depends on a careful study of the Dirichlet to Neumann operator, which is inspired by the important paper of Lannes [START_REF] Lannes | Well-posedness of the water-waves equations[END_REF]. The corresponding result about the paralinearization of the Dirichlet to Neumann operator is stated in Theorem 2.12. This strategy has a number of consequences. For instance, we shall see that this approach simplifies the analysis of the diophantine condition (see Remark 6.3 in §6). Also, one might in a future work use Theorem 2.12 to prove the existence of the solutions without the Nash-Moser iteration scheme. These observations might be useful in a wider context. Indeed, it is easy to prove a variant of Theorem 2.12 for time-dependent free boundaries. With regards to the analysis of the Cauchy problem for the water waves, this tool reduces the proof of some difficult nonlinear estimates to easy symbolic calculus questions for symbols.
2 Main results
The equations
We denote the spatial variables by (x, y) = (x 1 , x 2 , y) ∈ R 2 × R and use the notations
∇ = (∂ x 1 , ∂ x 2 ), ∆ = ∂ 2 x 1 + ∂ 2 x 2 , ∇ x,y = (∇, ∂ y ), ∆ x,y = ∂ 2 y + ∆.
We consider a three-dimensional gravity wave travelling with velocity c on the free surface of an infinitely deep fluid. Namely, we consider a solution of the three-dimensional incompressible Euler equations for an irrotational flow in a domain of the form
Ω = { (x, y) ∈ R² × R | y < σ(x) },
whose boundary is a free surface, which means that σ is an unknown (think of an interface between air and water). The fact that we consider an incompressible, irrotational flow implies that the velocity field is the gradient of a potential which is a harmonic function. The equations are then given by two boundary conditions: a kinematic condition which states that the free surface moves with the fluid, and a dynamic condition that expresses a balance of forces across the free surface. The classical system reads
∆_{x,y}φ = 0 in Ω,
∂_yφ − ∇σ·∇φ − c·∇σ = 0 on y = σ(x),
gσ + c·∇φ + (1/2)|∇_{x,y}φ|² = 0 on y = σ(x),
∇_{x,y}φ → 0 as y → −∞,
where the unknowns are φ : Ω → R and σ : R² → R, c ∈ R² is the wave speed and g > 0 is the acceleration of gravity.
A popular form of the water waves equations is obtained by working with the trace of φ at the free boundary. Define ψ : R 2 → R by ψ(x) := φ(x, σ(x)).
The idea of introducing ψ goes back to Zakharov. It allows us to reduce the problem to the analysis of a system of two equations on σ and ψ which are defined on R 2 . The most direct computations show that (σ, ψ) solves
G(σ)ψ − c·∇σ = 0,
gσ + c·∇ψ + (1/2)|∇ψ|² − (∇σ·∇ψ + c·∇σ)² / (2(1 + |∇σ|²)) = 0.
Up to rotating the axes and replacing g by µ := g/ |c| 2 one may assume that c = (1, 0), thereby obtaining System (1.1).
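A brief sketch of these computations, as a reminder of the standard chain-rule argument (all traces are taken at y = σ(x)):

% With \psi(x)=\phi(x,\sigma(x)), the chain rule gives
\nabla\psi = \nabla\phi + (\partial_y\phi)\,\nabla\sigma ,
% so the kinematic condition \partial_y\phi - \nabla\sigma\cdot\nabla\phi = c\cdot\nabla\sigma
% becomes G(\sigma)\psi = c\cdot\nabla\sigma and also determines the vertical velocity:
(1+|\nabla\sigma|^2)\,\partial_y\phi = \nabla\sigma\cdot\nabla\psi + c\cdot\nabla\sigma .
% Substituting \nabla\phi = \nabla\psi - (\partial_y\phi)\nabla\sigma and the line above
% into the dynamic condition g\sigma + c\cdot\nabla\phi + \tfrac12|\nabla_{x,y}\phi|^2 = 0
% produces the quotient term
\frac{\bigl(\nabla\sigma\cdot\nabla\psi + c\cdot\nabla\sigma\bigr)^2}{2\bigl(1+|\nabla\sigma|^2\bigr)} .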
Remark 2.1. Many variations are possible. In §7.2 we study capillary gravity waves. Also, we consider in §7.1 the case with source terms.
Regularity of three-dimensional diamond waves
Now we specialize to the case of diamond patterns. Namely we consider solutions which are periodic in both horizontal directions, of the form
σ(x) = σ(x 1 + 2π, x 2 ) = σ (x 1 , x 2 + 2πℓ) , ψ(x) = ψ(x 1 + 2π, x 2 ) = ψ (x 1 , x 2 + 2πℓ) ,
and which are symmetric with respect to the direction of propagation Ox 1 .
Definition 2.2. i) Hereafter, we fix ℓ > 0 and denote by T 2 the 2-torus
T 2 = (R/2πZ) × (R/2πℓZ).
Bi-periodic functions on R 2 are identified with functions on T 2 , so that the Sobolev spaces of bi-periodic functions are denoted by H s (T 2 ) (s ∈ R).
ii) Given µ > 0 and s > 3, the set D s µ (T 2 ) consists of the solutions (σ, ψ) of System (1.1) which belong to H s (T 2 ) and which satisfy, for all x ∈ R 2 , σ(x) = σ(-x 1 , x 2 ) = σ(x 1 , -x 2 ), ψ(x) = -ψ(-x 1 , x 2 ) = ψ(x 1 , -x 2 ),
and 1 + (∂_{x_1}φ)(x, σ(x)) ≠ 0,   (2.2)
where φ denotes the harmonic extension of ψ defined by (1.2)-(1.3).
iii) The set D s (T 2 ) of H s diamond waves is the set of all triple ω = (µ, σ, ψ) such that (σ, ψ) ∈ D s µ (T 2 ).
Remark 2.3. A first remark about these spaces is that they are not empty; at least since 2D waves are obviously 3D waves (independent of x 2 ) and since we know that 2D symmetric waves exist, as proved in the twenties by Levi-Civita [START_REF] Levi-Civita | Détermination rigoureuse des ondes permanentes d'ampleur finie[END_REF], Nekrasov [START_REF] Nekrasov | On waves of permanent type[END_REF] and Struik [START_REF] Struik | Détermination rigoureuse des ondes irrotationnelles périodiques dans un canal à profondeur finie[END_REF]. The existence of really threedimensional pure gravity waves was a well known problem in the theory of surface waves. It has been solved by Iooss and Plotnikov in [START_REF] Iooss | Small divisor problem in the theory of three-dimensional water gravity waves[END_REF]. We refer to [START_REF] Iooss | Small divisor problem in the theory of three-dimensional water gravity waves[END_REF][START_REF] Bridges | Steady three-dimensional water-wave patterns on a finite-depth fluid[END_REF][START_REF] Craig | Travelling two and three dimensional capillary gravity water waves[END_REF][START_REF] Groves | Steady water waves[END_REF] for references and an historical survey of the background of this problem.
Remark 2.4. Two observations are in order about (2.2), which is not an usual assumption. We first note that (2.2) is a natural assumption which ensures that, in the moving frame where the waves look steady, the first component of the velocity evaluated at the free surface does not vanish (cf the proof of Lemma 5.15, which is the only step in which we use (2.2)).
On the other hand, observe that (2.2) is automatically satisfied for small amplitude waves such that φ = O(ε) in C 1 .
For all s ≥ 23, Iooss and Plotnikov prove the existence of H s -diamond waves having the above form (1.4) for ε ∈ E where E = E(s, ℓ) has asymptotically a full measure when ε tends to 0 (we refer to Theorem 2.7 below for a precise statement). The set E is the set of parameters ε ∈ [0, ε 0 ] (with ε 0 small enough) such that a diophantine condition is satisfied. The following theorem states that solutions satisfying a refined diophantine condition are automatically C ∞ . We postpone to the next paragraph for a statement which asserts that this condition is not empty. As already mentioned, a nice technical feature is that no smallness condition is required in the following statement.
Theorem 2.5. There exist three real-valued functions ν, κ 0 , κ 1 defined on
D^{12}(T²) such that, for all ω = (µ, σ, ψ) ∈ D^{12}(T²):
i) if there exist δ ∈ [0, 1[ and N ∈ N* such that
|k_2 − ν(ω)k_1² + κ_0(ω) + κ_1(ω)/k_1²| ≥ 1/k_1^{2+δ},   (2.3)
for all (k_1, k_2) ∈ N² with k_1 ≥ N, then (σ, ψ) ∈ C^∞(T²).
ii) ν(ω) ≥ 0 and there holds the estimate
|ν(ω) − 1/µ| + |κ_0(ω) − κ_0(µ, 0, 0)| + |κ_1(ω) − κ_1(µ, 0, 0)| ≤ C(‖(σ, ψ)‖_{H^{12}} + µ + 1/µ) ‖(σ, ψ)‖²_{H^{12}},
for some non-decreasing function C independent of (µ, σ, ψ).
Remark 2.6. i) To define the coefficients ν(ω), κ 0 (ω), κ 1 (ω) we shall use the principal, sub-principal and sub-sub-principal symbols of the Dirichlet to Neumann operator. This explains the reason why we need to know that (σ, ψ) belongs at least to H 12 in order to define these coefficients.
ii) The important thing to note about the estimate is that it is second order in (σ, ψ) H 12 . This plays a crucial role to prove that small amplitude solutions exist (see the discussion preceding Proposition 2.10).
The small divisor condition for small amplitude waves
The properties of an ocean surface wave are easily obtained assuming the wave has an infinitely small amplitude (linear Airy theory). To find nonlinear waves of small amplitude, one seeks solutions which are small perturbations of small amplitude solutions of the linearized system at the trivial solution (0, 0). To do this, a basic strategy which goes back to Stokes is to expand the waves in a power series of the amplitude ε. In [START_REF] Iooss | Small divisor problem in the theory of three-dimensional water gravity waves[END_REF], the authors use a third order nonlinear theory to find 3D-diamond waves (this means that they consider solutions of the form (2.5)). We now state the main part of their results (see [START_REF] Iooss | Small divisor problem in the theory of three-dimensional water gravity waves[END_REF] for further comments).
Theorem 2.7 (from [START_REF] Iooss | Small divisor problem in the theory of three-dimensional water gravity waves[END_REF]). Let ℓ > 0 and s ≥ 23, and set
µ_c = ℓ/√(1 + ℓ²). There is a set A ⊂ [0, 1] of full measure such that, if µ_c ∈ A then there exists a set E = E(s, µ_c) satisfying
lim_{r→0} (2/r²) ∫_{E∩[0,r]} t dt = 1,   (2.4)
such that there exists a family of diamond waves
(µ_ε, σ_ε, ψ_ε) ∈ D^s(T²) with ε ∈ E, of the special form
σ_ε(x) = εσ_1(x) + ε²σ_2(x) + ε³σ_3(x) + ε⁴Σ_ε(x),
ψ_ε(x) = εψ_1(x) + ε²ψ_2(x) + ε³ψ_3(x) + ε⁴Ψ_ε(x),   (2.5)
µ_ε = µ_c + ε²µ_1 + O(ε⁴),
where σ_1, σ_2, σ_3, ψ_1, ψ_2, ψ_3 ∈ H^∞(T²) with
σ_1(x) = −(1/µ_c) cos x_1 cos(x_2/ℓ),   ψ_1(x) = sin x_1 cos(x_2/ℓ),
the remainders Σ_ε, Ψ_ε are uniformly bounded in H^s(T²) and
µ_1 = 1/(4µ_c³) − 1/(2µ_c²) − 3/(4µ_c) + 2 + µ_c/2 − 9/(4(2 − µ_c)).
To prove this result, the main difficulty is to give a bound for the inverse of the symbol of the linearized system at a non trivial point. Due to the occurence of small divisors, this is proved in [START_REF] Iooss | Small divisor problem in the theory of three-dimensional water gravity waves[END_REF] under a diophantine condition. Now, it follows from the small divisors analysis by Iooss and Plotnikov in [START_REF] Iooss | Small divisor problem in the theory of three-dimensional water gravity waves[END_REF] that, for all ε ∈ E,
|k_2 − ν(µ_ε, σ_ε, ψ_ε)k_1² + κ_0(µ_ε, σ_ε, ψ_ε)| ≥ c/k_1²,
for some positive constant c and all k = (k_1, k_2) such that k ≠ 0, k ≠ ±(1, 1), k ≠ ±(−1, 1). As a result, Theorem 2.5 implies that, for all ε ∈ E,
(σ ε , ψ ε ) ∈ C ∞ (T 2 ).
The main question left open here is to prove that, in fact, (σ ε , ψ ε ) is analytic or at least have some maximal Gevrey regularity.
To prove Theorem 2.7, the first main step in [START_REF] Iooss | Small divisor problem in the theory of three-dimensional water gravity waves[END_REF] is to define approximate solutions. Then, Iooss and Plotnikov used a Nash-Moser iterative scheme to prove that there exist exact solutions near these approximate solutions. Recall that the Nash method allows one to solve functional equations of the form Φ(u) = Φ(u_0) + f in situations where there is a loss of derivatives, so that one cannot apply the usual implicit function theorem. It is known that the solutions thus obtained are smooth provided that f is smooth (cf Theorem 2.2.2 in [START_REF] Hörmander | The boundary problems of physical geodesy[END_REF]). This remark raises a question: why are the solutions constructed by Iooss and Plotnikov not automatically smooth? This follows from the fact that the problem depends on the parameter ε and hence one is led to consider functional equations of the form Φ(u, ε) = Φ(u_0, ε) + f. In this context, the estimates established in [START_REF] Iooss | Small divisor problem in the theory of three-dimensional water gravity waves[END_REF] allow one to prove that, for any ℓ ∈ N, one can define solutions (σ, ψ) ∈ C^ℓ(T²) for ε ∈ E ∩ [0, ε_0], for some positive constant ε_0 depending on ℓ.
The previous discussion raises a second question. Indeed, to prove that the solutions exist one has to establish uniform estimates in the following sense: one has to prove that some diophantine condition is satisfied for all k such that k_1 is greater than a fixed integer independent of ε. In [START_REF] Iooss | Small divisor problem in the theory of three-dimensional water gravity waves[END_REF], the authors establish such a result by using an ergodic argument. We shall explain how to perform this analysis by means of our refined diophantine condition. This step depends in a crucial way on the fact that the estimates of ν(µ, σ, ψ) − ν(µ, 0, 0), κ_0(µ, σ, ψ) − κ_0(µ, 0, 0) and κ_1(µ, σ, ψ) − κ_1(µ, 0, 0) are of second order in the amplitude. Namely, we make the following assumption.
Assumption 2.8. Let ν = ν(ε), κ 0 = κ 0 (ε) and κ 1 = κ 1 (ε) be three realvalued functions defined on [0, 1]. In the following proposition it is assumed that
ν(ε) = ν + ν ′ ε 2 + εϕ 1 (ε 2 ), κ 0 (ε) = κ 0 + ϕ 2 (ε 2 ), κ 1 (ε) = κ 1 + ϕ 3 (ε 2 ), (2.6)
for some constants ν, ν′, κ_0, κ_1 with ν′ ≠ 0, and three Lipschitz functions ϕ_j : [0, 1] → R satisfying ϕ_j(0) = 0.
Remark 2.9. In [START_REF] Iooss | Small divisor problem in the theory of three-dimensional water gravity waves[END_REF], the authors prove that the assumption ν′ ≠ 0 is satisfied for ν(ε) = ν(µ_ε, σ_ε, ψ_ε), where (µ_ε, σ_ε, ψ_ε) are the solutions of Theorem 2.7. Assumption 2.8 is satisfied by these solutions.
Proposition 2.10. Let δ and δ ′ be such that
1 > δ > δ ′ > 0.
Assume in addition to Assumption 2.8 that there exists n ≥ 2 such that
|k_2 − νk_1² − κ_0| ≥ 1/k_1^{1+δ′},   (2.7)
for all k ∈ N² with k_1 ≥ n. Then there exist K > 0, r_0 > 0, N_0 ∈ N and a set A ⊂ [0, 1] satisfying
∀r ∈ [0, r_0],   (1/r)|A ∩ [0, r]| ≥ 1 − K r^{(δ−δ′)/(3+δ′)},   (2.8)
such that, if ε² ∈ A and k_1 ≥ N_0 then
|k_2 − ν(ε)k_1² − κ_0(ε) − κ_1(ε)/k_1²| ≥ 1/k_1^{2+δ},   (2.9)
for all k 2 ∈ N.
Remark 2.11. (i) It follows from the classical argument introduced by Borel in [START_REF] Borel | Remarques sur certaines questions de probabilité[END_REF] that there exists a null set N ⊂ [0, 1] such that, for all (ν, κ 0 ) ∈ ([0, 1] \ N ) × [0, 1], the inequality (2.7) is satisfied for all (k 1 , k 2 ) with k 1 sufficiently large.
(ii) If A satisfies (2.8) then the set E = {ε ∈ [0, 1] : ε 2 ∈ A} satisfies (2.4). The size of set of those parameters ε such that the diophantine condition (2.9) is satisfied is bigger than the size of the set E given by Theorem 2.7. Proposition 2.10 is proved in Section 6.
Paralinearization of the Dirichlet to Neumann operator
To prove Theorem 2.5, we use the strategy of Iooss and Plotnikov [START_REF] Iooss | Small divisor problem in the theory of three-dimensional water gravity waves[END_REF]. The main novelty is that we paralinearize the water waves system. This approach depends on a careful study of the Dirichlet to Neumann operator, which is inspired by a paper of Lannes [START_REF] Lannes | Well-posedness of the water-waves equations[END_REF].
Since this analysis has several applications (for instance to the study of the Cauchy problem), we consider the general multi-dimensional case and we do not assume that the functions have some symmetries. We consider here a domain Ω of the form
Ω = { (x, y) ∈ T d × R | y < σ(x) },
where T d is any d-dimensional torus with d ≥ 1. Recall that, by definition, the Dirichlet to Neumann operator is the operator G(σ) given by
G(σ)ψ = √(1 + |∇σ|²) ∂_n ϕ|_{y=σ(x)},
where n is the exterior normal and ϕ is given by ∆ x,y ϕ = 0, ϕ| y=σ(x) = ψ, ∇ x,y ϕ → 0 as y → -∞.
(2.10)
To clarify notations, the Dirichlet to Neumann operator is defined by
(G(σ)ψ)(x) = (∂ y ϕ)(x, σ(x)) -∇σ(x) • (∇ϕ)(x, σ(x)). (2.11)
Thus defined, G(σ) differs from the usual definition of the Dirichlet to Neumann operator because of the scaling factor √(1 + |∇σ|²); yet, as in [START_REF] Lannes | Well-posedness of the water-waves equations[END_REF][START_REF] Iooss | Small divisor problem in the theory of three-dimensional water gravity waves[END_REF] we use this terminology for the sake of simplicity.
It is known since Calderón that, if σ is a given C ∞ function, then the Dirichlet to Neumann operator G(σ) is a classical pseudo-differential operator, elliptic of order 1 (see [START_REF] Antoine | Bayliss-Turkel-like radiation conditions on surfaces of arbitrary shape[END_REF][START_REF] Sylvester | Inverse boundary value problems at the boundary-continuous dependence[END_REF][START_REF] Taylor | Pseudodifferential operators[END_REF][START_REF] Trèves | Introduction to pseudodifferential and Fourier integral operators[END_REF]). We have
G(σ)ψ = Op(λ σ )ψ,
where the symbol λ σ has the asymptotic expansion
λ σ (x, ξ) ∼ λ 1 σ (x, ξ) + λ 0 σ (x, ξ) + λ -1 σ (x, ξ) + • • • (2.12)
where λ k σ are homogeneous of degree k in ξ, and the principal symbol λ1 σ is elliptic of order 1, given by
\[
\lambda^1_\sigma(x,\xi) = \sqrt{(1 + |\nabla\sigma(x)|^2)\,|\xi|^2 - (\nabla\sigma(x)\cdot\xi)^2}. \tag{2.13}
\]
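As a simple consistency check, at the flat surface σ = 0 this principal symbol reduces to
\[
\lambda^1_0(x,\xi) = \sqrt{|\xi|^2} = |\xi|,
\]
in agreement with the identity G(0) = |D_x| obtained by the explicit computation in the example discussed below.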
Moreover, the symbols λ⁰_σ, λ⁻¹_σ, . . . are defined by induction so that one can easily check that λ^k_σ involves only derivatives of σ of order ≤ |k| + 2 (see [START_REF] Antoine | Bayliss-Turkel-like radiation conditions on surfaces of arbitrary shape[END_REF]). There are also various results when σ ∉ C^∞. Expressing G(σ) as a singular integral operator, it was proved by Craig, Schanz and C. Sulem [START_REF] Craig | The modulational regime of threedimensional water waves and the Davey-Stewartson system[END_REF] that
σ ∈ C k+1 , ψ ∈ H k+1 with k ∈ N ⇒ G(σ)ψ ∈ H k . (2.14)
Moreover, when σ is a given function with limited smoothness, it is known that G(σ) is a pseudo-differential operator with symbol of limited regularity 1 (see [START_REF] Taylor | Tools for PDE[END_REF][START_REF] Escher | Bounded H ∞ -calculus for pseudodifferential operators and applications to the Dirichlet-Neumann operator[END_REF]). In this direction, for σ ∈ H s+1 (T 2 ) with s large enough, it follows from the analysis by Lannes ([27]) and a small additional work that
G(σ)ψ = Op(λ 1 σ )ψ + r(σ, ψ), (2.15)
where the remainder r(σ, ψ) is such that
ψ ∈ H s (T d ) ⇒ r(σ, ψ) ∈ H s (T d ).
For the analysis of the water waves, the point of great interest here is that this gives a result for G(σ)ψ when σ and ψ have exactly the same regularity. Indeed, (2.15) implies that, if σ ∈ H^{s+1}(T^d) and ψ ∈ H^{s+1}(T^d) for some s large enough, then G(σ)ψ ∈ H^s(T^d). This result was first established by Craig and Nicholls in [START_REF] Craig | Travelling two and three dimensional capillary gravity water waves[END_REF] and Wu in [START_REF] Wu | Well-posedness in Sobolev spaces of the full water wave problem in 2-D. Invent[END_REF][START_REF] Wu | Well-posedness in Sobolev spaces of the full water wave problem in 3-D[END_REF] by different methods. We refer to [START_REF] Lannes | Well-posedness of the water-waves equations[END_REF] for comments on the estimates associated to these regularity results as well as to [START_REF] Alvarez-Samaniego | Large time existence for 3D water-waves and asymptotics[END_REF] for the rather different case where one considers various dimensional parameters.
A fundamental difference with these results is that we shall determine the full structure of G(σ) by performing a full paralinearization of G(σ)ψ with respect to ψ and σ. A notable technical aspect is that we obtain exact identities where the remainders have optimal regularity. We shall establish a formula of the form
G(σ)ψ = Op(λ σ )ψ + B(σ, ψ) + R(σ, ψ),
where B(σ, ψ) shall be given explicitly and R(σ, ψ) ∼ 0 in the following sense: R(σ, ψ) is twice more regular than σ and ψ.
Before we state our result, two observations are in order. Firstly, observe that we can extend the definition of λ_σ for σ ∉ C^∞ in the following obvious manner: we consider in the asymptotic expansion (2.12) only the terms which are meaningful. This means that, for σ ∈ C^{k+2} \ C^{k+3} with k ∈ N, we set
λ σ (x, ξ) = λ 1 σ (x, ξ) + λ 0 σ (x, ξ) + • • • + λ -k σ (x, ξ). (2.16)
We associate operators to these symbols by means of the paradifferential quantization (we recall the definition of paradifferential operators in §4.1).
Secondly, recall that a classical idea in free boundary problems is to use a change of variables to reduce the problem to a fixed domain. This suggests to map the graph domain Ω to a half space via the correspondence (x, y) → (x, z) where z = yσ(x).
This change of variables takes ∆ x,y to a strictly elliptic operator and ∂ n to a vector field which is transverse to the boundary {z = 0}. Namely, introduce v :
T d ×] -∞, 0] → R defined by v(x, z) = ϕ(x, z + σ(x)), so that v satisfies v| z=0 = ϕ| y=σ(x) = ψ,
and
(1 + |∇σ| 2 )∂ 2 z v + ∆v -2∇σ • ∇∂ z v -∂ z v∆σ = 0, (2.17)
in the fixed domain
T^d ×] − ∞, 0[. Then,
\[
G(\sigma)\psi = \bigl((1 + |\nabla\sigma|^2)\partial_z v - \nabla\sigma\cdot\nabla v\bigr)\big|_{z=0}. \tag{2.18}
\]
We can now state the main result of this section.

Theorem 2.12. Let d ≥ 1 and s ≥ 3 + d/2 be such that s − d/2 ∉ N. If
\[
\sigma \in H^{s}(\mathbf{T}^d), \quad v \in C^0([-1, 0]; H^{s}(\mathbf{T}^d)), \quad \partial_z v \in C^0([-1, 0]; H^{s-1}(\mathbf{T}^d)), \tag{2.19}
\]
then
\[
G(\sigma)\psi = T_{\lambda_\sigma}\bigl(\psi - T_b\sigma\bigr) - T_{V}\cdot\nabla\sigma - T_{\operatorname{div} V}\,\sigma + R(\sigma, \psi), \tag{2.20}
\]
where T_a denotes the paradifferential operator with symbol a (cf §4.1), the function b = b(x) and the vector field V = V(x) belong to H^{s−1}(T^d), the symbol λ_σ ∈ Σ¹_{s−1−d/2}(T^d) (see Definition 4.3) is given by (2.16) applied with k = s − 2 − d/2, and R(σ, ψ) is twice more regular than the unknowns:
∀ε > 0, R(σ, ψ) ∈ H 2s-2-d 2 -ε (T d ). (2.21)
Explicitly, b and V are given by
\[
b = \frac{\nabla\sigma\cdot\nabla\psi + G(\sigma)\psi}{1 + |\nabla\sigma|^2}, \qquad V = \nabla\psi - b\,\nabla\sigma.
\]
There are a few further points that should be added to Theorem 2.12.
Remark 2.13. The first point to be made is a clarification of how one passes from an assumption on (σ, v) to an assumption on (σ, ψ). As in [START_REF] Alvarez-Samaniego | Large time existence for 3D water-waves and asymptotics[END_REF], it follows from standard elliptic theory that
σ ∈ H k+ 1 2 (T d ), ψ ∈ H k (T d ) ⇒ v ∈ H k+ 1 2 ([-1, 0] × T d ), so that v ∈ C 0 ([-1, 0]; H k (T d )) and ∂ z v ∈ C 0 ([-1, 0]; H k-1 (T d )).
As a result, we can replace (2.19) by the assumption that σ ∈ H s+ 1 2 (T d ) and ψ ∈ H s (T d ).
Remark 2.14. Theorem 2.12 still holds true for non periodic functions.
Remark 2.15. The case with which we are chiefly concerned is that of an infinitely deep fluid. However, it is worth remarking that Theorem 2.12 remains valid in the case of finite depth where one considers a domain Ω of the form
Ω := { (x, y) ∈ T d × R | b(x) < y < σ(x) },
with the assumption that b is a given smooth function such that b + 2 ≤ σ, and define G(σ)ψ by (2.11) where ϕ is given by
∆ x,y ϕ = 0, ϕ| y=σ(x) = ψ, ∂ n ϕ| y=b(x) = 0.
Remark 2.16. Since the scheme of the proof of Theorem 2.12 is reasonably simple, the reader should be able to obtain further results in other scales of Banach spaces without too much work. We here mention an analogous result in Hölder spaces C s (R d ) which will be used in §7.2.
If σ ∈ C s (R d ), v ∈ C 0 ([-1, 0]; C s (R d )), ∂ z v ∈ C 0 ([-1, 0]; C s-1 (R d )),
for some s ∈ [3, +∞], then we have (2.20) with
b ∈ C s-1 (R d ), V ∈ C s-1 (R d ), λ σ ∈ Σ 1 s-1 (R d ), and R(σ, ψ) ∈ C 2s-2-ε (R d ) for all ε > 0.
Remark 2.17. We can give other expressions of the coefficients. We have
b(x) = (∂ y ϕ)(x, σ(x)) = (∂ z v)(x, 0), V (x) = (∇ϕ)(x, σ(x)) = (∇v)(x, 0) -(∂ z v)(x, 0)∇σ(x),
where ϕ is as defined in (2.10). This clearly shows that b, V ∈ H s-1 (T d ).
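Indeed, one can check that these expressions agree with the formulas for b and V given after (2.21): since ψ(x) = ϕ(x, σ(x)), one has ∇ψ = (∇ϕ)(x, σ(x)) + (∂_yϕ)(x, σ(x))∇σ(x), and the definition (2.11) then gives
\[
\nabla\sigma\cdot\nabla\psi + G(\sigma)\psi = \bigl(1 + |\nabla\sigma|^2\bigr)\,(\partial_y\varphi)(x,\sigma(x)),
\]
so that b = (∂_yϕ)(x, σ(x)) and V = ∇ψ − b∇σ = (∇ϕ)(x, σ(x)).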
As mentioned earlier, Theorem 2.12 has a number of consequences. For instance, this permits us to reduce estimates for commutators with the Dirichlet to Neumann operator to symbolic calculus questions for symbols. Similarly, we shall use Theorem 2.12 to compute the effect of changes of variables by means of the paracomposition operators of Alinhac. As shown by Hörmander in [START_REF] Hörmander | The Nash-Moser theorem and paradifferential operators[END_REF], another possible application is to prove the existence of the solutions by using elementary nonlinear functional analysis instead of using the Nash-Moser iteration scheme.
The proof of Theorem 2.12 is given in §4. The heart of the entire argument is a sharp paralinearization of the interior equation performed in Proposition 4.12. To do this, following Alinhac [START_REF] Alinhac | Existence d'ondes de raréfaction pour des systèmes quasilinéaires hyperboliques multidimensionnels[END_REF], the idea is to work with the good unknown ψ -T b σ.
At first we may not expect to have to take this unknown into account, but it comes up on its own when we compute the linearized equations (cf §3).
For the study of the linearized equations, this change of unknowns amounts to introduce δψbδσ. The fact that this leads to a key cancelation was first observed by Lannes in [START_REF] Lannes | Well-posedness of the water-waves equations[END_REF].
An example
We conclude this section by discussing a classical example which is Example 3 in [START_REF] Kinderlehrer | Regularity in free boundary problems[END_REF] (see [START_REF] Iooss | Small divisor problem in the theory of three-dimensional water gravity waves[END_REF] for an analogous discussion). Consider φ = 0 and σ = σ(x 2 ).
Then, for any σ ∈ C 1 , this defines a solution of (2.1) with g = 0, and no further smoothness of the free boundary can be inferred. Therefore, if g = 0 (i.e. µ = 0) then there is no a priori regularity.
In addition, the key dichotomy d = 1 or d = 2 is well illustrated by this example. Indeed, consider the linearized system at the trivial solution (σ, φ) = (0, 0). We are led to analyse the following system (cf §3):
∆ z,x v = 0 in z < 0, ∂ z v -∂ x 1 σ = 0 on z = 0, µσ + ∂ x 1 v = 0 on z = 0, ∇ z,x v → 0 as z → -∞.
For σ = 0, it is straightforward to compute the Dirichlet to Neumann operator G(0). Indeed, we have to consider the solutions of (|ξ| 2 -∂ 2 z )V (z) = 0, which are bounded when z < 0. It is clear that V must be proportional to e z|ξ| , so that ∂ z V = |ξ| V . Reduced to the boundary, the system thus becomes
|D x |v -∂ x 1 σ = 0 on z = 0, µσ + ∂ x 1 v = 0 on z = 0. The symbol of this system is |ξ| -iξ 1 iξ 1 µ , (2.22)
whose determinant is
\[
\mu|\xi| - \xi_1^2. \tag{2.23}
\]
If d = 1 (or if µ < 0), this matrix-valued symbol is elliptic for large frequencies, and no small divisors appear. By contrast, when d = 2 and µ > 0, the determinant (2.23) vanishes on the unbounded characteristic set {ξ : µ|ξ| = ξ₁²}, which is the source of the small divisor problem.
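To see the small divisors concretely, one can parametrize the characteristic set for d = 2 and µ > 0: solving µ|ξ| = ξ₁² for ξ₂ > 0 and expanding for large ξ₁ gives
\[
\xi_2 = \sqrt{\frac{\xi_1^4}{\mu^2} - \xi_1^2}
      = \frac{\xi_1^2}{\mu} - \frac{\mu}{2} - \frac{\mu^3}{8\xi_1^2} + O\bigl(\xi_1^{-4}\bigr).
\]
Along this curve, quantities of the form k₂ − νk₁² − κ₀ − κ₁/k₁² evaluated at integer frequencies become arbitrarily small; this is exactly the type of small divisor controlled by the diophantine condition (2.9) in Proposition 2.10.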
Linearization
Although it is not essential for the rest of the paper, it helps if we begin by examining the linearized equations. Our goal is twofold. First we want to prepare for the paralinearization of the equations. And second we want to explain some technical but important points related to changes of variables. We consider the system
∂ 2 y φ + ∆φ = 0 in {y < σ(x)}, ∂ y φ -∇σ • ∇φ -c • ∇σ = 0 on {y = σ(x)}, µσ + 1 2 |∇φ| 2 + 1 2 (∂ y φ) 2 + c • ∇φ = 0 on {y = σ(x)}, ∇ x,y φ → 0 as y → -∞,
where µ > 0 and c ∈ R 2 . We shall perform the linearization of this system. These computations are well known. In particular it is known that the Dirichlet to Neumann operator G(σ) is an analytic function of σ ( [START_REF] Coifman | Nonlinear harmonic analysis and analytic dependence[END_REF][START_REF] Nicholls | A new approach to analyticity of Dirichlet-Neumann operators[END_REF]). Moreover, the shape derivative of G(σ) was computed by Lannes [START_REF] Lannes | Well-posedness of the water-waves equations[END_REF] (see also [START_REF] Bona | Asymptotic models for internal waves[END_REF][START_REF] Iooss | Small divisor problem in the theory of three-dimensional water gravity waves[END_REF]). Here we explain some key cancelations differently, by means of the good unknown of Alinhac [START_REF] Alinhac | Existence d'ondes de raréfaction pour des systèmes quasilinéaires hyperboliques multidimensionnels[END_REF].
Change of variables
One basic approach toward the analysis of solutions of a boundary value problem is to flatten the boundary. To do so, most directly, one can use the following change of variables, involving the unknown σ,
z = y -σ(x), (3.1)
which means we introduce v given by
v(x, z) = φ(x, z + σ(x)).
This reduces the problem to the domain {-∞ < z < 0}.
The first elementary step is to compute the equation satisfied by the new unknown v in {z < 0} as well as the boundary conditions on {z = 0}. We easily find the following result.
Lemma 3.1. If φ and σ are C 2 , then v(x, z) = φ(x, z + σ(x)) satisfies (1 + |∇σ| 2 )∂ 2 z v + ∆v -2∇σ • ∇∂ z v -∂ z v∆σ = 0 in z < 0, (3.2)
(1 + |∇σ| 2 )∂ z v -∇σ • (∇v + c) = 0 on z = 0, (3.3) µσ + c • ∇v + 1 2 |∇v| 2 - 1 2 ∇σ • (∇v + c) 2 1 + |∇σ| 2 = 0 on z = 0. (3.4) Remark 3.2.
It might be tempting to use a general change of variables of the form y = ρ(x, z) (as in [START_REF] Craig | Sur la régularité des ondes progressives à la surface de l'eau[END_REF][START_REF] Craig | On the regularity of the Neumann problem for free surfaces with surface tension[END_REF][START_REF] Koch | On optimal regularity of free boundary problems and a conjecture of De Giorgi[END_REF][START_REF] Lannes | Well-posedness of the water-waves equations[END_REF]). However, these changes of variables do not modify the behavior of the functions on z = 0 and hence they do not modify the Dirichlet to Neumann operator (see the discussion in [START_REF] Uhlmann | On the local Dirichlet-to-Neumann map[END_REF]). Therefore, the fact that we use the most simple change of variables one can think of is an interesting feature of our approach.
Remark 3.3. By following the strategy used in [START_REF] Iooss | Small divisor problem in the theory of three-dimensional water gravity waves[END_REF], a key point below is to use a change of variables in the tangential variables, of the form x ′ = χ(x).
In [START_REF] Iooss | Small divisor problem in the theory of three-dimensional water gravity waves[END_REF], this change of variables is performed before the linearization. Our approach goes the opposite direction. We shall paralinearize first and then compute the effect of this change of variables by means of paracomposition operators. This has the advantage of simplifying the computations.
Linearized interior equation
Introduce the operator
L := (1 + |∇σ| 2 )∂ 2 z + ∆ -2∇σ • ∇∂ z , (3.5)
and set
E(v, σ) := Lv -∆σ∂ z v, so that the interior equation (3.2) reads E(v, σ) = 0. Denote by E ′ v and E ′
σ , the linearization of E with respect to v and σ respectively, which are given by
E ′ v (v, σ) v := lim ε→0 1 ε E(v + ε v, σ) -E(v, σ) , E ′ σ (v, σ) σ := lim ε→0 1 ε E(v, σ + ε σ) -E(v, σ) .
To linearize the equation E(v, σ) = 0, we use a standard remark in the comparison between partially and fully linearized equations for systems obtained by the change of variables z = yσ(x).
Lemma 3.4.
There holds
E ′ v (v, σ) v + E ′ σ (v, σ) σ = E ′ v (v, σ) v -(∂ z v) σ . (3.6)
Proof. See [START_REF] Alinhac | Existence d'ondes de raréfaction pour des systèmes quasilinéaires hyperboliques multidimensionnels[END_REF] or [START_REF] Métivier | Stability of multidimensional shocks[END_REF].
The identity (3.6) was pointed out by S. Alinhac ([2]) along with the role of what he called "the good unknown" u defined by
u = v -(∂ z v) σ.
Since E(v, σ) is linear with respect to v, we have
E ′ v (v, σ) v = E( v, σ) = L v -∆σ∂ z v,
from which we obtain the following formula for the linearized interior equation.
Proposition 3.5. There holds
(1 + |∇σ| 2 )∂ 2 z u + ∆ u -2∇σ • ∇∂ z u -∆σ∂ z u = 0, where u := v -(∂ z v) σ.
We conclude this part by making two remarks concerning the good unknown of Alinhac.
Remark 3.6. The good unknown u = v -(∂ z v) σ was introduced by Lannes [START_REF] Lannes | Well-posedness of the water-waves equations[END_REF] in the analysis of the linearized equations of the Cauchy problem for the water waves. The computations of Lannes play a key role in [START_REF] Iooss | Small divisor problem in the theory of three-dimensional water gravity waves[END_REF]. We have explained differently the reason why u simplifies the computations by means of the general identity (3.6) (compare with the proof of Prop. 4.2 in [START_REF] Lannes | Well-posedness of the water-waves equations[END_REF]). We also refer to a very recent paper by Trakhinin ([42]) where the author also uses the good unknown of Alinhac to study the Cauchy problem. Remark 3.7. A geometrical way to understand the role of the good unknown v -∂ z v σ is to note that the vector field D x := ∇ -∇σ∂ z commutes with the interior equation (3.2) for v: we have
L -∆σ∂ z D x v = 0.
The previous result can be checked directly. Alternatively, it follows from the identity
L -∆σ∂ z D x v = (D 2 x + ∂ 2 z )D x v,
and the fact that D x commutes with ∂ z . This explains why u is the natural unknown whenever one solves a free boundary problem by straightening the free boundary.
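The identity invoked in Remark 3.7 can be verified by a direct computation: since σ does not depend on z, the components D_{x_j} = ∂_j − (∂_jσ)∂_z commute with ∂_z, and summing their squares gives
\[
\sum_j D_{x_j}^2 v = \Delta v - 2\nabla\sigma\cdot\nabla\partial_z v + |\nabla\sigma|^2\partial_z^2 v - \Delta\sigma\,\partial_z v,
\]
so that D_x^2 + \partial_z^2 = (1+|\nabla\sigma|^2)\partial_z^2 + \Delta - 2\nabla\sigma\cdot\nabla\partial_z - \Delta\sigma\,\partial_z = L - \Delta\sigma\,\partial_z.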
Linearized boundary conditions
It turns out that the good unknown u is also useful to compute the linearized boundary conditions. Indeed, by differentiating the first boundary condition (3.3), and replacing v by u + (∂ z v) σ we obtain
(1 + |∇σ| 2 )∂ z u -∇σ • ∇ u -(c + ∇v -∂ z v∇σ) • ∇ σ + σ (1 + |∇σ| 2 )∂ 2 z v -∇σ • ∇∂ z v = 0.
The interior equation (3.2) for v implies that
(1 + |∇σ| 2 )∂ 2 z v -∇σ • ∇∂ z v = -div ∇v -∂ z v∇σ .
which in turn implies that
(1 + |∇σ| 2 )∂ z u -∇σ • ∇ u -div c + ∇v -∂ z v∇σ σ = 0.
With regards to the second boundary condition, we easily find that
a σ + c + ∇v -∂ z v∇σ • ∇ u = 0, with a := µ + c + ∇v -∂ z v∇σ • ∇∂ z v.
Hence, we have the following proposition.
Proposition 3.8. On {z = 0}, the linearized boundary conditions are
N u -div(V σ) = 0, a σ + (V • ∇) u = 0, (3.7)
where N is the Neumann operator
N = (1 + |∇σ|²)∂_z − ∇σ·∇, (3.8)
and V = c + ∇v -∂ z v∇σ, a = µ + V • ∇∂ z v.
Remark 3.9. On {z = 0}, directly from the definition, we compute
V = c + (∇φ)(x, σ(x)).
With regards to the coefficient a, we have (cf Lemma 5.4)
a = -(∂ y P )(x, σ(x)).
Paralinearization of the Dirichlet to Neumann operator
In this section we prove Theorem 2.12.
Paradifferential calculus
We start with some basic reminders and a few more technical issues about paradifferential operators.
Notations
We denote by u or Fu the Fourier transform acting on temperate distributions u ∈ S ′ (R d ), and in particular on periodic distributions. The spectrum of u is the support of Fu. Fourier multipliers are defined by the formula
p(D x )u = F -1 (pFu) ,
provided that the multiplication by p is defined at least from
S(R d ) to S ′ (R d ); p(D x
) is the operator associated to the symbol p(ξ).
According to the usual definition, for ρ ∈]0, +∞[\N, we denote by C ρ the space of bounded functions whose derivatives of order [ρ] are uniformly Hölder continuous with exponent ρ -[ρ].
Paradifferential operators
The paradifferential calculus was introduced by J.-M. Bony [START_REF] Bony | Calcul symbolique et propagation des singularités pour les équations aux dérivées partielles non linéaires[END_REF] (see also [START_REF] Hörmander | Lectures on nonlinear hyperbolic differential equations[END_REF][START_REF] Métivier | Para-differential calculus and applications to the cauchy problem for nonlinear systems[END_REF][START_REF] Meyer | Remarques sur un théorème de J.-M. Bony[END_REF][START_REF] Taylor | Pseudodifferential operators and nonlinear PDE[END_REF]). It is a quantization of symbols a(x, ξ), of degree m in ξ and limited regularity in x, to which are associated operators denoted by T a , of order ≤ m.
We consider symbols in the following classes.
Definition 4.1. Given ρ ≥ 0 and m ∈ R, Γ m ρ (T d
) denotes the space of locally bounded functions a(x, ξ) on T^d × (R^d \ 0), which are C^∞ with respect to ξ for ξ ≠ 0 and such that, for all α ∈ N^d and all ξ ≠ 0, the function x → ∂^α_ξ a(x, ξ) belongs to C^ρ(T^d) and there exists a constant C_α such that,
∀ |ξ| ≥ 1 2 , ∂ α ξ a(•, ξ) C ρ ≤ C α (1 + |ξ|) m-|α| . (4.1)
Remark 4.2. The analysis remains valid if we replace C ρ by W ρ,∞ for ρ ∈ N.
Note that we consider symbols a(x, ξ) that need not be smooth for ξ = 0 (for instance a(x, ξ) = |ξ| m with m ∈ R * ). The main motivation for considering such symbols comes from the principal symbol of the Dirichlet to Neumann operator. As already mentioned, it is known that this symbol is given by
λ 1 σ (x, ξ) := (1 + |∇σ(x)| 2 ) |ξ| 2 -(∇σ(x) • ξ) 2 . If σ ∈ C s (T d ) then this symbol belongs to Γ 1 s-1 (T d ). Of course, this symbol is not C ∞ with respect to ξ ∈ R d .
The consideration of the symbol λ σ also suggests that we shall be led to consider pluri-homogeneous symbols.
Definition 4.3. Let ρ ≥ 1, m ∈ R. The classes Σ m ρ (T d ) are defined as the spaces of symbols a such that a(x, ξ) = 0≤j<ρ a m-j (x, ξ),
where a_{m−j} ∈ Γ^{m−j}_{ρ−j}(T^d) is homogeneous of degree m − j in ξ, C^∞ in ξ for ξ ≠ 0 and with regularity C^{ρ−j} in x. We call a_m the principal symbol of a.
The definition of paradifferential operators needs two arbitrary but fixed cutoff functions χ and ψ.
Introduce χ = χ(θ, η) such that χ is a C ∞ function on R d × R d \ 0, homogeneous of degree 0 and satisfying, for 0 < ε 1 < ε 2 small enough, χ(θ, η) = 1 if |θ| ≤ ε 1 |η| , χ(θ, η) = 0 if |θ| ≥ ε 2 |η| . We also introduce a C ∞ function ψ such that 0 ≤ ψ ≤ 1, ψ(η) = 0 for |η| ≤ 1, ψ(η) = 1 for |η| ≥ 2.
Given a symbol a(x, ξ), we then define the paradifferential operator T a by
T a u(ξ) = (2π) -d χ(ξ -η, η) a(ξ -η, η)ψ(η) u(η) dη, (4.2)
where a(θ, ξ) = e -ix•θ a(x, ξ) dx is the Fourier transform of a with respect to the first variable. We call attention to the fact that this notation is not quite standard since u and a are periodic in x. To clarify notations, fix
T d = R d /L
for some lattice L. Then we can write (4.2) as
T a u(ξ) = (2π) -d η∈L * χ(ξ -η, η) a(ξ -η, η)ψ(η) u(η).
Also, we call attention to the fact that, if Q(D x ) is a Fourier multiplier with symbol q(ξ), then we do not have Q(D x ) = T q , because of the cut-off function ψ. However, this is obviously almost true since we have
Q(D x ) = T q + R where R maps H t to H ∞ for all t ∈ R.
Recall the following definition, which is used continually in the sequel.
Definition 4.4. Let m ∈ R. An operator T is said of order ≤ m if, for all s ∈ R, it is bounded from H s+m to H s . Theorem 4.5. Let m ∈ R. If a ∈ Γ m 0 (T d ), then T a is of order ≤ m.
We refer to (4.7) below for operator norms estimates. We next recall the main feature of symbolic calculus, which is a symbolic calculus lemma for composition of paradifferential operators. The basic property, which will be freely used in the sequel, is the following
a ∈ Γ m 1 (T d ), b ∈ Γ m ′ 1 (T d ) ⇒ T a T b -T ab is of order ≤ m + m ′ -1.
More generally, there is an asymptotic formula for the composition of two such operators, whose main term is the pointwise product of their symbols.
Theorem 4.6. Let m, m ′ ∈ R. Consider a ∈ Γ m ρ (T d ) and b ∈ Γ m ′ ρ (T d ) where ρ ∈]0, +∞[, and set a♯b(x, ξ) = |α|<ρ 1 i α α! ∂ α ξ a(x, ξ)∂ α x b(x, ξ) ∈ j<ρ Γ m+m ′ -j ρ-j (T d ).
Then, the operator
T a T b -T a♯b is of order ≤ m + m ′ -ρ.
Proofs of these two theorems can be found in the references cited above. Clearly, the fact that we consider symbols which are periodic in x does not change the analysis. Also, as noted in [START_REF] Métivier | Para-differential calculus and applications to the cauchy problem for nonlinear systems[END_REF], the fact that we consider symbols which are not smooth at the origin ξ = 0 is not a problem. Here, since we added the extra function ψ in the definition (4.2), following the original definition in [START_REF] Bony | Calcul symbolique et propagation des singularités pour les équations aux dérivées partielles non linéaires[END_REF], the argument is elementary: if a ∈ Γ m ρ (T d ), then ψ(ξ)a(x, ξ) belongs to the usual class of symbols.
Paraproducts
If a = a(x) is a function of x only, the paradifferential operator T_a is called a paraproduct. For easy reference, we recall a few results about paraproducts.
We already know from Theorem 4.5 that, if β > d/2 and b ∈ H β (T d ) ⊂ C 0 (T d ), then T b is of order ≤ 0 (note that this holds true if we only assume that b ∈ L ∞ ). An interesting point is that one can extend the analysis to the case where b ∈ H
β (T d ) with β < d/2. Lemma 4.7. For all α ∈ R and all β < d/2, a ∈ H α (T d ), b ∈ H β (T d ) ⇒ T b a ∈ H α+β-d 2 (T d ).
We also have the two following key lemmas about paralinearization.
Lemma 4.8. For a ∈ H α (T d ) with α > d/2 and F ∈ C ∞ , F (a) -T F ′ (a) a ∈ H 2α-d 2 (T d ). (4.3) For all α, β ∈ R such that α + β > 0, a ∈ H α (T d ), b ∈ H β (T d ) ⇒ ab -T a b -T b a ∈ H α+β-d 2 (T d ). (4.4)
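For instance, taking F(a) = a² in (4.3), or equivalently b = a in (4.4), both statements give the same conclusion:
\[
a \in H^{\alpha}(\mathbf{T}^d),\ \alpha > d/2 \quad\Longrightarrow\quad a^2 - 2T_a a \in H^{2\alpha - \frac{d}{2}}(\mathbf{T}^d).
\]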
There is also one straightforward consequence of Theorem 4.6 that will be used below.
Lemma 4.9. Assume that t > d/2 is such that t − d/2 ∉ N. If a ∈ H^t and b ∈ H^t, then T_a T_b − T_{ab} is of order ≤ −(t − d/2).
Maximal elliptic regularity
In this paragraph, we are concerned with scalar elliptic evolution equations of the form
∂ z u + T a u = T b u + f (z ∈ [-1, 0], x ∈ T d ),
where b ∈ Γ 0 0 (T d ) and a ∈ Γ 1 2 (T d ) is a first-order elliptic symbol with positive real part and with regularity C 2 in x.
With regards to further applications, we make the somewhat unconventional choice to take the Cauchy datum on z = -1. Recall that we denote by
C 0 ([-1, 0]; H m (T d )) the space of continuous functions in z ∈ [-1, 0] with values in H m (T d ). We prove that, if f ∈ C 0 ([-1, 0]; H s (T d )), then u(0) ∈ H s+1-ε (T d ) for any ε > 0 (where u(0)(x) = u| z=0 = u(x, 0)
). This corresponds to the usual gain of 1/2 derivative for the Poisson kernel. This result is not new. Yet, for lack of a reference, we include a detailed analysis.
Proposition 4.10. Let r ∈ [0, 1[, a ∈ Γ¹₂(T^d) and b ∈ Γ⁰₀(T^d). Assume that there exists a constant c > 0 such that
\[
\forall(x,\xi)\in\mathbf{T}^d\times\mathbf{R}^d, \quad \operatorname{Re} a(x,\xi) \ge c|\xi|.
\]
If u ∈ C¹([-1, 0]; H^{-∞}(T^d)) solves the elliptic evolution equation ∂_z u + T_a u = T_b u + f, with f ∈ C⁰([-1, 0]; H^s(T^d)) for some s ∈ R, then u(0) ∈ H^{s+r}(T^d).
Proof. The following proof gives the stronger conclusion that u is continuous in z ∈] -1, 0] with values in H^{s+r}(T^d). Therefore, by an elementary induction argument, we can assume without loss of generality that b = 0 and u ∈ C⁰([-1, 0]; H^s(T^d)). In addition one can assume that u(x, z) = 0 for z ≤ -1/2.
Introduce the symbol e(z; x, ξ) := e 0 (z; x, ξ) + e -1 (z; x, ξ)
= exp (za(x, ξ)) + exp (za(x, ξ)) z 2 2i ∂ ξ a(x, ξ) • ∂ x a(x, ξ),
so that e(0; x, ξ) = 1 and
∂ z e = e 0 a + e -1 a + 1 i ∂ ξ e 0 • ∂ x a. (4.5)
According to our assumption that Re a ≥ c |ξ|, we have the simple estimates
(z |ξ|) ℓ exp (za(x, ξ)) ≤ C ℓ .
Therefore
e 0 ∈ C 0 ([-1, 0]; Γ 0 1+r (T d )), e -1 ∈ C 0 ([-1, 0]; Γ -1 r (T d )).
According to (4.5) and Theorem 4.6, then, T ∂ze -T e T a is of order ≤ -r.
Write ∂ z (T e u) = T e f + F, with F ∈ C 0 ([-1, 0]; H s+r (T d ))
and integrate on [-1, 0] to obtain
T 1 u(0) = 0 -1 F (y) dy + 0 -1 (T e f )(y) dy. (4.6) Since F ∈ C 0 ([-1, 0]; H s+r (T d ))
, the first term in the right-hand side belongs to H s+r (T d ). Moreover u(0) -T 1 u(0) ∈ H +∞ (T d ) and hence it remains only to study the second term in the right-hand side of (4.6). Set
u(0) := 0 -1
(T e f )(y) dy.
To prove that u(0) belongs to H s+r (T d ), the key observation is that, since Re a ≥ c |ξ|, the family
{ (|y| |ξ|) r e(y; x, ξ) | -1 ≤ y ≤ 0 } is bounded in Γ r 0 (T d ).
Once this is granted, we use the following result (see [START_REF] Métivier | Para-differential calculus and applications to the cauchy problem for nonlinear systems[END_REF]) about operator norms estimates. Given s ∈ R and m ∈ R, there is a constant C such that, for all τ ∈ Γ m 0 (T d ) and all v ∈ H s+m (T d ),
T τ v H s ≤ C sup |α|≤ d 2 +1 sup |ξ|≥1/2 (1 + |ξ|) |α|-m ∂ α ξ τ (•, ξ) C 0 (T d ) v H s+m . (4.7)
This estimate implies that there is a constant K such that, for all -1 ≤ y ≤ 0 and all v ∈ H s (T d ),
(|y| |D x |) r (T e v) H s ≤ K v H s .
By applying this result we obtain that there is a constant K such that, for all y ∈ [-1, 0[,
(T e f )(y) H s+r ≤ K |y| r f (y) H s . Since |y| -r ∈ L 1 (]-1, 0[), this implies that u(0) ∈ H s+r (T d
). This completes the proof.
Paralinearization of the interior equation
With these preliminaries established, we start the proof of Theorem 2.12.
From now on we fix
s ≥ 3 + d/2 such that s − d/2 ∉ N, σ ∈ H^s(T^d) and ψ ∈ H^s(T^d).
As already explained, we use the change of variables z = yσ(x) to reduce the problem to the fixed domain
{(x, z) ∈ T d × R : z < 0}. That is, we set v(x, z) = ϕ(x, z + σ(x)),
which satisfies
(1 + |∇σ| 2 )∂ 2 z v + ∆v -2∇σ • ∇∂ z v -∂ z v∆σ = 0 in {z < 0}, (4.8)
and the following boundary condition
(1 + |∇σ| 2 )∂ z v -∇σ • ∇v = G(σ)ψ on {z = 0}. (4.9)
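For completeness, (4.8) and (4.9) follow from the chain rule: since ϕ(x, y) = v(x, y − σ(x)), one has ∂_yϕ = ∂_zv and ∇_xϕ = ∇v − (∂_zv)∇σ, hence
\[
\Delta_x\varphi + \partial_y^2\varphi
= \Delta v - 2\nabla\sigma\cdot\nabla\partial_z v + |\nabla\sigma|^2\partial_z^2 v - \Delta\sigma\,\partial_z v + \partial_z^2 v = 0,
\]
which is (4.8), while evaluating ∂_yϕ − ∇σ·∇_xϕ = (1 + |∇σ|²)∂_z v − ∇σ·∇v at z = 0 and comparing with (2.11) gives (4.9).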
Henceforth we denote simply by C 0 (H r ) the space of continuous functions in z ∈ [-1, 0] with values in H r (T d ). By assumption, we have
v ∈ C 0 (H s ), ∂ z v ∈ C 0 (H s-1 ). (4.10)
There is one observation that will be useful below.
Lemma 4.11. For k = 2, 3,
∂ k z v ∈ C 0 (H s-k ). (4.11)
Proof. This follows directly from the equation (4.8), the assumption (4.10) and the classical rule of product in Sobolev spaces which we recall here. For t 1 , t 2 ∈ R, the product maps
H t 1 (T d ) × H t 2 (T d ) to H t (T d ) whenever t 1 + t 2 ≥ 0, t ≤ min{t 1 , t 2 } and t ≤ t 1 + t 2 -d/2,
with the third inequality strict if t 1 or t 2 or -t is equal to d/2 (cf [START_REF] Hörmander | Lectures on nonlinear hyperbolic differential equations[END_REF]).
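For instance, with s ≥ 3 + d/2 this rule gives
\[
u_1 \in H^{s-1}(\mathbf{T}^d),\ u_2 \in H^{s-2}(\mathbf{T}^d) \quad\Longrightarrow\quad u_1 u_2 \in H^{s-2}(\mathbf{T}^d),
\]
which is the case used when expressing ∂²_z v from equation (4.8) in terms of products such as ∇σ·∇∂_z v.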
We use the tangential paradifferential calculus, that is the paradifferential quantization T_a of symbols a(z, x, ξ) depending on the phase space variables (x, ξ) ∈ T^d × (R^d \ 0) and on the parameter z ∈ [−1, 0]. The following result is the key technical point.
Proposition 4.12. The good unknown u = v -T ∂zv σ satisfies the paradifferential equation
T (1+|∇σ| 2 ) ∂ 2 z u -2T ∇σ • ∇∂ z u + ∆u -T ∆σ ∂ z u = f 0 , (4.13)
where
f 0 ∈ C 0 (H 2s-3-d 2 ).
Proof. Introduce the notations
E := (1 + |∇σ| 2 )∂ 2 z -2∇σ • ∇∂ z + ∆ -∆σ∂ z and P := T (1+|∇σ| 2 ) ∂ 2 z -2T ∇σ • ∇∂ z + ∆ -T ∆σ ∂ z .
We begin by proving that v satisfies
Ev -P v -T ∂ 2 z v |∇σ| 2 + 2T ∇∂zv ∇σ + T ∂zv ∆σ ∈ C 0 (H 2s-3-d 2 ). (4.14)
This follows from the paralinearization lemma 4.8, which implies that
∇σ • ∇∂ z v -T ∇σ • ∇∂ z v -T ∇∂zv • ∇σ ∈ C 0 (H 2s-3-d 2 ), |∇σ| 2 ∂ 2 z v -T |∇σ| 2 ∂ 2 z v -T ∂ 2 z v |∇σ| 2 ∈ C 0 (H 2s-3-d 2 ), ∂ z v∆ x σ -T ∂zv ∆σ -T ∆σ ∂ z v ∈ C 0 (H 2s-3-d 2 )
.
We next substitute v = u + T ∂zv σ in (4.14). Directly from the definition of u, we obtain
∂ 2 z u = ∂ 2 z v -T ∂ 3 z v σ, ∇∂ z u = ∇∂ z v -T ∂ 2 z v ∇σ -T ∇∂ 2 z v σ, ∆u = ∆v -T ∂zv ∆σ + 2T ∇∂z v • ∇σ -T ∆∂zv σ. Since (1 + |∇σ| 2 )∂ 2 z v -2∇σ • ∇∂ z v + ∆v -∆σ∂ z v =
0, by using Lemma 4.9 and (4.11) we obtain the key cancelation
T (1+|∇σ| 2 ) T ∂ 3 z v σ -2T ∇σ T ∇∂ 2 z v σ + T ∆∂zv σ -T ∂ 2 z v∆σ σ ∈ C 0 (H 2s-3-d 2 ). (4.15)
Then,
P u -P v + T ∂zv ∆σ -2T ∇σ • T ∂ 2 z v ∇σ -2T ∇∂z v ∇σ ∈ C 0 (H 2s-3-d 2 ), so that
Ev -P u + 2T ∇σ • T ∂ 2 z v ∇σ -T ∂ 2 z v |∇σ| 2 ∈ C 0 (H 2s-3-d 2 ),
The symbolic calculus implies that
2T ∂ 2 z v T ∇σ • ∇σ -T ∂ 2 z v |∇σ| 2 ∈ C 0 (H 2s-2-d 2 ).
This concludes the proof.
Reduction to the boundary
As already mentioned, it is known that, if σ is a C ∞ given function, then the Dirichlet to Neumann operator G(σ) is a classical pseudo-differential operator. The proof of this result is based on elliptic factorization. We here perform this elliptic factorization for the equation for the good unknown.
We next apply this lemma to determine the normal derivatives of u at the boundary in terms of tangential derivatives.
We have just proved that
T_{(1+|∇σ|²)}∂²_z u − 2T_{∇σ}·∇∂_z u + Δu − T_{Δσ}∂_z u = f₀ ∈ C⁰(H^{2s−3−d/2}). (4.16)
Set b := 1/(1 + |∇σ|²).
Since b ∈ H s-1 (T d ), by applying Lemma 4.9, we find that one can equivalently rewrite equation (4.16) as
∂ 2 z u -2T b∇σ • ∇∂ z u + T b ∆u -T b∆σ ∂ z u = f 1 ∈ C 0 (H 2s-3-d 2 ). (4.17)
Following the strategy in [START_REF] Taylor | Pseudodifferential operators[END_REF], we shall perform a full decoupling into a forward and a backward elliptic evolution equations. Recall that the classes Σ m ρ (T d ) have been defined in §4.1.2. Lemma 4.13. There exist two symbols a, A ∈ Σ 1 s-1-d/2 (T d ) such that,
(∂ z -T a )(∂ z -T A )u = f ∈ C 0 (H 2s-3-d 2 ). (4.18)
Proof. We seek a and A in the form
a(x, ξ) = 0≤j<t a 1-j (x, ξ), A(x, ξ) = 0≤j<t A 1-j (x, ξ), (4.19)
where
t := s -3 -d/2, and a m , A m ∈ Γ m t+1+m (-t < m ≤ 1).
We want to solve the system
a♯A := a k ♯A ℓ = -b |ξ| 2 + r(x, ξ), a + A = a k + A k = 2b(i∇σ • ξ) + b∆σ, (4.20)
for some admissible remainder r ∈ Γ -t 0 (T d ). Note that the notation ♯, as given in Theorem 4.6 depends on the regularity of the symbols. To clarify notations, we explicitly set
a♯A := |α|<t+min{k,ℓ} 1 i α α! ∂ α ξ a k ∂ α x A ℓ .
Assume that we have defined a and A such that (4.20) is satisfied, and let us then prove the desired result (4.18). For r ∈ [1, +∞), use the notation
a♯ r b(x, ξ) = |α|<r 1 i α α! ∂ α ξ a(x, ξ)∂ α x b(x, ξ).
Then, Theorem 4.6 implies that
T a 1 T A 1 -T a 1 ♯ s-1 A 1 is of order ≤ 1 + 1 -(s -1) - d 2 = -t, T a 1 T A 0 -T a 1 ♯ s-2 A 0 is of order ≤ 1 + 0 -(s -2) - d 2 = -t, T a 0 T A 1 -T a 0 ♯ s-2 A 1 is of order ≤ 0 + 1 -(s -2) - d 2 = -t,
and, for -t ≤ k, ℓ ≤ 0,
T a k T A ℓ -T a k ♯ s-2+min{k,ℓ} A ℓ is of order ≤ k+ℓ-(s-2+min{k, ℓ})- d 2 ≤ -t-1.
Consequently, T a T A -T a♯A is of order ≤ -t. The first equation in (4.20) then implies that
T a T A u -b∆u ∈ C 0 (H s+t ),
while the second equation directly gives
∂ z T A + T a ∂ z u -2T b∇σ • ∇∂ z u -T b∆σ ∂ z u ∈ C 0 (H s+t ).
We thus obtain the desired result (4.18) from (4.17).
To define the symbols a m , A m , we remark first that a♯A = τ + ω with ω ∈ Γ 3-s 0 (T d ) and
τ := |α|<s-3+k+ℓ 1 i α α! ∂ α ξ a k ∂ α x A ℓ . (4.21)
We then write τ = Σ_m τ_m where τ_m is of order m. Together with the second equation in (4.20), this yields a cascade of equations that allows one to determine a_m and A_m by induction.
Namely, we determine a and A as follows. We first solve the principal system:
a₁A₁ = −b|ξ|², a₁ + A₁ = 2ib∇σ·ξ, by setting
\[
a_1(x,\xi) = ib\,\nabla\sigma\cdot\xi - \sqrt{\,b|\xi|^2 - (b\,\nabla\sigma\cdot\xi)^2\,}, \qquad
A_1(x,\xi) = ib\,\nabla\sigma\cdot\xi + \sqrt{\,b|\xi|^2 - (b\,\nabla\sigma\cdot\xi)^2\,}.
\]
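One checks directly that these expressions do solve the principal system:
\[
a_1 A_1 = (ib\,\nabla\sigma\cdot\xi)^2 - \bigl(b|\xi|^2 - (b\,\nabla\sigma\cdot\xi)^2\bigr) = -b|\xi|^2,
\qquad
a_1 + A_1 = 2ib\,\nabla\sigma\cdot\xi.
\]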
Note that b|ξ| 2 -(b∇σ • ξ) 2 ≥ b 2 |ξ| 2 so that the symbols a 1 , A 1 are well defined and belong to Γ 1 s-1-d/2 (T d ). We next solve the sub-principal system
a 0 A 1 + a 1 A 0 + 1 i ∂ ξ a 1 ∂ x A 1 = 0, a 0 + A 0 = b∆σ.
It is found that
\[
a_0 = \frac{i\,\partial_\xi a_1\cdot\partial_x A_1 - b\Delta\sigma\,a_1}{A_1 - a_1},
\qquad
A_0 = \frac{i\,\partial_\xi a_1\cdot\partial_x A_1 - b\Delta\sigma\,A_1}{a_1 - A_1}.
\]
Once the principal and sub-principal symbols have been defined, one can define the other symbols by induction. By induction, for -t + 1 ≤ m ≤ 0, suppose that a 1 , . . . , a m and A 1 , . . . , A m have been determined. Then define a m-1 and A m-1 by
A m-1 = -a m-1 , and
\[
a_{m-1} = \frac{1}{a_1 - A_1}\sum \frac{1}{i^{|\alpha|}\,\alpha!}\,\partial_\xi^\alpha a_k\,\partial_x^\alpha A_\ell,
\]
where the sum is over all triple (k, ℓ, α)
∈ Z × Z × N d such that m ≤ k ≤ 1, m ≤ ℓ ≤ 1, |α| = k + ℓ -m.
By definition, one has a m , A m ∈ Γ m t+1+m for -t + 1 < m ≤ 0. Also, we obtain that τ = -b |ξ| 2 and a + A = 2b(i∇σ • ξ) + b∆σ.
This completes the proof.
We now complete the reduction to the boundary. As a consequence of the precise parametrix exhibited in Lemma 4.13, we describe the boundary value of ∂ z u up to an error in H 2s-2-d 2 -0 (T d ).
Corollary 4.14. Let ε > 0. On the boundary {z = 0}, there holds
(∂ z u -T A u)| z=0 ∈ H 2s-2-d 2 -ε (T d ), (4.22)
where A is given by Lemma 4.13.
Proof. Introduce w := (∂_z − T_A)u and write a = a₁ + ã, where a₁ ∈ Γ¹₂(T^d) is the principal symbol of a and ã ∈ Γ⁰₀(T^d). Then w satisfies
∂_z w − T_{a₁} w = T_{ã} w + f.
Since f ∈ C⁰(H^{2s−3−d/2}), and since Re a₁ ≤ −K|ξ|, Proposition 4.10 applied with r = 1 − ε implies that
(∂ z u -T A u)| z=0 = w(0) ∈ H 2s-2-d 2 -ε (T d ).
Paralinearization of the Neumann boundary condition
We now conclude the proof of Theorem 2.12. Recall that, by definition,
G(σ)ψ = (1 + |∇σ| 2 )∂ z v -∇σ • ∇v| z=0 .
As before, on {z = 0} we find that
T (1+|∇σ| 2 ) ∂ z v + 2T ∂z v∇σ • ∇σ -T ∇σ • ∇v -T ∇v • ∇σ ∈ H 2s-2-d 2 (T d ).
We next translate this equation to the good unknown u = v -T ∂zv σ. It is found that
T (1+|∇σ| 2 ) ∂ z u -T ∇σ • ∇u -T ∇v-∂z v∇σ • ∇σ + T α σ ∈ H 2s-2-d 2 (T d ), with α = (1 + |∇σ| 2 )∂ 2 z v -∇σ • ∇∂ z v. The interior equation for v implies that α = -div(∇v -∂ z v∇σ), so that T (1+|∇σ| 2 ) ∂ z u -T ∇σ • ∇u -T ∇v-∂zv∇σ • ∇σ -T div(∇v-∂z v∇σ) σ ∈ H 2s-2-d
2 . (4.23) Furthermore, Corollary 4.14 implies that
T (1+|∇σ| 2 ) ∂ z u -T ∇σ • ∇u = T λσ u + R, (4.24)
with R ∈ H 2s-2-d 2 -ε (T d ) and
λ σ = (1 + |∇σ| 2 )A -i∇σ • ξ.
In particular, λ σ ∈ Σ 1 s-1-d/2 (T d ) is a complex-valued elliptic symbol of degree 1, with principal symbol
\[
\lambda^1_\sigma(x,\xi) = \sqrt{(1 + |\nabla\sigma(x)|^2)|\xi|^2 - (\nabla\sigma(x)\cdot\xi)^2}.
\]
By combining (4.23) and (4.24), we conclude the proof of Theorem 2.12.
Regularity of diamond waves
In this section, we prove Theorem 2.5, which is better formulated as follows.
Theorem 5.1. There exist three real-valued functions ν, κ 0 , κ 1 defined on
D_{12}(T²) such that, for all ω = (µ, σ, ψ) ∈ D_s(T²) with s ≥ 12,
i) if there exist δ ∈ [0, 1[ and N ∈ N* such that
\[
\Bigl|\,k_2 - \nu(\omega)k_1^2 + \kappa_0(\omega) + \frac{\kappa_1(\omega)}{k_1^2}\,\Bigr| \ge \frac{1}{k_1^{2+\delta}},
\]
for all (k_1, k_2) ∈ N² with k_1 ≥ N, then (σ, ψ) ∈ H^{s+\frac{1-\delta}{2}}(T²),
and hence (σ, ψ) ∈ C ∞ (T 2 ) by an immediate induction argument.
ii) ν(ω) ≥ 0 and there holds the estimate
\[
\Bigl|\nu(\omega) - \frac{1}{\mu}\Bigr| + \bigl|\kappa_0(\omega) - \kappa_0(\mu,0,0)\bigr| + \bigl|\kappa_1(\omega) - \kappa_1(\mu,0,0)\bigr|
\le C\bigl(\|(\sigma,\psi)\|_{H^{12}}\bigr)\Bigl(\mu + \frac{1}{\mu}\Bigr)\|(\sigma,\psi)\|_{H^{12}}^{2},
\]
for some non-decreasing function C independent of (µ, σ, ψ).
We shall define explicitly the coefficients ν, κ 0 , κ 1 . The proof of the estimate is left to the reader.
Paralinearization of the full system
From now on, we fix ℓ > 0 and s ≥ 12 and consider a given diamond wave (µ, σ, ψ) ∈ D s (T 2 ). Recall that the system reads
\[
G(\sigma)\psi - c\cdot\nabla\sigma = 0, \qquad
\mu\sigma + c\cdot\nabla\psi + \frac12|\nabla\psi|^2 - \frac12\,\frac{\bigl(\nabla\sigma\cdot(\nabla\psi + c)\bigr)^2}{1+|\nabla\sigma|^2} = 0, \tag{5.1}
\]
where c = (1, 0) so that c • ∇ = ∂ x 1 . In analogy with the previous section, we introduce the coefficient
\[
b := \frac{\nabla\sigma\cdot(c + \nabla\psi)}{1 + |\nabla\sigma|^2},
\]
and what we call the good unknown
u := ψ -T b σ.
A word of caution: this corresponds to the trace on {z = 0} of what we called the good unknown (also denoted by u) in the previous section.
The first main step is to paralinearize System (5.1).
Proposition 5.2. The good unknown u = ψ -T b σ and σ satisfy
T λσ u -T V • ∇σ -T div V σ = f 1 ∈ H 2s-5 (T 2 ), (5.2)
T a σ + T V • ∇u = f 2 ∈ H 2s-3 (T 2 ), (5.3)
where the symbol
λ σ = λ σ (x, ξ) ∈ Σ 1 s-2 (T 2
) is as given by Theorem 2.12. The coefficient a = a(x) ∈ R and the vector field V = V (x) ∈ R 2 are given by
V := c + ∇ψ − b∇σ, a := µ + V · ∇b.
Remark 5.3. The Sobolev embedding gives λ_σ ∈ Σ¹_{s−2}(T²) if and only if s ∉ N. For s ∈ N we only have λ_σ ∈ Σ¹_{s−2−ε}(T²) for all ε > 0. Since this changes nothing in the following analysis, we allow ourselves to write abusively λ_σ ∈ Σ¹_{s−2}(T²) for all s ≥ 12.
Proof. The main part of the result, which is (5.2), follows directly from Theorem 2.12 and the regularity result in Remark 2.13. The proof of (5.3) is easier. Indeed, note that for
\[
F(a, b) = \frac{1}{2}\,\frac{(a\cdot b)^2}{1 + |a|^2},
\]
there holds
\[
\partial_b F = \frac{a\cdot b}{1+|a|^2}\,a, \qquad
\partial_a F = \frac{a\cdot b}{1+|a|^2}\Bigl(b - \frac{a\cdot b}{1+|a|^2}\,a\Bigr).
\]
Using these identities for a = ∇σ and b = c + ∇ψ, the paralinearization lemma (i.e. Lemma 4.8) implies that
µσ + T V • ∇ψ -T V b • ∇σ ∈ H 2s-3 (T 2 ).
Since s -1 > d/2, the product rule in Sobolev spaces successively implies that b = ∇σ • (c + ∇ψ)
1 + |∇σ| 2 ∈ H s-1 (T 2 ), V = c + ∇ψ -b∇σ ∈ H s-1 (T 2 ).
Then we use Lemma 4.9 (applied with t = s-1), which implies the following:
T V b • ∇σ -T V • ∇(T b σ) + T V • T ∇b σ = T V b -T V T b • ∇σ ∈ H 2s-3 (T 2 ).
Similarly, Lemma 4.9 applied with t = s -2 implies that
T V • T ∇b σ -T V •∇b σ ∈ H 2s-3 (T 2 ).
As a corollary, with a = µ + V • ∇b, there holds
T a σ + T V • ∇u ∈ H 2s-3 (T 2 ).
This completes the proof.
The Taylor sign condition
Let φ be the harmonic extension of ψ as defined in §2.1, so that
∂ 2 y φ + ∆φ = 0 in Ω, ∂ y φ -∇σ • ∇φ -c • ∇σ = 0 on ∂Ω, µσ + 1 2 |∇φ| 2 + 1 2 (∂ y φ) 2 + c • ∇φ = 0 on ∂Ω, (∇φ, ∂ y φ) → (0, 0) as y → -∞, (5.4)
where
Ω = { (x, y) ∈ R 2 × R | y < σ(x) }.
Define the pressure by
P (x, y) := -µy - 1 2 |∇φ(x, y)| 2 - 1 2 (∂ y φ(x, y)) 2 -c • ∇φ(x, y).
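Note that the second boundary condition on ∂Ω in (5.4) says precisely that the pressure vanishes on the free surface:
\[
P(x, \sigma(x)) = -\mu\sigma(x) - \tfrac{1}{2}|\nabla\phi(x,\sigma(x))|^2 - \tfrac{1}{2}(\partial_y\phi(x,\sigma(x)))^2 - c\cdot\nabla\phi(x,\sigma(x)) = 0,
\]
a fact which is used in the proof of Proposition 5.5 below.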
The following result gives the coefficient a (which appeared in Proposition 5.2) in terms of the pressure P .
Lemma 5.4. There holds a(x) = -(∂ y P )(x, σ(x)).
Proof. We have
a(x) = µ + (c + (∇φ)(x, σ(x))) • ∇ (∂ y φ)(x, σ(x)) .
This yields
a(x) = µ + ∂ y 1 2 |∇φ| 2 + c • ∇φ + (c + ∇φ) • ∇σ∂ 2 y φ (x, σ(x)). The Neumann condition ∂ y φ -∇σ • ∇φ -c • ∇σ = 0 implies that a(x) = ∂ y µy + 1 2 |∇φ| 2 + 1 2 (∂ y φ) 2 + c • ∇φ (x, σ(x)) = -(∂ y P )(x, σ(x)),
which concludes the proof.
The Taylor sign condition is the physical assumption that the normal derivative of the pressure in the flow at the free surface is negative. Since P vanishes identically on the free surface, its tangential derivatives vanish there, so that
\[
\partial_n P = \sqrt{1+|\nabla\sigma|^2}\,\partial_y P \quad\text{on } \{y = \sigma(x)\};
\]
in particular ∂_nP and ∂_yP have the same sign at the free surface. Therefore, the Taylor sign condition reads ∀x ∈ T², (∂_y P)(x, σ(x)) < 0.
(5.5)
It is easily proved that this property is satisfied under a smallness assumption (see [START_REF] Iooss | Small divisor problem in the theory of three-dimensional water gravity waves[END_REF]). Indeed, if ‖(σ, ψ)‖_{C²} = O(ε), then ‖(∂_y P)(x, σ(x)) + µ‖_{L^∞} = O(ε).
Our main observation is that diamond waves satisfy (5.5) automatically: No smallness assumption is required to prove (5.5). This a consequence of the following general proposition, which is a variation of one of Wu's key results ( [START_REF] Wu | Well-posedness in Sobolev spaces of the full water wave problem in 2-D. Invent[END_REF][START_REF] Wu | Well-posedness in Sobolev spaces of the full water wave problem in 3-D[END_REF]). Since the result is not restricted to diamond waves, the following result is stated in a little more generality than is needed.
Proposition 5.5. Let µ > 0 and c ∈ R 2 . If (σ, φ) is a C 2 solution of (5.4) which is doubly periodic in the horizontal variables x 1 and x 2 , then the Taylor sign condition is satisfied: (∂ y P )(x, σ(x)) < 0 for all x ∈ T 2 . As a result it follows from the previous lemma that a(x) > 0, for all x ∈ T 2 .
Remark 5.6. (i) In the case of finite depth, it was shown by Lannes ( [START_REF] Lannes | Well-posedness of the water-waves equations[END_REF]) that the Taylor sign condition is satisfied under an assumption on the second fundamental form of the bottom surface (cf Proposition 4.5 in [START_REF] Lannes | Well-posedness of the water-waves equations[END_REF]).
(ii) Clearly the previous result is false for µ = 0. Indeed, if µ = 0 then (σ, φ) = (0, 0) solves (5.4).
Proof (from [START_REF] Wu | Well-posedness in Sobolev spaces of the full water wave problem in 3-D[END_REF][START_REF] Lannes | Well-posedness of the water-waves equations[END_REF]). We have P = 0 on the free surface {y = σ(x)}. On the other hand, since µ > 0 and since ∇ x,y φ → 0 when y tends to -∞, there exists h > 0 such that P (x, y) ≥ 1 for y ≤ -h.
Define Ω h = {(x, y) ∈ R 2 × R : -h ≤ y ≤ σ(x)}.
Since P is bi-periodic in x, P reaches its minimum on Ω h . The key observation is that the equation ∆ x,y φ = 0 implies that ∆ x,y P = -|∇ y,x ∇ y,x φ| 2 ≤ 0, and hence P is a super-harmonic function. In particular P reaches its minimum on ∂Ω h and at such a point we have ∂ n P < 0. We conclude the proof by means of the following three ingredients: (i) P reaches its minimum on the free surface since P | y=-h ≥ 1 > 0 = P | y=σ(x) ; (ii) P = 0 on the free surface so that P reaches its minimum at any point of the free surface, hence ∂ n P < 0 on {y = σ(x)}; and (iii) ∂ n P = ∂ y P on the free surface .
Remark 5.7. According to the Acknowledgments in [START_REF] Wu | Well-posedness in Sobolev spaces of the full water wave problem in 3-D[END_REF], the idea of expressing a as -(∂ y P )(x, σ(x)) and using the maximum principle to prove that -(∂ y P )(x, σ(x)) < 0 is credited to R. Caflisch and Th. Hou.
After this short détour, we return to the main line of our development. By using the fact that a does not vanish, one can form a second order equation from (5.2)-(5.3). Lemma 5.8. Set
V(x, ξ) := -a(x) -1 (V (x) • ξ) 2 + i div a(x) -1 (V (x) • ξ)V (x) .
(5.6)
Then, T λσ+V u ∈ H 2s-5 (T 2 ).
(5.7)
Remark 5.9. The fact that a is positive implies that the symbol λ σ + V may vanish or be arbitrarily small. If a were negative, the analysis would have been much easier (cf Section 7.1).
Proof. As already seen we have b ∈ H s-1 (T 2 ) and V ∈ H s-1 (T 2 ). Therefore the product rule in Sobolev space implies that
a = µ + V • ∇b ∈ H s-2 (T 2 ).
Then, by applying Theorem 4.6 with ρ = s -3 we obtain that,
T V u -(T V • ∇ + T div V ) T a -1 T V • ∇u ∈ H s+(s-5) (T 2 ).
On the other hand, since a, a -1 ∈ H s-2 (T 2 ), Lemma 4.9 implies that
T a -1 T a -I is of order ≤ -(s -3),
and hence σ -(-T a -1 T V • ∇u) ∈ H 2s-4 (T 2 ).
(5.8)
The desired result (5.7) is an immediate consequence of (5.2)-(5.3).
Notations
The following notations are used continually in this section.
Notation 5.10. i) The set C(o, e) is the set of functions u = u(x₁, x₂) which are odd in x₁ and even in x₂:
u(−x₁, x₂) = −u(x₁, x₂), u(x₁, −x₂) = u(x₁, x₂).
Similarly we define the sets C(o, o), C(e, o) and C(e, e).
ii) The set Γ(o, e) is the set of symbols a = a(x₁, x₂, ξ₁, ξ₂) such that
a(-x 1 , x 2 , -ξ 1 , ξ 2 ) = -a(x 1 , x 2 , ξ 1 , ξ 2 ), a(x 1 , -x 2 , ξ 1 , -ξ 2 ) = a(x 1 , x 2 , ξ 1 , ξ 2 ).
Similarly we define the sets Γ(o, o), Γ(e, o) and Γ(e, e).
Remark 5.11. (i) Directly from the definition of b and the symmetry properties of diamond waves, we check that
\[
b = \frac{\partial_{x_1}\sigma\,(1 + \partial_{x_1}\psi) + \partial_{x_2}\sigma\,\partial_{x_2}\psi}{1 + |\nabla\sigma|^2} \in C(o, e),
\]
and hence u = ψ -T b σ ∈ C(o, e). Also, we check that the vector field
V = (V 1 , V 2 ) = c + ∇ψ -b∇σ is such that V 1 ∈ C(e, e) and V 2 ∈ C(o, o). (ii) If v ∈ C(o, e
) and a ∈ Γ(e, e) then T a v ∈ C(o, e) (provided that T a v is well defined). Clearly, the same property is true for the three other classes of symmetric functions.
To simplify the presentation, we will often only check only one half of the symmetry properties which are claimed in the various statements below. We will only check the symmetries with respect to the axis {x 1 = 0}. To do this, it will be convenient to use the following notation (as in [START_REF] Iooss | Small divisor problem in the theory of three-dimensional water gravity waves[END_REF]).
Notation 5.12. By notation, given z
= (z 1 , z 2 ) ∈ R 2 , z ⋆ = (-z 1 , z 2 ).
Change of variables
We have just proved that T λσ+V u ∈ H 2s-5 (T 2 ). We now study the sum of the principal symbols of λ σ and V. Introduce
\[
p(x,\xi) = \sqrt{(1 + |\nabla\sigma(x)|^2)\,|\xi|^2 - (\nabla\sigma(x)\cdot\xi)^2}\; - \; a(x)^{-1}\,(V(x)\cdot\xi)^2.
\]
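As an illustration, at the trivial solution (σ, ψ) = (0, 0) one has λ¹_σ = |ξ|, a = µ and V = c = (1, 0), so that
\[
p(x,\xi) = |\xi| - \frac{\xi_1^2}{\mu},
\]
whose zero set is precisely the characteristic set {µ|ξ| = ξ₁²} of the linearized problem at the trivial solution (cf. (2.23)).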
By following the analysis in [START_REF] Iooss | Small divisor problem in the theory of three-dimensional water gravity waves[END_REF], we shall prove that there exists a change of variables R 2 ∋ x → χ(x) ∈ R 2 such that p x, t χ ′ (x)ξ has a simple expression.
Since we need to consider change of variables x → χ(x) such that u • χ is doubly periodic whenever u is doubly periodic, we introduce the following definition.
Definition 5.13. Let χ : R 2 → R 2 be a continuously differentiable diffeomorphism. For r > 1, we say that χ is a
C^r(T²)-diffeomorphism if there exists χ̃ ∈ C^r(T²) such that ∀x ∈ R², χ(x) = x + χ̃(x).
(Recall that bi-periodic functions on R 2 are identified with functions on T 2 .)
In this paragraph we show Proposition 5.14. There exist a C s-4 (T 2 )-diffeomorphism χ, a constant ν ≥ 0, a positive function γ ∈ C s-4 (T 2 ) and a symbol α ∈ Γ 0 s-6 (T 2 ) homogeneous of degree 0 in ξ such that, for all (x, ξ)
∈ T 2 × R 2 , p x, t χ ′ (x)ξ = γ(x) |ξ| -νξ 2 1 + iα(x, ξ)ξ 1 ,
and such that the following properties hold: γ ∈ C(e, e), α ∈ Γ(o, e), and χ(x) − x = (χ̃₁, χ̃₂) with χ̃₁ ∈ C(o, e) and χ̃₂ ∈ C(e, o).
Proposition 5.14 will be deduced from the following lemma.
Lemma 5.15. There exists a C s-3 (T 2 )-diffeomorphism χ 1 of the form
χ 1 (x 1 , x 2 ) = x 1 d(x 1 , x 2 )
, such that d solves the transport equation
V (x) • ∇d(x) = 0,
with initial data d(0, x 2 ) = x 2 on x 1 = 0, and such that, for all x ∈ R 2 ,
d(x 1 , x 2 ) = d(-x 1 , x 2 ) = -d(x 1 , -x 2 ), (d ∈ C(e, o)) (5.9) d(x 1 , x 2 ) = d(x 1 + 2π, x 2 ) = d (x 1 , x 2 + 2πℓ) -2πℓ.
(5.10)
Remark 5.16. The result is obvious for the trivial solution (σ, ψ) = (0, 0) with d(x) = x 2 . It can be easily inferred from the analysis in Appendix C in [START_REF] Iooss | Small divisor problem in the theory of three-dimensional water gravity waves[END_REF] that this result is also satisfied in a neighborhood of (0, 0). The new point here is that we prove the result under the only assumption that (σ, φ) satisfies condition (2.2) in Definition 2.2.
Proof. Assumption (2.2) implies that V₁(x) ≠ 0 for all x ∈ T². We first write that, if d satisfies V · ∇d = 0 with initial data d(0, x₂) = x₂ on x₁ = 0, then w = ∂_{x₂}d solves the Cauchy problem
∂ x 1 w + V 2 V 1 ∂ x 2 w + w∂ x 2 V 2 V 1 = 0, w(0, x 2 ) = 1. (5.11)
To study this Cauchy problem, we work in the Sobolev spaces of 2πℓ-periodic functions H s (T ) where T is the circle T := R/(2πℓZ). Since
V 2 /V 1 , ∂ x 2 (V 2 /V 1 ) ∈ H s-2 (T 2 ) ⊂ L ∞ (R; H s-3 (T )),
and since s > 4, standard results for hyperbolic equations imply that (5.11) has a unique solution w ∈ C 0 (R; H s-3 (T )). We define d by
d(x 1 , x 2 ) := x 2 0 w(x 1 , t) dt.
(5.12)
We then obtain a solution of V • ∇d = 0, and we easily checked that
d(x) -x 2 ∈ 0≤j<s-2 C j (R; H s-2-j (T )).
The Sobolev embedding thus implies that d(x) -
x 2 ∈ C s-3 (R 2 ).
We next prove that d satisfies (5.9)-(5.10). Firstly, by uniqueness for the Cauchy problem (5.11), using that V 1 , V 2 are 2πℓ-periodic in x 2 and using the symmetry properties of V 1 , V 2 (cf Remark 5.11) we easily obtain
w(x 1 , x 2 ) = w(-x 1 , x 2 ) = w(x 1 , -x 2 ) = w (x 1 , x 2 + 2πℓ) .
To prove that w is periodic in x₁, following [START_REF] Iooss | Small divisor problem in the theory of three-dimensional water gravity waves[END_REF], we use in an essential way the fact that w is even in x₁ to write w(-π, x₂) = w(+π, x₂), which yields, by uniqueness for the Cauchy problem,
w(x 1 -π, x 2 ) = w(x 1 + π, x 2 ).
This proves that w is 2π-periodic in x 1 . Next, directly from the definition (5.12), we obtain that d is 2π-periodic in x 1 and that d satisfies (5.9). Moreover, this yields
d (x 1 , x 2 + 2πℓ) -d(x 1 , x 2 ) = 2πℓ 0 w(x 1 , x 2 ) dx 2 .
Differentiating the right-hand side with respect to x 1 , and using the identity
∂ x 1 w = -∂ x 2 (V 2 w/V 1 ), we obtain 2πℓ 0 w(x 1 , x 2 ) dx 2 = 2πℓ 0 w(0, x 2 ) dx 2 = 2πℓ,
which completes the proof of (5.10).
We next prove that ∀x ∈ T², w(x) = ∂_{x₂}d(x) ≠ 0.
(5.13) Suppose for contradiction that there exists x ∈ [0, 2π) × [0, 2πℓ) such that w(x) = 0. Set
α = inf{x 1 ∈ [0, 2π) : ∃x 2 ∈ [0, 2πℓ] s.t. w(x 1 , x 2 ) = 0}.
By continuity, there exists y such that w(α, y) = 0. Since w(0, x 2 ) = 1, we have α > 0. For 0 ≤ x 1 < α, we compute that 1/w satisfies
∂ x 1 + V 2 V 1 ∂ x 2 -∂ x 2 V 2 V 1 1 w = 0, 1 w (0, x 2 ) = 1.
Let 0 < δ < 1. By Sobolev embedding, there exists a constant K such that sup
(x 1 ,x 2 )∈[0,δα]×[0,2πℓ] 1 w (x 1 , x 2 ) ≤ K sup x 1 ∈[0,δα] 1 w (x 1 , •) H 1 (T )
.
Therefore, classical energy estimates imply that sup
(x 1 ,x 2 )∈[0,δα]×[0,2πℓ] 1 w (x) ≤ (K + KC)e 4Cα
with C := sup
x 1 ∈[0,δα] V 2 V 1 (x 1 , •) C 2 (T )
.
Therefore, if 0 ≤ x 1 < α, then w(x) > (K + KC) -1 e -4Cα .
By continuity, this implies w(α, ·) > 0, which contradicts w(α, y) = 0 and proves (5.13).
Consequently, det χ′₁(x) ≠ 0 for all x ∈ R². The above argument also establishes that there exists a constant c ≥ 1 such that, for all x ∈ R²,
x 2 c ≤ d(x) ≤ cx 2 .
This implies that χ 1 is a diffeomorphism of R 2 , which completes the proof.
The end of the proof of Proposition 5.14 follows from Section 3 in [START_REF] Iooss | Small divisor problem in the theory of three-dimensional water gravity waves[END_REF]. Firstly, directly from the identity V • ∇d = 0, we obtain
V (x) • ( t χ ′ 1 (x)ξ) = V 1 (x)ξ 1 ,
and hence
p(x, t χ ′ 1 (x)ξ) = λ 1 σ (x, t χ ′ 1 (x)ξ) -a(x) -1 (V 1 (x)ξ 1 ) 2 .
Our next task is to rewrite λ 1 σ (x, t χ ′ 1 (x)ξ) in an appropriate form.
Lemma 5.17. There holds
λ 1 σ (x, t χ ′ 1 (x)ξ) = λ 1 σ x, t χ ′ 1 (x) 0 1 |ξ| + ir(x, ξ)ξ 1 , (5.14)
where r ∈ Γ 0 s-4 (T 2 ) is homogeneous of degree 0 in ξ and such that r ∈ Γ(o, e).
Proof. This follows from the fact that λ 1 σ (x, t χ ′ 1 (x)ξ) is the square root of an homogeneous polynomial function of degree 2 in ξ. Indeed, write λ 1 σ (x, t χ ′ 1 (x)ξ) 2 under the form
λ 1 σ (x, t χ ′ 1 (x)ξ) 2 = P 1 (x)ξ 2 1 + P 12 (x)ξ 1 ξ 2 + P 2 (x)ξ 2 2 .
(5.15)
Then
λ 1 σ (x, t χ ′ 1 (x)ξ) 2 -P 2 (x) |ξ| 2 = (P 1 (x) -P 2 (x)) ξ 2 1 + P 12 (x)ξ 1 ξ 2 .
Directly from (5.15) applied with ξ = e 2 := ( 0 1 ), we get
P 2 (x) = λ 1 σ x, t χ ′ 1 (x)e 2 2 .
(5.16)
Since λ σ ≥ 0, using the elementary identity
√ a - √ b = (a -b)/( √ a + √ b)
, we obtain the desired result (5.14) with
r(x, ξ) = -i (P 1 (x) -P 2 (x)) ξ 1 + P 12 (x)ξ 2 λ 1 σ (x, t χ ′ 1 (x)ξ) + P 2 (x) |ξ| . Since d ∈ C s-3 (R 2
), we verify that r belongs to Γ 0 s-4 (T 2 ). It remains to show that r(x, ξ) ∈ Γ(o, e). To fix idea we study the symmetry with respect to the change of variables (x, ξ) → (x ⋆ , ξ ⋆ ) (see Notation 5.12). We want to prove that r(x ⋆ , ξ ⋆ ) = -r(x, ξ). To see this, r is better written as
r(x, ξ) = -i 1 ξ 1 (λ 1 σ (x, t χ ′ 1 (x)ξ)) 2 -P 2 (x) |ξ| 2 λ 1 σ (x, t χ ′ 1 (x)ξ) + P 2 (x) |ξ| . Note that (5.9) implies that t χ ′ 1 (x ⋆ )ξ ⋆ = t χ ′ 1 (x)ξ ⋆ which in turn implies that λ 1 σ (x ⋆ , t χ ′ 1 (x ⋆ )ξ ⋆ ) = λ 1 σ (x, t χ ′ 1 (x)ξ)
. Moreover, directly from the definition of P 2 we see that P 2 (x ⋆ ) = P 2 (x). This yields the desired result r(x ⋆ , ξ ⋆ ) = -r(x, ξ).
Introduce next the symbol p 1 given by
p 1 (χ 1 (x), ξ) := p(x, t χ ′ 1 (x)ξ).
Lemma 5.18. There exists a constant ν ≥ 0, a C s-4 (T 2 )-diffeomorphism χ 2 , a positive function M ∈ C s-4 (T 2 ) and a symbol α ∈ Γ 0 s-6 (T 2 ) homogeneous of degree 0 in ξ such that, for all (x, ξ)
∈ T 2 × R 2 , p 1 x, t χ ′ 2 (x)ξ = M (x) |ξ| -νξ 2 1 + i α(x, ξ)ξ 1 ,
and such that the following properties hold: M ∈ C(e, e) and α ∈ Γ(o, e), and χ 2 is of the form
χ 2 (x 1 , x 2 ) = x 1 + d(x 1 , x 2 )
x 2 + e(x 2 ) , (5.17)
where d ∈ C s-4 (T 2 ) is odd in x 1 and even in x 2 , and e ∈ C s-3 (R/2πℓZ) is odd in x 2 .
Proof. If χ 2 is of the form (5.17), then
t χ ′ 2 (x) ξ 1 ξ 2 = (1 + ∂ x 1 d(x))ξ 1 ∂ x 2 d(x)ξ 1 + (1 + ∂ x 2 e(x 2 ))ξ 2
.
By transforming the symbols as in the proof of Lemma 5.17, we find that it is sufficient to find ν ≥ 0, d = d(x 1 , x 2 ) and e = e(x 2 ) such that
\[
(1 + \partial_{x_1}d(x_1, x_2))^2 = \nu\,\Gamma(x)\,(1 + \partial_{x_2}e(x_2)), \qquad\text{where } \Gamma(x) := \frac{P_2}{a\,V_1^2}\bigl(\chi_1^{-1}(x)\bigr).
\]
Therefore, we set
\[
d(x_1, x_2) = \int_0^{x_1} \sqrt{\nu\,\Gamma(t, x_2)\,(1 + \partial_{x_2}e(x_2))}\;dt \; - \; x_1 .
\]
Then, d is 2π-periodic in x 1 if and only if,
\[
\forall x_2 \in [0, 2\pi\ell], \qquad \sqrt{\nu\,(1 + \partial_{x_2}e(x_2))}\int_0^{2\pi}\sqrt{\Gamma(x_1, x_2)}\;dx_1 \; - \; 2\pi = 0 .
\]
This yields an equation for e = e(x 2 ) which has a (2πℓ)-periodic solution if and only if ν is given by
\[
\nu := \frac{2\pi}{\ell}\int_0^{2\pi\ell}\Bigl(\int_0^{2\pi}\sqrt{\Gamma(x_1, x_2)}\;dx_1\Bigr)^{-2} dx_2 .
\]
Now, notice that there exists a positive constant C such that P 2 ≥ C (by definition of P 2 , cf (5.16)), a ≥ C (by Proposition 5.5), V 2 1 ≥ C (by the assumption (2.2)). Hence, with ν, d and e as previously determined, we easily check that χ 2 is a C s-4 (T 2 )-diffeomorphism.
To complete the proof of Proposition 5.14, set
χ(x) = χ 2 (χ 1 (x)), to obtain p x, t χ ′ (x)ξ = p x, t χ ′ 1 (x) t χ ′ 2 (χ 1 (x))ξ = p 1 χ 1 (x), t χ ′ 2 (χ 1 (x))ξ = M (χ 1 (x)) |ξ| -νξ 2 1 + i α(χ 1 (x), ξ)ξ 1 ,
so that we obtain the desired result with
γ(x) = M (χ 1 (x)), α(x, ξ) = α(χ 1 (x), ξ).
Paracomposition
To compute the effect of the change of variables x → χ(x), we shall use Alinhac's paracomposition operators. We refer to [START_REF] Alinhac | Paracomposition et opérateurs paradifférentiels[END_REF][START_REF] Taylor | Tools for PDE[END_REF] for general theory about Alinhac's paracomposition operators. We here briefly state the basic definitions and results for periodic functions. Roughly speaking, these results assert that, given r > 1, one can associate to a C r (T 2 )-diffeomorphism χ an operator χ * of order ≤ 0 such that, on the one hand,
∀α ∈ R, u ∈ H α (T 2 ) ⇔ χ * u ∈ H α (T 2 ),
and on the other hand, there is a symbolic calculus to compute the commutator of χ * to a paradifferential operator. Let φ : R → R be a smooth even function with φ(t) = 1 for |t| ≤ 1.1 and φ(t) = 0 for |t| ≥ 1.9. For k ∈ Z, we introduce the symbol
φ k (ξ) = φ 2 -k (1 + |ξ| 2 ) 1/2 ,
and then the operator
∆ k f (ξ) := (φ k (ξ) -φ k-1 (ξ)) f (ξ).
For all temperate distribution f ∈ S ′ (R d ), the spectrum of ∆ k f satisfies spec ∆ k f ⊂ {ξ : 2 k-1 < ξ < 2 k+1 }. Hence ∆ k f = 0 when k < 0. Thus, one has the Littlewood-Paley decomposition:
f = k≥0 ∆ k f. Definition 5.19. Let χ be a C r (T 2 ) diffeomorphism with r > 1. By defini- tion χ * f = j∈N |k-j|≤N ∆ k ((∆ j f ) • χ)) ,
where N is large enough (depending on ‖χ̃‖_{C¹} only, where χ̃(x) = χ(x) − x).
Two of the principal facts about paracomposition operators are the following theorems, whose proofs follow from [START_REF] Alinhac | Paracomposition et opérateurs paradifférentiels[END_REF] by adapting the analysis to the case of C r (T 2 )-diffeomorphisms. The first basic result is that χ * is an operator of order ≤ 0 which can be inverted in the following sense.
Theorem 5.20. Let χ be a C r -diffeomorphism with r > 1. For all α ≥ 0,
f ∈ H α (T 2 ) if and only if χ * f ∈ H α (T 2 ). Moreover, χ * (χ -1 ) * -I is of order ≤ -(r -1).
This theorem reduces the study of the regularity of u to the study of the regularity of χ * u. To study the regularity of χ * u we need to compute the equation satisfied by χ * u. To do this, we shall use a symbolic calculus theorem which allows to compute the equation satisfied by χ * u in terms of the equation satisfied by u (in analogy with the paradifferential calculus). For what follows, it is convenient to work with (χ -1 ) * . Theorem 5.21. Let m ∈ R, r > 1, ρ > 0 and set σ := inf{ρ, r -1}. Consider a C r (T 2 )-diffeomorphism χ and a symbol a ∈ Σ m ρ (T 2 ), then there exists a * ∈ Σ m σ (T 2 ) such that
(χ -1 ) * T a -T a * (χ -1 ) * is of order ≤ m -σ.
Moreover, one can give an explicit formula for a * : If one decomposes a as a sum of homogeneous symbols, then
a * (χ(x), η) = 1 i α α! ∂ α ξ a m-k (x, t χ ′ (x)η)∂ α y (e iΨx(y)•η )| y=x , (5.18)
where the sum is taken over all α ∈ N 2 such that the summand is well defined, χ ′ (x) is the differential of χ, t denotes transpose and
Ψ x (y) = χ(y) -χ(x) -χ ′ (x)(y -x).
(5.19)
The first reduction
We here apply the previous results to perform a first reduction to a case where the "principal" part of the equation has constant coefficient.
Proposition 5.22. Let χ be as given by Proposition 5.14. Then χ -1 * u satisfies an equation of the form
|D x | + ν∂ 2 x 1 + T A ∂ x 1 + T B χ -1 * u = f ∈ H s+2 (T 2 ), ( 5
.20)
where A ∈ Γ 0 s-6 (T 2 ) and B ∈ Σ 0 s-6 (T 2 ) are such that:
i) A is homogeneous of degree 0 in ξ; B = B 0 + B -1 where B ℓ is homoge- neous of degree ℓ in ξ;
ii) A ∈ Γ(o, e) and B ℓ ∈ Γ(e, e) for ℓ = 0, 1.
Remark 5.23. (i)
For what follows it is enough to have remainders in H s+2 (T 2 ). From now on, to simplify the presentation we do not try to give results with remainders having a regularity higher than what is needed.
(ii) A ∈ Γ(o, e) and B ℓ ∈ Γ(e, e) is equivalent to the fact that the symbol of
|D x | + ν∂ 2 x 1 + T A ∂ x 1 + T B is invariant by the symmetries (x 1 , x 2 , ξ 1 , ξ 2 ) → (-x 1 , x 2 , -ξ 1 , ξ 2 ), (x 1 , x 2 , ξ 1 , ξ 2 ) → (x 1 , -x 2 , ξ 1 , -ξ 2 ).
Proof. We begin by applying the results in §5.5 to compute the equation satisfied by χ -1 * u. Recall that, by notation, V is as given by (5.6) and
p(x, ξ) = (1 + |∇σ| 2 ) |ξ| 2 -(∇σ • ξ) 2 -a(x) -1 (V (x) • ξ) 2 .
We define λ * σ by (5.18) applied with m = 1 and ρ = s -1. Similarly, we define V * and p * by (5.18) applied with m = 2 and ρ = s -4. To prove Proposition 5.22, the key step is to compare the principal symbol of p * with λ * σ + V * . Lemma 5.24. There exist
a ∈ Γ 0 s-6 (T 2 ) and b = b 0 + b -1 ∈ Σ 0 s-6 (T 2 ) and a remainder r ∈ Σ -2 s-8 (T 2 ) such that λ * σ (χ(x), ξ) + V * (χ(x), ξ) = p(x, t χ ′ (x)ξ) + ia(x, ξ)ξ 1 + b(x, ξ) + r(x, ξ),
and such that a ∈ Γ(o, e), b ∈ Γ(e, e).
Proof. The proof is elementary. We first study λ * σ (χ(x), ξ). Since λ σ is a symbol of order 1, to obtain a remainder r which is of order -2, we need to compute the first three terms in the symbolic expansion of λ * σ . To do this, note that there are some cancelations which follow directly from the definition (5.19):
|α| = 1 ⇒ ∂ α y (e iΨx(y)•ξ )| y=x = 0, 2 ≤ |α| ≤ 3 ⇒ ∂ α y (e iΨx(y)•ξ )| y=x = i∂ α x (χ(x) • ξ), |α| = 4 ⇒ ∂ α y (e iΨx(y)•ξ )| y=x = i∂ α x (χ(x) • ξ) - ∂ β x (χ(x) • ξ)∂ γ x (χ(x) • ξ),
where in the last line the sum is taken over all decompositions β + γ = α such that |β| = 2 = |γ|. Therefore, it follows from (5.18) that
λ * σ (χ(x), ξ) = λ 1 σ (x, t χ ′ (x)ξ) + b 0 (x, ξ) + b -1 (x, ξ) + r(x, ξ), (5.21)
where r ∈ Σ -2 s-8 (T 2 ) and
b 0 (x, ξ) := λ 0 σ (x, t χ ′ (x)ξ) -i |α|=2 1 α! (∂ α ξ λ 1 σ )(x, t χ ′ (x)ξ)∂ α x χ(x) • ξ, b -1 (x, ξ) := λ -1 σ (x, t χ ′ (x)ξ) + 1 ℓ=0 |α|=2+ℓ 1 i α α! (∂ α ξ λ ℓ σ )(x, t χ ′ (x)ξ)∂ α x χ(x) • ξ - |β|=2=|γ| (∂ β+γ ξ λ 1 σ )(x, t χ ′ (x)ξ)(∂ β x χ(x) • ξ)(∂ γ x χ(x) • ξ).
Recall that χ = (χ 1 , χ 2 ) where χ 1 is odd in x 1 and even in x 2 , and χ 2 is even in x 1 and odd in x 2 . Therefore, to prove the desired symmetry properties, it is sufficient to prove that λ 1 σ , λ 0 σ , λ -1 σ are invariant by the two changes of variables
(x 1 , x 2 , ξ 1 , ξ 2 ) → (-x 1 , x 2 , -ξ 1 , ξ 2 ), (x 1 , x 2 , ξ 1 , ξ 2 ) → (x 1 , -x 2 , ξ 1 , -ξ 2 ).
We consider the first case only and use the notation
f ⋆ (x 1 , x 2 ) = f (-x 1 , x 2 ).
Observe that, since σ ⋆ = σ, it follows directly from the definition of the Dirichlet to Neumann operator (see (2.11)) that
G(σ)f ⋆ = G(σ)f ⋆ .
On the symbol level, this immediately gives the desired result:
λ σ (x ⋆ , ξ ⋆ ) = λ σ (x, ξ).
Alternatively, one may use the explicit definition of the symbols A m given in the proof of Lemma 4.13.
Since, by notation,
V(x, ξ) := -a(x) -1 (V (x) • ξ) 2 + i div a(x) -1 (V (x) • ξ)V (x) ,
the same reasoning implies that
V * (χ(x), ξ) = -a -1 (V (x) • t χ ′ (x)ξ) 2 + ia(x) -1 (V (x) • t χ ′ (x)ξ) div V (x) + ia(x) -1 (V (x) • ∇V (x)) • t χ ′ (x)ξ + ia(x) -1 1≤k,ℓ≤2 V k (x)V ℓ (x)∂ x k ∂ x ℓ (χ(x) • ξ).
Gathering the last two terms in the right-hand side we thus obtain
V * (χ(x), ξ) = -a -1 (V (x) • t χ ′ (x)ξ) 2 + ia(x) -1 (V (x) • t χ ′ (x)ξ) div V (x) + ia(x) -1 V (x) • ∇ V (x) • t χ ′ (x)ξ
and hence, by construction of χ, it follows that
V * (χ(x), ξ) = -a(x) -1 (V (x) • t χ ′ (x)ξ) 2 + ia(x, ξ)ξ 1 ,
where a ∈ Γ 0 s-5 (T 2 ) is such that a ∈ Γ(o, e). This completes the proof of Lemma 5.24.
We are now in position to prove Proposition 5.22. By using Theorem 5.21, it follows from Lemma 5.24 and Proposition 5.14 that
T γ |D x | + ν∂ 2 x 1 + T α+a ∂ x 1 + T b χ -1 * u ∈ H s+2 (T 2 ).
We thus obtain (5.20) with A = (α + a)/γ and B = b/γ.
Elliptic regularity far from the characteristic variety
As usual, the analysis makes use of the division of the phase space into a region in which the inverse of the symbol remains bounded by a fixed constant and a region where the symbol is small and may vanish. Here we consider the easy part and prove the following elliptic regularity result.
Proposition 5.25. Let χ be the diffeomorphism determined in Proposition 5.14. Consider Θ = Θ(ξ) homogenous of degree 0 and such that there exists a constant K such that
|ξ 2 | ≥ K |ξ 1 | ⇒ Θ(ξ 1 , ξ 2 ) = 0.
Then,
Θ(D x )( χ -1 * u) ∈ H s+2 (T 2 ).
Remark 5.26. Note that, on the characteristic variety, we have |ξ 2 | ∼ νξ 2 1 . On the other hand, on the spectrum of Θ(D x )( χ -1 * u), we have |ξ 2 | ≤ K |ξ 1 |. Therefore, the previous result establishes elliptic regularity very far from the characteristic variety.
Proof. Recall that χ -1 * u satisfies (5.20). Set
℘(x, ξ) := |ξ| -νξ 2 1 + iA(x, ξ)ξ 1 + B(x, ξ).
We have
T ℘ ( χ -1 * u) ∈ H s+2 (T 2 ). (5.22)
Note that |℘(x, ξ)| ≥ c |ξ| 2 for some constant c > 0 for all (x, ξ) such that
|ξ 2 | ≤ K |ξ 1 | and |ξ| ≥ M, for some large enough constant M depending on sup x,ξ |A(x, ξ)| + |B(x, ξ)|. Introduce a C ∞ function Θ such that Θ(ξ) = 0 for |ξ| ≤ M, Θ(ξ) = Θ(ξ) for |ξ| ≥ 2M.
Since Θ and Θ differ only on a bounded neighborhood of the origin, we have
Θ(D x )( χ -1 * u) -Θ(D x )( χ -1 * u) ∈ C ∞ (T 2 ).
Note that, since Θ is positively homogeneous of degree 0, Θ belongs to our symbol class ( Θ ∈ Γ 0 ρ (T 2 ) for all ρ ≥ 0). Set
q = Θ ℘ - 1 i ∂ ξ Θ ℘ ∂ x ℘ ℘ ∈ Γ -2 s-8 (T 2 ).
According to Theorem 4.6 applied with (m, m ′ , ρ) = (2, -2, 2), then,
T q T ℘ ( χ -1 * u) -Θ(D x )( χ -1 * u) ∈ H s+2 (T 2 ).
On the other hand, since T q is of order ≤ 0, it follows from (5.22) that
T q T ℘ ( χ -1 * u) ∈ H s+2 (T 2 ).
This completes the proof.
5.8
The second reduction and the end of the proof of Theorem 2.5
We first fix some notation. Introduce a C ∞ function η such that 0 ≤ η ≤ 1,
η(t) = 0 for t ∈ [-1/2, 1/2], η(t) = 1 for |t| ≥ 1. (5.23)
Given k ∈ N, ∂ -k x 1 denotes the Fourier multiplier (defined on S ′ (R d )) with symbol η(ξ 1 )(iξ 1 ) -k . Note that, if f is 2π-periodic in x 1 , then
∂ 0 x 1 f = f - (1/2π) ∫_0^{2π} f (x 1 , x 2 ) dx 1 ,
and
(∂ -1 x 1 f )(x 1 , x 2 ) = x 1 0 f (s, x 2 ) - 1 2π 2π 0 f (x 1 , x 2 ) dx 1 ds.
In particular,
∂ x 1 ∂ -1 x 1 f = ∂ 0 x 1 f = f if and only if f has zero mean value in x 1 . We also have ∂ -k-1 x 1 f = ∂ -k x 1 ∂ -1 x 1 f.
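The following sketch (an illustration only) realises the multipliers ∂ -k x 1 with the FFT and checks the identity ∂ x 1 ∂ -1 x 1 f = ∂ 0 x 1 f on a sample function; the explicit cutoff η below is one admissible choice among many, and on an integer frequency lattice it only acts at ξ 1 = 0.

import numpy as np

n1, n2 = 128, 64
x1 = 2*np.pi*np.arange(n1)/n1
x2 = 2*np.pi*np.arange(n2)/n2                 # the x2-period plays no role here
X1, X2 = np.meshgrid(x1, x2, indexing="ij")
f = np.cos(3*X1)*np.sin(X2) + np.sin(X1) + 0.7        # not mean-free in x1
xi1 = np.fft.fftfreq(n1, d=1.0/n1)

def eta(t):
    # smooth cutoff: 0 on [-1/2, 1/2] and 1 for |t| >= 1
    s = np.clip(2.0*np.abs(t) - 1.0, 0.0, 1.0)
    return s*s*(3.0 - 2.0*s)

def dx1_inv(f, k):
    fh = np.fft.fft(f, axis=0)
    sym = np.zeros(n1, dtype=complex)
    nz = xi1 != 0
    sym[nz] = eta(xi1[nz])*(1j*xi1[nz])**(-k)
    return np.real(np.fft.ifft(sym[:, None]*fh, axis=0))

def dx1(f):
    fh = np.fft.fft(f, axis=0)
    return np.real(np.fft.ifft((1j*xi1)[:, None]*fh, axis=0))

mean_x1 = f.mean(axis=0, keepdims=True)
err = np.max(np.abs(dx1(dx1_inv(f, 1)) - (f - mean_x1)))
print("max error in  d_x1 d^{-1}_x1 f = f - (mean of f in x1):", err)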
It will be convenient to divide the frequency space into three pieces so that, in the two main parts, ξ 2 is either positive or negative. To do this, we need to use Fourier multipliers whose symbols belong to our symbol class, which is necessary to apply symbolic calculus in the forthcoming computations. Here is one way to define such Fourier multipliers: consider a C ∞ function J satisfying 0 ≤ J ≤ 1 and such that,
J(s) = 0 for s ≤ 0.8, J(s) = 1 for s ≥ 0.9, (5.24)
We define three C ∞ functions j 0 , j - and j + by
j 0 = 1 - j - - j + , j -(ξ) = J( (|ξ| - ξ 2 )/(2|ξ|) ), j +(ξ) = J( (|ξ| + ξ 2 )/(2|ξ|) ),
and then the Fourier multipliers j ε (D x ) defined by
(j ε (D x )f)^(ξ) = j ε (ξ) f̂(ξ) (ε ∈ {0, -, +}).
Note that there are constants 0 < c 1 < c 2 such that
ξ 2 ≤ c 1 |ξ 1 | ⇒ j +(ξ) = 0, ξ 2 ≥ c 2 |ξ 1 | ⇒ j +(ξ) = 1, (5.25)
ξ 2 ≥ -c 1 |ξ 1 | ⇒ j -(ξ) = 0, ξ 2 ≤ -c 2 |ξ 1 | ⇒ j -(ξ) = 1. (5.26)
Also, note that j ± is positively homogeneous of degree 0 and hence satisfies
|∂ α ξ j ±(ξ)| ≤ C α |ξ| -|α| .
In view of (5.25) and (5.26), Proposition 5.25 implies that
j 0 (D x )((χ -1 ) * u) ∈ H s+2 (T 2 ). (5.27)
As a result, it remains only to concentrate on the two other terms:
j ± (D x )((χ -1 ) * u).
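For concreteness, here is one admissible choice of J together with a numerical check of the support properties (5.25)-(5.26); with the transition thresholds 0.8 and 0.9 used below, one can take c 1 = 3/4 and c 2 = 4/3. This is an illustration only, and the checks hold up to rounding.

import numpy as np

def J(s):
    # smooth transition: 0 for s <= 0.8 and 1 for s >= 0.9
    t = np.clip((s - 0.8)/0.1, 0.0, 1.0)
    return t*t*(3.0 - 2.0*t)

def j_plus(xi1, xi2):
    r = np.hypot(xi1, xi2)
    return np.where(r > 0, J((r + xi2)/(2*np.maximum(r, 1e-300))), 0.0)

xi1, xi2 = np.meshgrid(np.arange(-200.0, 201.0), np.arange(-200.0, 201.0),
                       indexing="ij")
jp = j_plus(xi1, xi2)

c1, c2 = 3.0/4.0, 4.0/3.0
low  = xi2 <= c1*np.abs(xi1)     # region where (5.25) asserts j+ = 0
high = xi2 >= c2*np.abs(xi1)     # region where (5.25) asserts j+ = 1
print("max of j+ on the low cone :", jp[low].max())
print("min of j+ on the high cone:", jp[high].min())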
Here is one other obvious observation that enables us to reduce the analysis to the study of only one of these two terms: since (χ -1 ) * u is even in x 2 , we have
((χ -1 ) * u)^(ξ 1 , ξ 2 ) = ((χ -1 ) * u)^(ξ 1 , -ξ 2 ).
Therefore,
j -(D x )((χ -1 ) * u) and j +(D x )((χ -1 ) * u) have the same regularity. (5.28)
Consequently, it suffices to study one of these two terms. We choose to work with
U := j +(D x )((χ -1 ) * u).
We shall prove that one can transform further the problem to a linear equation with constant coefficients, using the method of Iooss and Plotnikov [START_REF] Iooss | Small divisor problem in the theory of three-dimensional water gravity waves[END_REF]. The key to proving Theorem 5.1 is the following. Proposition 5.27. There exist two constants κ, κ ′ ∈ R and an operator
Z c = 0≤j≤4 T c j ∂ -j x 1 ,
where c 0 , . . . , c 4 ∈ C 1 (T 2 ) 5 and |c 0 | > 0, such that
-i∂ x 2 + ν∂ 2 x 1 + κ + κ ′ ∂ -2 x 1 Z c U ∈ H s+2 (T 2 ).
(5.29)
Proof. Proposition 5.27 is proved in §5.9 and §5.10.
We here explain how to conclude the proof of Theorem 5.1.
Proof of Theorem 5.1 given Proposition 5.27. Since the symbol of the operator -i∂ x 2 + ν∂ 2 x 1 + κ + κ ′ ∂ -2 x 1 is ξ 2 - νξ 1 2 + κ - κ ′ ξ 1 -2 , we set ν(ω) = ℓν, κ 0 (ω) = -ℓκ, κ 1 (ω) = ℓκ ′ .
Assume that there exist δ ∈ [0, 1[ and N ∈ N * such that,
k 2 -ν(ω)k 2 1 -κ 0 (ω) - κ 1 (ω) k 2 1 ≥ 1 k 2+δ 1 , for all k ∈ N 2 with k 1 sufficiently large.
Directly from the definitions of the coefficients, this assumption implies that
ξ 2 -νξ 2 1 + κ - κ ′ ξ 2 1 ≥ 1 ℓξ 2+δ 1 ,
for all ξ = (ξ 1 , ξ 2 ) ∈ N × (ℓ -1 N) with ξ 1 sufficiently large. Since ν ≥ 0, the previous inequality holds for all ξ ∈ Z × ℓ -1 Z with |ξ 1 | sufficiently large. Now, since |ξ| ∼ νξ 2 1 on the set where the above inequality is not satisfied, this in turn implies that,
ξ 2 -νξ 2 1 + κ - κ ′ ξ 2 1 ≥ ν ℓ |ξ| (2+δ)/2 , (5.30)
for all ξ ∈ Z × (ℓ -1 Z) with |ξ 1 | sufficiently large. Similarly, we obtain that
ξ 2 -νξ 2 1 + κ - κ ′ ξ 2 1 ≥ √ ν |ξ 1 | ℓ |ξ| (3+δ)/2 , (5.31)
for |ξ 1 | sufficiently large.
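The following brute-force sketch is only an illustration of the kind of diophantine bound used above: for one arbitrary choice of the constants ν, κ 0 , κ 1 and δ (these values are assumptions made for the example, not values coming from the problem), it tests whether |k 2 - νk 1 2 - κ 0 - κ 1 /k 1 2 | ≥ 1/k 1 ^{2+δ} holds, the worst k 2 for each k 1 being the nearest integer to νk 1 2 + κ 0 + κ 1 /k 1 2 . Double precision limits how far in k 1 such a test is meaningful.

import numpy as np

nu, kappa0, kappa1, delta = np.sqrt(2.0), 0.37, -0.11, 0.2   # illustrative values
violations = []
for k1 in range(2, 2001):
    target = nu*k1**2 + kappa0 + kappa1/k1**2
    gap = abs(target - round(target))         # distance to the nearest integer k2
    if gap < 1.0/k1**(2 + delta):
        violations.append((k1, gap))
print("number of k1 in [2, 2000] violating the bound:", len(violations))
print("first few violations:", violations[:5])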
To use these inequalities, we take the Fourier transform of (5.29):
ξ 2 -νξ 2 1 + κ - κ ′ η(ξ 1 ) ξ 2 1 Z c U (ξ) =: f (ξ).
A key point is that Z c U is doubly periodic. Thus, if ξ belongs to the support of the Fourier transform of Z c U , then ξ ∈ Z × (ℓ -1 Z). Therefore, it follows from (5.30) and (5.31) that,
Z c U ∈ H s+1-δ 2 (T 2 ), ∂ x 1 Z c U ∈ H s+ 1 2 -δ 2 (T 2 ).
It follows that
U ∈ H s+1-δ/2 (T 2 ), ∂ x 1 U ∈ H s+1/2-δ/2 (T 2 ), and hence σ ∈ H s+1/2-δ/2 (T 2 ). (5.38)
Since T b is of order ≤ 0, this yields
T b σ ∈ H s+ 1 2 -δ 2 (T 2 ). (5.39)
Writing ψ = u + T b σ (by definition of u), from (5.34) and (5.39) we deduce that
ψ ∈ H s+ 1 2 -δ 2 (T 2 ).
This completes the proof of Theorem 5.1 and hence of Theorem 2.5.
Preparation
We have proved that there exist a change of variables x → χ(x) and two zero order symbols A = A(x, ξ) and B(x, ξ) such that
|D x | + ν∂ 2 x 1 + T A ∂ x 1 + T B χ -1 * u = f ∈ H s+2 (T 2 ). (5.40)
We want to prove Proposition 5.27 which asserts that it is possible to conjugate (5.40) to a constant coefficient equation. Since the symbols A and B depend on the frequency variable, one more reduction is needed.
In this paragraph we shall prove the following preliminary result towards the proof of Proposition 5.27.
Proposition 5.28. There exist five functions
a j = a j (x) ∈ C s-6-j (T 2 ) (0 ≤ j ≤ 4),
where a j is odd in x 1 for j ∈ {0, 2, 4}, a j is even in x 1 for j ∈ {1, 3},
such that -i∂ x 2 + ν∂ 2 x 1 + 0≤j≤4 T a j ∂ 1-j x 1 U ∈ H s+2 (T 2 ),
where recall that
U = j + (D x )( χ -1 * u).
To prove this result, we begin with the following localization lemma.
Lemma 5.29. Let A = A(x, ξ) and B = B(x, ξ) be as in (5.20). Then,
|D x | + ν∂ 2 x 1 + T A ∂ x 1 + T B U ∈ H s+2 (T 2 ). (5.41)
Proof. This follows from Proposition 5.22 and Proposition 5.25. Indeed, since j + is positively homogeneous of degree 0, it is a zero-order symbol. According to Theorem 4.6 (applied with a = j +(ξ), b = A(x, ξ)iξ 1 + B(x, ξ) and (m, m ′ , ρ) = (0, 1, 3)), then,
j + (D x ) |D x | + ν∂ 2 x 1 + T A ∂ x 1 + T B ( χ -1 * u) = |D x |+ν∂ 2 x 1 +T A ∂ x 1 +T B U + 1≤|α|≤2 1 i α α! T ∂ α ξ + (ξ)∂ α x b(x,ξ) ( χ -1 * u)+f,
with f ∈ H s+2 (T 2 ). Proposition 5.22 implies that the left-hand side belongs to H s+2 (T 2 ). As regards the second term in the right-hand side, observe that
|ξ 2 | ≥ 3 4 |ξ 1 | ⇒ ∂ ξ + (ξ) = 0. Moreover ∂ α ξ + (ξ)∂ α x b(x, ξ) is of order ≤ 0 in ξ.
Hence, by means of a simple symbolic calculus argument, it follows from Proposition 5.25 that
T ∂ α ξ + (ξ)∂ α x b(x,ξ) ( χ -1 * u) ∈ H s+2 (T 2 ).
This proves the lemma.
We are now in position to transform the equation. To clarify the articulation of the proof, we proceed step by step. We first prove that i) |D x | U may be replaced by -i∂ x 2 U (-i∂ x 2 is the Fourier multiplier with symbol +ξ 2 ). This point essentially follows from the fact that |ξ| ∼ ξ 2 on the support of + (ξ).
ii) One may replace the symbols A and B by a couple of symbols which are symmetric with respect to {ξ 2 = 0} and vanish for |ξ 2 | ≤ |ξ 1 |/5.
Lemma 5.30. There exist two symbols
à ∈ Γ 0 s-6 (T 2 ), B ∈ Σ 0 s-6 (T 2 ) such that -i∂ x 2 + ν∂ 2 x 1 + T Ã∂ x 1 + T B U ∈ H s+2 (T 2 ), (5.42)
and such that i) Ã is homogeneous of degree 0 in ξ; B = B0 + B-1 where Bℓ is homogeneous of degree ℓ in ξ;
ii) Ã(x ⋆ , ξ ⋆ ) = -Ã(x, ξ) and B(x ⋆ , ξ ⋆ ) = B(x, ξ); iii) Ã(x, ξ 1 , -ξ 2 ) = Ã(x, ξ 1 , ξ 2 ) and B(x, ξ 1 , -ξ 2 ) = B(x, ξ 1 , ξ 2 ); iv) Ã(x, ξ) = 0 = B(x, ξ) for |ξ 2 | ≤ |ξ 1 | /5.
Proof. The proof depends on Lemma 5.29 and the fact that the Fourier multiplier + (D x ) is essentially a projection operator. Namely, we make use of a C ∞ function J ′ satisfying 0 ≤ J ′ ≤ 1 and such that, J ′ (s) = 0 for s ≤ 0.7, J ′ (s) = 1 for s ≥ 0.8, and set
′ ± (ξ) = J ′ |ξ| ± ξ 2 2 |ξ| . Then ′ ± (ξ) = 0 for |ξ 2 | ≤ |ξ 1 | /5, and
′ + (ξ) + (ξ) = + (ξ), ′ -(ξ) + (ξ) = 0. ( 5
.43)
With A = A(x, ξ) and B = B(x, ξ) as in (5.20), set
Ã(x, ξ 1 , ξ 2 ) = ′ + (ξ) A(x, ξ 1 , ξ 2 ) - iξ 1 |ξ| + |ξ 2 | + ′ -(ξ) A(x, ξ 1 , -ξ 2 ) - iξ 1 |ξ| + |ξ 2 | , B(x, ξ 1 , ξ 2 ) = ′ + (ξ)B(x, ξ 1 , ξ 2 ) + ′ -(ξ)B(x, ξ 1 , -ξ 2 ).
Note that these symbols satisfy the desired properties.
On the symbol level, we have
|ξ| = |ξ 2 | + ξ 2 1 |ξ| + |ξ 2 | = |ξ 2 | - iξ 1 |ξ| + |ξ 2 | iξ 1 .
On the other hand, by the very definition of paradifferential operators, for any couple of symbols c 1 = c 1 (x, ξ) and c 2 = c 2 (ξ) depending only on ξ, we have
T c 1 T c 2 = T c 1 c 2 .
Therefore, by means of (5.43) we easily check that
-i∂ x 2 + ν∂ 2 x 1 + T Ã∂ x 1 + T B U = |D x | + ν∂ 2 x 1 + T A ∂ x 1 + T B U.
The desired result then follows from (5.41).
To prepare for the next transformation, we need a calculus lemma to handle commutators of the form [T p , ∂ -j x 1 ]. Note that η(ξ 1 )(iξ 1 ) -j does not belong to our symbol classes. However, we have the following result.
Proposition 5.31. Let p ∈ Γ 0 4 (T 2 ) and v ∈ H -∞ (T 2 ) be such that
∂ -5 x 1 v ∈ H s+2 (T 2 ). If π -π T p v dx 1 = 0 = π -π v dx 1 , (5.44)
then
∂ -1 x 1 T p v = T p ∂ -1 x 1 v -T ∂x 1 p ∂ -2 x 1 v + T ∂ 2 x 1 p ∂ -3 x 1 v -T ∂ 3 x 1 p ∂ -4 x 1 v + f,
where f ∈ H s+2 (T 2 ).
Proof. We begin by noticing that (5.44) implies that
∂ 0 x 1 T p v = T p v, ∂ 0 x 1 v = v,
and hence
∂ x 1 ∂ -1 x 1 T p v -T p ∂ -1 x 1 v = T p v -T ∂x 1 p ∂ -1 x 1 v -T p v = -T ∂x 1 p ∂ -1 x 1 v. Since u = ∂ x 1 U implies U = ∂ -1 x 1 u, this yields ∂ -1 x 1 T p v -T p ∂ -1 x 1 v = -∂ -1 x 1 T ∂x 1 p ∂ -1 x 1 v. (5.45)
To repeat this argument we first note that, by definition of ∂ -1
x 1 , we have
π -π ∂ -1 x 1 v dx 1 = 0.
On the other hand,
π -π T ∂x 1 p ∂ -1 x 1 v dx 1 = π -π ∂ x 1 T p ∂ -1 x 1 v -T p ∂ 0 x 1 v dx 1 = π -π ∂ x 1 T p ∂ -1 x 1 v dx 1 - π -π T p v dx 1 = 0,
by periodicity in x 1 and (5.44). We can thus apply (5.45) with (p, v) replaced by (∂ x 1 p, ∂ -1
x 1 v) to obtain
We have an analogous result for commutators [T p , ∂ -j x 1 ] for 2 ≤ j ≤ 4.
Corollary 5.32. Let p ∈ Γ 0 4 (T 2 ) and v ∈ H -∞ (T 2 ) be such that ∂ -5
x 1 v ∈ H s+2 (T 2 ). If π -π T p v dx 1 = 0 = π -π v dx 1 , then ∂ -2 x 1 T p v = T p ∂ -2 x 1 v -T ∂x 1 p ∂ -3 x 1 v + T ∂ 2 x 1 p ∂ -4 x 1 v + f 2 , ∂ -3 x 1 T p v = T p ∂ -3 x 1 v -T ∂x 1 p ∂ -4 x 1 v + f 3 , ∂ -4 x 1 T p v = T p ∂ -4 x 1 v + f 4 , where f 2 , f 3 , f 4 ∈ H s+2 (T 2 ).
Proof. This follows from the previous Proposition.
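As a sanity check of the mechanism behind Proposition 5.31 and Corollary 5.32, one can replace the paraproduct T p by ordinary multiplication by p, which is a genuine simplification and only an assumption made for this illustration, and verify the first identity (5.45) numerically. The functions p and v below are arbitrary apart from the fact that they satisfy the mean-value conditions (5.44); d_inv is the mean-free antiderivative playing the role of ∂ -1 x 1 .

import numpy as np

n = 1024
x = 2*np.pi*np.arange(n)/n
xi = np.fft.fftfreq(n, d=1.0/n)

def d_inv(f):
    fh = np.fft.fft(f)
    out = np.zeros_like(fh)
    nz = xi != 0
    out[nz] = fh[nz]/(1j*xi[nz])
    return np.real(np.fft.ifft(out))

p  = np.cos(x)                  # the "symbol" p(x1)
dp = -np.sin(x)                 # its derivative in x1
v  = np.sin(2*x)                # v and p*v both have zero mean, as in (5.44)

lhs = d_inv(p*v) - p*d_inv(v)
rhs = -d_inv(dp*d_inv(v))
print("max |lhs - rhs| for identity (5.45):", np.max(np.abs(lhs - rhs)))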
An important remark for what follows is that ∂ x 1 and ∂ x 2 do not have the same weight. Roughly speaking, the form of the equation (5.40)
indicates that ν∂ 2 x 1 ∼ |D x | ∼ |∂ x 2 | .
In particular, we shall make extensive use of
ν -1 ∂ -2 x 1 ∼ |D x | -1 .
The following lemma gives this statement a rigorous meaning.
Lemma 5.33. There holds ∂ -2
x 1 U ∈ H s+1 (T 2 ) and ∂ -4
x 1 U ∈ H s+2 (T 2 ).
Proof. Recall that A(x ⋆ , ξ ⋆ ) = -A(x, ξ) and that U is odd in x 1 and 2π-periodic in x 1 , so we have π -π
T A ∂ x 1 U dx 1 = 0, π -π ∂ x 1 U dx 1 = 0, π -π U dx 1 = 0. (5.47) Since |D x | U + ν∂ 2 x 1 U + T A ∂ x 1 U + T B U ∈ H s+2 (T 2 ), by applying Λ -1 ∂ -2
x 1 with Λ -1 = T |ξ| -1 , we have
∂ -2 x 1 U = -νΛ -1 U -Λ -1 ∂ -2 x 1 (T A ∂ x 1 U ) -Λ -1 ∂ -2 x 1 (T B U ) + F,
where F ∈ H s+2 (T 2 ). The first term and the third term in the right-hand side obviously belong to H s+1 (T 2 ). Moving to the second term in the right-hand side, in view of (5.47), the argument establishing (5.45) also gives
∂ -2 x 1 T A ∂ x 1 U = ∂ -1 x 1 T A U -∂ -1 x 1 T ∂x 1 A U.
Hence, using that ∂ -1 x 1 , T A and T ∂x 1 A are of order ≤ 0, we have that
∂ -2 x 1 T A ∂ x 1 U ∈ H s (T 2 ), so that Λ -1 ∂ -2 x 1 (T A ∂ x 1 U ) ∈ H s+1 (T 2
) and hence
∂ -2 x 1 U ∈ H s+1 (T 2 ).
(5.48)
To study ∂ -4
x 1 U we start from
∂ -4 x 1 U = -νΛ -1 ∂ -2 x 1 U -Λ -1 ∂ -4 x 1 (T A ∂ x 1 U ) -Λ -1 ∂ -4 x 1 (T B U ) + ∂ -2 x 1 F,
We have just proved that the first term in the right hand side belongs to H s+2 (T 2 ). With regards to the second term we use the third identity in Corollary 5.32 to obtain
∂ -4 x 1 (T A ∂ x 1 U ) -T A ∂ -3 x 1 U ∈ H s+2 (T 2 ).
On the other hand, by symbolic calculus, Λ
-1 T A -T A Λ -1 is of order ≤ -2. Hence, Λ -1 ∂ -4 x 1 (T A ∂ x 1 U ) -T A Λ -1 ∂ -3 x 1 U ∈ H s+2 (T 2
). In view of (5.48) this yields
Λ -1 ∂ -4 x 1 (T A ∂ x 1 U ) ∈ H s+2 (T 2 ).
Similarly we obtain that Λ -1 ∂ -4
x 1 (T B U ) ∈ H s+2 (T 2 ). We thus end up with
∂ -4 x 1 U ∈ H s+2 (T 2 ),
which concludes the proof.
The following definition is helpful for what follows.
Definition 5.34. We say that an operator R is of anisotropic order ≤ -2 if R is of the form
R = R 0 Λ -2 + R 1 Λ -1 ∂ -2 x 1 + R 2 ∂ -4 x 1 ,
where R 0 , R 1 and R 2 are operators of order ≤ 0 and
Λ -1 = T |ξ| -1 , Λ -2 = T |ξ| -2 .
It follows from the previous lemma that operators of anisotropic order ≤ -2 may be seen as operators of order ≤ -2. Namely, the previous lemma implies the following result.
Lemma 5.35. If R is of anisotropic order ≤ -2, then RU ∈ H s+2 (T 2 ).
With these preliminaries established, to prove Proposition 5.28 the key point is a decomposition of zero-order symbols. We want to decompose zero-order operators as sums of the form 0≤j≤4 T a j ∂ -j
x 1 + R,
where T a j are paraproducts (a j = a j (x) does not depend on ξ) and R is of anisotropic order ≤ -2 plus an admissible remainder. More generally, we consider below symbols of order -2 ≤ m ≤ 0 and not only zero-order symbols.
Lemma 5.36. Let m ∈ {0, 1, 2} and ρ ≥ 0. Let S ∈ Γ -m ρ (T 2 ) be an homogeneous symbol of degree -m in ξ such that
S(x, ξ 1 , -ξ 2 ) = S(x, ξ 1 , ξ 2 ),
and such that, for some positive constant c,
|ξ 2 | ≤ c |ξ 1 | ⇒ S(x, ξ) = 0.
Then, for all v whose spectrum is included in the semi-cone {ξ 2 ≥ c |ξ 1 |},
T S(x,ξ) ∂ x 1 v = 4 j=2m T S j (x) ∂ 1-j x 1 v + Q(-i∂ x 2 + ν∂ 2 x 1 )v + Rv, (5.49)
where R is of anisotropic order ≤ -2, S j (x) = 1 i j ν j j! (∂ j ξ 1 S)(x, 0, 1), (5.50)
and Q = 4 k=2m+1 T q k (x,ξ) ∂ 1-k x 1 ,
where
q k ∈ Γ -1 ρ (T 2 ) for 1 ≤ k ≤ 4 is explicitly defined in (5.55) below and satisfies q k (x, ξ 1 , -ξ 2 ) = q k (x, ξ 1 , ξ 2 ), (5.51
)
and |ξ 2 | ≤ c 2 |ξ 1 | ⇒ q k (x, ξ) = 0.
(5.52) Remark 5.37. This is a variant of the decomposition used in [START_REF] Iooss | Small divisor problem in the theory of three-dimensional water gravity waves[END_REF]. The main difference is that, having performed the reduction to the case where ξ 2 ≥ |ξ 1 | /2, we do not need to consider the so-called elementary operators in [START_REF] Iooss | Small divisor problem in the theory of three-dimensional water gravity waves[END_REF]. Hence we obtain a decomposition where the S j 's do not depend on ξ.
where c 1 := 3 -6ν + 4ν 2 , c 2 := 3ν + 6ν 2 -8ν 3 , c 3 := 3ν 2 + 4ν 4 and c 4 := ν 3 . Similarly, we have
iξ 1 i ξ 1 |ξ 2 | 5 = - 1 ν 3 |ξ 2 | 2 + Q5 (ξ)L(ξ) with Q5 (ξ) := |ξ 2 | 2 + 4ν |ξ 2 | ξ 2 1 + ν 2 ξ 4 1 ν 3 |ξ 2 | 5 .
Our analysis of (5.54) is complete; by inserting the previous identities in (5.54) premultiplied by J c (ξ)η(ξ 1 )iξ 1 and next by dividing by iξ 1 , we obtain the desired decomposition (5.53) with q k (x, ξ) = J c (ξ)q k (x, ξ) where
q1 (x, ξ) = S 1 (x) |ξ 2 | - νS 3 (x) |ξ 2 | 2 + iξ 1 iν 2 S 3 (x)ξ 1 |ξ 2 | 3 - νS 2 (x) |ξ 2 | 2 - (c 3 |ξ 2 | + c 4 ξ 2 1 )S 4 (x) |ξ 2 | 4 + r x, ξ 1 |ξ 2 | Q5 (ξ), q2 (x, ξ) = -S 2 (x) ν 2 |ξ 2 | -i c 2 S 4 |ξ 2 | 2 , q3 (x, ξ) = S 3 (x) ν 3 |ξ 2 | , q4 (x, ξ) = -c 1 S 4 (x) ν 4 |ξ 2 | .
(5.55)
Note that each term making up q k (x, ξ) is well-defined and C ∞ for ξ = 0 and homogeneous of degree -1 or -2 or -3 in ξ, so q k ∈ Γ -1 ρ (T 2 ).
We are now prepared to conclude the proof of Proposition 5.28. We want to prove that there exist five functions
a j = a j (x) ∈ C s-6-j (T 2 ) (0 ≤ j ≤ 4),
where a j is odd in x 1 for j ∈ {0, 2, 4}, a j is even in x 1 for j ∈ {1, 3}, such that
-i∂ x 2 + ν∂ 2 x 1 + 0≤j≤4 T a j ∂ 1-j x 1 U ∈ H s+2 (T 2 ).
(5.56)
To this end, since à and B satisfy properties iii) and iv) in Lemma 5.30, since U = ∂ 0
x 1 U and since the spectrum of U is contained in the semi-cone {ξ 2 ≥ 3 4 |ξ 1 |}, we can use the above symbol-decomposition process to obtain for U an equation of the form
(I + Q) -i∂ x 2 U + ν∂ 2 x 1 U + 0≤j≤4 T α j ∂ 1-j x 1 U = f ∈ H s+2 (T 2 ),
where α j ∈ C s-6 (T 2 ) and
Q = 4 k=1 T q k (x,ξ) ∂ 1-k x 1 ,
where q k ∈ Γ -1 s-6 (T 2 ). Write
(I + Q) -i∂ x 2 U + ν∂ 2 x 1 U + 0≤j≤4 T α j ∂ 1-j x 1 U -Q 0≤j≤4 T α j ∂ 1-j x 1 U = f.
Now, by using Proposition 5.31 and its corollary 5.32, notice that we can write the term
Q 0≤j≤4 T α j ∂ 1-j x 1 U under the form (-1) ℓ T q k T ∂ ℓ x 1 α j ∂ 2-j-k-ℓ x 1 U + F,
where F ∈ H s+2 (T 2 ) where the sum is taken over indices k, j, ℓ such that 1 ≤ k ≤ 4, 0 ≤ j ≤ 4 and j + k + ℓ ≤ 3.
Indeed, for those indices such that j+k+ℓ ≥ 4 the operator
T q k T ∂ ℓ x 1 α j ∂ 2-j-k-ℓ x 1 is of anisotropic order ≤ -2 and hence T q k T ∂ ℓ x 1 α j ∂ 2-j-k-ℓ x 1 U ∈ H s+2 (T 2
), so that these terms can be handled as source terms. Next, noticing that T q k T ∂ ℓ x 1 α j -T q k ∂ ℓ x 1 α j is of order less than -3, this implies that we have
Q 0≤j≤4 T α j ∂ 1-j x 1 U = 0≤p≤4 T Sp ∂ 1-p x 1 .
for some symbols S p of order -1 in ξ, with regularity s-8 in x and satisfying the spectral assumptions in Lemma 5.36. As a consequence, by applying Lemma 5.36 we obtain
Q 0≤j≤4 T α j ∂ 1-j x 1 U = 2≤k≤4 T β k (x) ∂ 1-k x 1 U + Q ′ -i∂ x 2 + ν∂ 2 x 1 U + R ′ U, where β k ∈ C s-8 (T 2 ), R ′ is of anisotropic order ≤ -2 and Q ′ is of the form Q ′ = 4 k=3 T q ′ k (x,ξ) ∂ 1-k x 1 ,
where q ′ k (x, ξ) is of order -1 in ξ (and regularity C s-8 in x). Then
I + Q -Q ′ -i∂ x 2 + ν∂ 2 x 1 + 0≤j≤4 T α j ∂ 1-j x 1 - 2≤k≤4 T β k (x) ∂ 1-k x 1 U + (Q -Q ′ ) 2≤k≤4 T β k (x) ∂ 1-k x 1 U + Q ′ 0≤j≤4 T α j ∂ 1-j x 1 ∈ H s+2 (T 2 ).
Again, one has
(Q -Q ′ ) 2≤k≤4 T β k (x) ∂ 1-k x 1 U + Q ′ 0≤j≤4 T α j ∂ 1-j x 1 = T γ 4 (x) ∂ -3 x 1 + R ′′ U. Now (Q -Q ′ )T γ 4 ∂ -3
x 1 is of anisotropic order ≤ -2. This yields
(I + Q) -i∂ x 2 U + ν∂ 2 x 1 U + 0≤j≤4 T a j ∂ 1-j x 1 U ∈ H s+2 (T 2 ),
where
Q = Q -Q ′ and a 0 = α 0 , a 1 = α 1 , a 2 = α 2 -β 2 , a 3 = α 3 -β 3 , a 4 = α 4 -β 4 + γ 4 .
Now we have an obvious left parametrix for I + Q, in the following sense:
I -Q + Q 2 -Q 3 (I + Q) = I -Q 4 ,
where Q 4 is of order ≤ -4 so that
Q 4 -i∂ x 2 U + ν∂ 2 x 1 U + 0≤j≤4 T a j ∂ 1-j x 1 U ∈ H s+2 (T 2 ).
This gives (5.56).
The symmetries of the coefficients a j can be checked on the explicit expressions which are involved. Indeed, it follows from (5.50) that the function s = (s 0 , . . . , s 4 ) given by Lemma 5.36 satisfies the same symmetry as S does: given ε ∈ {-1, +1} and 0 ≤ j ≤ 4, we have
S(x ⋆ , ξ ⋆ ) = εS(x, ξ) ⇒ S j (x ⋆ ) = ε(-1) j S j (x).
This concludes the proof of Proposition 5.28.
Proof of Proposition 5.27
Given Proposition 5.28, the proof of Proposition 5.27 reduces to establishing the following result. Notation 5.38. Given five complex-valued functions a 0 , . . . , a 4 , we define
Z a = 0≤j≤4 T a j ∂ -j x 1 .
Proposition 5.39. There exist two constants κ, κ ′ ∈ R and five functions c 0 , . . . , c 4 with c j ∈ C 6-j (T 2 ) satisfying |c 0 | > 0 and
c k is even in x 1 for k ∈ {0, 2, 4}, c k is odd in x 1 for k ∈ {1, 3}, (5.57) such that Z c -i∂ x 2 + ν∂ 2 x 1 + Z a ∂ x 1 U = -i∂ x 2 + ν∂ 2 x 1 + κ + κ ′ ∂ -2 x 1 Z c U + f, (5.58)
where f ∈ H s+2 (T 2 ).
Proof. The equation (5.58) is equivalent to a sequence of five transport equations for the coefficients c j (0 ≤ j ≤ 4), which can be solved by induction. Indeed, directly from the Leibniz rule we compute that
(-i∂ x 2 + ν∂ 2 x 1 + κ + κ ′ ∂ -2 x 1 ) Z c U - Z c (-i∂ x 2 + ν∂ 2 x 1 ) U = Z δ ∂ x 1 U,
where
δ 0 = 2ν∂ x 1 c 0 , δ 1 = 2ν∂ x 1 c 1 -i∂ x 2 c 0 + ν∂ 2 x 1 c 0 + c 0 κ, δ 2 = 2ν∂ x 1 c 2 -i∂ x 2 c 1 + ν∂ 2 x 1 c 1 + c 1 κ, δ 3 = 2ν∂ x 1 c 3 -i∂ x 2 c 2 + ν∂ 2 x 1 c 2 + c 2 κ + c 0 κ ′ , δ 4 = 2ν∂ x 1 c 4 -i∂ x 2 c 3 + ν∂ 2 x 1 c 3 + c 3 κ + c 1 κ ′ .
On the other hand, if (5.57) is satisfied, then we can apply Proposition 5.31 and its corollary to obtain
Z c Z a ∂ x 1 U = Z δ ′ ∂ x 1 U + f, where f ∈ H s+2 (T 2 ) and δ ′ k = l+m+n=k c l (-∂ x 1 ) m a n (0 ≤ k, l, m, n ≤ 4).
Hence, our purpose is to define c = (c 0 , . . . , c 4 ) satisfying (5.57) and two constants κ and κ ′ such that δ = δ ′ .
Step 1: Definition of c 0 . We first define the principal symbol c 0 by solving the equation δ 0 = δ ′ 0 , which reads
2ν∂ x 1 c 0 = c 0 a 0 .
We get a unique solution of this equation by imposing the initial condition c 0 (0, x 2 ) = C 0 (x 2 ) on x 1 = 0, where C 0 is to be determined. That is, we set
c 0 (x) = C 0 (x 2 )e γ(x)/(2ν) ,
where γ := ∂ -1
x 1 a 0 . Since a 0 is odd in x 1 , we have π -π a 0 dx 1 = 0 and hence
∂ x 1 γ = a 0 .
Note that, directly from the definition, we have
γ ∈ C s-6 (T 2 ), γ ∈ C(e, e), π -π γ dx 1 = 0, γ(x) ∈ iR.
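The following short symbolic computation is an illustration only; it confirms that, as soon as ∂ x 1 γ = a 0 , the function c 0 = C 0 (x 2 ) e^{γ/(2ν)} solves the transport equation 2ν∂ x 1 c 0 = a 0 c 0 , whatever the (sufficiently smooth) functions C 0 and γ are.

import sympy as sp

x1, x2, nu = sp.symbols("x1 x2 nu", positive=True)
C0 = sp.Function("C0")
gamma = sp.Function("gamma")

c0 = C0(x2)*sp.exp(gamma(x1, x2)/(2*nu))
a0 = sp.diff(gamma(x1, x2), x1)          # by construction gamma = d^{-1}_{x1} a0

residual = sp.simplify(2*nu*sp.diff(c0, x1) - a0*c0)
print(residual)                          # prints 0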
Step 2: Definition of c 1 , C 0 and κ. We next define c 1 by solving δ 1 = δ ′ 1 . This yields 2ν∂ x 1 c 1 - a 0 c 1 = G 1 with G 1 := i∂ x 2 c 0 - ν∂ 2 x 1 c 0 - κc 0 + c 0 a 1 - c 0 ∂ x 1 a 0 , where κ is determined later. We impose the initial condition c 1 (0, x 2 ) = 0 on x 1 = 0, so that
c 1 (x 1 , x 2 ) := 1 2ν e γ/(2ν) x 1 0 e -γ/(2ν) G 1 ds. Note that c 1 is 2π-periodic in x 1 if and only if 2π 0 e -γ/(2ν) G 1 dx 1 = 0. (5.59)
Directly from the definition of G 1 , we compute that
G 1 = e γ/(2ν) i∂ x 2 C 0 + C 0 i 2ν ∂ x 2 γ -κ + a 1 - 3 2 ∂ x 1 a 0 - 1 4ν a 2 0 , Using 2π 0 ∂ x 1 a 0 dx 1 = 0 = 2π 0 γ dx 1 , this gives 2π 0 e γ/(2ν) G 1 dx 1 = 2iπC ′ 0 (x 2 ) + C 0 (x 2 ) 2π 0 -κ + a 1 - a 2 0 4ν dx 1 . Set β(x 2 ) := -2πκ + 2π 0 a 1 - a 2 0 4ν dx 1 . (5.60) so that 2π 0 e -γ/(2ν) G 1 dx 1 = 2iπC ′ 0 (x 2 ) + β(x 2 )C 0 (x 2 ).
We thus define κ by
κ := (1/|T 2 |) ∫_{T 2} ( a 1 - a 0 2 /(4ν) ) dx 1 dx 2 , (5.61)
and hence
C 0 (x 2 ) := exp( -(1/(2iπ)) ∫_0^{x 2} β(s) ds )
is 2πℓ-periodic in x 2 .
With this particular choice of C 0 , the condition (5.59) is satisfied and hence c 1 is bi-periodic.
Moreover, directly from these definitions, we have C 0 ∈ C 6 and, by performing an integration by parts to handle the term x 1 0 e γ/(2ν) (∂ 2 c 1 /∂x 1 2 ) ds, we obtain that c 1 ∈ C 5 (T 2 ).
Step 3: κ ∈ R. It remains to prove that κ ∈ R. To do this, we first observe that a 0 (x) = A(x, 0, 1) where A is given by Proposition 5.22. In particular we easily check that a 0 (x) ∈ iR so that a 0 (x) 2 ∈ R. On the other hand, we claim that Im a 1 (x) is odd in x 2 , (5.62) so that
κ = 1 |T 2 | T 2 Re a 1 - a 2 0 4ν dx 1 dx 2 ∈ R.
To prove (5.62), still with the notations of Proposition 5.22 and Lemma 5.30, we first observe that the identity (5.50) and the definition of a 1 imply that
a 1 (x) = 1 iν (∂ ξ 1 Ã)(x, ξ) + B0 (x, ξ) ξ=(0,1) = 1 iν ∂ ξ 1 A(x, ξ) - iξ 1 |ξ| + |ξ 2 | + B 0 (x, ξ) ξ=(0,1) = 1 iν (∂ ξ 1 A)(x, 0, 1) - 1 2ν
+ B 0 (x, 0, 1), so that Im a 1 (x) = Im B 0 (x, 0, 1). Now, we claim that Im B 0 (x, -ξ) = -Im B 0 (x, ξ).
(5.63) Indeed, this follows from the definition of B 0 (cf Proposition 5.22) and the following symmetry of the symbol λ σ of the Dirichlet to Neumann operator
λ σ (x, ξ) = λ σ (x, -ξ).
(5.64) That (5.64) has to be true is clear since this symmetry means nothing more than the fact that G(σ)f is real-valued for any real-valued function f . Once (5.63) is granted, using B ∈ Γ(e, e) we obtain the desired result:
Im a 1 (x 1 , -x 2 ) = Im B 0 (x 1 , -x 2 , 0, 1) = -Im B 0 (x 1 , -x 2 , 0, -1) = -Im B 0 (x 1 , x 2 , 0, 1) = -Im a 1 (x 1 , x 2 ).
Step 4: General formula. We can now give the scheme of the analysis. For k = 2, 3, 4, we shall define c k inductively by
2ν∂ x 1 e -γ/(2ν) c k = e -γ/(2ν) G k ,
where G k is to be determined by means of the equation
δ k = δ ′ k +δ ′′ k-1 . That is, we set c k (x 1 , x 2 ) = exp γ(x 1 , x 2 ) 2ν C k (x 2 ) + Γ k (x 1 , x 2 ) ,
where C k is to be determined and Γ k is given by
Γ k (x 1 , x 2 ) = 1 2ν x 1 0 exp -γ(s, x 2 ) 2ν G k (s, x 2 ) ds.
As in the previous step, we have to choose the initial data C k-1 (x 2 ) = c k-1 (0, x 2 ) such that Γ k is 2π-periodic in x 1 . Now we note the following: starting from the fact that a 0 , a 2 , a 4 are odd in x 1 and a 1 , a 3 are even in x 1 , we successively check that:
c 1 is odd in x 1 ; G 2 is odd in x 1 ; c 2 is even in x 1 ; G 3 is even in x 1 ; c 3 is odd in x 1 ; G 4 is odd in x 1 . As a result, we have π -π e -γ/(2ν) G 2 dx 1 = 0 = π -π e -γ/(2ν) G 4 dx 1 ,
which in turn implies that Γ 2 and Γ 4 are bi-periodic. Consequently, one can impose
C 1 (x 2 ) = c 1 (0, x 2 ) = 0 and C 3 (x 2 ) = c 3 (0, x 2 ) = 0.
Moreover, we impose C 4 = 0 (there is no restriction on C 4 since we stop the expansion at this order). Therefore, it remains only to prove that one can define C 2 and κ ′ in such a way that Γ 3 is bi-periodic.
Step 5: Definition of C 2 and κ ′ . We turn to the details and compute that
G 3 = i∂ x 2 c 2 -ν∂ 2 x 1 c 2 -(κ + ∂ x 1 a 0 -a 1 )c 2 -κ ′ c 0 + c 0 a 3 + c 1 a 2 -c 1 ∂ x 1 a 1 .
The function c 3 is 2π-periodic in x 1 if and only if 2π 0 e -γ/(2ν) G 3 dx 1 = 0.
(5.65)
Directly from the definition of G 3 , we have
2π 0 e -γ/(2ν) G 3 dx 1 = 2iπC ′ 2 (x 2 ) + C 2 (x 2 ) 2π 0 -κ + a 1 - a 2 0 4ν dx 1 + Γ 2 (x 1 , x 2 ) 2π 0 i 2ν ∂ x 2 γ -κ + a 1 - 3 2 ∂ x 1 a 0 - 1 4ν a 2 0 dx 1 + 2π 0 i∂ x 2 Γ 2 -ν∂ 2 x 1 Γ 2 -∂ x 1 γ∂ x 1 Γ 2 dx 1 + 2π 0 (a 3 -κ ′ )C 0 dx 1 .
Now since a 3 is odd in x 1 , the last term simplifies to -2πκ ′ C 0 (x 2 ). Note also that 2π 0 ∂ 2 x 1 Γ 2 dx 1 = 0, and
2π 0 -∂ x 1 γ∂ x 1 Γ 2 dx 1 = 2π 0 -a 0 ∂ x 1 Γ 2 dx 1 = 2π 0 Γ 2 ∂ x 1 a 0 dx 1 .
By using the previous cancelations, we obtain that for (5.65) to be true, a sufficient condition is that C 2 solves the equation 2iπC ′ 2 (x 2 ) + β(x 2 )C 2 (x 2 ) = F 2 (x 2 ) -2πκ ′ C 0 (x 2 ), (5.66) with
F 2 (x 2 ) := 2π 0 i 2ν ∂ x 2 γ -κ + a 1 - 1 2 ∂ x 1 a 0 - 1 4ν a 2 0 Γ 2 + i∂ x 2 Γ 2 dx 1 .
If we impose the initial condition C 2 = 1 on x 2 = 0, then equation (5.66) has a 2πℓ-periodic solution if and only if κ ′ is given by
κ ′ := 1 |T 2 | 2πℓ 0 exp 1 2iπ x 2 0 β(s) ds F 2 (x 2 ) dx 2 .
We are then in position to define a function C 2 such that c 3 is bi-periodic.
Step 6: κ ′ ∈ R. It remains to prove that κ ′ ∈ R. Firstly, observe that similar arguments to those used in the third step establish that a k (x 1 , x 2 ) = a k (x 1 , -x 2 ) for 0 ≤ k ≤ 4.
(5.67)
Then, we successively check that c 0 (x 1 , x 2 ) = c 0 (x 1 , -x 2 ), c 1 (x 1 , x 2 ) = c 1 (x 1 , -x 2 ), c 2 (x 1 , x 2 ) = c 2 (x 1 , -x 2 ).
(5.68)
To complete the proof we express κ ′ as a function of these coefficients.
Lemma 5.40. There holds
κ ′ = 1 |T| 2 T 2 c 2 c 0 i 2π ∂ x 2 γ + a 1 - 1 2 ∂ x 1 a 0 - 1 4ν a 2 0 -κ dx 1 dx 2 + 1 |T| 2 T 2 c 1 c 0 a 2 -∂ x 1 a 1 + ∂ 2
x 1 a 0 dx 1 dx 2
(5.69)
Proof. Introduce γ 1 = c 1 c 0 , γ 2 = c 2 c 0 , γ 3 = c 3 c 0 .
With this notation, directly from the equation δ 3 = δ ′ 3 we have
κ ′ = -2ν∂ x 1 γ 3 - β 2π γ 2 + i 2ν (∂ x 2 γ)γ 2 -ν∂ 2 x 1 γ 2 - 1 2 a 0 ∂ x 1 γ 2 - 1 4ν a 2 0 γ 2 + 1 2 (∂ x 1 a 0 )γ 2 -κγ 2 + a 3 -∂ x 1 a 2 + ∂ 2 x 1 a 1 -∂ 3 x 1 a 0 + γ 1 a 2 -γ 1 ∂ x 1 a 1 + γ 1 ∂ 2 x 1 a 0 + γ 2 a 1 -γ 2 ∂ x 1 a 0 ,
where we used various cancelations. By integrating over T 2 , taking into account obvious cancelations of the form π -π ∂ x 1 f dx 1 = 0 and integrating by parts in x 1 the term a 0 ∂ x 1 γ 2 dx 1 dx 2 , we obtain the desired identity. Using (5.69), (5.67) and (5.68), we obtain that κ ′ coincides with its complex conjugate and hence κ ′ ∈ R.
This completes the proof of the proposition, and hence of Theorem 2.5.
The small divisor condition for families of diamond waves
In this section, we prove Proposition 2.10, whose statement is recalled here. Proposition 6.1. Let δ and δ ′ be such that
1 > δ > δ ′ > 0.
Let ν = ν(ε), κ 0 = κ 0 (ε) and κ 1 = κ 1 (ε) be three real-valued functions defined on [0, 1] such that ν(ε) = ν + ν ′ ε 2 + εϕ 1 (ε 2 ), κ 0 (ε) = κ 0 + ϕ 2 (ε 2 ), κ 1 (ε) = κ 1 + ϕ 3 (ε 2 ), (6.1) for some constants ν, ν ′ , κ 0 , κ 1 with ν ′ ≠ 0, and three Lipschitz functions ϕ j : [0, 1] → R satisfying ϕ j (0) = 0. Assume in addition that there exists n ≥ 2 such that
k 2 -νk 2 1 -κ 0 ≥ 1 k 1+δ ′ 1 , (6.2)
for all k ∈ N 2 with k 1 ≥ n. Then there exist K > 0, r 0 > 0, N 0 ∈ N and a set A ⊂ [0, 1] satisfying
∀r ∈ [0, r 0 ], 1 r |A ∩ [0, r]| ≥ 1 -Kr δ-δ ′ 3+δ ′ , such that, if ε 2 ∈ A and k 1 ≥ N 0 then k 2 -ν(ε)k 2 1 -κ 0 (ε) - κ 1 (ε) k 2 1 ≥ 1 k 2+δ 1 ,
for all k 2 ∈ N.
Proof. As in [START_REF] Iooss | Small divisor problem in the theory of three-dimensional water gravity waves[END_REF], it is convenient to introduce
λ = ε 2 , ν(λ) = ν( √ λ), κ0 (λ) = κ 0 ( √ λ), κ1 (λ) = κ 1 ( √ λ).
Introduce also the notations
d(k) = d(k 1 , k 2 ) := k 2 /k 1 2 - ν - κ 0 /k 1 2 - κ 1 /k 1 4 and ω(λ, k 1 ) := ν(λ) - ν + (κ 0 (λ) - κ 0 )/k 1 2 + (κ 1 (λ) - κ 1 )/k 1 4 .
By assumption, ω(λ, k 1 ) = ν ′ λ + λ^{1/2} ϕ 1 (λ) + ϕ 2 (λ)/k 1 2 + ϕ 3 (λ)/k 1 4 , where ϕ j (λ) (j = 1, 2, 3) is a Lipschitz function vanishing at the origin. Therefore, for k 1 large enough and λ small enough, we have
(|ν ′ |/2) λ ≤ |ω(λ, k 1 )| ≤ 2|ν ′ | λ. (6.3)
Similarly, for k 1 large enough and λ, λ ′ small enough,
(|ν ′ |/2) |λ - λ ′ | ≤ |ω(λ, k 1 ) - ω(λ ′ , k 1 )|. (6.4)
Given r > 0 and N ∈ N * , introduce the set
A(r, N ) = ∩_{k 2 ∈N} ∩_{k 1 ≥N} { λ ∈ [0, r] : |d(k 1 , k 2 ) - ω(λ, k 1 )| ≥ 1/k 1 ^{4+δ} }.
We have to prove that there exist r 0 > 0, K > 0 and N 0 ∈ N * such that
∀r ∈ ]0, r 0 ], |[0, r] \ A(r, N 0 )| ≤ K r^{(3+δ)/(3+δ ′ )}. (6.5)
Note that for N ∈ N * ,
[0, r] \ A(r, N ) = ∪_{k 1 ≥N} ∪_{k 2 ∈N} E(r, k 1 , k 2 ) with E(r, k 1 , k 2 ) = { λ ∈ [0, r] : |d(k 1 , k 2 ) - ω(λ, k 1 )| < 1/k 1 ^{4+δ} }.
The proof then depends on the following lemma.
Lemma 6.2. There exist r 0 > 0, N 0 ∈ N and three positive constants A j , j = 1, 2, 3, such that, for all r ∈ [0, r 0 ], for all k 1 ≥ N 0 and for all k 2 ∈ N, we have
1. If E(r, k 1 , k 2 ) ≠ ∅ then k 1 ≥ A 1 r^{-1/(3+δ ′ )}. (6.6)
2. #{m ∈ N : E(r, k 1 , m) ≠ ∅} ≤ A 2 (r k 1 2 + 1).
3. |E(r, k 1 , k 2 )| ≤ A 3 k 1 ^{-4-δ}.
Assume this lemma for a moment and continue the proof. This lemma implies that, for r ∈ [0, r 0 ] and k 1 ≥ N 0 , we have
| ∪_{k 2 ∈N} E(r, k 1 , k 2 ) | ≤ A 2 A 3 (r k 1 2 + 1)/k 1 ^{4+δ}.
Consequently,
|[0, r] \ A(r, N 0 )| ≤ A 2 A 3 k 1 ≥A 1 r -1 3+δ ′ rk 2 1 + 1 k 4+δ 1 ≤ A r × r 1+δ 3+δ ′ + r 3+δ 3+δ ′
, for some positive constant A independent of r, which yields the desired bound (6.5).
It remains to prove Lemma 6.2. Suppose that E(r, k 1 , k 2 ) ≠ ∅ and choose λ ∈ E(r, k 1 , k 2 ). To prove (6.6), we first use assumption (6.2) which reads
1 k 3+δ ′ 1 ≤ k 2 k 2 1 -ν - κ 0 k 2 1 ,
for k 1 large enough. Hence, by definition of d(k 1 , k 2 ),
1 k 3+δ ′ 1 ≤ |d(k 1 , k 2 )| + κ 1 k 4 1 .
Now, by the triangle inequality, we have
1 k 3+δ ′ 1 ≤ |d(k 1 , k 2 ) -ω(λ, k 1 )| + |ω(λ, k 1 )| + κ 1 k 4 1 .
By definition, if λ ∈ E(r, k 1 , k 2 ) then the first term in the right hand side is bounded by k -4-δ
1
, so
1 k 3+δ ′ 1 ≤ 1 k 4+δ 1 + |ω(λ, k 1 )| + κ 1 k 4 1 .
Hence, since 0 < δ and δ ′ < 1, we have 1 2
1 k 3+δ ′ 1 ≤ |ω(λ, k 1 )| ,
for k 1 large enough. Now we use the simple estimate (6.3) valid for k 1 large enough and λ small enough, to obtain
1 k 3+δ ′ 1 ≤ 4 ν ′ λ ≤ 4 ν ′ r.
This establishes the first claim in Lemma 6.2.
To prove the second claim, note that, given r > 0 and k 1 , if E(r, k 1 , k 2 ) ≠ ∅ and E(r, k 1 , k ′ 2 ) ≠ ∅ then there exist λ and λ ′ such that
|d(k 1 , k 2 ) -ω(λ, k 1 )| ≤ 1 k 4+δ 1 , d(k 1 , k ′ 2 ) -ω(λ ′ , k 1 ) ≤ 1 k 4+δ 1 .
Now by the triangle inequality, using (6.3), this yields
k 2 k 2 1 - k ′ 2 k 2 1 = = d(k 1 , k 2 ) -d(k 1 , k ′ 2 ) ≤ |d(k 1 , k 2 ) -ω(λ, k 1 )| + d(k 1 , k ′
2 )ω(λ ′ , k 1 ) + ω(λ, k 1 )ω(λ ′ , k 1 )
≤ 2 k 4+δ 1 + 2 ν ′ (λ + λ ′ ) ≤ 2 k 4+δ 1 + 4 ν ′ r.
Multiplying both sides by k 2 1 we obtain the bound
k 2 -k ′ 2 ≤ 2 k 2+δ 1 + 4 ν ′ rk 2 1 ,
and hence
#{m ∈ N : E(r, k 1 , m) ≠ ∅} ≤ 1 + 2/k 1 ^{2+δ} + (4/|ν ′ |) r k 1 2 .
It remains to prove the last claim. Introduce
λ min (r, k 1 , k 2 ) = inf E(r, k 1 , k 2 ), λ max (r, k 1 , k 2 ) = sup E(r, k 1 , k 2 ).
Then, using (6.4) we find
|E(r, k 1 , k 2 )| ≤ λ max (r, k 1 , k 2 ) -λ min (r, k 1 , k 2 ) ≤ 2 |ν ′ | ω λ max (r, k 1 , k 2 ), k 1 -ω λ min (r, k 1 , k 2 ), k 1 ≤ 2 |ν ′ | ω λ max (r, k 1 , k 2 ), k 1 -d(k 1 , k 2 ) + d(k 1 , k 2 ) -ω λ min (r, k 1 , k 2 ), k 1 ≤ 4 |ν ′ | k -4-δ 1 .
This completes the proof of Lemma 6.2 and hence the proof of Proposition 2.10.
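The following numerical sketch illustrates the mechanism behind item 3 of Lemma 6.2: since ∂ λ ω(λ, k 1 ) is comparable to ν ′ , each set E(r, k 1 , k 2 ) is contained in an interval of length of order k 1 ^{-4-δ}. The function ω, the constants and the placement of the resonance are toy choices made only for the illustration, and the grid resolution limits the accuracy of the measured lengths.

import numpy as np

nu_p, delta = 1.3, 0.2

def omega(lam, k1):
    # toy omega: nu'*lam plus small Lipschitz corrections vanishing at 0
    return nu_p*lam + 0.05*lam*np.sqrt(lam) + 0.02*lam/k1**2

r = 0.05
lam = np.linspace(0.0, r, 2_000_001)
for k1 in [10, 20, 30]:
    eps = 1.0/k1**(4 + delta)
    d_val = omega(0.6*r, k1)                 # place a resonance inside [0, r]
    E = np.abs(d_val - omega(lam, k1)) < eps
    measured = E.mean()*r
    print(f"k1 = {k1:3d}  |E| ~ {measured:.3e}  bound (4/|nu'|) k1^(-4-delta) = {4/nu_p*eps:.3e}")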
Remark 6.3. The previous proof follows essentially the analysis in [START_REF] Iooss | Small divisor problem in the theory of three-dimensional water gravity waves[END_REF]. The key difference is that, in [START_REF] Iooss | Small divisor problem in the theory of three-dimensional water gravity waves[END_REF], the authors need to prove that a diophantine condition of the form
k 2 -ν(ε)k 2 1 -κ 0 (ε) ≥ 1 k 2 1 , (6.7)
is satisfied for all ε 2 ∈ A for some set A satisfying lim What makes the proof of the above Theorem simple is that we proved only Proof. By using symbolic calculus, we begin by observing that we have a localization property. Consider two cutoff functions χ ′ ∈ C ∞ 0 (ω ′ ) and χ ∈ C ∞ 0 (R d ) such that χ = 1 on ω and χ ′ = 1 on ω ′ . Then ũ = χ ′ ψ -T χb χ ′ σ and σ = χ ′ σ satisfy
T λ 1 σ ũ -T V • ∇σ = ψ ∈ H s (ω ′ ), (7.2)
T a σ + T V • ∇ũ = θ ∈ H 2s-3 (ω ′ ), (7.3) where recall that
λ 1 σ (x, ξ) = (1 + |∇σ(x)| 2 )|ξ| 2 -(∇σ(x) • ξ) 2 .
The strategy of the proof is very simple: We next form a second order equation from (7.2)-(7.3). The assumption a(x 0 ) < 0 implies that the operator thus obtained is quasi-homogeneous elliptic, which implies the desired sub-elliptic regularity for System (7.2)-(7.3). Namely, we claim that ũ ∈ H α+ 1 2 (ω ′ ) and σ ∈ H α (ω ′ ) with α := min s + 1 2 , 2s -3 > s.
To prove this claim, we set Λ = (1 -∆) 1 2 and use the Gårding's inequality for paradifferential operators, to obtain that there are constants C and c > 0 such that
ℜ T V • ∇σ, Λ 2α ũ L 2 + T V • ∇ũ, Λ 2α σ L 2 ≤ C ũ H α σ H α , c ũ 2 H α+ 1 2 ≤ ℜ T λ 1 σ ũ, Λ 2α ũ L 2 + C ũ 2 H α , c σ 2 H α ≤ ℜ T a σ, -Λ 2α σ L 2 + C σ 2 H α-1 2 .
Therefore, taking the scalar product of the equations (7.2) and (7.3) by Λ 2α ũ and -Λ 2α σ respectively, and adding the real parts, implies that
c ũ 2 H α+ 1 2 + c σ 2 H α ≤ C θ H α-1 2 ũ H α+ 1 2 + C ψ H α σ H α + C ũ H α σ H α + C ũ 2 H α + C σ 2 H α-1 2 ,
and the claim follows.
As a consequence we find that σ ∈ H α (ω ′ ) and u ∈ H α+ 1 2 (ω ′ ). Going back to ψ = u + T b σ, we obtain that ψ ∈ H α (ω ′ ). This finishes the proof of Theorem 7.1.
∂ y φ - ∇σ · ∇φ - c · ∇σ = 0 on ∂Ω, μσ + (1/2)|∇φ| 2 + (1/2)(∂ y φ) 2 + c · ∇φ = 0 on ∂Ω, (∇φ, ∂ y φ) → (0, 0) as y → -∞. (2.1)
Proposition 4.10. Let r ∈ [0, 1[, a(x, ξ) ∈ Γ 1 1+r (T d ) and b(x, ξ) ∈ Γ 0 0 (T d ). Assume that there exists c > 0 such that
(x, ξ) ∈ T * T d and possibly on the parameter z ∈ [-1, 0]. Based on the discussion earlier, to paralinearize the interior equation (4.8), it is natural to introduce what we call the good unknown u := v -T ∂zv σ. (4.12) (A word of caution: ψ -T b σ, which is what we called the good unknown in the introduction, corresponds to the trace on {z = 0} of u.)
Notation 5.10. i) The set C(o, e) is the set of functions f = f (x 1 , x 2 ) which are odd in x 1 and even in x 2 . Similarly we define the sets C(o, o), C(e, o) and C(e, e).
Remark 5.11. (i) With these notations, by assumption we have σ ∈ C(e, e) and ψ ∈ C(o, e) so that b
i) χ = (χ 1 , χ 2 ) where χ 1 ∈ C(o, e) and χ 2 ∈ C(e, o); ii) α ∈ Γ(o, e), γ ∈ C(e, e).
|ξ 2 | ≤ |ξ 1 |/5. The idea is that, since U (ξ) = 0 for ξ 2 ≤ |ξ 1 |/2, for any c < 1/2 one may freely change the values of A(x, ξ) and B(x, ξ) for ξ 2 ≤ c |ξ 1 |.
lim_{r→0} (1/r) |A ∩ [0, r]| = 1. This corresponds to the case δ = 0 of the above theorem (which we precluded by assumption). Now, observe that in this case the above analysis only gives |[0, r] \ A(r, N 0 )| ≲ r^{3/(3+δ ′ )}. Then to pass from this bound to |[0, r] \ A(r, N 0 )| = o(r), one has to gain extra decay in r. To do this, Iooss and Plotnikov use an ergodic argument.
We do not explain here the way we define pseudo-differential operators with symbols of limited smoothness since this problem will be fixed by using paradifferential operators, and since all that matters in (2.15) is the regularity of the remainder term r(σ, ψ).
∂ -1 x 1 T ∂x 1 p ∂ -1 x 1 v -T ∂x 1 p ∂ -2 x 1 v = -∂ -1 x 1 T ∂ 2 x 1 p ∂ -2 x 1 v.(5.46)By inserting this result in (5.45) we obtain∂ -1 x 1 T p v = T p ∂ -1 x 1 v -T ∂x 1 p ∂ -2 x 1 v + ∂ -1 x 1 T ∂ 2 x 1 p ∂ -2 x 1 v.By repeating this reasoning we end up with∂ -1 x 1 T p v = T p ∂ -1 x 1 v -T ∂x 1 p ∂ -2 x 1 v + T ∂ 2 x 1 p ∂ -3 x 1 v -T ∂ 3 x 1 p ∂ -4 x 1 v + f, where f = T ∂ 4 x 1 p ∂ -5 x 1 v -∂ -1 x 1 T ∂ 4 x 1 p ∂ -5 x 1 v.By assumption ∂ -5x 1 v ∈ H s+2 (T 2 ) and ∂ 4x 1 p ∈ Γ 0 0 (T 2 ) so that T ∂ 4 x 1 p is of order ≤ 0. Therefore we obtain f ∈ H s+2 (T 2 ), which concludes the proof.
The question is to prove the a priori regularity of known travelling waves solutions to the water waves equations. We here start an analysis of this problem for diamond waves, which are three-dimensional doubly periodic travelling gravity waves whose fundamental domain is a symmetric diamond. The existence of such waves was established by Iooss and Plotnikov in a recent beautiful memoir ( [START_REF] Iooss | Small divisor problem in the theory of three-dimensional water gravity waves[END_REF]).
In view of (5.27) and (5.28), we end up with χ -1 * u ∈ H s+1-δ 2 (T 2 ), (5.32)
(5.33)
Directly from (5.32), by using Theorem 5.20, we obtain u ∈ H s+1-δ 2 (T 2 ).
(5.34)
We next claim that
To prove (5.35), we first note that Theorem 5.20 implies that it is enough to prove that
To prove this result, we shall apply our second theorem about paracomposition operators. Indeed, define the symbol a by
Theorem 5.21 implies that
Consequently, in view of (5.34), we obtain
(5.36)
Now, directly from the definition of χ (see §3.1), we compute that
Therefore, (5.33) implies that
Consequently, the claim (5.35) follows from (5.36). Now recall that we have proved that (cf. (5.8))
(5.37)
Since T a -1 is of order ≤ 0, by using (5.35) we obtain
Proof. The proof is based on the following simple observation: ξ → ξ 1 is transverse to the characteristic variety. We prove Lemma 5.36 for m = 0. Let J c be a real-valued function, homogeneous of degree 0 such that J c (ξ 1 , ξ 2 ) = J c (ξ 1 , -ξ 2 ) and
We shall prove that
where S j , q k are as in the statement of the lemma and R ′ is such that the operator R ′ ∂ x 1 is of anisotropic order ≤ -2. This yields the desired result (5.49) since
by assumption on the spectrum ofv.
For m = 0, S is an homogeneous symbol of degree 0 in ξ. By the symmetry hypothesis S(x, ξ 1 , ξ 2 ) = S(x, ξ 1 , -ξ 2 ), we can write S(x, ξ) as
(if ξ 2 = 0 then S(x, ξ) = 0 by assumption) so we have
where r is given by Taylor's formula. Next, by setting
To see this, write
1
to obtain the desired identities with Q j (ξ) = J c (ξ) Qj (ξ) where
, that a weaker diophantine condition is satisfied. (Here "weaker diophantine condition" refers to the fact that, if (6.7) is satisfied then (2.9) is satisfied for any δ ≥ 0.) In particular, this discussion clearly shows that it is simpler to prove that (2.3) is satisfied for some δ > 0 than for δ = 0. This gives a precise meaning to what we claimed in the introduction: our paradifferential strategy may be used to simplify the analysis of the small divisors problems.
7 Two elliptic cases
7.1 When the Taylor condition is not satisfied
where x ∈ T 2 , and f 1 , f 2 are given C ∞ functions.
Our goal here is to show that the problem is much easier in the case where the Taylor sign condition is not satisfied. To make this more precise, set a := µ + V • ∇b with b := ∇σ • ∇ψ 1 + |∇σ| 2 , V := ∇ψ -b∇σ.
We prove a local hypoellipticity result near boundary points (x, σ(x)) where a < 0. We prove that, if σ ∈ H s and φ ∈ H s for some s > 3 near a boundary point (x 0 , σ(x 0 )) such that a(x 0 ) < 0, then σ, φ are C ∞ near (x 0 , σ(x 0 )). (This can be improved; the result remains valid under the weaker assumption that σ, φ ∈ C s with s > 2 for x ∈ T d with d ≥ 1. Yet, we will not address this issue.)
The main observation is that, in the case where a < 0, the boundary problem (7.1) is weakly elliptic. Consequently, any term which has the regularity of the unknowns can be seen as an admissible remainder for the paralinearization of the first boundary condition (that is why we can localize the estimates). In addition, the fact that the problem is weakly elliptic implies that we can obtain the desired sub-elliptic estimates by a simple integration by parts argument. To localize in Sobolev space, we use the following notation: given an open subset ω ⊂ R d and a distribution u ∈
Theorem 7.1. Let s > 4 and consider an open domain ω ⊂⊂ T 2 . Suppose that (σ, ψ) ∈ H s (ω) satisfies System 7.1 and a(x) < 0 for all x ∈ ω. Then, for all ω ′ ⊂⊂ ω, there holds (σ, ψ) ∈ H s+1/2 (ω ′ ).
Capillary gravity waves
In this section, we prove a priori regularity for the system obtained by adding surface tension:
where H(σ) denotes the mean curvature of the free surface {y = σ(x)}:
Recently there have been some results concerning a priori regularity for the solutions of this system, first for the two-dimensional case by Matei [START_REF] Matei | The Neumann problem for free boundaries in two dimensions[END_REF], and second for the general case d ≥ 2 by Craig and Matei [START_REF] Craig | Sur la régularité des ondes progressives à la surface de l'eau[END_REF][START_REF] Craig | On the regularity of the Neumann problem for free surfaces with surface tension[END_REF]. Independently, there is also the paper by Koch, Leoni and Morini [START_REF] Koch | On optimal regularity of free boundary problems and a conjecture of De Giorgi[END_REF] which is motivated by the study of the Mumford-Shah functional. Craig and Matei proved C ω regularity for C 2+α solutions, and Koch, Leoni and Morini proved this result for C 2 solutions (they also note that the result holds true for C 1 viscosity solutions). Both proofs rely upon the hodograph and Legendre transforms introduced in this context by Kinderlehrer, Nirenberg and Spruck in the well-known papers [START_REF] Kinderlehrer | Regularity in free boundary problems[END_REF][START_REF] Kinderlehrer | Regularity in elliptic free boundary problems[END_REF][START_REF] Kinderlehrer | Regularity in elliptic free boundary problems[END_REF]. Here, as a corollary of Theorem 2.12, we prove that C 3 solutions are C ∞ , without change of variables, by using the hidden ellipticity given by surface tension. To emphasize this point, the following result is stated in a little more generality than is needed.
where F is a smooth function of its arguments, then (σ, ψ) ∈ C ∞ (R d ).
Proof. By using standard regularity results for quasi-linear elliptic PDE, we prove that if (σ, ψ) ∈ C m for some m ≥ 2, then (σ, ψ) ∈ C m+1-ε for any ε > 0. For instance, it follows from Theorem 2.2.D in [START_REF] Taylor | Pseudodifferential operators and nonlinear PDE[END_REF] that,
for any δ > 0. As a result, it follows from the paralinearization formula for the Dirichlet to Neumann operator (cf Remark 2.16 after Theorem 2.12) that,
T λ 1 σ (ψ - T b σ) ∈ C m-δ ′ for any δ ′ > 0, where λ 1 σ is the principal symbol of the Dirichlet to Neumann operator. Since λ 1 σ is a first-order elliptic symbol with regularity at least C 1 in x, this implies that ψ - T b σ ∈ C m+1-δ ′′ and hence ψ ∈ C m+1-δ ′′ for any δ ′′ > 0.
email: [email protected]
The forecasting power of the mucin-microbiome axis in livestock respiratory diseases
Keywords: livestock, mucin, microbiome, ncRNAs, respiratory diseases
Complex respiratory diseases are a significant challenge for the livestock industry worldwide. These diseases cause severe economic losses but also have a considerable impact on animal health and welfare. One of the first lines of pathogen defense combines the respiratory tract mucus, a highly viscous material primarily formed by mucin, and a thriving multi-kingdom microbial ecosystem, referred to herein as the mucin-microbiome axis. This axis can be considered a mighty two-edged sword, as its usual function is to protect from unwanted substances and organisms at one arm's length, while its dysfunction may be a clue for pathogen infection and respiratory disease onset. We further learned that the structure and function of this axis might be modulated by noncoding regulatory RNAs (e.g., microRNAs, long RNAs). This opinion paper unearths the current understanding of the triangular relationship of mucins, holobiont, and noncoding RNAs and the wide range of functions exhibited by the mucinmicrobiome axis under respiratory infection settings. There is a need to look at these molecular underpinnings that dictate distinct health and disease outcomes to implement effective prevention, surveillance, and timely intervention strategies tailored to the different epidemiological contexts.
Introduction
Complex respiratory diseases are a significant problem in livestock production, particularly in intensive systems. These diseases can cause significant economic losses due to reduced productivity, increased morbidity, premature mortality, treatment costs, and severe consequences for public health and the environment [START_REF] Bond | Upper and lower respiratory tract microbiota in horses: Bacterial communities associated with health and mild asthma (inflammatory airway disease) and effects of dexamethasone[END_REF][START_REF] Oladunni | EHV-1: A constant threat to the horse industry[END_REF][START_REF] Ericsson | Respiratory dysbiosis and population-wide temporal dynamics in canine chronic bronchitis and non-inflammatory respiratory disease[END_REF][START_REF] Ericsson | Composition and predicted metabolic capacity of upper and lower airway microbiota of healthy dogs in relation to the fecal microbiota[END_REF][START_REF] Mach | The Airway Pathobiome in Complex Respiratory Diseases: A Perspective in Domestic Animals[END_REF] . In livestock physiopathology, there is a growing awareness that infectious agents frequently do not operate alone, and their virulence can be affected by multispecies synergic interactions [START_REF] Bond | Upper and lower respiratory tract microbiota in horses: Bacterial communities associated with health and mild asthma (inflammatory airway disease) and effects of dexamethasone[END_REF][START_REF] Oladunni | EHV-1: A constant threat to the horse industry[END_REF][START_REF] Ericsson | Respiratory dysbiosis and population-wide temporal dynamics in canine chronic bronchitis and non-inflammatory respiratory disease[END_REF][START_REF] Ericsson | Composition and predicted metabolic capacity of upper and lower airway microbiota of healthy dogs in relation to the fecal microbiota[END_REF][START_REF] Kuiken | Pathogen surveillance in animals[END_REF][START_REF] Blakebrough-Hall | An evaluation of the economic effects of bovine respiratory disease on animal performance, carcass traits, and economic outcomes in feedlot cattle defined using four BRD diagnosis methods[END_REF][START_REF] Holt | BPEX Pig Health Scheme: A useful monitoring system for respiratory disease control in pig farms?[END_REF] . Consequently, a central finding of disease complexes involves interactions among holobionts (the host and the many other microorganisms living in or around it [START_REF] Simon | Host-microbiota interactions: From holobiont theory to analysis[END_REF] ) and multiple etiological agents [START_REF] Mach | The Airway Pathobiome in Complex Respiratory Diseases: A Perspective in Domestic Animals[END_REF] .
Undoubtedly, complex respiratory diseases entail multifactorial processes whose mechanisms are still not fully understood. New evidence has shown that the airway microbiota, defined as the complex community of microorganisms living in the respiratory tract, including bacteria, eukaryotes (especially yeast and protists), and archaea [START_REF] Zeineldin | Contribution of the Mucosal Microbiota to Bovine Respiratory Health[END_REF] , might act as a gatekeeper that provides resistance to infection on the mucosal surface [START_REF] Li | The commensal microbiota and viral infection: A comprehensive review[END_REF][START_REF] Man | The microbiota of the respiratory tract: Gatekeeper to respiratory health[END_REF] . Under normal physiological conditions, the diverse community of commensal microorganisms maintains a mutualistic relationship with the host, acting as the prime educator and maintainer of the airway's innate and adaptive immune functions [START_REF] Mach | The Airway Pathobiome in Complex Respiratory Diseases: A Perspective in Domestic Animals[END_REF][START_REF] Zeineldin | Contribution of the Mucosal Microbiota to Bovine Respiratory Health[END_REF] . Classifying healthy versus diseased individuals based on their airway microbiome has been done successfully for ruminants [START_REF] Zeineldin | Meta-analysis of bovine respiratory microbiota: Link between respiratory microbiota and bovine respiratory health[END_REF][START_REF] Mcmullen | Topography of the respiratory tract bacterial microbiota in cattle[END_REF][START_REF] Zeineldin | Contribution of the Mucosal Microbiota to Bovine Respiratory Health[END_REF][START_REF] Gaeta | Deciphering upper respiratory tract microbiota complexity in healthy calves and calves that develop respiratory disease using shotgun metagenomics[END_REF][START_REF] Chai | Bovine respiratory microbiota of feedlot cattle and its association with disease[END_REF][START_REF] Holman | The nasopharyngeal microbiota of feedlot cattle that develop bovine respiratory disease[END_REF][START_REF] Nicola | Characterization of the upper and lower respiratory tract microbiota in Piedmontese calves[END_REF][START_REF] Zeineldin | Disparity in the nasopharyngeal microbiota between healthy cattle on feed, at entry processing and with respiratory disease[END_REF][START_REF] Mcmullen | Comparison of the nasopharyngeal bacterial microbiota of beef calves raised without the use of antimicrobials between healthy calves and those diagnosed with bovine respiratory disease[END_REF][START_REF] Timsit | Effects of nasal instillation of a nitric oxide-releasing solution or parenteral administration of tilmicosin on the nasopharyngeal microbiota of beef feedlot cattle at high-risk of developing respiratory tract disease[END_REF] , pigs [START_REF] Wang | Comparison of Oropharyngeal Microbiota in Healthy Piglets and Piglets With Respiratory Disease[END_REF][START_REF] Correa-Fiz | Antimicrobial removal on piglets promotes health and higher bacterial diversity in the nasal microbiota[END_REF][START_REF] Correa-Fiz | Piglet nasal microbiota at weaning may influence the development of Glässer's disease during the rearing period[END_REF][START_REF] Mahmmod | Variations in association of nasal microbiota with virulent and non-virulent strains of Glaesserella (Haemophilus) parasuis in weaning piglets[END_REF] , horses [START_REF] Bond | Effects of nebulized dexamethasone on the respiratory microbiota and mycobiota and relative equine herpesvirus-1, 2, 4, 5 in an equine model of 
asthma[END_REF] , and chickens [START_REF] Ngunjiri | Farm stage, bird age, and body site dominantly affect the quantity, taxonomic composition, and dynamics of respiratory and gut microbiota of commercial layer chickens[END_REF][START_REF] Yitbarek | Commensal gut microbiota can modulate adaptive immune responses in chickens vaccinated with whole inactivated avian influenza virus subtype H9N2[END_REF][START_REF] Yitbarek | Gut microbiota-mediated protection against influenza virus subtype H9N2 in chickens is associated with modulation of the innate responses[END_REF] . However, we can take one step further and broaden our view of the dynamics of respiratory diseases in livestock to include an additional aspect of the complex system: the respiratory mucus.
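As an illustration of how such classification studies are typically set up, the sketch below trains a cross-validated random-forest classifier on a genus-level relative-abundance table. It is a minimal, hypothetical example: the genus list, group sizes, and simulated abundances are placeholders rather than data from the studies cited above.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical genus-level relative-abundance table (rows = nasopharyngeal
# swabs, columns = genera); real inputs would come from 16S rRNA profiling.
genera = ["Mycoplasma", "Mannheimia", "Pasteurella", "Moraxella", "Lactobacillus"]
n_healthy, n_diseased = 40, 40
healthy = rng.dirichlet(np.ones(len(genera)) * 5.0, n_healthy)
diseased = rng.dirichlet(np.array([8.0, 8.0, 6.0, 2.0, 1.0]), n_diseased)  # shifted toward pathobionts

X = pd.DataFrame(np.vstack([healthy, diseased]), columns=genera)
y = np.array([0] * n_healthy + [1] * n_diseased)  # 0 = healthy, 1 = respiratory disease

# Centred log-ratio transform is common for compositional data; a small
# pseudocount avoids log(0).
clr = np.log(X + 1e-6)
clr = clr.sub(clr.mean(axis=1), axis=0)

clf = RandomForestClassifier(n_estimators=500, random_state=0)
scores = cross_val_score(clf, clr, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {scores.mean():.2f} +/- {scores.std():.2f}")

clf.fit(clr, y)
for g, imp in sorted(zip(genera, clf.feature_importances_), key=lambda t: -t[1]):
    print(f"{g:15s} importance = {imp:.3f}")
```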
While mucus has historically been viewed as a simple physical barrier, recent work has suggested that mucins, the major gel-forming components of mucus, have many structural and functional roles in the respiratory system [START_REF] Ridley | Mucins: the frontline defence of the lung[END_REF][START_REF] Atanasova | Strategies for measuring airway mucus and mucins[END_REF][START_REF] Rose | Respiratory Tract Mucin Genes and Mucin Glycoproteins in Health and Disease[END_REF][START_REF] Thornton | Structure and Function of the Polymeric Mucins in Airways Mucus[END_REF][START_REF] Hansson | Mucus and mucins in diseases of the intestinal and respiratory tracts[END_REF][START_REF] Thai | Regulation of Airway Mucin Gene Expression[END_REF] . Mucins are high-molecular-weight glycosylated proteins carrying hundreds of branching sugar chains (O-linked glycans), which represent up to 80% of mucin weight. Through this rich biochemistry, the O-linked glycans on mucins (collectively termed the glycome) confer a wealth of physical and functional properties on the respiratory tract [START_REF] Shipunov | Glycome assessment in patients with respiratory diseases[END_REF] . First, the glycans on mucins provide a physical barrier that traps and clears inhaled particles and a chemical barrier that neutralizes toxins, allergens, pollutants, and pathogens [START_REF] Atanasova | Strategies for measuring airway mucus and mucins[END_REF][START_REF] Rose | Respiratory Tract Mucin Genes and Mucin Glycoproteins in Health and Disease[END_REF][START_REF] Reily | Glycosylation in health and disease[END_REF][START_REF] Qin | The host glycomic response to pathogens[END_REF][START_REF] Schnaar | Glycans and glycan-binding proteins in immune regulation: A concise introduction to glycobiology for the allergist[END_REF][START_REF] Brazil | Finding the sweet spot: glycosylation mediated regulation of intestinal inflammation[END_REF] . This limits the growth and colonization of pathogens and their adhesion to and invasion of the respiratory epithelium. Second, in addition to their role as a physical and chemical barrier, glycans on mucins have also been shown to have immune-modulating properties and may regulate inflammation [START_REF] Hansson | Mucus and mucins in diseases of the intestinal and respiratory tracts[END_REF] . However, mucin glycans are a double-edged sword, as they can also help invading pathogens subvert the host immune machinery. Beyond these protective properties, the mucin glycome shapes the composition and functionality of the respiratory microbiota while serving as an environmental niche and a carbon and nitrogen source [START_REF] Bergstrom | Core 1-and 3-derived Oglycans collectively maintain the colonic mucus barrier and protect against spontaneous colitis in mice[END_REF][START_REF] Rose | Respiratory Tract Mucin Genes and Mucin Glycoproteins in Health and Disease[END_REF][START_REF] Chatterjee | Defensive Properties of Mucin Glycoproteins during Respiratory Infections-Relevance for SARS-CoV-2. Garsin DA[END_REF] . Reciprocally, the composition and activity of the respiratory tract microbiota can influence mucin production and quality while stimulating the immune system and protecting against pathogen infection [START_REF] Pérez-Cobas | Ecology of the respiratory tract microbiome[END_REF] . Any disruption in the display or function of mucins or in the glycosylation pattern of their glycans can lead to dysbiosis and respiratory disease, potentially increasing the risk of respiratory infections in livestock.
The next complexity is that mucin production, secretion, and glycosylation are tightly regulated at the genomic level by non-coding RNAs (ncRNAs). The glycans on mucins are products of multiple glycosyltransferases and glycosidases working in a coordinated manner to synthesize structures appended to proteins [START_REF] Thu | Sweet Control: MicroRNA Regulation of the Glycome[END_REF] . Increasing evidence shows that ncRNAs are critical regulators of cellular and biological processes in living organisms, including mucin modifications [START_REF] Thu | Sweet Control: MicroRNA Regulation of the Glycome[END_REF][START_REF] Agrawal | Mapping posttranscriptional regulation of the human glycome uncovers microRNA defining the glycocode[END_REF] . ncRNAs are commonly classified by length into small ncRNAs (sncRNAs) and long ncRNAs (lncRNAs) [START_REF] Weber | The MicroRNA Spectrum in 12 Body Fluids[END_REF] . One type of ncRNA implicated in mucin glycome regulation is the microRNA (miRNA) [START_REF] Thu | Sweet Control: MicroRNA Regulation of the Glycome[END_REF][START_REF] Agrawal | Mapping posttranscriptional regulation of the human glycome uncovers microRNA defining the glycocode[END_REF][START_REF] Ng | High-Throughput Analysis Reveals miRNA Upregulating α-2,6-Sialic Acid through Direct miRNA-mRNA Interactions[END_REF] . miRNAs are small (~22 nucleotide) RNA molecules that bind to messenger RNAs (mRNAs) and regulate their stability and translation into protein. Several miRNAs have been shown to fine-tune glycan biosynthetic enzymes and regulate the expression of mucin genes in the respiratory tract of humans, including miR-34b/c 50 , miR-146a [START_REF] Zhong | MiR-146a negatively regulates neutrophil elastase-induced MUC5AC secretion from 16HBE human bronchial epithelial cells[END_REF] , miR-378 [START_REF] Skrzypek | Interplay Between Heme Oxygenase-1 and miR-378 Affects Non-Small Cell Lung Carcinoma Growth, Vascularization, and Metastasis[END_REF] , and miR-141 [START_REF] Siddiqui | Epithelial miR-141 regulates IL-13-induced airway mucus production[END_REF] .
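Because a miRNA recognizes its targets largely through base-pairing of its 5' seed (nucleotides 2-8) with the 3'-UTR of an mRNA, a first-pass computational screen can simply scan candidate UTRs for the reverse complement of that seed. The sketch below illustrates the idea; the miRNA and 3'-UTR sequences are invented placeholders, not the actual MUC5AC/MUC5B sequences or the validated sites of the miRNAs cited above.

```python
# Minimal seed-match scan: report seed-style matches of a miRNA seed
# (positions 2-8) within candidate mucin 3'-UTR sequences.

def revcomp(seq: str) -> str:
    comp = {"A": "U", "U": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(seq))

def seed_sites(mirna: str, utr: str) -> list[int]:
    """Return 0-based UTR positions matching the seed (nt 2-8) of the miRNA."""
    seed = mirna[1:8]                 # nucleotides 2-8 of the mature miRNA
    site = revcomp(seed)              # what the mRNA must contain to pair with it
    return [i for i in range(len(utr) - len(site) + 1) if utr[i:i + len(site)] == site]

# Placeholder sequences (RNA alphabet) -- purely illustrative.
mirnas = {"miR-X34": "UGGCAGUGUCUUAGCUGGUUGU",
          "miR-X146": "UGAGAACUGAAUUCCAUGGGUU"}
utrs = {"MUC5AC_3UTR": "AAACUGCCAUUUACACUGCCAAGGACUCAAUUCAGUUCUCAA",
        "MUC5B_3UTR":  "GGGACUGCCAGUUCUCAUUACUGCCAUUUGGGACUGCCAUAA"}

for mir, mseq in mirnas.items():
    for gene, useq in utrs.items():
        hits = seed_sites(mseq, useq)
        if hits:
            print(f"{mir} -> {gene}: candidate seed sites at positions {hits}")
```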
Determining the underlying causes of complex respiratory disease in farm animals is complicated. Species of veterinary interest are subjected to different host variables, environments, and pathogens, which could all play a role in disease, alone or in concert. From a One-Health-One Welfare perspective, this opinion paper aims to give insights into potential drivers of complex respiratory disease, building on the surge of recent primary research to debate different aspects of the complex and intricate relationships between pathogens, the holobiont, mucins, and their genetic regulation. Understanding these mechanisms will be crucial to determine how they can be harnessed to develop novel interventions that prevent infection and improve animal health and welfare.
Food-producing animal complexes: holobionts in a polymicrobial environment
Respiratory complex diseases constitute a significant cause of morbidity and mortality in livestock, for which prevention, prompt diagnosis, and targeted treatment are essential [START_REF] Bond | Upper and lower respiratory tract microbiota in horses: Bacterial communities associated with health and mild asthma (inflammatory airway disease) and effects of dexamethasone[END_REF][START_REF] Oladunni | EHV-1: A constant threat to the horse industry[END_REF][START_REF] Ericsson | Respiratory dysbiosis and population-wide temporal dynamics in canine chronic bronchitis and non-inflammatory respiratory disease[END_REF][START_REF] Ericsson | Composition and predicted metabolic capacity of upper and lower airway microbiota of healthy dogs in relation to the fecal microbiota[END_REF][START_REF] Mach | The Airway Pathobiome in Complex Respiratory Diseases: A Perspective in Domestic Animals[END_REF] . For example, the bovine respiratory disease complex (BRDC) is a leading cause of morbidity and economic losses in wealthy countries, especially for calves newly arrived at feedlots, with morbidity ranging from 30% in Belgium [START_REF] Van Leenen | Comparison of bronchoalveolar lavage fluid bacteriology and cytology in calves classified based on combined clinical scoring and lung ultrasonography[END_REF] to 49% in Switzerland, and up to 90% in the U.S.A. [START_REF] Hilton | BRD in 2014: Where have we been, where are we now, and where do we want to go?[END_REF] . Similarly, sheep respiratory disease affects many animals [START_REF] Lacasta | Preface: Special issue on sheep respiratory diseases[END_REF] , leading to significant indirect losses, such as carcass condemnations, treatments, and decreased production [START_REF] Lacasta | Preface: Special issue on sheep respiratory diseases[END_REF] . Analogously, the prevalence of the porcine respiratory disease complex (PRDC) in finishing pigs continues to grow [START_REF] Qin | Viral communities associated with porcine respiratory disease complex in intensive commercial farms in Sichuan province, China[END_REF] , with a morbidity rate ranging from 10% in Denmark [START_REF] Hansen | An investigation of the pathology and pathogens associated with porcine respiratory disease complex in Denmark[END_REF] to 40% in the U.S.A [START_REF] Harms | Three cases of porcine respiratory disease complex associated with porcine circovirus type 2 infection[END_REF] . As in other common livestock species, the respiratory disease complex remains widespread in commercial birds [START_REF] Guinat | Spatio-temporal patterns of highly pathogenic avian influenza virus subtype H5N8 spread, France, 2016 to 2017[END_REF][START_REF] Filaire | Highly Pathogenic Avian Influenza A(H5N8) Clade 2.3.4.4b Virus in Dust Samples from Poultry Farms, France, 2021[END_REF] . It has become endemic in different countries, causing subclinical infections, mild respiratory symptoms, and high production losses in birds raised for meat or eggs [START_REF] Samy | Avian respiratory coinfection and impact on avian influenza pathogenicity in domestic poultry: Field and experimental findings[END_REF][START_REF] Guabiraba | Avian colibacillosis: Still many black holes[END_REF][START_REF] Awad | An overview of infectious bronchitis virus in chickens[END_REF][START_REF] Patel | Metagenomic of clinically diseased and healthy broiler affected with respiratory disease complex[END_REF] .
Complex respiratory diseases involve interactions among holobionts and multispecies synergistic etiological agents [START_REF] Mach | The Airway Pathobiome in Complex Respiratory Diseases: A Perspective in Domestic Animals[END_REF] . The one pathogen-one disease paradigm is not sufficient to explain complex respiratory disorders [START_REF] Vayssier-Taussat | Emerging horizons for tick-borne pathogens: From the "one pathogen-one disease" vision to the pathobiome paradigm[END_REF] . Synergistic interactions between pathogens often occur through mechanisms such as chemical signaling influencing gene expression or metabolic exchange/complementarity to avoid competition for nutrients and improve the metabolic ability of the microbial consortium (Mach and Clark 2017b; Mazel-Sanchez et al. 2019). A plethora of examples in ruminants and swine illustrates the framework for co-infection between pathogens (Gaudino et al. 2023), especially under intensive production, where animal density and stressful conditions are increased and breeding programs are overly focused on enhancing traits related to production instead of robustness and disease resistance. In this context, multiple viral agents can contribute to the development of BRDC [START_REF] Zeineldin | Meta-analysis of bovine respiratory microbiota: Link between respiratory microbiota and bovine respiratory health[END_REF][START_REF] Zeineldin | Contribution of the Mucosal Microbiota to Bovine Respiratory Health[END_REF][START_REF] Chai | Bovine respiratory microbiota of feedlot cattle and its association with disease[END_REF][START_REF] Holman | The nasopharyngeal microbiota of feedlot cattle that develop bovine respiratory disease[END_REF][START_REF] Alexander | The role of the bovine respiratory bacterial microbiota in health and disease[END_REF] , including bovine viral diarrhea virus (BVDV), bovine respiratory syncytial virus (BRSV), bovine herpes virus 1 (BHV-1), influenza D virus (IDV) (Oliva et al. 2019; Lion et al. 2021), bovine coronavirus (BCoV) [START_REF] Salem | Global Transmission, Spatial Segregation, and Recombination Determine the Long-Term Evolution and Epidemiology of Bovine Coronaviruses[END_REF] and parainfluenza 3 virus (PI3V) [START_REF] Klima | Pathogens of Bovine Respiratory Disease in North American Feedlots Conferring Multidrug Resistance via Integrative Conjugative Elements. Onderdonk AB[END_REF] . In particular, IDV increases the susceptibility to the respiratory pathogen Mycoplasma bovis (Lion et al. 2021; Gaudino et al. 2023). Mannheimia haemolytica is the primary causative pathogen leading to lung damage in sheep [START_REF] Gupta | The trehalose glycolipid C18Brar promotes antibody and T-cell immune responses to Mannheimia haemolytica and Mycoplasma ovipneumoniae whole cell antigens in sheep[END_REF] . Yet, Mycoplasma ovipneumoniae and PI3V, combined with the adverse physical and physiological effects of stress, predispose to Mannheimia haemolytica infection [START_REF] Sharp | Ovine pneumonia[END_REF][START_REF] Gupta | The trehalose glycolipid C18Brar promotes antibody and T-cell immune responses to Mannheimia haemolytica and Mycoplasma ovipneumoniae whole cell antigens in sheep[END_REF] .
The swine influenza virus also compensates for the lack of suilysin (a cytotoxic protein secreted by Streptococcus suis) during the adherence and invasion of suilysin-negative Streptococcus suis [START_REF] Meng | Viral coinfection replaces effects of suilysin on streptococcus suis adherence to and invasion of respiratory epithelial cells grown under air-liquid interface conditions[END_REF] . A synergism between nasal Staphylococcus aureus and pathobionts such as Pasteurella multocida and Klebsiella spp. has also been reported in pigs [START_REF] Espinosa-Gongora | Differential analysis of the nasal microbiome of pig carriers or non-carriers of staphylococcus aureus[END_REF] . Co-occurrence between the porcine reproductive and respiratory syndrome virus (PRRSV), Haemophilus parasuis, and Mycoplasma hyorhinis in pig lungs is repeatedly observed [START_REF] Jiang | Illumina MiSeq Sequencing investigation of microbiota in bronchoalveolar lavage fluid and cecum of the swine infected with PRRSV[END_REF] . The swine influenza virus enhances the morbidity of Streptococcus suis infection by decreasing mucociliary clearance, damaging epithelial cells, and facilitating its adherence, colonization, and invasion in the lungs [START_REF] Meng | Efficient suilysin-mediated invasion and apoptosis in porcine respiratory epithelial cells after streptococcal infection under air-liquid interface conditions[END_REF] . Another case in point is the low pathogenic avian influenza viruses (LPAIV) that, during outbreaks, are coupled with coinfections by Mycoplasma gallisepticum, Mycoplasma synoviae, Ornithobacterium rhinotracheale, avian pathogenic Escherichia coli (APEC) and Staphylococcus aureus, increasing the mortality rate (Much et al. 2002; Umar et al. 2020; Filaire et al. 2022) and markedly reducing egg production in laying hens (Umar et al. 2016). Frequently, these polymicrobial infections significantly hamper therapy, prognosis, and overall disease management. Under such circumstances, prevention is essential.
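Co-infection patterns of the kind described above can be quantified by testing whether two agents are detected together more often than expected by chance in diagnostic panel data. The sketch below runs such a screen on simulated presence/absence calls (the pathogens retained and the detection rates are invented for illustration, not data from the cited outbreaks), cross-tabulating each pathogen pair and applying Fisher's exact test.

```python
import itertools
import numpy as np
import pandas as pd
from scipy.stats import fisher_exact

rng = np.random.default_rng(1)

# Hypothetical PCR panel: rows = pigs, columns = detected (1) / not detected (0).
n = 200
swIAV = rng.binomial(1, 0.35, n)
# Make S. suis detection partly dependent on swIAV to mimic a synergistic pair.
s_suis = np.where(swIAV == 1, rng.binomial(1, 0.55, n), rng.binomial(1, 0.20, n))
prrsv = rng.binomial(1, 0.30, n)

panel = pd.DataFrame({"swIAV": swIAV, "S_suis": s_suis, "PRRSV": prrsv})

for a, b in itertools.combinations(panel.columns, 2):
    table = pd.crosstab(panel[a], panel[b]).reindex(index=[0, 1], columns=[0, 1], fill_value=0)
    odds, p = fisher_exact(table.values)
    print(f"{a} / {b}: odds ratio = {odds:.2f}, p = {p:.3g}")
```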
The untapped potential of airway mucins: the frontline defense of the respiratory tract
The respiratory tract is resistant to environmental injury, despite continuous exposure to pathogens, particles, and toxic chemicals in inhaled air [START_REF] Fahy | Airway Mucus Function and Dysfunction[END_REF] . Such resistance primarily depends on a highly effective defense provided by mucus [START_REF] Fahy | Airway Mucus Function and Dysfunction[END_REF] , a discontinuous, thin, and viscoelastic complex biological fluid that shields the lungs from environmental insults through a process known as mucociliary clearance [START_REF] Song | MUC5B mobilizes and MUC5AC spatially aligns mucociliary transport on human airway epithelium[END_REF] .
Mucus is composed mostly of water (usually > 97%), together with mucins, non-mucin proteins, ions, lipids, and immunological factors [START_REF] Fahy | Airway Mucus Function and Dysfunction[END_REF][START_REF] Bansil | The biology of mucus: Composition, synthesis and organization[END_REF][START_REF] Nason | Display of the human mucinome with defined O-glycans by gene engineered cells[END_REF] . The major macromolecular mucus components are the mucin O-linked glycoproteins [START_REF] Ridley | Mucins: the frontline defence of the lung[END_REF] . Mucins have a unique structure that distinguishes them from other proteins. First, they have a central protein core rich in Ser, Pro, and Thr residues arranged in repetitive and non-repetitive sequences [START_REF] Ridley | Mucins: the frontline defence of the lung[END_REF][START_REF] Hansson | Mucus and mucins in diseases of the intestinal and respiratory tracts[END_REF] . Second, these repeated sequences allow for extensive post-translational modification, notably the addition of glycans [START_REF] Reily | Glycosylation in health and disease[END_REF] ; mucins are heavily O-glycosylated [START_REF] Ridley | Mucins: the frontline defence of the lung[END_REF] . O-glycosylation is increasingly revealed as a sophisticated informational system that underlies essential biological functions at the cellular and organismal levels [START_REF] Varki | Glycan-based interactions involving vertebrate sialic-acid-recognizing proteins[END_REF] . Glycomics, one of the latest omics system science fields, evaluates the structure and function of glycoproteins in a biological system [START_REF] Kunej | Rise of Systems Glycobiology and Personalized Glycomedicine: Why and How to Integrate Glycomics with Multiomics Science?[END_REF] . The study of the glycome is metaphorically akin to forestry; each glycoprotein comprises glycans (leaves) conjugated to a protein (tree trunk) [START_REF] Critcher | Seeing the forest through the trees: characterizing the glycoproteome[END_REF] . These O-glycans are primarily built from five monosaccharide components: galactose, N-acetylglucosamine (GlcNAc), N-acetylgalactosamine (GalNAc), fucose, and sialic acid [START_REF] Ridley | Mucins: the frontline defence of the lung[END_REF][START_REF] Hansson | Mucus and mucins in diseases of the intestinal and respiratory tracts[END_REF] , attached to the protein backbone through an oxygen atom. O-glycosylation leads to remarkable O-linked glycan heterogeneity and diversity, with over 200 distinct forms identified on mucins, representing a wealth of biochemical information in a minimum of space. The mucin glycome is analogous to the genome, transcriptome, and proteome but even more dynamic, and it has a higher structural complexity that has yet to be fully defined. The human and mouse airway mucin glycome has started to be determined through different omic technologies [START_REF] Shipunov | Glycome assessment in patients with respiratory diseases[END_REF][START_REF] Jia | The Human Lung Glycome Reveals Novel Glycan Ligands for Influenza A Virus[END_REF]100 . However, comparable information is still lacking for livestock: only two studies have described the N-glycan patterns of pig 101 or chicken 102 lungs, with no information about the O-glycome.
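In practice, glycomics software reports mucin O-glycans as monosaccharide compositions (e.g., "Hex2HexNAc2NeuAc1"), which can then be collapsed into summary statistics such as monosaccharide usage or the fraction of signal carrying sialic acid or fucose. The sketch below shows that bookkeeping step on an invented composition table; the compositions and relative abundances are placeholders, not measured airway O-glycomes.

```python
import re
from collections import Counter

# Hypothetical O-glycan compositions as reported by glycomics software;
# relative abundances are invented for illustration.
glycans = {
    "Hex1HexNAc1": 0.18,
    "Hex1HexNAc2Fuc1": 0.12,
    "Hex2HexNAc2NeuAc1": 0.25,
    "Hex1HexNAc2NeuAc2": 0.20,
    "Hex2HexNAc2Fuc1NeuAc1": 0.25,
}

def parse(composition: str) -> Counter:
    """Split a composition string into {monosaccharide: count}."""
    return Counter({m: int(c) for m, c in re.findall(r"([A-Za-z]+?)(\d+)", composition)})

# Abundance-weighted monosaccharide usage and simple glycan "traits".
usage = Counter()
sialylated = fucosylated = 0.0
for comp, ab in glycans.items():
    counts = parse(comp)
    for mono, c in counts.items():
        usage[mono] += c * ab
    sialylated += ab * (counts.get("NeuAc", 0) > 0)
    fucosylated += ab * (counts.get("Fuc", 0) > 0)

print("weighted monosaccharide usage:", dict(usage))
print(f"fraction of signal carrying sialic acid: {sialylated:.2f}")
print(f"fraction of signal carrying fucose:      {fucosylated:.2f}")
```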
Broadly, there are two classes of mucins: those that remain tethered to cell membranes and those that are secreted, usually by goblet cells. The cell-tethered mucins form the basis of a gel-like layer surrounding the cilia (the periciliary layer) that is essential for regular ciliary action to move mucus out of the airways [START_REF] Ridley | Mucins: the frontline defence of the lung[END_REF] and helps control mucus hydration 103 . In contrast, the secreted mucins constitute the mobile mucus layer 103 . Currently, 22 mucins (from MUC1 to MUC21) are identified in humans (denoted with capital letters), and 16 are found in the respiratory tract 104 . The major mucins produced in the airways are the secreted polymeric mucins MUC5AC and MUC5B 105 and the cell-tethered mucins MUC1, MUC4, MUC16, and MUC20 106 . MUC5AC and MUC5B are large (5-50 MDa) polymeric mucins that underpin the structure and organization of the airway's mucus gel [START_REF] Ridley | Mucins: the frontline defence of the lung[END_REF] . MUC5AC is mainly produced by epithelial surface goblet cells in the upper airways, whereas MUC5B is primarily secreted from mucous cells in submucosal glands 105 . A similar distribution is observed in pigs 107,108 .
Most of the physical properties and functions of MUC5B and MUC5AC are governed by the glycosylation patterns of their O-linked glycan structures [START_REF] Ridley | Mucins: the frontline defence of the lung[END_REF] . Indeed, mucin glycosylation patterns are partially responsible for the ability of mucus to form protective physical barriers against mechanical and chemical damage from the external environment (Goto et al. 2016) and against harmful microorganisms 109 . They also limit the virulence 110,111 , aggregation, and biofilm formation of opportunistic microbes by interfering with pathogen adhesion and cell receptor binding 111 . Glycans on mucins also possess antimicrobial properties and modulate and maintain immune homeostasis 38- 41 . Most of these actions in health are regulated by MUC5B, the major gel-forming mucin in the lung 105 . Conversely, MUC5AC production is much lower in healthy airways but is upregulated, for example, in response to viral infections [START_REF] Ridley | Mucins: the frontline defence of the lung[END_REF] . It acts as a decoy for viral receptors and is essential in the response to allergic inflammatory challenge [START_REF] Ridley | Mucins: the frontline defence of the lung[END_REF][START_REF] Ridley | Mucins: the frontline defence of the lung[END_REF]108 .
Despite the defensive and tolerogenic properties of mucins, some pathogens have developed strategies to exploit or manipulate them, enhancing their own survival and evasion of the immune system. Certain viruses have evolved surfaces that do not adhere to mucins 112 . Other pathogens can degrade and penetrate the mucus layer, release toxins that disrupt the epithelial barrier, or modify mucus pH and thereby its viscosity. This is possible thanks to mucin-degrading proteases, chemotaxis, and flagella, which allow pathogens to move within the mucus, adhere to the epithelium, and establish infection 113 . Once the secreted mucus barrier is breached, pathogens recognize and target particular classes of cell receptors, such as sialylated glycans, glycosaminoglycans, and cell adhesion molecules, to mediate cellular attachment and entry [START_REF] Varki | Glycan-based interactions involving vertebrate sialic-acid-recognizing proteins[END_REF]114 . Several RNA and DNA viruses use sialylated glycans, e.g., sialic acids (Sia), as initial anchors to access host cells 115 . In particular, influenza and coronaviruses, two of the most critical zoonotic threats, use sialic acids as cellular entry receptors 116 . More than 60 different Sias are known, which differ in sugar structural modifications [117][118][119] . The presence or absence of an appropriate Sia receptor is a significant determinant of the host tropism of a pathogen. Indeed, studying the glycan structures in the chicken trachea and lung, Suzuki et al. 102,120 reported that IAVs bind preferentially to terminal Sia on glycans that possess Sia α2-3Gal, with or without fucosylation and 6-sulfation, but not to α2,6-Sia. In horses, Sia α2,3-Gal is predominantly expressed on the surface of ciliated epithelial cells of the nasal mucosa, trachea, and bronchus 120 . In contrast, the trachea of cows is deficient in both Sia α2,6-Gal and Sia α2,3-Gal receptors 121 . All influenza viruses tested in pigs interacted with one or more sialylated N-glycans but not O-glycans or glycolipid-derived glycans 101 . In addition to serving as receptors for viral attachment, sialic acids can also play a role in viral evasion of the host immune system. Pathogens can mimic or mask their surface glycans to resemble those of host cells, making it difficult for the immune system to recognize and target them, a strategy known as molecular mimicry. Other pathogens decorate their cell surfaces with glycan structures, e.g., sialic acid-containing glycans, that aid in host cell attachment and assist in evading host immunity [START_REF] Schnaar | Glycans and glycan-binding proteins in immune regulation: A concise introduction to glycobiology for the allergist[END_REF]117,[122][123][124][125] . In summary, pathogens exploit both their own glycans and host mucin glycans to establish infection and survive.
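The receptor-distribution argument above can be captured by a simple lookup that compares the Sia linkage preference of a virus with the linkages reported for a host tissue. The sketch below encodes, in deliberately simplified binary form, the species patterns mentioned in the text; it is an illustrative toy rather than a curated receptor atlas, and it glosses over quantitative, site-specific, and strain-specific differences.

```python
# Simplified receptor table distilled from the statements above
# (True = linkage treated as present on the indicated tissue; entries not
# covered by the text are assumptions of this toy example).
receptors = {
    ("chicken", "trachea/lung"): {"Sia-a2,3": True,  "Sia-a2,6": False},
    ("horse",   "nasal/trachea"): {"Sia-a2,3": True,  "Sia-a2,6": False},
    ("cow",     "trachea"):       {"Sia-a2,3": False, "Sia-a2,6": False},
}

# Linkage preferences (again simplified): avian-type IAV favours a2,3,
# human-adapted strains favour a2,6.
virus_preference = {
    "avian-type IAV": "Sia-a2,3",
    "human-adapted IAV": "Sia-a2,6",
}

def plausible_attachment(virus: str, host: str, tissue: str) -> bool:
    """Flag whether the virus's preferred linkage is present on the host tissue."""
    pref = virus_preference[virus]
    return receptors.get((host, tissue), {}).get(pref, False)

for virus in virus_preference:
    for (host, tissue) in receptors:
        ok = plausible_attachment(virus, host, tissue)
        print(f"{virus:18s} x {host} {tissue:13s}: {'plausible' if ok else 'unlikely'}")
```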
Airway mucins: sweet and well-coated partners for microbiota
Interestingly, mucins also house, feed, and shape the airway microbiome ecosystem [START_REF] Varki | Glycan-based interactions involving vertebrate sialic-acid-recognizing proteins[END_REF] , tolerating its enormous diversity [START_REF] Bergstrom | Core 1-and 3-derived Oglycans collectively maintain the colonic mucus barrier and protect against spontaneous colitis in mice[END_REF]126 and dictating its relationship with the host (Varki 2007a). The respiratory tract microbiome is less diverse than that of the gut, with Bacteroidetes and Firmicutes as the dominant phyla in mammals [START_REF] Mach | The Airway Pathobiome in Complex Respiratory Diseases: A Perspective in Domestic Animals[END_REF] . A high microbial biomass colonizes the mammalian upper respiratory tract [START_REF] Zeineldin | Contribution of the Mucosal Microbiota to Bovine Respiratory Health[END_REF] , whereas the lower respiratory tract harbours a low biomass that nonetheless plays a significant role in lower airway mucosal immunology 127 . Colonization of the airways occurs shortly after birth, and maturation of the microbiome is rapid [START_REF] Mach | The Airway Pathobiome in Complex Respiratory Diseases: A Perspective in Domestic Animals[END_REF] . The airway microbiota composition is mediated mainly by microbial immigration, microbial elimination, and the proliferation rate of bacteria [START_REF] Mach | The Airway Pathobiome in Complex Respiratory Diseases: A Perspective in Domestic Animals[END_REF] . The airway microbiota has been shown to have essential roles in lung development 128 and in maintaining homeostasis [START_REF] Zeineldin | Contribution of the Mucosal Microbiota to Bovine Respiratory Health[END_REF]129 . The growing momentum of microbiome research, especially as the world tries to combat deadly viral diseases and antimicrobial resistance, has made it far easier to characterize diseased individuals based on their airway microbiome.
Progress in this branch of science has helped elucidate the holobiont dynamics under respiratory infections in ruminants [START_REF] Zeineldin | Meta-analysis of bovine respiratory microbiota: Link between respiratory microbiota and bovine respiratory health[END_REF][START_REF] Mcmullen | Topography of the respiratory tract bacterial microbiota in cattle[END_REF][START_REF] Zeineldin | Contribution of the Mucosal Microbiota to Bovine Respiratory Health[END_REF][START_REF] Gaeta | Deciphering upper respiratory tract microbiota complexity in healthy calves and calves that develop respiratory disease using shotgun metagenomics[END_REF][START_REF] Chai | Bovine respiratory microbiota of feedlot cattle and its association with disease[END_REF][START_REF] Holman | The nasopharyngeal microbiota of feedlot cattle that develop bovine respiratory disease[END_REF][START_REF] Nicola | Characterization of the upper and lower respiratory tract microbiota in Piedmontese calves[END_REF][START_REF] Zeineldin | Disparity in the nasopharyngeal microbiota between healthy cattle on feed, at entry processing and with respiratory disease[END_REF][START_REF] Mcmullen | Comparison of the nasopharyngeal bacterial microbiota of beef calves raised without the use of antimicrobials between healthy calves and those diagnosed with bovine respiratory disease[END_REF][START_REF] Timsit | Effects of nasal instillation of a nitric oxide-releasing solution or parenteral administration of tilmicosin on the nasopharyngeal microbiota of beef feedlot cattle at high-risk of developing respiratory tract disease[END_REF] , pigs [START_REF] Wang | Comparison of Oropharyngeal Microbiota in Healthy Piglets and Piglets With Respiratory Disease[END_REF][START_REF] Correa-Fiz | Antimicrobial removal on piglets promotes health and higher bacterial diversity in the nasal microbiota[END_REF][START_REF] Correa-Fiz | Piglet nasal microbiota at weaning may influence the development of Glässer's disease during the rearing period[END_REF][START_REF] Mahmmod | Variations in association of nasal microbiota with virulent and non-virulent strains of Glaesserella (Haemophilus) parasuis in weaning piglets[END_REF] , horses [START_REF] Bond | Effects of nebulized dexamethasone on the respiratory microbiota and mycobiota and relative equine herpesvirus-1, 2, 4, 5 in an equine model of asthma[END_REF] , and chickens [START_REF] Ngunjiri | Farm stage, bird age, and body site dominantly affect the quantity, taxonomic composition, and dynamics of respiratory and gut microbiota of commercial layer chickens[END_REF][START_REF] Yitbarek | Commensal gut microbiota can modulate adaptive immune responses in chickens vaccinated with whole inactivated avian influenza virus subtype H9N2[END_REF]130 .
Being both an environmental niche and a food source 131 , respiratory mucins are essential drivers of microbiota composition, diversity, stability, and functionality, which can, in turn, influence microbial behavior and community structure under respiratory infection. Mucins create ecosystem heterogeneity by binding certain nutrients, leading to gradient formation and spatial niche partitioning. This is especially true for the airway microbiota, which primarily extracts nutrients from the respiratory mucins because other nutrients are scarce 131 . Many Bacteroides spp. and Akkermansia muciniphila encode an extensive repertoire of carbohydrate-active enzymes (CAZymes) that sequentially cooperate in metabolizing the host mucins 132,133 . The first targets for CAZymes are the terminal residues on the O-glycans, such as sialic acid, fucose, and glycosulfate 134 . Complete degradation of mucin polysaccharides requires a combination of enzymes expressed by a diverse range of microbes. Insight into the enzymatic capacity of the microbiota is essential to predict how mucin glycan landscapes contribute to microbiota assembly and to host-microbiome symbiosis in eubiosis 135 , and how they can be exploited to prevent complex diseases. Notably, the first insights into the respiratory mucin-microbiome axis in livestock show how microbiota modifications (due to different levels of ammonia concentrations) in growing pigs impacted the thickness and viscosity of the mucus layer and increased the colonization of harmful bacteria 136 . Enriching the upper respiratory tract of chickens with the probiotic Bacillus amyloliquefaciens increased sIgA levels, goblet cell counts, and MUC2 gene expression in the tracheal epithelium, strengthening the overall respiratory mucosal barrier function 137 .
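A crude community-level read-out of mucin-foraging capacity is the summed abundance of the relevant CAZyme families in a metagenome annotation, distinguishing terminal-residue removal (sialidases, fucosidases) from core degradation. The sketch below illustrates that calculation on an invented annotation table; the family-to-role mapping is a coarse simplification and the counts are placeholders, not data from the studies cited above.

```python
import pandas as pd

# Hypothetical CAZyme annotation counts per metagenome sample
# (gene copies assigned to each family, e.g. from dbCAN-style output).
counts = pd.DataFrame(
    {"healthy_1": [30, 22, 15, 40, 12],
     "healthy_2": [28, 25, 18, 35, 10],
     "diseased_1": [8, 5, 4, 12, 2],
     "diseased_2": [10, 7, 6, 15, 3]},
    index=["GH33", "GH29", "GH95", "GH2", "GH20"],
)

# Coarse mapping of families to steps in mucin O-glycan foraging
# (terminal-residue removal vs. core degradation); simplified on purpose.
roles = {"GH33": "sialidase (terminal)",
         "GH29": "fucosidase (terminal)",
         "GH95": "fucosidase (terminal)",
         "GH2": "galactosidase (core)",
         "GH20": "hexosaminidase (core)"}

terminal = [f for f, r in roles.items() if "terminal" in r]
core = [f for f, r in roles.items() if "core" in r]

summary = pd.DataFrame({
    "terminal-acting copies": counts.loc[terminal].sum(),
    "core-acting copies": counts.loc[core].sum(),
})
summary["mucin-foraging index"] = summary.sum(axis=1)
print(summary)
```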
Yet, the relationship between the microbiome and the mucins in the respiratory tract is mutualistic and implies two-way traffic. Reciprocally, the airway microbiota has a pervasive impact on the composition and functionality of mucins. The microbiota is required for the complete synthesis of large gel-forming mucins, influencing their encapsulation, glycosylation, fucosylation and sialylation patterns, and thickness 134,138,139 . From the wealth of data gathered on the mucin-microbiome axis in the gut, we know that microbial metabolites regulate gut mucin synthesis at the transcriptional and epigenetic levels. Indeed, butyrate and propionate epigenetically regulate MUC2 gene expression in human goblet cell-like LS174T cells 140 and help maintain intestinal barrier function. In line with these findings, an elegant study by Bergstrom and Xia 141 showed that the SCFAs resulting from O-glycan fermentation regulated intestinal mucin barrier function. Bioactive SCFA administration (primarily butyrate) also promotes MUC2 and MUC5AC gene expression and increases epithelial cell integrity after damage 142 . A study in piglets concluded that gastric infusions of SCFAs maintained intestinal barrier function by increasing the expression of the tight junction protein genes occludin and claudin-1 and decreasing the gene and protein abundance of IL-1 in the colon, coupled with reduced intestinal epithelial cell apoptosis 143 . Interestingly, co-culture experiments of A. muciniphila with non-mucus-degrading, butyrate-producing bacteria 144 indirectly increased butyrate levels near the intestinal epithelial cells, with potential health benefits to the host. The study of the mucin-microbiome axis thus opens up a terra incognita for health and disease in animals of veterinary interest.
The mucin-microbiome axis in respiratory infections
The importance of mucus and mucins, and the problems that arise when they malfunction, are evident to the general public during upper respiratory tract colds [START_REF] Hansson | Mucus and mucins in diseases of the intestinal and respiratory tracts[END_REF] . Persistent mucus accumulation and plugging of the airways pervasively impair microbial clearance, enhance the transition from the microbiome to the pathobiome [START_REF] Ridley | Mucins: the frontline defence of the lung[END_REF][START_REF] Fahy | Airway Mucus Function and Dysfunction[END_REF] , and promote inflammation [START_REF] Rose | Respiratory Tract Mucin Genes and Mucin Glycoproteins in Health and Disease[END_REF]145 . Under these conditions, mucins can no longer attenuate microbial virulence or pacify opportunistic pathogens 146 . The conversion from healthy to pathologic mucus occurs through multiple mechanisms [START_REF] Fahy | Airway Mucus Function and Dysfunction[END_REF] , including abnormal secretion of salt and water, increased submucosal gland mucus secretion 147 , mucus infiltration with inflammatory cells, and heightened broncho-vascular permeability with respiratory distress [START_REF] Fahy | Airway Mucus Function and Dysfunction[END_REF] .
In line with this, a first study in pigs indicates that infections primarily drive changes in the quantity and physicochemical properties of airway mucins and may enhance pathogen biofilm formation and promote survival in nutrient-limited conditions, as reported for Streptococcus mutans in pigs 148 . Similarly, the expression of MUC5AC has been induced in cells exposed to a wide variety of Gram-negative and Gram-positive bacteria, including P. aeruginosa and Staphylococcus aureus, suggesting that mucin secretion is a host defense response to infection 149,150 . Viruses can also strongly stimulate MUC5AC expression in airway epithelial cells in vitro and in vivo [START_REF] Thai | Regulation of Airway Mucin Gene Expression[END_REF] .
Beyond alterations in mucin expression, glycosylation patterns have also been linked to respiratory physiopathology [START_REF] Shipunov | Glycome assessment in patients with respiratory diseases[END_REF]151 . For instance, mucin glycans exhibit reduced chain length, sulfation, and fucosylation, and increased sialylation, during inflammation 146 . It is important to note that seemingly minor differences in glycan structures may result in significant pathophysiological outcomes. As an illustration, a shift in the nine-carbon backbone monosaccharides of sialic acid might enhance virus binding and infection of cells, or instead generate decoy receptors that bind virions and block viruses 152 ; similar shifts can also affect bacteria, facilitating increased colonization and the development of lung disease 153,154 . The specific roles of mucins and the glycome during infections can vary depending on the pathogen, tissue, and host factors involved 155 . Yet, these early results give good reason to continue investigating how respiratory pathogens shift host mucins, their associated glycome, and the microbiome.
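Shifts such as "increased sialylation with reduced fucosylation" are usually screened for by collapsing per-glycan abundances into per-animal trait values and comparing groups nonparametrically. The sketch below performs that comparison on simulated trait values; the effect sizes are invented for illustration and do not reproduce the measurements of the cited studies.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(2)

# Simulated per-animal glycan traits: fraction of total mucin O-glycan signal
# that is sialylated / fucosylated (values in [0, 1]).
healthy_sial = rng.beta(4, 6, 25)          # centred around 0.40
diseased_sial = rng.beta(6, 4, 25)         # centred around 0.60 (inflammation-like shift)
healthy_fuc = rng.beta(5, 5, 25)
diseased_fuc = rng.beta(3, 7, 25)

for trait, (h, d) in {"sialylation": (healthy_sial, diseased_sial),
                      "fucosylation": (healthy_fuc, diseased_fuc)}.items():
    stat, p = mannwhitneyu(h, d, alternative="two-sided")
    print(f"{trait:12s}: healthy median = {np.median(h):.2f}, "
          f"diseased median = {np.median(d):.2f}, Mann-Whitney p = {p:.3g}")
```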
Given the intertwined relationship between pathogens, the airway microbiota, and mucins, and the considerable malleability of the microbiota relative to host genomes, influencing respiratory tract health via microbiota manipulation seems feasible 151 . Today, controlling pathogens' access to glycans on mucins via microbiome modifications has proven to be a promising method to prevent infection 151 . Elegantly, Pereira et al. 156 demonstrated that the administration of a synthetic bacterial consortium could decrease the availability of sialic acid from mucins by cross-feeding microbes and, thereby, protect against an infection caused by microbial pathogens that use these sugar groups as binding sites, e.g., Clostridioides difficile 156 or viruses such as influenza virus, reovirus, adenovirus, and rotavirus 114 . Therefore, synthetic microbial communities expressing sialidase activity might improve mucosal health and prevent complex respiratory diseases 157 . Complementary to this approach, nebulizing heparan sulfate-consuming commensal bacteria consistently limited SARS-CoV-2 attachment in higher-risk individuals 158 . At the same time, nebulized fucose has been shown to reduce bacterial adhesion and improve lung function in animal models of respiratory infections 159 . Determining the ideal microbial composition for optimal respiratory health and immune function in livestock is still a topic of ongoing research [START_REF] Mcmullen | Topography of the respiratory tract bacterial microbiota in cattle[END_REF][START_REF] Chai | Bovine respiratory microbiota of feedlot cattle and its association with disease[END_REF][START_REF] Holman | The nasopharyngeal microbiota of feedlot cattle that develop bovine respiratory disease[END_REF][START_REF] Mahmmod | Variations in association of nasal microbiota with virulent and non-virulent strains of Glaesserella (Haemophilus) parasuis in weaning piglets[END_REF][START_REF] Alexander | The role of the bovine respiratory bacterial microbiota in health and disease[END_REF][160][161][162][163][164][165][166] . The nature and mechanisms of the respiratory microbiome and its interactions with mucin and glycome profiles in livestock are not yet understood. Yet, there is likely substantial untapped potential in the airway microbiome-mucin axis in livestock and in external factors that could modulate it, such as the administration of nebulized synthetic bacterial consortia or the use of feed containing synthetic and natural dietary glycans designed to target microbial activity at the mucosa. In the gut, such approaches have shed light on how alterations to the biochemistry of mucins and mucus impact their protective capacity and restore healthy mucosal function 135,167,168 .
The non-coding RNAs: an engaged control of the mucin production and glycome patterns
Beyond the use of microbial consortia or dietary glycans to control the mucin glycome, understanding the genetic regulation of mucin production and glycosylation offers a way to modulate mucin levels by targeting specific signaling pathways or transcription factors involved in mucin synthesis, glycosylation inhibitors, and glycosyltransferases, among others. However, transmembrane and secreted mucins are extremely large and heavily glycosylated 169 ; they are the products of an orchestrated collection of enzymes working in a coordinated manner to synthesize the structures appended to proteins and lipids [START_REF] Reily | Glycosylation in health and disease[END_REF]170 .
As conserved regulatory agents, ncRNAs have started to gain attention as crucial regulators of mucins and the glycome at the transcriptional, post-transcriptional, and translational levels. The ncRNAs are RNA molecules that do not encode proteins but have regulatory or structural functions within cells 171 . ncRNAs are mainly divided into small ncRNAs and long non-coding RNAs. MicroRNAs, together with their precursor pri-miRNAs and circular RNAs, belong to the sncRNAs; mature miRNAs are ~22 nt long, while their premature stem-loop precursors are ~70 nt. LncRNAs are non-coding RNAs transcribed by RNA polymerase II (RNA Pol II) and are longer than 200 nucleotides. Like miRNAs, lncRNAs have emerged as new regulators of gene expression and have become a focal point of biomedical and veterinary research 172 . MiRNAs regulate gene expression at the post-transcriptional level by binding (usually with imperfect complementarity) to the 3'-UTR of a target mRNA, resulting in mRNA degradation or translational inhibition. A single miRNA can have hundreds of target genes, and multiple miRNAs can converge on a single mRNA. Unlike transcriptional regulators, the role of miRNAs is not to turn a gene on or off but instead to tune protein expression [START_REF] Thu | Sweet Control: MicroRNA Regulation of the Glycome[END_REF] . LncRNAs regulate genomic output at many levels, from transcription to translation 173 . The effect of lncRNAs is mainly achieved by interfering with the expression of downstream genes, supplementing or interfering with the mRNA splicing process, and regulating protein activity 174 . In addition, a growing body of research has found that lncRNAs can regulate gene expression by functioning as competing endogenous RNAs (ceRNAs) 174 . An individual transcriptome contains more lncRNAs than mRNA molecules 175 . Among livestock, the largest numbers of identified lncRNA transcripts are available for pigs and cattle 175 ; poultry is represented by less than half as many records. Genomic annotation of lncRNAs showed that most are assigned to introns (pig, poultry) or intergenic regions (cattle) 176 . The number of detected miRNAs in farm animal species is lower than that of lncRNAs, ranging from 1,064 in cattle to 406 in pigs and 267 in goats (miRBase release 22, http://www.mirbase.org/).
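Species-level counts like those quoted above can be re-derived directly from the miRBase mature-sequence FASTA, whose record names begin with a species prefix (e.g. bta-, ssc-, chi-). The snippet below sketches that tally on a tiny in-line example; the headers and sequences are illustrative placeholders, and real numbers would require downloading mature.fa from miRBase.

```python
from collections import Counter

def count_mature_mirnas(fasta_text: str) -> Counter:
    """Count mature miRNA records per species prefix in a miRBase-style FASTA."""
    counts = Counter()
    for line in fasta_text.splitlines():
        if line.startswith(">"):
            name = line[1:].split()[0]        # e.g. "bta-miR-146a"
            counts[name.split("-", 1)[0]] += 1
    return counts

# Tiny illustrative FASTA; real input would be miRBase's mature.fa.
example = """\
>bta-miR-146a Bos taurus (example header)
UGAGAACUGAAUUCCAUGGGUU
>ssc-miR-141 Sus scrofa (example header)
UAACACUGUCUGGUAAAGAUGG
>chi-miR-34b Capra hircus (example header)
AGGCAGUGUAAUUAGCUGAUUGU
>ssc-miR-378 Sus scrofa (example header)
ACUGGACUUGGAGUCAGAAGGC
"""

for species, n in count_mature_mirnas(example).items():
    print(f"{species}: {n} mature miRNA record(s)")
```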
Given the emerging importance of ncRNAs in disease and their potential to pinpoint genes that underlie specific biological processes, it is clear that more attention should be paid to ncRNA-mucin interactions. To date, data on the roles of ncRNAs in mucin regulation have mainly been generated in well-controlled murine models and in humans. For instance, several miRNAs have been identified that regulate the expression of mucin genes in the respiratory tract of humans, namely miR-34b/c 50 , miR-146a 51 , miR-378 [START_REF] Skrzypek | Interplay Between Heme Oxygenase-1 and miR-378 Affects Non-Small Cell Lung Carcinoma Growth, Vascularization, and Metastasis[END_REF] , and miR-141 [START_REF] Siddiqui | Epithelial miR-141 regulates IL-13-induced airway mucus production[END_REF] . Additionally, modifications of miRNA expression can result in significant alterations of glycosyltransferases and of the glycome [START_REF] Agrawal | Mapping posttranscriptional regulation of the human glycome uncovers microRNA defining the glycocode[END_REF]170 . Jame-Chenarboo et al. (2022) established that miRNAs are substantial regulators of cell glycosylation. Therefore, changes to mucin glycosylation patterns, such as increased sialylation, will likely alter the binding properties of mucins toward pathogens and their protective functions. miRNAs also play a crucial role in regulating mucin barrier functions, which has mainly been studied in the gut in the context of inflammatory bowel diseases [177][178][179] . Although these studies have elegantly displayed the impact of regulatory ncRNAs on mucin structure and function, much work is needed to see whether these outcomes translate directly to livestock, given their genetic and environmental complexity and comorbidities. The proposed mechanisms and results based on laboratory animals have yet to be validated in animals of veterinary interest, owing to the limitations regarding obtaining samples and the complexity of production systems, hosts, biotic and abiotic stressors, and infectious etiology. This complicates the application of human and laboratory animal-based findings to livestock. Therefore, the role of ncRNAs in shaping the mucin glycome remains an open question in livestock. Coupled with this heterogeneity, disease itself is known to change ncRNA behavior in livestock. For instance, PRRSV, one of the most important viral pathogens in the swine industry, affects host homeostasis through changes in miRNA expression 180-182 183 (reviewed elsewhere 184 ). Profiles of lncRNA expression were also altered in porcine circovirus-associated disease 185,186 .
An extra difficulty lies in the fact that the presence and activity of the microbiota may also regulate host ncRNAs 187,188 . Indeed, microbiomes and microbial metabolites such as secondary bile acids and SCFAs have been shown to regulate the expression of miRNAs 189 and lncRNAs in intestinal epithelial cells 190 , macrophages 191 , and other metabolic organs 190 in adult mice or in vitro 192 , suggesting that the gut microbiome regulates the expression of both coding RNAs and ncRNAs regionally and systemically 193 , especially during pathogen infection 194 . For instance, a study investigating early rumen development in neonatal dairy calves revealed that nearly 46% of miRNAs expressed in the rumen are responsive to SCFA 195 . Analogously, another study focused on the rumen microbiota 196 suggested that the microbiota influences the host's miRNA expression pattern and that the host potentially helps shape the gut bacterial profile by producing specific miRNAs. Minimal knowledge exists about the ncRNA regulatory mechanisms behind mucin-microbiome interactions. Still, these emerging findings illustrate that the microbiome interacts with the host both directly (modulation of the transcriptome) and indirectly via the expression of ncRNAs 173 .
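A standard first analysis behind statements such as "rumen miRNAs are responsive to SCFA" is a correlation screen between metabolite concentrations and miRNA expression with multiple-testing correction. The sketch below runs such a screen on simulated data; the miRNA names, concentrations, and effect sizes are invented for illustration and are not taken from the cited studies.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(3)

# Simulated data: butyrate concentration (mM) in rumen fluid of 30 calves,
# and expression (log2 normalized counts) of a few rumen-wall miRNAs.
n = 30
butyrate = rng.gamma(shape=5.0, scale=2.0, size=n)
mirnas = {
    "miR-A": 5.0 + 0.15 * butyrate + rng.normal(0, 0.4, n),   # responsive (by construction)
    "miR-B": 7.0 - 0.10 * butyrate + rng.normal(0, 0.4, n),   # responsive (negative)
    "miR-C": 6.0 + rng.normal(0, 0.5, n),                     # unrelated
    "miR-D": 4.0 + rng.normal(0, 0.5, n),                     # unrelated
}

results = []
for name, expr in mirnas.items():
    rho, p = spearmanr(butyrate, expr)
    results.append((name, rho, p))

# Benjamini-Hochberg adjustment of the p-values.
results.sort(key=lambda t: t[2])
m = len(results)
adjusted, running_min = [], 1.0
for rank, (name, rho, p) in enumerate(reversed(results), start=0):
    q = min(running_min, p * m / (m - rank))
    running_min = q
    adjusted.append((name, rho, p, q))

for name, rho, p, q in reversed(adjusted):
    print(f"{name}: rho = {rho:+.2f}, p = {p:.3g}, BH q = {q:.3g}")
```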
Concluding remarks
In livestock pathology, there is a growing awareness that infectious agents frequently do not operate alone and that their virulence is likely affected by their interaction with the microbiome-mucin axis. The microbiome-mucin axis is therefore a significant player in the health of the respiratory tract. As such, it contains signals that can be used to forecast airway pathophysiology. Fundamental gaps remain for species of veterinary interest, yet these signals can be harnessed by combining microbiomics and glycomics, two fields that have developed tremendously in recent years. Based on the emerging evidence evaluated in this opinion paper, ncRNAs may play a vital role in microbiota-mucin crosstalk by modifying the holobiont. The next phase of livestock research should focus on understanding whether particular microbiome-mucin glycome configurations confer resistance and resilience to microbial infection, and on identifying the critical factors controlling the mucin-microbiome axis, including the role of ncRNAs. It is now time to harness the forecasting power of the microbiome-mucin axis.
Figures

Figure 1. The microbiome-mucin axis in the respiratory tract: mucin damage matters
Upper panel: The mutualistic relationship between the airway mucins and the microbiota and derived metabolites under eubiosis. On the one hand, the microbiota and their metabolism induce the synthesis of large gel-forming mucins, including encapsulation, glycosylation, changes in fucosylation and sialylation patterns, and thickness. On the other hand, the mucin layer serves as an environmental niche and a food source for the microbiota. The high diversity of gut mucins impacts the gut microbiota composition, diversity, and stability but also influences immune homeostasis.
Bottom panel: The mutualistic relationship between the mucin glycome and the microbiota under a disrupted airway ecosystem and environment. The deterioration of the respiratory mucosal barrier enables virus binding to the cells, and the translocation of bacteria and lipopolysaccharides (LPS) outside the respiratory tract, triggering immune and inflammatory responses, often resulting in increased permeability and, eventually, endotoxemia. Changes in the respiratory barrier integrity involve changes in the abundance, expression, and glycosylation of mucins, and thus immune dysregulation, dysbiosis, and risk of disease onset. This figure has been created with BioRender.com.

Declaration of interest

The author declares no competing interests.
00410248 | en | [
"math.math-pr"
] | 2024/03/04 16:41:20 | 2011 | https://inria.hal.science/inria-00410248/file/RoeckBarPorSentPTRF09.pdf | Viorel Barbu
Michael Röckner
Francesco Russo
Probabilistic representation for solutions of an irregular porous media type equation: the degenerate case.

Keywords: singular degenerate porous media type equation, probabilistic representation. 2000 AMS-classification: 60H30, 60H10, 60G46, 35C99, 58J65
Introduction
We are interested in the probabilistic representation of the solution to a porous media type equation given by
$$\begin{cases} \partial_t u = \tfrac{1}{2}\,\partial^2_{xx}\big(\beta(u)\big), & t \in [0,\infty[, \\ u(0,x) = u_0(x), & x \in \mathbb{R}, \end{cases} \qquad (1.1)$$
in the sense of distributions, where u 0 is an initial bounded probability density. We look for a solution of (1.1) with time evolution in L 1 (R).
We make the following assumption.
Assumption 1.1
• β : R → R is monotone increasing.
• |β(u)| ≤ const|u|, u ≥ 0.
In particular, β is right-continuous at zero and β(0) = 0.
• There is λ > 0 such that (β + λid)(x) → ∓∞ when x → ∓∞.
Remark 1.2 (i) By one of the consequences of our main result, see Remark 1.6 below, the solution to (1.1) is non-negative, since u 0 ≥ 0.
Therefore, it is enough to assume that only the restriction of β to R + is increasing such that |β(u)| ≤ const|u| for u ≥ 0, and (β +λid)(x) → ∞ when x → +∞. Otherwise, we can just replace β by an extension of the restriction of β to R + which satisfies Assumption 1.1, e.g. take its odd symmetric extension.
(ii) In the main body of the paper, we shall in fact replace β with the "filled" associated graph, see remarks after Definition 2.2 for details; in this way, we consider β as a multivalued function and Assumption 1.1 will be replaced by Hypothesis 3.1.
Since β is monotone, (1.1) implies $\beta(u) = \Phi^2(u)\,u$, $u \ge 0$, Φ being a non-negative bounded Borel function. We recall that when $\beta(u) = |u|u^{m-1}$, $m > 1$, (1.1) is nothing else but the classical porous media equation.
One of our targets is to consider Φ as continuous except for a possible jump at one positive point, say e c > 0. A typical example is
$$\Phi(u) = H(u - e_c), \qquad (1.2)$$
H being the Heaviside function.
The analysis of (1.1) and its probabilistic representation can be done in the framework of monotone partial differential equations (PDE) allowing multivalued coefficients and will be discussed in detail in the main body of the paper. In this introduction, for simplicity, we restrict our presentation to the single-valued case.
Definition 1.3
• We will say that equation (1.1) or β is non-degenerate if on each compact, there is a constant c 0 > 0 such that Φ ≥ c 0 .
• We will say that equation (1.1) Of course, Φ in (1.2) is degenerate. In order to have Φ non-degenerate, one could add a positive constant to it.
There are several contributions to the analytical study of (1.1), starting from [START_REF] Ph | A semilinear equation in L 1 (R N )[END_REF] for existence, [START_REF] Brezis | Uniqueness of solutions of the initial-value problem for u t -∆ϕ(u) = 0[END_REF] for uniqueness in the case of bounded solutions and [START_REF] Ph | The continuous dependence on ϕ of solutions of u t -∆ϕ(u) = 0[END_REF] for continuous dependence on the coefficients. The authors consider the case where β is continuous, even if their arguments allow some extensions for the discontinuous case.
As mentioned in the abstract, the first motivation of this paper was to discuss continuous time models of self-organized criticality (SOC), which are described by equations of type (1.1) with β(u) = uΦ 2 (u) and Φ as in (1.2), see e.g. [START_REF] Bak | How Nature Works: The Science of Self-Organized Criticality[END_REF] for a significant monography on the subject and the interesting physical papers [START_REF] Banta | Avalanche dynamics from anomalous diffusion[END_REF] and [START_REF] Cafiero | Local rigidity and self-organized criticality for avalanches[END_REF]. For other comments related to SOC, one can read the introduction of [START_REF] Ph | Probabilistic representation for solutions of an irregular porous media equation[END_REF]. The recent papers, [START_REF] Barbu | Stochastic porous media equations and Self-organized criticality[END_REF][START_REF] Barbu | Selforganized criticality via stochastic partial differential equations[END_REF], discuss (1.1) in the case (1.2), perturbed by a multiplicative noise.
The singular non-linear diffusion equation (1.1) models the macroscopic phenomenon for which we try to give a microscopic probabilistic representation, via a non-linear stochastic differential equation (NLSDE) modelling the evolution of a single point.
The most important contribution of [START_REF] Ph | Probabilistic representation for solutions of an irregular porous media equation[END_REF] was to establish a probabilistic representation of (1.1) in the non-degenerate case. For the latter we established both existence and uniqueness. In the degenerate case, even if the irregular diffusion equation (1.1) is well-posed, at that time, we could not prove existence of solutions to the corresponding NLSDE. This is now done in the present paper.
To the best of our knowledge the first author who considered a probabilistic representation (of the type studied in this paper) for the solutions of a nonlinear deterministic PDE was McKean [START_REF] Jr | Propagation of chaos for a class of non-linear parabolic equations[END_REF], particularly in relation with the so called propagation of chaos. In his case, however, the coefficients were smooth. From then on the literature has steadily grown and nowadays there is a vast amount of contributions to the subject, especially when the nonlinearity is in the first order part, as e.g. in Burgers equation. We refer the reader to the excellent survey papers [START_REF] Sznitman | Topics in propagation of chaos[END_REF] and [START_REF] Graham | Probabilistic models for nonlinear partial differential equations[END_REF].
A probabilistic interpretation of (1.1) when β(u) = |u|u m-1 , m > 1, was provided for instance in [START_REF] Benachour | Processu associés à l' équation des milieux poreux[END_REF]. For the same β, though the method could be adapted to the case where β is Lipschitz, in [START_REF] Jourdain | Probabilistic approximation for a porous medium equation[END_REF] the author has studied the evolution equation (1.1) when the initial condition and the evolution takes values in the set of all probability distribution functions on R. Therefore, instead of an evolution equation in L 1 (R), he considers a state space of functions vanishing at -∞ and with value 1 at +∞. He studies both the probabilistic representation and propagation of chaos.
Let us now describe the principle of the mentioned probabilistic representation. The stochastic differential equation (in the weak sense) rendering the probabilistic representation is given by the following (random) non-linear diffusion:
$$\begin{cases} Y_t = Y_0 + \displaystyle\int_0^t \Phi(u(s, Y_s))\,dW_s, \\ \text{Law density}(Y_t) = u(t,\cdot), \end{cases} \qquad (1.3)$$
where W is a classical Brownian motion. The solution of that equation may be visualised as a continuous process Y on some filtered probability space (Ω, F, (F t ) t≥0 , P ) equipped with a Brownian motion W . By looking at a properly chosen version, we can and shall assume that Y :
[0, T ] × Ω → R + is B([0, T ]) ⊗ F-measurable.
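Since u(s, ·) in (1.3) is the law of Y_s itself, a natural way to simulate the equation numerically is through an interacting particle system: propagate N particles by an Euler-Maruyama step and, at each step, replace u(s, ·) by a kernel density estimate built from the current particle cloud. The sketch below applies this heuristic to the degenerate coefficient Φ(u) = H(u − e_c) of (1.2); the particle number, bandwidth, critical level e_c, and Gaussian initial density are arbitrary illustrative choices, and the scheme is not part of the paper's analysis or proofs.

```python
import numpy as np

rng = np.random.default_rng(4)

# Parameters of the toy experiment (all values are arbitrary illustrations).
N, T, n_steps = 2_000, 1.0, 100
dt = T / n_steps
e_c = 0.8                       # critical level in Phi(u) = H(u - e_c)
bandwidth = 0.05                # kernel bandwidth for the density estimate

def kde_at(points: np.ndarray, x: np.ndarray, h: float) -> np.ndarray:
    """Gaussian kernel density estimate of the particle cloud, evaluated at x."""
    diffs = (x[:, None] - points[None, :]) / h
    return np.exp(-0.5 * diffs**2).mean(axis=1) / (h * np.sqrt(2 * np.pi))

def phi(u: np.ndarray) -> np.ndarray:
    """Degenerate diffusion coefficient Phi(u) = H(u - e_c) (Heaviside)."""
    return (u > e_c).astype(float)

# Initial particles sampled from a N(0, 0.1^2) density, so u_0 exceeds e_c near 0.
Y = rng.normal(0.0, 0.1, N)

for _ in range(n_steps):
    u_at_Y = kde_at(Y, Y, bandwidth)            # u(t, Y_t) estimated from the cloud
    Y = Y + phi(u_at_Y) * np.sqrt(dt) * rng.normal(0.0, 1.0, N)

# Where the estimated density stays below e_c the particles do not move:
# mass above the critical level spreads while the rest is frozen.
grid = np.linspace(-1.0, 1.0, 9)
print("final density estimate on a coarse grid:")
print(np.round(kde_at(Y, grid, bandwidth), 2))
```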
Of course, we can only have (weak) uniqueness for (1.3) fixing the initial distribution, i.e. we have to fix the distribution (density) u 0 of Y 0 .
The connection with (1.1) is then given by the following result, see also [START_REF] Ph | Probabilistic representation for solutions of an irregular porous media equation[END_REF].
Theorem 1.5 Let us assume the existence of a solution Y for (1.3). Then u : [0, T ] × R → R + provides a solution in the sense of distributions of (1.1) with u 0 := u(0, •).
Remark 1.6 An immediate consequence for the associated solution of (1.1) is its positivity at any time if it starts with an initial value u 0 which is positive. Also the mass 1 of the initial condition is conserved in this case.
However this property follows already by approximation from Corollary 4.5 of [START_REF] Ph | Probabilistic representation for solutions of an irregular porous media equation[END_REF], which in turn is based on the probabilistic representation in the nondegenerate case, see Corollary 4.2 below for details.
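To make the representation (1.3) concrete, the following is a minimal numerical sketch (not part of the paper's analysis) of the standard interacting-particle approximation of such a McKean–Vlasov type equation: the unknown law density u(t, ·) is replaced by a kernel-density estimate built from N simulated particles, and each particle is advanced by an Euler–Maruyama step. The coefficient Φ, the regularization ε, the bandwidth and the particle number below are illustrative choices and do not come from the text.

```python
import numpy as np

def simulate_particles(phi, u0_sampler, T=1.0, n_steps=100, n_particles=500,
                       bandwidth=0.1, eps=1e-2, rng=None):
    """Illustrative particle approximation of Y_t = Y_0 + int_0^t Phi(u(s, Y_s)) dW_s.

    phi        : scalar function u -> Phi(u) (e.g. a degenerate coefficient).
    u0_sampler : draws initial positions distributed with density u_0.
    eps        : regularization, mimicking Phi_eps = sqrt(Phi^2 + eps) of (1.4).
    The law density u(t, .) is *estimated* by a Gaussian kernel estimator,
    which is only a numerical surrogate for the exact density in (1.3).
    """
    rng = np.random.default_rng() if rng is None else rng
    dt = T / n_steps
    y = u0_sampler(n_particles, rng)              # positions of the N particles
    for _ in range(n_steps):
        # kernel-density estimate of u(t, y_i) from the empirical measure
        diffs = (y[:, None] - y[None, :]) / bandwidth
        dens = np.exp(-0.5 * diffs**2).mean(axis=1) / (bandwidth * np.sqrt(2 * np.pi))
        sigma = np.sqrt(phi(dens)**2 + eps)       # regularized diffusion coefficient
        y = y + sigma * np.sqrt(dt) * rng.standard_normal(n_particles)
    return y

# Example with the motivating coefficient beta(u) = H(u - e_c) u, i.e. Phi(u) = sqrt(H(u - e_c)):
e_c = 0.5
phi = lambda u: np.sqrt((u > e_c).astype(float))
u0_sampler = lambda n, rng: rng.uniform(-1.0, 1.0, size=n)   # u_0 = uniform density on [-1, 1]
final_positions = simulate_particles(phi, u0_sampler)
```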
The main purpose of this paper is to show existence of the probabilistic representation equation (1.3), in the case where β is degenerate and not necessarily continuous. The uniqueness is only known if β is non-degenerate and in some very special cases in the degenerate case.
Let us now briefly and consecutively explain the points that we are able to treat and the difficulties which naturally appear in the probabilistic representation.
For simplicity we do this for β single-valued and continuous. However, with some technical complications, this generalizes to the multi-valued case, as spelt out in the subsequent sections.
1. Monotonicity methods allow us to show existence and uniqueness of solutions to (1.1) in the sense of distributions under the assumption that β is monotone, that there exists λ > 0 with (β + λid)(R) = R and that β is continuous at zero, see Proposition 3.2 of [START_REF] Ph | Probabilistic representation for solutions of an irregular porous media equation[END_REF] and the references therein.
2. If β is non-degenerate, Theorem 4.3 of [START_REF] Ph | Probabilistic representation for solutions of an irregular porous media equation[END_REF] allows us to construct a unique (weak) solution Y to the non-linear SDE in the first line of (1.3), for any initial bounded probability density u_0 on R.
3. Suppose β to be degenerate. We fix a bounded probability density u_0. We set β_ε(u) = β(u) + εu, Φ_ε = \sqrt{Φ^2 + ε} and consider the weak solution Y^ε of
Y^ε_t = Y^ε_0 + \int_0^t Φ_ε(u^ε(s, Y^ε_s))\, dW_s, \qquad (1.4)
where u^ε(t, ·) is the law of Y^ε_t, t ≥ 0, and Y^ε_0 is distributed according to u_0(x)dx. The sequence of laws of the processes (Y^ε) is tight, but the limiting process of a convergent subsequence a priori may not solve the SDE
Y_t = Y_0 + \int_0^t Φ(u(s, Y_s))\, dW_s. \qquad (1.5)
However, this will be shown to be the case in the following two general situations.
(a) The case when the initial condition u 0 is locally of bounded variation, without any further restriction on the coefficient β.
(b) The case when β is strictly increasing after some zero, see Definition 4.20, and without any further restriction on the initial condition.
In this paper, we proceed as follows. Section 2 is devoted to preliminaries and notations. In Section 3, we analyze an elliptic non-linear equation with monotone coefficients which constitutes the basis for the existence of a solution to (1.1). We recall some basic properties and establish some others which will be useful later. In Section 4, we recall the notion of C^0-solution to (1.1) coming from an implicit scheme of non-linear elliptic equations presented in Section 3. Moreover, we prove three significant properties. The first is that β(u(t, ·)) is in H^1, therefore continuous, for almost all t ∈ [0, T]. The second is that the solution u(t, ·) is locally of bounded variation if u_0 is. The third is that if β is strictly increasing after some zero, then Φ(u(t, ·)) is continuous for almost all t. Section 5 is devoted to the study of the probabilistic representation of (1.1).
Finally, we would like to mention that, in order to keep this paper self-contained and make it accessible to a larger audience, we include the analytic background material and necessary (though standard) definitions. Likewise, we have tried to explain in detail the delicate and quite technical analytic parts of the paper which form the backbone of the proofs of our main results.
Preliminaries
We start with some basic analytical framework.
If f : R → R is a bounded function we will set f ∞ = sup x∈R |f (x)|.
By C b (R) we denote the space of bounded continuous real functions and by C ∞ (R) the space of all continuous functions on R vanishing at infinity. D (R) will be the space of all infinitely differentiable functions with compact support ϕ : R → R, and D ′ (R) will be its dual (the space of Schwartz distributions). S (R) is the space of all rapidly decreasing infinitely differentiable functions ϕ : R → R, and S ′ (R) will be its dual (the space of tempered distributions).
If p ≥ 1, by L^p(R) (resp. L^p_{loc}(R)) we denote the space of all real Borel functions f such that |f|^p is integrable (resp. integrable on each compact interval). We denote the space of all Borel essentially bounded real functions by L^∞(R). In several situations we will even omit R.
We will use the classical notation W s,p (R) for Sobolev spaces, see e.g. [START_REF] Adams | Sobolev spaces[END_REF].
‖·‖_{s,p} denotes the corresponding norm. We will use the notation H^s(R) instead of W^{s,2}(R). If s ≥ 1, this space is a subspace of the space C(R) of real continuous functions. We recall that, by Sobolev embedding, W^{1,1}(R) ⊂ C_∞(R) and that each u ∈ W^{1,1}(R) has an absolutely continuous version. Let δ > 0. We will denote by ⟨·, ·⟩_{-1,δ} the inner product
⟨u, v⟩_{-1,δ} = ⟨(δ - ½Δ)^{-1/2}u, (δ - ½Δ)^{-1/2}v⟩_{L^2(R)},
and by ‖·‖_{-1,δ} the corresponding norm. For details about (δ - ½Δ)^{-s}, see [START_REF] Stein | Singular integrals and differentiability properties of functions[END_REF][START_REF] Triebel | Interpolation Theory, Function Spaces, Differential Operators[END_REF] and also [START_REF] Ph | Probabilistic representation for solutions of an irregular porous media equation[END_REF], Section 2. In particular, given s ∈ R, (δ - ½Δ)^s maps S′(R) (resp. S(R)) onto itself. If u ∈ L^2(R), then
(δ - ½Δ)^{-1}u(x) = \int_R K_δ(x - y)\, u(y)\, dy, \quad \text{with} \quad K_δ(x) = \frac{1}{\sqrt{2δ}}\, e^{-\sqrt{2δ}\,|x|}. \qquad (2.6)
Moreover, the map (δ - ½Δ)^{-1} maps H^{-1} continuously onto H^1, and a tempered distribution u belongs to H^{-1} if and only if (δ - ½Δ)^{-1/2}u ∈ L^2.
Remark 2.1 L^1 ⊂ H^{-1} continuously. Moreover, for u ∈ L^1, ‖u‖_{-1,δ} ≤ ‖K_δ‖_∞^{1/2} ‖u‖_{L^1} = (2δ)^{-1/4} ‖u‖_{L^1}.
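One way to see the bound in Remark 2.1 (a short computation, not spelled out in the text) is the following: for u ∈ L^1, using (2.6) and the symmetry of the kernel,

\|u\|_{-1,\delta}^2 = \big\langle (\delta - \tfrac12\Delta)^{-1}u,\, u \big\rangle_{L^2(\mathbb{R})} = \int_{\mathbb{R}}\int_{\mathbb{R}} K_\delta(x-y)\,u(y)\,u(x)\,dy\,dx \le \|K_\delta\|_\infty \,\|u\|_{L^1}^2 = (2\delta)^{-\frac12}\,\|u\|_{L^1}^2,

and taking square roots gives \|u\|_{-1,\delta} \le (2\delta)^{-1/4}\|u\|_{L^1}.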
Let T > 0 be fixed. For functions (t, x) → u(t, x), the notation u ′ (resp. u ′′ ) will denote the first (resp. second) derivative with respect to x.
Let E be a Banach space. One of the most basic notions of this paper is the one of a multivalued function (graph). A multivalued function (graph) β on E will be a subset of E × E. It can be seen, either as a family of couples (e, f ), e, f ∈ E and we will write f ∈ β(e) or as a function β : E → P(E).
We start with the definition in the case E = R.
Definition 2.2 A multivalued function β defined on R with values in subsets of R is said to be monotone if, given x_1, x_2 ∈ R and w_1 ∈ β(x_1), w_2 ∈ β(x_2), we have (x_1 - x_2)(w_1 - w_2) ≥ 0. We say that β is maximal monotone (or a maximal monotone graph) if it is monotone and if for one (hence all) λ > 0, β + λ id is surjective, i.e.
R(β + λ id) := \bigcup_{x∈R} (β(x) + λx) = R.
For a maximal monotone graph β : R → 2 R , we define a function j : R → R by
j(u) = u 0 β • (y)dy, u ∈ R, (2.7)
where β° is the minimal section of β. It fulfills the property that ∂j = β in the sense of convex analysis, see e.g. [START_REF] Barbu | Analysis and control of nonlinear infinite dimensional systems[END_REF]. In other words, β is the subdifferential of j. The function j is convex, continuous and, if 0 ∈ β(0), then j ≥ 0.
We recall that one motivation of this paper is the case where
β(u) = H(u - e c )u
g i ∈ Af i , i = 1, 2, we have f 1 -f 2 ≤ f 1 -f 2 + λ(g 1 -g 2 ) ,
for any λ > 0.
This is equivalent to saying the following: for any λ > 0, (I + λA) -1 is a contraction on Rg(I + λA). We remark that a contraction is necessarily single-valued.
Proposition 2.4 Suppose that E is a Hilbert space equipped with the scalar product ( , ) H . Then A is accretive if and only if A is monotone i.e.
(f_1 - f_2, g_1 - g_2)_H ≥ 0 for any f_1, f_2, g_1, g_2 ∈ E such that g_i ∈ Af_i, i = 1, 2.
In the sequel we will mainly work with E = L^1(R), so that E* = L^∞(R).
The following is taken from [START_REF] Ph | The continuous dependence on ϕ of solutions of u t -∆ϕ(u) = 0[END_REF], Section 1.
Theorem 2.7 Let β : R → R be a monotone (possibly multi-valued) function such that the corresponding graph is maximal monotone. Suppose that
0 ∈ β(0). Let f ∈ E = L 1 (R). 1. There is a unique u ∈ L 1 (R) for which there is w ∈ L 1 loc (R) such that u -∆w = f in D ′ (R), w(x) ∈ β(u(x)), for a.e. x ∈ R, (2.8)
see Proposition 2 of [START_REF] Ph | The continuous dependence on ϕ of solutions of u t -∆ϕ(u) = 0[END_REF].
Then, a (possibly multivalued) operator
A := A β : D(A) ⊂ E → E
is defined with D(A) being the set of u ∈ L 1 (R) for which there is
w ∈ L 1 loc (R) such that w(x) ∈ β(u(x)) for a.e. x ∈ R and ∆w ∈ L 1 (R) and for u ∈ D(A) Au = {- 1 2 ∆w|w as in definition of D(A)}.
This is a consequence of the remarks following Theorem 1 in [START_REF] Ph | The continuous dependence on ϕ of solutions of u t -∆ϕ(u) = 0[END_REF].
In particular, if β is single-valued, then Au = -½Δβ(u). (We will adopt this notation also if β is multi-valued.)
3. The operator A defined in 2. above is m-accretive on E = L^1(R), see Proposition 2 of [START_REF] Ph | The continuous dependence on ϕ of solutions of u t -∆ϕ(u) = 0[END_REF]. Moreover D(A) = E.
4. We set J_λ = (I + λA)^{-1}, which is a single-valued operator. If f ∈ L^∞(R), then ‖J_λ f‖_∞ ≤ ‖f‖_∞, see Proposition 2 (iii) of [START_REF] Ph | The continuous dependence on ϕ of solutions of u t -∆ϕ(u) = 0[END_REF]. In particular, for every positive integer n, ‖J^n_λ f‖_∞ ≤ ‖f‖_∞.
Let us summarize some important results of the theory of non-linear semigroups, see for instance [START_REF] Evans | Nonlinear evolution equations in an arbitrary Banach space[END_REF][START_REF] Barbu | Nonlinear semigroups and differential equations in Banach spaces[END_REF][START_REF] Barbu | Analysis and control of nonlinear infinite dimensional systems[END_REF][START_REF] Ph | A semilinear equation in L 1 (R N )[END_REF] or the more recent monograph [START_REF] Showalter | Monotone operators in Banach space and nonlinear partial differential equations[END_REF],
which we shall use below. Let A : E → E be a (possibly multivalued) accretive operator. We consider the equation
0 ∈ u ′ (t) + A(u(t)), 0 ≤ t ≤ T.
(2.9)
A function u : [0, T ] → E which is absolutely continuous such that for a.e.
t, u(t, •) ∈ D(A) and fulfills (2.9) in the following sense is called strong solution.
There exists η : [0, T ] → E, Bochner integrable, such that η(t) ∈ A(u(t)) for a.e. t ∈ [0, T ] and
u(t) = u 0 - t 0 η(s)ds, 0 < t ≤ T.
A weaker notion for (2.9) is the so-called C 0 -solution, see chapter IV.8 of [START_REF] Showalter | Monotone operators in Banach space and nonlinear partial differential equations[END_REF], or mild solution, see [START_REF] Barbu | Analysis and control of nonlinear infinite dimensional systems[END_REF]. In order to introduce it, one first defines the notion of ε-solution related to (2.9).
An ε-solution is a discretization D = {0 = t_0 < t_1 < ... < t_N = T} and an E-valued step function
u^ε(t) = u_0 for t = t_0, \qquad u^ε(t) = u_j ∈ D(A) for t ∈ ]t_{j-1}, t_j],
for which t_j - t_{j-1} ≤ ε for 1 ≤ j ≤ N, and
0 ∈ \frac{u_j - u_{j-1}}{t_j - t_{j-1}} + A u_j, \qquad 1 ≤ j ≤ N.
We remark that, since A is maximal monotone, u ε is determined by D and u 0 , see Theorem 2.7 3.
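As an illustration of how such an ε-solution can be computed in practice, the following sketch discretizes the recursion above for the concrete operator Au = -½(β(u))″ on a bounded grid with homogeneous Dirichlet conditions, solving each implicit step (one application of the resolvent J_{Δt}) with a generic nonlinear solver. This is only a numerical illustration under simplifying assumptions (smoothed single-valued β, truncated domain); it is not the construction used in the proofs.

```python
import numpy as np
from scipy.optimize import fsolve

def epsilon_solution(beta, u0_vals, dx, dt, n_steps):
    """Implicit scheme u_j - (dt/2) * (beta(u_j))'' = u_{j-1} on a uniform grid.

    beta    : smooth (single-valued) approximation of the nonlinearity.
    u0_vals : initial profile on an equally spaced grid (Dirichlet boundary, value 0).
    Each time step applies the resolvent J_dt = (I + dt * A)^{-1} to the previous iterate.
    """
    n = len(u0_vals)
    # second-difference matrix with homogeneous Dirichlet boundary conditions
    lap = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
           + np.diag(np.ones(n - 1), -1)) / dx**2

    u = u0_vals.copy()
    for _ in range(n_steps):
        prev = u.copy()
        residual = lambda v: v - 0.5 * dt * lap @ beta(v) - prev
        u = fsolve(residual, prev)      # solve the nonlinear elliptic step
    return u

# Example: smoothed version of beta(u) = H(u - e_c) * u (e_c = 0.5), purely illustrative.
e_c, kappa = 0.5, 50.0
beta = lambda u: u / (1.0 + np.exp(-kappa * (u - e_c)))
x = np.linspace(-2, 2, 201)
u0 = np.exp(-4 * x**2)
u_final = epsilon_solution(beta, u0, dx=x[1] - x[0], dt=1e-3, n_steps=50)
```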
Definition 2.8 A C 0 -solution of (2.9) is an u ∈ C([0, T ]; E) such that
for every ε > 0, there is an ε-solution u ε of (2.9) with
u(t) -u ε (t) ≤ ε, 0 ≤ t ≤ T.
Proposition 2.9 Let A be a maximal monotone (multivalued) operator on a Banach space E. We set again J_λ := (I + λA)^{-1}, λ > 0. Suppose u_0 ∈ D(A). Then:
1. There is a unique C^0-solution u : [0, T] → E of (2.9).
2. u(t) = lim_{n→∞} J^n_{t/n} u_0, uniformly in t ∈ [0, T].
Proof.
1) is stated in Corollary IV.8.4. of [START_REF] Showalter | Monotone operators in Banach space and nonlinear partial differential equations[END_REF] and 2) is contained in Theorem IV 8.2 of [START_REF] Showalter | Monotone operators in Banach space and nonlinear partial differential equations[END_REF].
The complications coming from the definition of C^0-solution arise because the dual E* of E = L^1(R) is not uniformly convex.
In general a C 0 -solution is not absolutely continuous and not a.e. differentiable, so it is not a strong solution. For uniformly convex Banach spaces, the situation is much easier.
Indeed, according to Theorem IV 7.1 of [START_REF] Showalter | Monotone operators in Banach space and nonlinear partial differential equations[END_REF], for a given u 0 ∈ D(A), there would exist a (strong) solution u : [0, T ] → E to (2.9). Moreover, Theorem 1.2 of [START_REF] Crandall | On the relation of the operator ∂ ∂s + ∂ ∂τ to evolution governed by accretive operators[END_REF] says the following. Given u 0 ∈ D(A) and given a sequence (u n 0 ) in D(A) converging to u 0 , then the sequence of the corresponding strong solutions (u n ) would converge to the unique C 0 -solution of the same equation.
Elliptic equations with monotone coefficients
Let us fix our assumptions on β which we assume to be in force in this entire section.
Hypothesis 3.1 Let β : R → 2 R be a maximal monotone graph with the property that there exists c > 0 such that
w ∈ β(u) ⇒ |w| ≤ c|u|. (3.1)
We note that (3.1) implies that β(0) = 0, hence j(u) ≥ 0, for any u ∈ R, where j is defined in (2.7). Furthermore, by Hypothesis 3.1,
j(u) ≤ \int_0^{|u|} |β°(y)|\, dy ≤ c|u|^2. \qquad (3.2)
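For the motivating coefficient recalled in Section 2, these objects can be written explicitly (a worked instance, assuming the gap at e_c is filled as described for monotone graphs, and restricting to u ≥ 0): for β(u) = H(u - e_c)u with e_c > 0, the minimal section is β°(u) = 0 for 0 ≤ u ≤ e_c and β°(u) = u for u > e_c, so that

j(u) = \int_0^u β°(y)\,dy = \begin{cases} 0, & 0 ≤ u ≤ e_c, \\ \tfrac{1}{2}(u^2 - e_c^2), & u > e_c. \end{cases}

In particular Hypothesis 3.1 holds with c = 1 (every w ∈ β(u) satisfies |w| ≤ |u|), and (3.2) is immediate since j(u) ≤ u^2/2.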
We recall from [START_REF] Ph | The continuous dependence on ϕ of solutions of u t -∆ϕ(u) = 0[END_REF] that the first ingredient to study well-posedness of equation (1.1) is the following elliptic equation
u -λ∆β(u) ∋ f (3.3)
where f ∈ L 1 (R) and u is the unknown function in L 1 (R).
Definition 3.2 Let f ∈ L 1 (R). Then u ∈ L 1 (R) is called a solution of (3.3) if there is w ∈ L 1 loc with w ∈ β(u) a.e. and u -λ∆w = f (3.4)
in the sense of distributions.
According to Theorem 4.1 of [START_REF] Ph | A semilinear equation in L 1 (R N )[END_REF] and Theorem 1, Ch. 1, of [START_REF] Ph | The continuous dependence on ϕ of solutions of u t -∆ϕ(u) = 0[END_REF], equation (3.3) admits a unique solution. Moreover, w is also uniquely determined by u. Sometimes we will also call the couple (u, w) the solution to (3.4).
We recall some basic properties of the couple (u, w).
Lemma 3.3 Let (u, w) be the unique solution of (3.3). Let J λ β : L 1 (R) → L 1 (R) be the map which associates the solution u of (3.3) to f ∈ L 1 (R).
We have the following:
1. J λ β 0 = 0.
2. J^β_λ is a contraction in the sense that ‖J^β_λ(f_1) - J^β_λ(f_2)‖_{L^1} ≤ ‖f_1 - f_2‖_{L^1} for every f_1, f_2 ∈ L^1.
3. If f ∈ L^1 ∩ L^∞, then ‖u‖_∞ ≤ ‖f‖_∞.
4. If f ∈ L^1 ∩ L^2, then u ∈ L^2 with \int_R u^2(x)dx ≤ \int_R f^2(x)dx, and \int_R j(u)(x)dx ≤ \int_R j(f)(x)dx ≤ const ‖f‖^2_{L^2}.
5. Let f ∈ L^1. Then w, w′ ∈ W^{1,1} ⊂ C_∞(R). Hence, in particular, w ∈ W^{1,p} for any p ∈ [1, ∞].
Proof.
1. is obvious and comes from uniqueness of (3.3).
2. See Proposition 2.i) of [START_REF] Ph | The continuous dependence on ϕ of solutions of u t -∆ϕ(u) = 0[END_REF].
3. See Proposition 2.iii) of [START_REF] Ph | The continuous dependence on ϕ of solutions of u t -∆ϕ(u) = 0[END_REF].
4. This follows from [START_REF] Ph | A semilinear equation in L 1 (R N )[END_REF], Point III, Ch. 1 and (3.2).
5. We define g := \frac{1}{2λ}(w + f - u). Since f ∈ L^1, also u ∈ L^1, hence w ∈ L^1 by (3.1). Altogether it follows that g ∈ L^1. Now (3.4) and (2.6) imply that w = K_δ ⋆ g with δ = \frac{1}{2λ}, and hence
w′ = K′_δ ⋆ g = \int_R \mathrm{sign}(y - x)\, e^{-λ^{-1/2}|x - y|}\, g(y)\, dy,
where
sign x = -1 : x < 0 0 : x = 0 1 : x > 0.
This implies w, w ′ ∈ L 1 L ∞ . By (3.4) we know that also
w ′′ ∈ L 1 , hence w, w ′ ∈ W 1,1 (⊂ C ∞ ).
u + λδβ(u) -λ∆(β(u)) ∋ f. (3.5)
In fact, [START_REF] Ph | A semilinear equation in L 1 (R N )[END_REF] treats the equation ∆v + γ(v) ∋ f , with γ : R → 2 R a maximal monotone graph. We reduce equation (3.3) and (3.5) to this equation, by
setting v = λβ(u), γ(v) = -β -1 ( v λ )
, where β -1 is the inverse graph of β, and setting v = λβ(u), γ(v) = -β -1 ( v λ )δv, respectively. In both cases γ is a maximal monotone graph.
Since w ′′ ∈ L 1 , (3.4) can be written as
R u(x)ϕ(x)dx -λ R w ′′ (x)ϕ(x)dx = R f (x)ϕ(x)dx ∀ϕ ∈ L ∞ (R). (3.6)
Since w ∈ L ∞ , we may replace ϕ by w in (3.6). In addition, w ′ ∈ L 2 , so by a simple approximation argument, it follows that
R (u(x) -f (x))w(x)dx + λ R w ′ 2 (x)dx = 0. (3.7)
Now, we are ready to prove the following.
Lemma 3.5 Let f ∈ L 1 L 2 and (u, w) be a solution to (3.3). Then R (j(u) -j(f ))(x)dx ≤ -λ R w ′ 2 (x)dx.
Proof. By definition of the subdifferential and since w(x) ∈ β(u(x)) for a.e. x ∈ R, we have
(j(u) -j(f ))(x) ≤ w(x)(u -f )(x) a.e. x ∈ R. (3.8)
Again (3.7) implies the result after integrating (3.8).
We go on analysing the local bounded variation character of the solution u of (3.3).
If f : R → R, for h ∈ R, we define
f h (x) = f (x + h) -f (x). (3.9)
Writing w h ′′ := (w h ) ′′ we observe that
u h -λw h ′′ = f h , (3.10)
where w(x) ∈ β(u(x)), and w(x
+ h) ∈ β(u(x + h)) a.e.
Let ζ ≥ 0 be a smooth function with compact support.
Lemma 3.6 Assume β is strictly monotone, i.e.
β(x) ∩ β(y) = ∅ if x ≠ y. \qquad (3.11)
Let u be a solution of (3.3). Then, for each
h ∈ R R ζ(x)|u h (x)|dx ≤ R ζ(x)f h (x)sign(w h (x))dx + c ζ ′′′ ∞ λ|h| u L 1 . (3.12)
where c is the constant from (3.1).
Proof.
(3.10) gives R u h (x)ϕ(x)dx = R (λw h ′′ (x) + f h (x))ϕ(x)dx, ∀ϕ ∈ L ∞ (R). (3.13) We set ϕ(x) = sign(u h (x))ζ(x
= 0} = {λw h ′′ + f h = 0}. Hence (3.13) implies R ζ(x)|u h (x)|dx = {u h =0} (λw h ′′ (x) + f h (x))sign(w h (x))ζ(x)dx = λ R w h ′′ (x)sign(w h )(x)ζ(x)dx + R f h (x)sign(w h (x))ζ(x)dx. It remains to control λ R w h ′′ (x)sign(w h (x))ζ(x)dx. (3.14)
Let ̺ = ̺ L : R → R, be an odd smooth function such that ̺ ≤ 1 and
̺(x) = 1 on [ 1 L , ∞[. (3.14) is the limit when L goes to infinity of λ R w h ′′ (x)̺(w h (x))ζ(x)dx = -λ R w h ′ (x) 2 ̺ ′ (w h (x))ζ(x)dx - λ R w h ′ (x)̺(w h (x))ζ ′ (x)dx.
Since the first integral of the right-hand side of the previous expression is positive, (3.14) is upper bounded by the limsup when L goes to infinity of
-λ R w h ′ (x)̺(w h (x))ζ ′ (x)dx = -λ R (̺(w h (x))) ′ ζ ′ (x)dx,
where ̺(x) =
x 0 ̺(y)dy. But the previous expression is equal to
λ R ̺(w h (x))ζ ′′ (x)dx = λ R ̺(w(x))(ζ ′′ (x -h) -ζ ′′ (x))dx.
Since ̺(x) ≤ |x|, pointwise and w ∈ β(u) a.e., u ∈ L 1 , the previous integral is bounded by
λ ζ ′′′ ∞ |h| R |w(x)|dx ≤ cλ ζ ′′′ ∞ |h| R |u(x)|dx, (3.15)
with c coming from (3.1).
Remark 3.7 Using similar arguments as in Section 4 below, we can show that u is locally of bounded variation whenever f is. We have not emphasized this result since we will not directly use it.
4 Some properties of the porous media equation
Let β : R → 2 R . Throughout this section, we assume that β satisfies Hypothesis 3.1. Our first aim is to prove Theorem 4.15 below, for which we need some preparations. Let
u 0 ∈ (L 1 ∩ L ∞ )(R).
We recall some results stated in [START_REF] Ph | Probabilistic representation for solutions of an irregular porous media equation[END_REF] as Propositions 2.11 and 3.2.
Proposition 4.1 1. Let u 0 ∈ (L 1 L ∞ )(R).
Then, there is a unique solution to (1.1) in the sense of distributions. This means that there exists a unique couple (u,
η u ) ∈ L 1 ∩ L ∞ ([0, T ] × R) 2 with R u(t, x)ϕ(x)dx = R u 0 (x)ϕ(x)dx + 1 2 t 0 R η u (s, x)ϕ ′′ (x)dx ds, (4.1
)
where ϕ ∈ C^∞_0(R). Furthermore, t → u(t, ·) is in C([0, T], L^1) and η_u(t, x) ∈ β(u(t, x)) for dt ⊗ dx-a.e. (t, x) ∈ [0, T] × R.
We define the multivalued map
A = A_β : E → E, E = L^1(R), where D(A) is the set of all u ∈ L^1 for which there is w ∈ L^1_{loc}(R) such that w(x) ∈ β(u(x)) a.e. x ∈ R and Δw ∈ L^1(R). For u ∈ D(A) we set Au = {-½Δw | w as in the definition of D(A)}.
Then A is m-accretive on L 1 (R). Therefore there is a unique C 0solution of the evolution problem
0 ∈ u ′ (t) + Au(t), u(0) = u 0 .
3. The C 0 -solution under 2. coincides with the solution in the sense of distributions under 1. [START_REF] Ph | The continuous dependence on ϕ of solutions of u t -∆ϕ(u) = 0[END_REF].
4. ‖u‖_∞ ≤ ‖u_0‖_∞.
5. Let β_ε(u) = β(u) + εu, ε > 0, and consider the solution u^{(ε)} to ∂_t u^{(ε)} = ½(β_ε(u^{(ε)}))′′, u^{(ε)}(0, ·) = u_0. Then u^{(ε)} → u in C([0, T], L^1(R)) when ε → 0.
Corollary 4.2 We have u(t, •) ≥ 0 a.e. for any t ∈ [0, T ]. Moreover R u(t, x)dx = R u 0 (x)dx = 1, for any t ≥ 0.
Proof. In fact the functions u^{(ε)} introduced in point 5. of Proposition 4.1 have the desired property. Taking the limit as ε goes to zero, the assertion follows.
Remark 4.3 Uniqueness to (4.1) holds even only with the assumptions β monotone, continuous at zero and β(0) = 0, see [START_REF] Brezis | Uniqueness of solutions of the initial-value problem for u t -∆ϕ(u) = 0[END_REF].
Below we fix an initial condition u_0 ∈ L^1 ∩ L^∞. Lemma 4.4 Let ε > 0. We consider an ε-solution given by
D = {0 = t 0 < t 1 < . . . < t N = T } and u ε (t) = u 0 , t = 0 u j , t ∈]t j-1 , t j ] , for which for 1 ≤ j ≤ N u j - t j -t j-1 2
w ′′ j = u j-1 , w j ∈ β(u j ) a.e.. We set
η ε (t, •) = β(u 0 ) : t = 0, w j : t ∈]t j-1 , t j ].
When ε → 0, η ε converges weakly in L 1 ([0, T ]×R) to η u , where (u, η u ) solves equation (1.1). Furthermore, for p = 1 or p = ∞,
sup t≤T u ε (t, •) L p ≤ u 0 L p and (4.2) sup t≤T η ε (t, •) L p ≤ c u 0 L p ,
where c is as in Hypothesis 3.1. Hence,
sup t≤T u(t, •) L p ≤ u 0 L p . (4.3)
and
‖η_u‖_{L^r([0,T]×R)} ≤ c\, T^{1/r}\, ‖u_0‖_{L^∞}^{(r-1)/r}\, ‖u_0‖_{L^1}^{1/r} \qquad (4.4)
follows by an elementary interpolation argument. Indeed, for r ∈ [1, ∞[, r′ := r/(r-1), ε_n → 0 and ϕ ∈ (L^{r′} ∩ L^∞)([0, T] × R) we have
\int_0^T \int_R ϕ(t, x)\, η_u(t, x)\, dx\, dt = \lim_{n→∞} \int_0^T \int_R ϕ(t, x)\, η^{ε_n}(t, x)\, dx\, dt
≤ ‖ϕ‖_{L^{r′}([0,T]×R)} \liminf_{n→∞} \Big( \int_0^T \int_R |η^{ε_n}(t, x)|^r\, dx\, dt \Big)^{1/r}
≤ ‖ϕ‖_{L^{r′}([0,T]×R)} \liminf_{n→∞} \sup_{t≤T} ‖η^{ε_n}(t, ·)‖_{L^∞}^{(r-1)/r}\, T^{1/r}\, \sup_{t≤T} ‖η^{ε_n}(t, ·)‖_{L^1}^{1/r}
≤ ‖ϕ‖_{L^{r′}([0,T]×R)}\, T^{1/r}\, c\, ‖u_0‖_{L^∞}^{(r-1)/r}\, ‖u_0‖_{L^1}^{1/r},
where we used the second part of (4.2) in the last step.
If not mentioned otherwise, in the sequel for N > 0 and ε = T N , we will consider the subdivision
D = {t i = εi, 0 ≤ i ≤ N }. (4.5)
We now discuss some properties of the solution exploiting the fact that the initial condition is square integrable.
Proposition 4.5 Let u 0 ∈ (L 1 L ∞ )(R). Then the solution (u, η u ) ∈ (L 1 ∩ L ∞ )([0, T ] × R) 2 of (1.1) has the following properties. a) η u (t, •) is absolutely continuous for a.e. t ∈ [0, T ] and η u ∈ L 2 ([0, T ]; H 1 (R)). b) R j(u(t, x))dx+ 1 2 t r R (η ′ u ) 2 (s, x)dxds ≤ R j(u(r, x))dx ∀ 0 ≤ r ≤ t ≤ T.
In particular t → R j(u(t, x))dx is decreasing and
[0,T ) R (η ′ u ) 2 (s, x)dxds ≤ 2 R j(u 0 (x))dx := C ≤ const. u 0 2 L 2 < ∞ . (4.6) c) t → j(u(t, x))dx is continuous on [0, T ].
Proof. We consider the scheme considered in Lemma 4.4 corresponding to ε = T N . By Lemma 3.3 5., we have w j ∈ H 1 (R), 1 ≤ j ≤ N , and by Lemma 3.5
R j(u i (x))dx + ε 2 R w ′ i (x) 2 dx ≤ R j(u i-1 (x))dx ∀i = 1, . . . , N. (4.7)
Hence for any 0
≤ l ≤ m ≤ N R j(u m (x))dx + ε 2 m i=l+1 R (w ′ i )(x) 2 dx ≤ R j(u l (x))dx. (4.8)
Using the notation introduced in Lemma 4.4, for all 0 ≤ r ≤ t ≤ T , we obtain
R j(u ε (t, x))dx + 1 2 t r R (η ε′ ) 2 (s, x)dsdx ≤ R j(u ε (r, x))dx. (4.9)
On the other hand, Lemma 4.4 and Lemma 3.3 4. imply that
‖u^ε(t)‖_{L^2} ≤ ‖u_0‖_{L^2} \qquad ∀t ∈ [0, T]. \qquad (4.10)
Therefore,
\int_0^T ds \int_R u^ε(s, x)^2\, dx ≤ T ‖u_0‖^2_{L^2}. \qquad (4.11)
Since |β(u)| ≤ c|u|, (4.11) implies that
\sup_{ε>0} \int_0^T ds \int_R η^ε(s, x)^2\, dx < ∞. \qquad (4.12)
Now (4.12) and (4.9) say that η^ε, ε > 0, are bounded in L^2([0, T]; H^1(R)). There is then a subsequence (ε_n) with η^{ε_n} converging weakly in L^2([0, T]; H^1(R)) and therefore also weakly in L^2([0, T] × R) to some ξ. According to Lemma 4.4 and the uniqueness of the limit, it follows that η_u = ξ and so
η u ∈ L 2 ([0, T ]; H 1 (R)),
which implies a). We recall that
T 0 ds R (η ′ u ) 2 (s, x)dx ≤ lim inf ε→0 T 0 ds R (η ε′ ) 2 (s, x)dx. (4.13)
In fact the sequence (η ε′ ) is weakly relatively compact in L 2 ([0, T ] × R). It follows by (3.2) and (4.10) that j(u ε (t)), ε > 0, are uniformly integrable for each t ∈ [0, T ]. Since j is continuous, and
u ε (t, •) → u(t, •) in L 1 (R) for each t ∈ [0, T ], it follows that j(u ε (t, •)) → j(u(t, •)) as ε → 0 in L 1 (R) for each t ∈ [0, T ]. (4.
12) and (4.9) imply that
R j(u(t, x))dx + 1 2 t r R (η ′ u ) 2 (s, x)dxds ≤ R j(u(r, x))dx, (4.14)
for every 0 ≤ r ≤ t ≤ T , which is inequality b).
To prove c), by (4.3) and Lebesgue dominated convergence theorem, us-
ing again that u 0 ∈ (L 1 L ∞ )(R) ⊂ L 2 , we deduce that t → u(t, •) is in C([0, T ], L 2 ) since it is in C([0, T ], L 1 ) by Proposition 4.1, 1. Now let t n → t in [0, T ] as n → ∞, then u(t n , •) → u(t, •) in L 2 as n → ∞, in particu- lar {u 2 (t n , •)|n ∈ N} is equiintegrable, hence by (3.2) {j(u(t n , •))|n ∈ N} is equiintegrable. Since j is continuous, assertion c) follows. Corollary 4.6 R u 2 (t, x)dx ≤ u 0 2 L 2 , ∀t ∈ [0, T ]. (4.15)
Proof. The result follows by Fatou's lemma, from (4.10).
Inequality (4.14) will be shown in Theorem 4.15 below to be indeed an equality.
Remark 4.7 According to Proposition 4.1 1., for every ϕ ∈ C ∞ 0 (R), we have
R u(t, x)ϕ(x)dx - R u 0 (x)ϕ(x)dx = 1 2 t 0 R η u (s, x)ϕ ′′ (x)dxds. (4.16) Since s → η u (s, •) belongs to L 2 ([0, T ]; H 1 (R)), by Proposition 4.5 a), we have s → η ′′ u (s, •) ∈ L 2 ([0, T ]; H -1 (R))
. This, together with (4.16), implies that t → u(t, ·) is absolutely continuous from [0, T] to H^{-1}(R). So, in H^{-1}(R) we have
\frac{d}{dt} u(t, ·) = ½\, η″_u(t, ·), \qquad t ∈ [0, T] \text{ a.e.} \qquad (4.17)
The following proposition will be important to prove that the real function t → \int_R j(u(t, x))dx is absolutely continuous.
Proof. We equip H = H -1 (R) with the inner product •, • -1,δ where δ ∈]0, 1] and
u, v -1,δ = (δ - 1 2 ∆) -1 2 u, (δ - 1 2 ∆) -1 2 v L 2 (R) ,
and corresponding norm • -1,δ . We define Γ :
H → [0, ∞] by Γ(u) = R j(u(x))dx, if u ∈ L 1 loc +∞, otherwise.
and D(Γ) = {u ∈ H| Γ(u) < ∞}. We also consider
D(A δ ) = {u ∈ D(Γ)| ∃ η u ∈ H 1 , η u ∈ β(u) a.e.}.
For u ∈ D(A δ ), we set A δ u = {(δ -1 2 ∆)η u |η u as in the definition of D(A δ )}. Obviously, Γ is convex since j is convex, and Γ is proper since D(Γ) is nonempty and even dense in H -1 , because L 2 (R) ⊂ D(Γ). The rest of the proof will be done in a series of lemmas. Lemma 4.9 The function Γ is lower semicontinuous.
Proof. First of all we observe that Γ is lower semicontinuous on L^1_{loc}(R). In fact, defining Γ_N, N ∈ N, analogously to Γ, with j ∧ N replacing j, by the continuity of j and Lebesgue's dominated convergence theorem, Γ_N is continuous in L^1_{loc}. Since Γ = sup_{N∈N} Γ_N, it follows that Γ is lower semicontinuous on L^1_{loc}. Let us suppose now that u_n → u in H^{-1}(R). We have to prove (4.20). Let us consider a subsequence such that \int_R j(u_n(x))dx converges to the right-hand side of (4.20), denoted by C; we may suppose C < ∞. Then the sequence (u_n) is uniformly integrable on [-K, K] for each K > 0. Hence, by the Dunford–Pettis theorem, the sequence (u_n) is weakly relatively compact in L^1_{loc}. Therefore, there is a subsequence (n_l) such that (u_{n_l}) converges weakly in L^1_{loc}, necessarily to u, since u_n → u strongly, hence also weakly in H^{-1}(R). Since Γ is convex and lower semicontinuous on L^1_{loc}, it is also weakly lower semicontinuous on L^1_{loc}, see [START_REF] Choquet | Representation theory[END_REF] p. 62, 22.1. This implies that
j(u(x))dx ≤ lim inf l→∞ j(u n k (x))dx = C.
Finally, (4.20)and thus the assertion of Lemma 4.9 is proved.
An important intermediate step is the following.
Lemma 4.10 D(Γ) = D(A δ ) and ∂ H Γ(u) = A δ u, ∀u ∈ D(Γ). In particular D(A δ ) is dense in H -1 .
We observe, that ∂ H depends in fact on δ since the inner product on H -1 depends on δ.
Proof. Let u ∈ D(Γ), h ∈ L 2 (⊂ D(Γ)). For z ∈ ∂ H Γ(u) we have Γ(u + h) -Γ(u) ≥ z, h -1,δ = R v(x)h(x)dx, (4.21)
where v = (δ - ½Δ)^{-1} z. Clearly v ∈ H^1. By (4.21) it follows that v ∈ ∂_{L^2} Γ(u), where Γ here is restricted to L^2(R). By Example 2B of Chapter IV.2 in [START_REF] Showalter | Monotone operators in Banach space and nonlinear partial differential equations[END_REF], this yields that v ∈ β(u) a.e. Consequently, D(Γ) = D(A_δ) and
∂_H Γ(u) ⊂ A_δ u, \qquad ∀u ∈ D(Γ).
It remains to prove that
A δ (u) ⊂ ∂ H Γ(u), ∀u ∈ D(Γ). Let u ∈ D(Γ), h ∈ L 2 , η u ∈ β(u) a.e. with η u ∈ H 1 . Since j(u + h) -j(u) ≥ η u h a.e., it follows Γ(u + h) -Γ(u) ≥ R η u (x)h(x)dx = (δ - 1 2 ∆)η u , h -1,δ . (4.22)
It remains to show that (4.22) holds for any h ∈ H -1 such that u+h ∈ D(Γ).
Then we have u + h, u ∈ L 1 loc and j(u), j(u + h) ∈ L 1 . We first prove that (4.22) holds if h ∈ L 1 (⊂ H -1 ). We truncate h setting
h n = 1 {|h|≤n} h, n ∈ N, so that h n ∈ L 2 (R). Now j(u + h n )(x) = j(u(x) + h(x)) if |h(x)| ≤ n, j(u(x)) if |h(x)| > n,
and it is dominated by
j(u + h) + j(u) ∈ L 1 .
We have
R j(u + h n )(x)dx ≥ R j(u(x))dx + (δ - 1 2 ∆)η u , h n -1,δ . ( 4
.23)
Since h n → h in L 1 (and so in H -1 ), using Lebesgue's dominated convergence theorem, (4.22) follows for h ∈ L 1 .
Let M > 0 and consider a smooth function χ : R → [0, 1] such that χ(r) = 1 for 0 ≤ |r| ≤ 1, χ(r) = 0 for 2 ≤ |r| < ∞. We define
χ M (x) = χ x M , x ∈ R.
Then
χ M (x) = 1 : |x| ≤ M 0 : |x| ≥ 2M. Since hχ M ∈ L 1 , we have R (j(u + hχ M )(x) -j(u)(x))dx ≥ (δ - 1 2 ∆)η u , hχ M -1,δ . (4.24)
Since j is convex and non-negative, we have
j(u + hχ M ) = j((1 -χ M )u + χ M (u + h)) ≤ (1 -χ M )j(u) + χ M j(u + h) ≤ j(u) + j(u + h).
Hence Lebesgue's dominated convergence theorem allows us to take the limit as M → ∞ in the left-hand side of (4.24), obtaining \int_R (j(u + h)(x) - j(u)(x))dx.
The right-hand side of (4.24) converges to (δ -1 2 ∆)η u , h H -1 because of the next lemma. Hence, the assertion of Lemma 4.10 follows.
Lemma 4.11 Define h
M := χ M h in H -1 , M > 0. Then lim M →∞ h M = h weakly in H -1 .
Proof (of Lemma 4.11). Let us first show that the sequence (h
M ) is bounded in H -1 . In fact, given ϕ ∈ H 1 , R h M (x)ϕ(x)dx = R h(x)χ M (x)ϕ(x)dx ≤ h H -1 δ 1 2 χ M ϕ L 2 + χ ′ M ϕ + ϕ ′ χ M L 2 ≤ h H -1 δ 1 2 ϕ L 2 + ϕ ′ L 2 + χ ′ ∞ M ϕ L 2 ≤ const h H -1 ϕ H 1
for some positive constant independent of M .
Hence there is a subsequence weakly converging to some
k ∈ H -1 . Since R h M (x)ϕ(x)dx → M →∞ R h(x)ϕ(x)dx
for any ϕ ∈ C ∞ 0 (R), k must be equal to h. Now the assertion of Lemma 4.11 follows.
By Corollary IV 1.2 in [START_REF] Showalter | Monotone operators in Banach space and nonlinear partial differential equations[END_REF], we know that A δ is maximal monotone on H -1 and therefore m-accretive with domain D(A δ ) = D(Γ).
We go on with the proof of Proposition 4.8. Since our initial condition u 0 belongs to L 1 ∩ L ∞ and L 2 ⊂ D(Γ), clearly u 0 ∈ D(A δ ). According to Komura-Kato theorem, see [START_REF] Showalter | Monotone operators in Banach space and nonlinear partial differential equations[END_REF]Proposition IV.3.1], there exists a (strong)
solution u = u δ : [0, T ] → E = H -1 of du dt + A δ u ∋ 0, t ∈ [0, T ] u(0, •) = u 0 , (4.25) which is Lipschitz. In particular, for almost all t ∈ [0, T ], u δ (t, •) ∈ D(A δ ) and there is ξ δ (t, •) ∈ H 1 such that ξ δ (t, •) ∈ β(u δ (t, •)) a.e., t → (δξ δ - 1 2 ∆ξ δ )(t, •) ∈ H -1 is measurable and u δ (t, •) = u 0 + t 0 δξ δ - 1 2 ∆ξ δ (s, •)ds (4.26) in H -1 .
Furthermore, for the right-derivative D + u δ (t), we have
D + u δ (t, •)) + (A δ ) • u δ (t, •) = 0 in H -1 , ∀t ∈ [0, T ], (4.27)
where (A δ ) • denotes the minimal section of A δ and the map t
→ (A δ ) • u δ (t, •) -1,δ
is decreasing. On the other hand (4.26) implies that
du δ dt (t, •) + δξ δ (t, •) - 1 2 (ξ δ ) ′′ (t, •) = 0 for a.e. t ∈ [0, T ]. (4.28)
Consequently, for almost all t ∈ [0, T ]
δξ δ (t, •) - 1 2 (ξ δ ) ′′ (t, •) -1,δ = (A δ ) • u δ (t, •) -1,δ ≤ (A δ ) • u 0 -1,δ , (4.29)
i.e. setting ξ 0 = (δ -1 2 ∆) -1 (A δ ) • u 0 , we observe that it belongs to H 1 and that
H -1 (δ - 1 2 ∆)ξ δ (t, •), ξ δ (t, •) H 1 ≤ H -1 (δ - 1 2 ∆)ξ 0 , ξ 0 H 1 , for a.e. t ∈ [0, T ].
Consequently, for a.e.
t ∈ [0, T ], R (δξ δ (t, x) 2 + 1 2 ξ ′ δ (t, x) 2 )dx ≤ δ R ξ 2 0 (x)dx + R ξ ′ 0 2 (x)dx (4.30) ≤ ξ 0 H 1 =: C since δ ≤ 1.
We now consider equation (4.25) from an L 1 perspective, similarly as for equation (1.1), see Proposition 4.1 2. Since our initial condition u 0 belongs to (L 1 ∩ L ∞ )(R), equation (4.25) can also be considered as an evolution problem on the Banach space E = L 1 (R). More precisely define
D( Ãδ ) := {u ∈ L 1 (R)|∃w ∈ L 1 loc : w ∈ β(u)
a.e. and (δ -
1 2 ∆)w ∈ L 1 (R)} and for u ∈ D( Ãδ ), Ãδ u := {(δ - 1 2 ∆)w|w as in D( Ãδ )}.
Note that for w as in the definition of D (A δ ), we have (δ
-1 2 ∆)w ∈ H -1 , since L 1 (R) ⊂ H -1 . Therefore, w ∈ H 1 , hence D( Ãδ ) ⊂ D(A δ ) and Ãδ = A δ on D( Ãδ ).
(4.31) Furthermore, as indicated in Section 3, it is possible to show that Ãδ is an m-accretive operator on L 1 .
For λ > 0, the following four points are then a consequence of Remark 3.4 and Lemma 3.3.
1. For each f ∈ L 1 (R) there is u ∈ L 1 , w ∈ L 1 with w ∈ β(u) a.e. and u + λ(δw - 1 2 λw ′′ ) = f. (4.32)
2. The map
f → u := (I + λ Ãδ ) -1 (f ) is a contraction on L 1 . (4.33) 3. D( Ãδ ) = L 1 . 4. We recall that whenever f ∈ L ∞ , then u ∈ L ∞ and u ∞ ≤ f ∞ . (4.34)
Therefore, there is a C 0 -solution ũ : [0, T ] × R → R of (4.25). Since by (4.31), every ε-solution of (4.25) in L 1 (R) is also an ε-solution of (4.25) in
H -1 and L 1 ⊂ H -1 continuously, ũ is also a C 0 -solution of (4.25) in H -1 .
Since, by Proposition IV 8.2 and 8.7 of [START_REF] Showalter | Monotone operators in Banach space and nonlinear partial differential equations[END_REF], the solution above is the unique C 0 -solution of (4.25) in H -1 , we have proved the first part of the following lemma.
Lemma 4.12 The solution ũ coincides with the H -1 -valued solution u δ .
Moreover, for p = 1 or p = ∞ and c as in Hypothesis 3.1
sup t≤T u δ (t, •) L p ≤ u 0 L p and esssup t≤T ξ δ (t, •) L p ≤ c u 0 L p (4.35)
Proof.
It remains to show (4.35). As in the proof of (4.2) by (4.33), (4.34) and induction, we easily obtain that for any ε-solution in L 1 and p = 1 or p = ∞,
sup t≤T u ε (t, •) L p ≤ u 0 L p .
The conclusion follows because for every t ∈ [0, T ], there is a sequence Proof. It will be enough to prove that for δ small enough, we have
(ε n ) such that u εn (t, •) → ũ(t, •) = u δ (t,
R |u δ (t, x) -u(t, x)|dx ≤ cT u 0 L 1 δ. (4.36)
Using point 5. of Proposition 4.1 in a slightly modified form, and approximating β by β ε (u) = β(u) + εu, it is enough to suppose that β is strictly monotone, i.e. (3.11) holds. In the lines below the parameter ε will play however a different role.
We need to go back to the L 1 -ε-solutions related to u δ and u.
For ε > 0 we consider a subdivision 0 = t ε 0 < . . . < t ε j < . . . < t ε N = T such that t ε jt ε j-1 < ε, j = 1, . . . , N . Similarly as in Lemma 4.4
u ε δ (t j , •) = u ε δ (t j-1 , •) + (t j -t j-1 ) (η ε δ ) ′′ 2 (t j , •) -(t j -t j-1 )δη ε δ (t j , •) (4.37)
and
u ε (t j , •) = u ε (t j-1 , •) + (t j -t j-1 ) 1 2 (η ε ) ′′ (t j , •) (4.38) with η ε δ ∈ β(u ε δ ), η ε ∈ β(u ε ) a.e.
. Taking the difference of the previous two equations we obtain
u ε δ (t j , •) -u ε (t j , •) = u ε δ (t j-1 , •) -u ε (t j-1 , •) (4.39) + t j -t j-1 2 (η ε δ -η ε ) ′′ (t j , •) -δ(t j -t j-1 )η ε δ (t j , •).
Let Ψ_κ : R → [-1, 1] be an odd smooth increasing function such that Ψ_κ(x) → sign x as κ → 0 pointwise. We integrate (4.39) against Ψ_κ(η^ε_δ(t_j, ·) - η^ε(t_j, ·)) and we get
R (u ε δ (t j , x) -u ε (t j , x))Ψ κ (η ε δ (t j , x) -η ε (t j , x))dx = R (u ε δ (t j-1 , x) -u ε (t j-1 , x))Ψ κ (η ε δ (t j , x) -η ε (t j , x))dx - (t j -t j-1 ) 2 R (η ε δ -η ε ) ′ (t j , x) 2 Ψ ′ κ (η ε δ (t j , x) -η ε (t j , x))dx -δ(t j -t j-1 ) R (η ε δ (t j , x)Ψ κ (η ε δ (t j , x) -η ε (t j , x))dx.
Using the fact that
Ψ ′ κ ≥ 0, |Ψ κ | ≤ 1, that, by strict monotonicity of β sign(η ε δ (t j , •) -η ε (t j , •)) = sign(u ε δ (t j , •) -u ε (t j , •)),
a.e. on {u ε δ (t j , •) = u ε (t j , •)}, and letting κ → 0, by (4.2), we obtain
R |u ε δ (t j , x) -u ε (t j , x)|dx ≤ R |u ε δ (t j-1 , x) -u ε (t j-1 , x)|dx + cδ(t j -t j-1 ) u 0 L 1 . (4.40)
Since R |u ε δ (0, x)u ε (0, x)|dx = 0, an induction argument implies that
R |u ε δ (t j , x) -u ε (t j , x)|dx ≤ cT u 0 L 1 • δ for every j ∈ {0, . . . , N }. Consequently, for any t ∈ [0, T ] R |u ε δ (t, x) -u ε (t, x)|dx ≤ cT u 0 L 1 δ.
Letting ε → 0, (4.36) follows and Lemma 4.13 is proved.
By (4.26), for every α ∈ C ∞ 0 (R) and all t ∈ [0, T ], R u δ (t, x)α(x)dx = R u 0 (x)α(x)dx -δ t 0 ds R ξ δ (s, x)α(x)dx + 1 2 t 0 ds R dxξ δ (s, x)α ′′ (x),
ξ δ ∈ β(u δ ) a.e. Letting δ → 0, by (4.35) and Lemma 4.13, we obtain that
R u(t, x)α(x)dx = R u 0 (x)α(x)dx + 1 2 lim δ→0 t 0 ds R ξ δ (s, x)α ′′ (x)dx. (4.41) By (4.35) it follows that for each K > 0, u δ → u in L 2 ([0, T ] × [-K, K]) and that (ξ δ ), is bounded in L 2 ([0, T ] × [-K, K]
). Since, by [START_REF] Showalter | Monotone operators in Banach space and nonlinear partial differential equations[END_REF] Example IV.2C,
the map u → β(u) is m-accretive on L 2 ([0, T ] × [-K, K]
), it is weaklystrongly closed, see [START_REF] Barbu | Nonlinear semigroups and differential equations in Banach spaces[END_REF], p.37 Proposition 1.1 (i) and (ii). So, there is a
sequence (δ n ) such that ξ δn → ξ weakly in L 2 ([0, T ] × [-K, K]) for some ξ ∈ β(u) a.e. Hence, (4.41) implies R u(t, x)α(x)dx = R u 0 (x)α(x)dx + 1 2 t 0 ds R ξ(s, x)α ′′ (x)dx. (4.42)
By the uniqueness part of Proposition 4.1 1., we conclude that ξ ≡ η u .
By Proposition 4.5, we already knew that η u (t, •) ∈ H 1 (R) for almost any t.
By (4.30) for a.e. fixed t, there is a sequence (δ n ) such that (ξ δn )(t, •) weakly converges to some ξ(t,
•) in H 1 (R) hence in L 2 (R). Consequently ξ(t, •) = η u (t,
|Γ(u(t, •)) -Γ(u(s, •))| ≤ max r∈{t,s} | < (δ - 1 2 ∆)η u (r, •), u(t, •) -u(s, •) > -1,δ | ≤ max r∈{t,s} (δ - 1 2 ∆)η u (r, •) -1,δ u(t, •) -u(s, •) -1,δ ≤ esssup r∈[0,T ] δ R η u (r, x) 2 dx + 1 2 R η ′ u (r, x) 2 dx u(t, •) -u(s, •) -1,
|Γ(u(t, •)) -Γ(u(s, •))| ≤ const u(t, •) -u(s, •) -1,δ , ∀t, s ∈ [0, T ],
and the assertion follows.
We are now prepared to prove the first main result of this section, which will be used in the next section in a crucial way.
Theorem 4.15 Assume that Hypothesis 3.1 and condition (4.18) hold, and let u be the unique solution of (1.1). Then
\int_R j(u(t, x))dx = \int_R j(u(r, x))dx - ½ \int_r^t ds \int_R η′_u(s, x)^2\, dx
for every 0 ≤ r ≤ t ≤ T.
Proof. For a.e. t ∈ [0, T ], (4.17) gives
d dt u(t, •), ϕ = 1 2 η u (t, •), ϕ ′′ , ∀ϕ ∈ C ∞ 0 (R).
By density arguments,
H -1 d dt u(t, •), ψ H 1 = - 1 2 R η ′ u (t, x)ψ ′ (x)dx
for every ψ ∈ H 1 (R). For ψ = η u (t, •), we get
H -1 d dt u(t, •), η u (t, •) H 1 = - 1 2 R η ′ u 2 (t, x)dx. (4.44) Since u ∈ (L 1 L ∞ )([0, T ] × R) and |j(u)| ≤ c|u| 2 ,
u(t + h, •) -u(t, •) h h→0 -→ d dt u(t, •) in H -1 (R).
Let h > 0 be such that t - h and t + h both belong to [0, T]. We have by (4.22)
R j(u(t, x)) -j(u(t -h, x)) h dx ≤ u(t, •) -u(t -h, •) h , η u (t, •) L 2 .
Taking limsup for h → 0, we get
lim sup h→0 R j(u(t, x)) -j(u(t -h, x)) h dx ≤ H -1 d dt u(t, •), η u (t, •) H 1 . (4.46)
On the other hand
u(t + h, •) -u(t, •) h , η u (t, •) L 2 ≤ R j(u(t + h, x)) -j(u(t, x)) h dx. So H -1 d dt u(t, •), η u (t, •) H 1 ≤ lim inf h→0 R j(u(t + h, x)) -j(u(t, x)) h dx.
Consequently for a.e. t ∈ [0, T ], Proof (of Proposition 4.17). For h small real fixed, we set
lim sup h→0 R j(u(t, x)) -j(u(t -h, x)) h dx ≤ H -1 du dt (t, •), η u (t, •) H 1 ≤ lim inf h→0 R j(u(t + h, x)) -j(u(t, x)) h dx. ( 4
u h (t, x) = u(t, x + h) -u(t, x).
Let ζ be a smooth nonnegative function with compact support on some compact interval. We aim at establishing the following intermediate result:
R ζ(x)|u h (t, x)|dx ≤ R ζ(x)|(u 0 ) h (x)|dx + c ζ ′′′ ∞ |h| [0,T ]×R |u(s, x)|dsdx.
(4.50)
Approximating β with β_ε as in Proposition 4.1 5., we may suppose that β satisfies (3.11). In the rest of this proof, ε will however denote the discretization mesh related to an ε-solution. We recall that u is the unique C^0-solution to (1.1). So for fixed t ∈ ]0, T]
u(t, •) = lim ε→0 u ε (t, •) in L 1 , (4.51)
where u ε (t, •) is given in Lemma 4.4.
According to Lemma 3.6 we have, for
i = 1, . . . , N , R ζ(x)|u h i (x)|dx ≤ R ζ(x)u h i-1 (x)sign(w h i (x))dx + c ζ ′′′ ∞ |h|ε R |u i (x)|dx ≤ R ζ(x)|u h i-1 (x)|dx + c ζ ′′′ ∞ |h|ε R |u i (x)|dx,
where
u h i = (u i ) h , w h i = (w i ) h , i ∈ {0, .
. . , N }, and u i is defined as in Lemma 4.4 with partition as in (4.5).
Let t ∈]0, T ] and m be an integer such that
t ∈] (m-1)T N , mT N ]. Summing on i = 0, • • • , m, we get R ζ(x)|u h m (x)|dx ≤ R ζ(x)|u h 0 (x)|dx + c ζ ′′′ ∞ |h|ε m i=1 R |u i (x)|dx. Setting u ε,h := (u ε ) h we obtain R ζ(x)|u ε,h (t, x)|dx ≤ R |u h 0 (x)|ζ(x)dx + c ζ ′′′ ∞ |h| T 0 ds R |u ε (s, x)|dx.
So, letting ε → 0 and using (4.51) we get
R ζ(x)|u h (t, x)|dx ≤ R |(u 0 ) h (x)|ζ(x)dx + c ζ ′′′ ∞ |h| T 0 ds R |u(s, x)|dx
and so (4.50). Therefore, lim sup
h→0 1 |h| R ζ(x)|u h (t, x)|dx ≤ 2 ζ ∞ |h| u 0 var +c ζ ′′′ ∞ [0,T ]×R |u(s, x)|dsdx, (4.52)
where • var denotes the total variation.
We denote the right hand-side of (4.52) by C
(ζ). Let K > 0, ϕ ∈ C ∞ 0 (R) such that suppϕ ⊂] -K, K[, and t ∈ [0, T ]. Taking ζ ≡ 1 on ] -K, K[, we can replace ϕ with ϕζ. Then R u(t, x) ϕ(x) -ϕ(x -h) h dx = R u h (t, x) h ζ(x)ϕ(x)dx ≤ 1 |h| ϕ ∞ R ζ(x)|u h (t, x)|dx.
So taking the limsup and using (4.52) we obtain
R u(t, x)ϕ ′ (x)dx ≤ ϕ ∞ C(ζ)
Hence u(t, •) has locally bounded variation on ] -K, K[ and the assertion follows.
We now show that, without particular assumptions on the initial conditions, in the degenerate case, a suitable "section" of Φ(u(t, •)) has at most countably many discontinuities if so has u(t, •). We again consider equation (1.1) in the sense of distributions
∂ t u = 1 2 η ′′ u , η u ∈ β(u) u(0, •) = u 0 ∈ L 1 ∩ L ∞ .
We recall that by Proposition 4.5 a), η u (t, •) ∈ H 1 (R) for a.e. t ∈]0, T ], hence has an absolutely continuous version, which will be still denoted by η u (t, •).
Likewise, since u(t, •) ≥ 0 a.e., for ∀t ∈ [0, T ], we shall take a version which is nonnegative everywhere, which will be still denoted by u(t, •) below.
Define
χ_u = \sqrt{\frac{η_u}{u}}\; 1_{\{|u| > 0\}}. \qquad (4.53)
Here we recall that uη u ≥ 0, hence ηu u ≥ 0 on {|u| > 0}, and that χ u is bounded by Hypothesis 3.1.
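To fix ideas, here is a worked instance for the motivating coefficient (not needed for the proofs): if β(u) = H(u - e_c)u with e_c > 0 and η_u ∈ β(u) a.e., then

χ_u^2(t, x) = \frac{η_u(t, x)}{u(t, x)} = \begin{cases} 1, & u(t, x) > e_c, \\ 0, & 0 < u(t, x) < e_c, \\ η_u(t, x)/e_c ∈ [0, 1], & u(t, x) = e_c, \end{cases}

so χ_u is indeed bounded by 1, in accordance with Hypothesis 3.1 (c = 1); coefficients of this form are exactly those covered by Proposition 4.22 below.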
Lemma 4.19 Suppose β is degenerate, let t ∈ [0, T [ such that η u (t, •) ∈ H 1 (R) and x ∈ R. If u(t, •) is continuous in x, then so is χ u (t, •). In partic- ular, χ u (t, •) has at most countably many discontinuities if so has u(t, •). Proof. It is enough to show that χ 2 u (t, •) is continuous in x. Let x n ∈ R, n ∈ N, converge to x. We have χ 2 u (t, x n ) = ηu(t,xn) u(t,xn) , if u(t, x n ) > 0 0, if u(t, x n ) = 0. • If u(t, x) > 0, then χ 2 u (t, x n ) → η u (t, x) u(t, x) = χ 2 u (t, x).
• If u(t, x) = 0 then, since β is degenerate,
χ 2 u (t, x n ) n→∞ -→ 0 = χ 2 u (t, x).
We have observed that for a relatively general coefficient β, but with a restriction on the initial condition, u(t, •) (and therefore a suitable section of Φ(u(t, •))) is a.e. continuous, for a.e. t ∈ [0, T ], see Proposition 4.11. We now provide some conditions on β (degenerate) for which a suitable section of Φ(u(t, •)) is continuous for any initial condition in L 2 (R). This will prepare the third main result of this section, crucially to be used in the next section.
Let (u, η u ) be as usual the solution to (1.1) and χ u as in (4.53).
Definition 4.20
We say that β is strictly increasing after some zero
if there is e c ≥ 0 such that i) β| [0,ec[ = 0. ii) β is strictly increasing on [e c , ∞[. iii) If e c = 0, then lim u→0 + Φ(u) = 0.
Remark 4.21 1. Condition iii) guarantees that β is degenerate.
A typical example of a function that is strictly increasing after some
zero is given by
β(u) = uH(u -e c ),
Since β -1 is single-valued, continuous on ]0, ∞[ and η u (x 0 ) ∈ β(u(x 0 )), so η u (x 0 ) > 0, we have
u(x 0 ) = β -1 (η u (x 0 )) = β -1 ( lim n→∞ η u (x n )) = lim n→∞ β -1 (η u (x n )) = lim n→∞ u(x n ). Consequently χ 2 u (x n ) → χ 2 u (x 0 ). 3. u(x 0 ) = e c .
Clearly there are three possibilities.
(a) there is a subsequence (n k ) with u(
x n k ) ∈]e c , ∞[, (b) there is a subsequence (n k ) with u(x n k ) ∈ [0, e c [, (c)
there is a subsequence (n k ) with u(x n k ) = e c ∀k ∈ N.
Case (a). First we suppose e c > 0. We have
η u (x n k ) → η u (x 0 ). If η u (x 0 ) = 0 then χ 2 u (x n k ) = η u (x n k ) u(x n k ) → 0 = η u (x 0 ) u(x 0 ) = χ 2 u (x 0 ). If η u (x 0 ) = 0 then the continuity of β -1 implies u(x n k ) = β -1 (η u (x n k )) → β -1 (η u (x 0 )) = u(x 0 ) = e c , so χ 2 u (x n k ) → χ 2 u (x 0 )
. If e c = 0, the result follows since β is degenerate.
Case (b). In this case e c is again strictly positive. Since η
u (x n k ) ∈ β(u(x n k ) = 0 we have χ u (x n k ) = 0, hence χ u (x n k ) k→∞ -→ 0. But 0 = η u (x n k ) k→∞ -→ η u (x 0
). This implies that η u (x 0 ) = 0, so χ 2 u (x 0 ) = 0. Case (c). We have u(
I 3 (n) = E t s χ n (r, Y n r ) 2 -χ 2 u (r, Y n r ) drΘ(Y n r , r ≤ s) .
We start showing the convergence of I 3 (n). Now Θ(Y n r , r ≤ s) converges a.s. to Θ(Y r , r ≤ s) and it is dominated by a constant. so that it suffices to consider the expectation of By Proposition 5.7 below η u (ε) → η u in L 1 ([0, T ]×R) as ε → 0. Furthermore, Proposition 4. 1 5), see also the theorem in the introduction of [START_REF] Ph | The continuous dependence on ϕ of solutions of u t -∆ϕ(u) = 0[END_REF], implies that u ε (t, •) converges to u(t, •) in L 1 (R), as ε → 0, uniformly in t ∈ [0, T ].
Hence Lebesgue's dominated convergence theorem implies that I(n) → 0, since χ u is bounded.
We go on with the analysis of I 2 (n) and I 1 (n). I 2 (n) equals to zero because Y n is a martingale with quadratic variation given by [Y n ] t = t 0 χ n (r, Y n r ) 2 dr.
We finally treat I 1 (n). We recall that Y n → Y a. s. as random elements in C([0, T ]) and that the sequence E (Y n t ) 4 , is bounded, so (Y n t ) 2 are uniformly integrable. Therefore, for t > s we have
E (Y n t ) 2 -(Y n s ) 2 )Θ(Y n r , r ≤ s) -E (Y 2 t -Y 2
Remark 3 . 4
34 Let δ > 0. The same results included in Lemma 3.3 are valid for the equation
(4. 4 )
4 for all r ∈ [1, ∞[. Proof. See point 3. in the proof of the Proposition 3.2 in [9]. (4.2) follows by Lemma 3.3, 1-3. and Hypothesis 3.1 by induction. (4.3) is an immediate consequence of the first part of (4.2) and the fact that u is a C 0 solution.
. 11 )
11 Since |β(u)| ≤ c|u|, (4.11) implies that sup s, x) 2 dx < ∞.(4.12)
R
j(u(x))dx ≤ lim inf n→∞ R j(u n (x))dx. (4.20)Let us consider a subsequence such that j(u n (x))dx converges to the righthand side of (4.20) denoted by C. We may suppose C < ∞. the sequence (u n ) n∈N is uniformly integrable on [-K, K]
•) a.e. as n → ∞. The second part of (4.35) then obviously follows by Hypothesis 3.1, since ξ δ (t, •) ∈ β(u δ (t, •)) a.e. for a.e. t ∈ [0, T ]. Lemma 4.13 We have u δ → u in C([0, T ]; L 1 (R)) as δ → 0, where u is the solution to (1.1).
Theorem 4 .
4 [START_REF] Choquet | Representation theory[END_REF] Under Assumption(4.18), the unique solution to (1.1) verifies R j(u(t, x))dx = R j(u(r, x))dx -
x n k ) = e c . If e c = 0 the result follows trivially by definition of χ u . Therefore we can suppose again that e c > 0. Then u(x n k ) = e c k→∞ -→ e c , soχ 2 u (x n k ) = η u (x n k ) e c → η u (x 0 ) e c = η u (x 0 ) u(x 0 ) = χ 2 u (x 0 ).
r, Y n r ) 2χ u (r, Y n r ) εn ) (r, y)χ u (r, y) 2 u n (r, y) dy.
t s χ 2 u 2 u
22 s )Θ(Y r , r ≤ s) → 0, when n → ∞. It remains to prove that E (r, Y r )drΘ(Y r , r ≤ s) -(r, Y n r )drΘ(Y n r , r ≤ s) → 0. (5.7) Under the assumptions of the theorem, for fixed r ∈ [0, T ], by the second and third main results of Section 4 (see Propositions 4.17, 4.22 and Remark 4.18), χ u (r, •) has at most a countable number of discontinuities. Moreover, the law of Y r has a density and it is therefore non atomic. So, let N (r) be the null event of ω ∈ Ω such that Y r (ω) is a point of discontinuity of χ u (r, •).For ω / ∈ N (r) we havelim n→∞ χ 2 u (r, Y n r (ω)) = χ 2 u (r, Y r (ω)).Now, Lebesgue's dominated convergence and Fubini's theorem imply (5.7).So equation (5.6) is shown.It remains to prove the following result which is based on our first main result of Section 4, see Theorem 4.15.
or β is degenerate if lim u→0
+ Φ(u) = 0 in the sense that for any sequence of non-negative reals (x n ) converging to zero, and y n ∈ Φ(x n ) we have lim n→∞ y n = 0. Remark 1.4 1. β may be in fact neither non-degenerate nor degenerate. If β is odd, which according to Remark 1.2 (ii), we may always assume, then β is non-degenerate if and only if lim inf u→0+ Φ(u) > 0. 2. Of course, Φ in (1.2) is degenerate. In order to have Φ non-degenerate, one could add a positive constant to it.
is said to be accretive if for any f 1 , f 2 , g 1 , g 2 ∈ E such that
. It can be considered as a multivalued map by filling the gap. More
generally, let us consider a monotone function ψ. Then all the discontinuities
are of jump type. At every discontinuity point x of ψ, it is possible to com-
plete ψ by setting ψ(x) = [ψ(x-), ψ(x+)]. Since ψ is a monotone function,
the corresponding multivalued function will be, of course, also monotone.
Now we come back to the case of our general Banach space E with norm
‖·‖. An operator T : E → E is said to be a contraction if it is Lipschitz with norm less than or equal to 1 and T(0) = 0.
Definition 2.3 A map A : E → E, or more generally a multivalued map A : E → P(E)
). By(3.11) we have w h = 0 on {u h = 0},
dx a.e. Hence, by strict monotonicity we have ϕ(x) = sign(w h (x))ζ(x) a.e. on {u h = 0}. By (3.10), up to a Lebesgue null set, we have {u h
•) for almost all t ∈ [0, T ]. Proof. Let 0 ≤ s < t ≤ T . Let Γ and D(Γ) be as defined in the proof of Proposition 4.8. Since u(t, •) ∈ D(Γ) for a.e. t ∈ [0, T ], Lemma 4.10 applies and thus for a.e. t, s ∈ [0, T ] we have that (δ -1 2 ∆)η u (t, •) ∈ A δ u(t, •), and
Recalling (4.30) for a.e. t ∈ [0, T ] we get
R dx η ′ u (t, x) 2 = R dx ξ′ (t, x) 2 ≤ lim inf δ→0 R dx (ξ δ ) ′ (t, x) 2 ≤ C.
This finally completes the proof of Proposition 4.8.
At this point, we can state and prove the following important theorem.
Theorem 4.14 Assume that Hypothesis 3.1 and condition (4.18) hold. Let
u be the solution of (1.1) (or equivalently of (4.1), from Proposition 4.1).
Then the function t → R j(u(t, x))dx is absolutely continuous.
δ .
By (4.19) and (3.1), this is bounded by
max(c, C) δ u 0 2 L 2 + 1 u(t, •) -u(s, •) -1,δ ,
where we recall that by Remark 4.7 the map t → u(t, •) is absolutely contin-uous in H -1 . Since by Proposition 4.5 c), t → Γ(u(t, •)) is continuous, we have
then, in particular, it belongs to L 2 ([0, T ], L 2 (R)). We need the following lemma.
Lemma 4.16 For a.e. t ∈ [0, T ]
H -1 d dt u(t, •), η u (t, •) H 1 = d dt R j(u(t, x))dx. (4.45)
Proof. Let t ∈]0, T ] such that
Proposition 4.17 Let Hypothesis 3.1 hold and let u be the unique solution to (1.1) with initial condition u 0 ∈ L 1 L ∞ being locally of bounded variation. Then, for each t ∈ [0, T ], u(t, •) also has locally bounded variation.
At this point (4.44) and Lemma 4.16 imply that for a.e. t ∈ [0, T ],
d dt R j(u(t, x))dx = - 1 2 R η ′ u 2 (t, x)dx. (4.48)
Theorem 4.14 says that t → R j(u(t, x))dx is absolutely continuous. So, after integrating in time, we get
R j(u(t, x))dx = R j(u(r, x))dx - 1 2 r t ds R η ′ u (s, x) 2 dx. (4.49)
This completes the proof of Theorem 4.15.
The second main result of this section, also crucially used in Section 5 below,
is the following.
Remark 4.18 1. We note that (4.18) is not needed for the above propo-
sition.
2. Since u(t, •) has locally bounded variation, it has at most a countable number of discontinuities. We will see that in the degenerate case, i.e.
if Φ(0) = 0, a suitable section of Φ(u(t, •)), also has at most countably many discontinuities, see Lemma 4.19 below.
.47)
On the other hand we know already by Theorem 4.14 that for a.e. t ∈ [0, T ], the limsup and liminf-terms in (4.47) coincide. Hence the assertion follows.
) 2 (s, Y n s )ds, and E([Y n ] T ) is finite, Φ being bounded, the continuous local martingales Y n are indeed martingales.By Skorokhod's theorem there is a new probability space (Ω, F, P ) and processes Ỹ n , with the same distribution as Y n so that Ỹ n converge to some process Ỹ , distributed as Y , as C([0, T ])-random elements P -a.s. In particular, those processes Ỹ n remain martingales with respect to the filtrations generated by them. We denote the sequence Ỹ n (resp. Ỹ ), again by Y n (resp. Y ). We observe that, for each t ∈ [0, T ], u(t, •) is the law density of Y t . In fact, for any t ∈ [0, T ], Y n t converges in probability to Y t ; on the other hand u n (t, •), which is the law of u n t converges to u(t, •) in L 1 (R), by Proposition 4.1 5. Remark 5.6 Let Y n (resp. Y) be the canonical filtration associated with Y n (resp. Y ). Those processes W n are standard (Y n t ) -Wiener processes since [W n ] t = t and because of Lévy's characterization theorem of Brownian motion. Then (s, Y s )dW s . (5.6) where χ u is defined as in (4.53). Once this equation is established for the given u, the statement of Theorem 5.4 would be completely proven because of Remark 5.5. In fact, that remark shows in particular the third line of (5.2). Taking into account, Theorem 4.2 of Ch. 3 of [22], to establish (5.6), it will be enough to prove that Y is a Ymartingale with quadratic variation [Y ] t = t 0 χ 2 u (s, Y s )ds. Let s, t ∈ [0, T ] with t > s and Θ a bounded continuous function from C([0, s]) to R. In order to prove the martingale property for Y , we need to show that E ((Y t -Y s )Θ(Y r , r ≤ s)) = 0. This follows by (5.5) because Y n → Y a.s. as C([0, T ])-valued process and which in turn follows, if for t > s we can verify
Since (χ n Remark 5.5 We set [Y n ] t = t 0 W n t = t 0 1 χ n (s, Y n s )dY n s . one has Y n t = Y n 0 + t 0 χ n (s, Y n s )dW n s . We aim to prove first that Y t = Y 0 + t t -Y n s )Θ(Y n r , r ≤ s)) = 0. It remains to show that Y 2 t -E (Y 2 t -Y 2 s --E (Y n t ) 2 -(Y n s ) 2 -t s χ 2 u (r, Y n r )dr Θ(Y n r , r ≤ s) , χ u E ((Y n I 2 (n) = E (Y n t ) 2 -(Y n s ) 2 -
0
This completes the proof.
t 0 χ 2 u (s, Y s )ds, t ∈ [0, T ], defines a Y-martingale, t s χ 2 u (r, Y r )dr)Θ(Y r , r ≤ s) = 0.
The left-hand side decomposes into I 1 (n) + I 2 (n) + I 3 (n) where
I 1 (n) = E (Y 2 t -Y 2 s -t s χ 2 u (r, Y r )dr)Θ(Y r , r ≤ s) t s χ n (r, Y n r ) 2 dr Θ(Y n r , r ≤ s) ,
and
ACKNOWLEDGEMENTS
Financial support through the SFB 701 at Bielefeld University and NSF-Grant 0606615 is gratefully acknowledged.
where e c > 0 and H is the Heaviside function, i.e. 3. We recall that for almost all t ∈]0, T ], η u (t, •) is continuous. This will constitute the main ingredient in the proof of the proposition below. [START_REF] Banta | Avalanche dynamics from anomalous diffusion[END_REF]. Suppose that β is as in Definition 4.20. Then β -1 is single-valued and continuous on ]0, ∞[. Proposition 4.22 Suppose β strictly increasing after some zero. Then for almost all t ∈]0, T [, χ u (t, •) is continuous.
Proof. We first recall that by Corollary 4.2, u(t, •) ≥ 0 a.e. for all t ∈ [0, T ].
Let e c be as in Definition 4.20. Let t ∈]0, T ] for which η u (t, •) is continuous.
Let (x n ) be a sequence converging to some x 0 ∈ R. The principle is to find
In the sequel of the proof, we will omit t and denote the functions u(t, x) (resp. η u (t, x), χ u (t, x)) by u(x) (resp. η u (x), χ u (x)).
We distinguish several cases 1. u(x 0 ) ∈ [0, e c [. Then e c > 0 and η u (x 0 ) ∈ β(u(x 0 )) = 0. Hence χ u (x 0 ) = 0.
• If u(x n k ) < e c for some subsequence (n k ), then
2. We suppose now u(x 0 ) ∈]e c , ∞[.
The probabilistic representation of the deterministic equation
We again consider the β : R → 2 R satisfying Hypothesis 3.1. We aim at providing a probabilistic representation for solutions to equation (1.1). Let
We consider a multi-valued map Φ : R → 2 R + such that
The degenerate case is much more difficult than the non-degenerate case which was solved in [START_REF] Ph | Probabilistic representation for solutions of an irregular porous media equation[END_REF].
Definition 5.1 Let (u, η u ) be the solutions in the sense of Proposition 4.1 to equation (1.1). i.e.
We say that (1.1) has a probabilistic representation, if there is a filtered probability space (Ω, F, P, (F t )), an (F t ))-Wiener process W and, at least one process Y , such there exists
We recall the main result of [START_REF] Ph | Probabilistic representation for solutions of an irregular porous media equation[END_REF], Theorem 4.3.
Theorem 5.2 When β is non-degenerate then (1.1) has a probabilistic representation, with
Remark 5.3 In the non-degenerate case the representation is unique.
We will show that, even in the degenerate case, (1.1) has a probabilistic representation.
Theorem 5.4 Suppose that β is degenerate. Then equation (1.1) admits a probabilistic representation if one of the following conditions are verified.
1. β is strictly increasing after some (non-negative) zero.
2. u 0 has locally bounded variation.
Proof. We will make use of Theorem 5.2. Let ε ∈]0, 1] and set
) the solution to the deterministic PDE (1.1), with β ε replacing β. Define
We note that since Φ ε , ε ∈]0, 1] are uniformly bounded, so are χ ε , ε ∈]0, 1].
By Theorem 5.2, there exists a unique solution Y = Y ε in law of
Since Φ is bounded, using the Burkholder-Davies-Gundy inequality one obtains We set u n := u (εn) , where we recall that u n (t, •) is the law of Y n t , and
Proof. We set
where β • ε is the minimal section of β ε and clearly
In particular,
(5.9) So, by (4.4), the family
Let (ε n ) be a sequence converging to zero. There is a subsequence (n k ) such that η k u := η
Taking the limit when k → ∞, we get
Therefore, [START_REF] Barbu | Analysis and control of nonlinear infinite dimensional systems[END_REF], p.37, Proposition 1.1 (i) and (ii), imply that this map is weakly-strongly closed. Since, by (4.3),
a.e. on [0, T ] × [-K, K] for all K > 0, so, ξ ∈ β(u) a.e. By the uniqueness of (1.1) we get ξ = η u a.e.
Let ε n → 0. The rest of the paper will be devoted to the proof of the existence of a subsequence (η k u ) := η u (εn k ) converging (strongly
Hence {η k u } is equintegrable on [0, T ] × R. Therefore, the existence of such a subsequence completes the proof.
We will need the following well-known lemma.
Lemma 5.8 Let H be a Hilbert space, (f n ) be a sequence in H converging weakly to some f ∈ H. Suppose
Then f n → f strongly in H.
We apply the previous Lemma to establish the existence of a subsequence still denoted by (η k u ) such that (η k u ) ′ converges strongly to η ′ u in L 2 ([0, T ]×R). For this, we will prove that lim sup
We consider (5.8) for ε = ε n k and we let k go to infinity. First, for t ∈ [0, T ] we have
This together with (5.8) gives
Since by Corollary 4.6
the last term in (5.13) converges to zero when k → ∞. Taking the limsup when k → ∞ in (5.13) and using (5.12) we get lim sup
Consequently, by Lemma 5.8
Now let us finally prove that η^k_u → η_u (strongly) in L^2_loc([0, T] × R). Let x ∈ R. We recall that η_u, η^k_u(t, ·) vanish at infinity since they belong to H^1(R) = H^1_0(R). So we can write, for x ∈ R,
(η k uη u ) 2 (t, y)dy |
04102687 | en | [
"info"
] | 2024/03/04 16:41:20 | 2023 | https://hal.science/hal-04102687/file/A_Branch_and_Cut_algorithm_for_BTSP.pdf | Thi Quynh
Trang Q T Vo
Mourad Baiou
email: [email protected]
• Viet
Hung Nguyen
email: [email protected]
V H Nguyen
A Branch-and-Cut algorithm for the Balanced Traveling Salesman Problem
Keywords: Traveling Salesman Problem, Balanced optimization, Mixed-Integer Programming, Branch-and-Cut
Mathematics Subject Classification (2020): 90-10, 90-05
The Balanced Traveling Salesman Problem (BTSP) asks for a Hamiltonian cycle minimizing the difference between its largest and smallest edge costs; it was first studied by Larusic and Punnen. They proposed several heuristics based on the double-threshold framework, which converge to good-quality solutions though not always optimal (e.g. 27 provably optimal solutions were found among 65 TSPLIB instances of at most 500 vertices). In this paper, we design a special-purpose branch-and-cut algorithm for solving exactly the BTSP. In contrast with the classical TSP, due to the BTSP's objective function, the efficiency of algorithms for solving the BTSP depends heavily on determining correctly the largest and smallest edge costs in the tour. In the proposed branch-and-cut algorithm, we develop several mechanisms based on local cutting planes, edge elimination, and variable fixing to locate more and more precisely those edge costs. Other important ingredients of our algorithm are heuristics for improving the lower and upper bounds of the branch-and-bound tree. Experiments on the same TSPLIB instances show that our algorithm was able to solve to optimality 63 out of 65 instances.
Introduction
Given a finite set E with cost vector c and a family F of feasible subsets of E, the balanced optimization problem seeks a feasible subset S * ∈ F that minimizes the difference in cost between the most expensive and least expensive element used, i.e., max e∈S * c e -min e∈S * c e . This optimization class arises naturally in many practical situations where one desires a fair distribution of costs. Balanced optimization was introduced by Martello et al. [START_REF] Martello | Balanced optimization problems[END_REF] in the context of the assignment problem. Then, a line of works was investigated for other specific cases of balanced optimization, such as the balanced shortest path [START_REF] Turner | Variants of shortest path problems[END_REF][START_REF] Cappanera | Balanced paths in acyclic networks: Tractable cases and related approaches[END_REF], the balanced minimum cut [START_REF] Katoh | Efficient algorithms for minimum range cut problems[END_REF], and the balanced spanning tree [START_REF] Galil | On finding most uniform spanning trees[END_REF][START_REF] Camerini | Most and least uniform spanning trees[END_REF].
In this paper, we consider the balanced version of the traveling salesman problem (TSP). In the context of the TSP, the finite set E is the edge set of a graph, and the feasible subset family F is the set of all Hamiltonian cycles (a.k.a tours) in the graph. The balanced traveling salesman problem (BTSP) finds a tour in which the difference between the largest and smallest edge costs is minimum. We call this difference the max-min distance. Formally, given an undirected graph G = (V, E) and a cost vector c associated with E, the BTSP can be stated as follows: min
H∈Π(G) {max e∈H c e -min e∈H c e } (1)
where Π(G) is the set of all Hamiltonian cycles in G. The BTSP is NP-hard as the problem of finding a Hamiltonian cycle in the graph can be reduced to the BTSP.
The BTSP was first studied by Larusic and Punnen [START_REF] Larusic | The balanced traveling salesmanproblem[END_REF] with applications in many practical problems, such as the nozzle guide vane assembly problem [START_REF] Robert D Plante | The product matrix traveling salesman problem: an application and solution heuristic[END_REF] and the cyclic workforce scheduling problem [START_REF] George | On gilmore-gomory's open question for the bottleneck tsp[END_REF]. While most of the previous works about balanced optimization focused on polynomial-time algorithms, the BTSP was the first NP-hard case studied. The BTSP can be reduced to the problem of finding the shortest interval such that all edges whose costs are in the interval can form a Hamiltonian cycle. An approach for finding such an interval is the double-threshold algorithm [START_REF] Martello | Balanced optimization problems[END_REF], widely used for balanced optimization problems. As its name suggests, the double-threshold algorithm maintains two thresholds of the edge costs of the tour: a lower threshold and an upper threshold. At each iteration, the algorithm generates a threshold pair and checks whether the graph whose edge costs are restricted by this threshold pair is Hamiltonian. The interval to find is a threshold pair with the smallest difference.
A critical issue of this approach is that it requires solving O(|V | 2 ) Hamiltonicity verification problems, which are NP-hard. It causes the approach to be unpractical when the problem size is large. To tackle this issue, Larusic and Punnen [START_REF] Larusic | The balanced traveling salesmanproblem[END_REF] heuristically solved the Hamiltonicity verification problem at every iteration. They also developed four variants of the double-threshold algorithm to reduce the number of iterations without sacrificing solution quality by using the bottleneck TSP [START_REF] Larusic | The asymmetric bottleneck traveling salesman problem: algorithms, complexity and empirical analysis[END_REF] and the maximum scatter TSP [START_REF] Esther M Arkin | On the maximum scatter traveling salesperson problem[END_REF]. With these modifications, their algorithms produced provably optimal solutions for 27 out of 65 TSPLIB instances [START_REF] Reinelt | Tsplib-a traveling salesman problem library[END_REF] from 14 to 493 vertices with a time limit of 18000 seconds per instance.
To the best of our knowledge, no exact algorithm based on Mixed-Integer Programming (MIP) for the BTSP has been proposed in the literature, although it is quite easy to formulate the BTSP through the existing MIP formulations for the TSP. The reason is that solving the BTSP's formulations directly without tools to locate the largest and smallest edge costs can be inefficient and more difficult than solving the classical TSP. In this paper, we propose a branch-and-cut algorithm that includes mechanisms to tighten the bounds of the largest and smallest edge costs. These mechanisms include local cutting planes, edge elimination, and variable fixing techniques. To further improve the performance, we develop heuristics for strengthening the lower and upper bounds of the BTSP. The efficiency of the proposed branch-and-cut algorithm is assessed through computational comparison to the double-threshold-based algorithms in [START_REF] Larusic | The balanced traveling salesmanproblem[END_REF]. Numerical results show that our algorithm can solve to optimality 63 instances out of 65 within 3 hours of CPU time.
The paper is organized as follows. In Section 2, we present a MIP formulation for the BTSP. Section 3 proposes a family of local cutting planes for the BTSP, called local bounding cuts. Then, the heuristics to improve the lower and upper bounds of the branch-and-bound tree are presented respectively in Sections 4 and 5. Section 6 describes the branch-and-cut algorithm used for the BTSP, which includes heuristics to eliminate edges and fix variables. Section 7 provides computational results to evaluate the algorithm's efficiency. Finally, we give some conclusions in Section 8.
Preliminaries
Given a graph G = (V, E) and a cost vector c associated with E, we provide below some notations used throughout the paper. For any subset S of V , let δ(S) be a subset of E where each edge has exactly one endpoint in S, i.e., δ(S) = {(i, j) ∈ E | i ∈ S and j ∈ V \ S}. For abbreviation, we write δ(v) instead of δ({v}) for all v ∈ V . Given a Hamiltonian cycle H ∈ Π(G), we respectively denote by u H and l H the largest and smallest edge costs in H. For an edge set F ⊆ E, we denote V (F ) the end-vertices set of edges in F and C(F ) = {c e ∈ c | e ∈ F } the edge cost set corresponding to F . Without loss of generality, we assume that C(E) = {C 1 , . . . , C p } where p ≤ m is the number of distinct components of the cost vector c and
C 1 < C 2 < • • • < C p . For an interval [α, β], G[α, β] stands for a subgraph of G with edge set E[α, β] = {e ∈ E | α ≤ c e ≤ β}. We call G[α, β] the subgraph restricted by [α, β]. For any positive integer n, let [n] = {1, . . . , n}.
MIP formulation for the BTSP
Given an undirected graph G = (V, E) with edge costs c, the BTSP consists in finding a tour that minimizes the max-min distance. We denote by {x e | e ∈ E} a set of binary variables where x e = 1 if edge e is in the tour and x e = 0 otherwise. Let u and l respectively be variables representing the tour's highest and smallest edge costs. We propose a MIP formulation for the BTSP as follows:
(MIP-BTSP)
min u - l                                              (2a)
s.t.  Σ_{e∈δ(v)} x_e = 2              ∀v ∈ V           (2b)
      Σ_{e∈δ(S)} x_e ≥ 2              ∀∅ ≠ S ⊂ V       (2c)
      u ≥ c_e x_e                     ∀e ∈ E           (2d)
      l ≤ c_e x_e + (1 - x_e) M_e     ∀e ∈ E           (2e)
      x_e ∈ {0, 1}                    ∀e ∈ E           (2f)
where M e = min{max e ′ ∈δ(i) c e ′ , max e ′ ∈δ(j) c e ′ } for all e ∈ E. The objective function (2a) corresponds to the max-min distance. Constraints (2b) are degree constraints, which ensure that each vertex has precisely two incident edges in the tour. Constraints (2c) are the well-known subtour elimination inequalities that prevent the existence of subtours. Constraints (2d) and (2e) are used to estimate the highest and smallest edge costs. More specifically, constraints (2d) ensure that u must be greater than or equal to the costs of edges selected in the tour. On the other hand, if an edge e occurs in the tour (x e = 1), inequalities (2e) read as l ≤ c e , which are true by the definition of l. Otherwise (x e = 0), constraints (2e) become l ≤ M e , which are valid as l ≤ max e∈δ(i) c e , ∀i ∈ V .
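To make the formulation concrete, a minimal modeling sketch is given below, using the docplex layer for CPLEX (the solver employed in Section 7). It is only an illustration of formulation (2): the function and variable names are ours, the exponentially many subtour constraints (2c) are deliberately left out so that they can be separated later, and the sketch does not claim to reproduce the authors' implementation.

# Minimal sketch of formulation (2) with docplex (names are illustrative).
from docplex.mp.model import Model

def build_mip_btsp(V, E, c):
    """V: list of vertices, E: list of edges (i, j), c: dict edge -> cost."""
    mdl = Model(name="MIP-BTSP")
    x = mdl.binary_var_dict(E, name="x")                           # (2f)
    u = mdl.continuous_var(lb=min(c.values()), ub=max(c.values()), name="u")
    l = mdl.continuous_var(lb=min(c.values()), ub=max(c.values()), name="l")

    # Degree constraints (2b): every vertex has exactly two incident tour edges.
    for v in V:
        mdl.add_constraint(mdl.sum(x[e] for e in E if v in e) == 2)

    # Bounds on the largest / smallest edge cost in the tour, (2d) and (2e).
    M = {e: min(max(c[f] for f in E if e[0] in f),
                max(c[f] for f in E if e[1] in f)) for e in E}
    for e in E:
        mdl.add_constraint(u >= c[e] * x[e])
        mdl.add_constraint(l <= c[e] * x[e] + (1 - x[e]) * M[e])

    # The subtour constraints (2c) are separated on the fly (see Section 6.3)
    # rather than enumerated here.
    mdl.minimize(u - l)
    return mdl, x, u, l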
Local bounding cuts
Compared to the TSP, the BTSP additionally requires estimating the largest and smallest edge costs of the tour. This task is non-trivial and strongly impacts the algorithm's performance. In (MIP-BTSP), while the largest edge cost u is directly estimated through the edge variables, the estimation of the smallest edge cost relies on the constants M_e. This can lead to loose bounds for l in the linear programming (LP) relaxations and make solving the BTSP noticeably more time-consuming than solving the TSP. This can be seen in the following experiment. We addressed the TSP and the BTSP on the TSPLIB instance si175 (with 175 vertices) with a general-purpose branch-and-cut algorithm using the same TSP constraints in the MIP formulations. While the TSP can be solved in 25 seconds, the BTSP cannot be solved to optimality within 10800 seconds. Thus, the crucial point in solving the BTSP via branch-and-cut algorithms is not the reinforcement of TSP constraints but the estimation of the largest and smallest edge costs. This section provides a family of local cutting planes to strengthen the bounds on the smallest edge cost in the LP relaxations.
Observe that in the branch-and-bound tree, each node is associated with an ordered pair ⟨F 0 , F 1 ⟩ where F 0 , F 1 ⊂ E are two disjoint edge sets. Given a node ⟨F 0 , F 1 ⟩, a tour found by the node or its descendants is one whose incidence vector satisfies
x e = 0 ∀e ∈ F 0
x e = 1 ∀e ∈ F 1 .
In other words, this tour permanently includes the edges of F 1 and excludes the edges of F 0 . Let M i C(F1) be the minimum of C(F 1 ) = {c e | e ∈ F 1 }. Obviously, the smallest edge cost of the tour can not exceed M i C(F1) . Based on this observation, we have the following inequalities, called local bounding cuts
l ≤ c_e x_e + (1 - x_e) M^i_{C(F1)}     ∀e ∈ E.     (3)
As their name suggests, the local bounding cuts are locally valid: they hold only for the current node and its descendants in the branch-and-bound tree, as they use properties specific to the node. The local bounding cuts aim at locating the smallest edge cost early within the subtree, so that the solver can concentrate on finding a tour, or proving that none exists, in the subgraph restricted by [l, u]. Indeed, these cuts can tighten the bounds on the smallest edge cost l in the subproblems and thus narrow the interval [l, u].
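A possible separation helper for inequalities (3) is sketched below. It only identifies the cuts violated by the current fractional point; attaching them as node-local cuts depends on the solver's callback interface, and the function name and arguments are illustrative assumptions rather than the authors' code.

def violated_local_bounding_cuts(E, c, F1, x_star, l_star, eps=1e-6):
    """Return the local bounding cuts (3) violated at the current node.

    F1: edges fixed to 1 at the node; x_star, l_star: values in the LP solution.
    Each returned pair (e, Mi) encodes  l <= c_e * x_e + (1 - x_e) * Mi,
    where Mi = min{c_f : f in F1}.
    """
    if not F1:
        return []
    Mi = min(c[e] for e in F1)
    violated = []
    for e in E:
        rhs = c[e] * x_star[e] + (1 - x_star[e]) * Mi
        if l_star > rhs + eps:
            violated.append((e, Mi))
    return violated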
Algorithm for improving the lower bound
A good lower bound helps to speed up branch-and-cut algorithms. Given a graph G = (V, E) with edge costs c, we present below a heuristic partly inspired by the Hamiltonian verification procedure in [START_REF] Larusic | The balanced traveling salesmanproblem[END_REF] to find a lower bound of the BTSP.
As mentioned in [START_REF] Larusic | The balanced traveling salesmanproblem[END_REF], a Hamiltonian graph must be a biconnected graph (i.e., a graph in which for any pair of vertices u and v, there exist two paths from u to v without any vertices in common except u and v). The intuition of the heuristic is that for all distinct costs C i ∈ C(E), we find the shortest interval containing C i such that the subgraph restricted by this interval is biconnected. The minimum length among these intervals is a lower bound of the BTSP. Algorithm 1 gives a formal description of our lower bound heuristic. Before describing the heuristic in detail, we introduce some definitions and lemmas.
Definition 1 (Biconnected interval) For any
C i ∈ C(E), a biconnected interval compatible with C i is an interval [α, β] such that i) α ≤ C i ≤ β; ii) G[α, β] is biconnected.
Algorithm 1 Heuristic to find a lower bound of the BTSP
Input: A graph G = (V, E) with edge costs c.
Output: A lower bound of the BTSP.
1: Let C_1 < C_2 < · · · < C_p be the distinct costs of c
2: b_0 ← 1, C_{p+1} ← +∞
3: for i ∈ [p] do
4:   j ← b_{i-1}
5:   while j ≤ p do
6:     if G[C_j,
     C^l_i ← C_1, C^u_i ← C_p
18:  for j ∈ [i] do
19:    if C_{b_j} - C_j < C^u_i - C^l_i then
20:      C^l_i ← C_j, C^u_i ← C_{b_j}
21:    end if
22:  end for
23: end for
24: return min_{i∈[p]} C^u_i - C^l_i.
The length of a biconnected interval [α, β] is the difference between β and α, i.e., β -α. We denote by γ(C i ) the length of the shortest biconnected interval compatible with C i .
Lemma 1 Let H be a tour in G. If H contains an edge with cost
C i , then u H -l H ≥ γ(C i ). Proof. We consider the graph G[l H , u H ] with edge set E[l H , u H ] = {e ∈ E | l H ≤ c e ≤ u H }. G[l H , u H ] is biconnected as it contains the tour H. Since H has an edge with cost C i , l H ≤ C i ≤ u H . Thus, (l H , u H ) is a biconnected interval compatible with C i . By the definition of γ(C i ), u H -l H ≥ γ(C i ).
Corollary 1 Let γ* = min_{C_i∈C(E)} γ(C_i) and let OPT be the optimal value of (MIP-BTSP). Then γ* ≤ OPT.
Thanks to Corollary 1, to obtain a lower bound of the BTSP, it is sufficient to find the shortest biconnected interval compatible with
C i for all C i ∈ C(E).
The following lemma provides a characterization of the shortest biconnected intervals.
Lemma 2 If [α, β] is the shortest biconnected interval compatible with C i , then α and β belong to the edge cost set of E.
Proof. We consider the graph G
[α, β]. Let α ′ = min{c e | e ∈ E[α, β]} and β ′ = max{c e | e ∈ E[α, β]}. Obviously, α ′ , β ′ ∈ C(E) and α ′ ≤ C i ≤ β ′ . Since G[α ′ , β ′ ] = G[α, β] and G[α, β] is biconnected, G[α ′ , β ′ ] is also biconnected. Thus, [α ′ , β ′ ] is a biconnected interval compatible with C i . Since [α, β] is the shortest biconnected interval compatible with C i , β -α ≤ β ′ -α ′ . On the other hand, by the definition of G[α, β], α ≤ α ′ and β ≥ β ′ . Then, β ′ -α ′ ≤ β -α. The equality holds if and only if α = α ′ and β = β ′ .
By Lemma 2, to find the shortest biconnected intervals, we first determine the smallest index b
j ∈ [p] (recall that p = |C(E)|) such that G[C j , C bj ] is biconnected, for all C j ∈ C(E). Then, the shortest biconnected interval compatible with C i is the shortest interval [C j , C bj ] containing C i . A naive way to find b j is to initially set b j by j and increase b j until G[C j , C bj ] is biconnected. It requires checking the graph's biconnectivity O(|E| 2 ) times.
However, we can reduce it to O(|E|) by using the following lemma.
Lemma 3 For any
i, j ∈ [p], if C i < C j then b i ≤ b j .
Proof. We prove the lemma by contradiction. Assume that there exist two costs
C i , C j such that C i < C j and b i > b j . Obviously, G[C j , C bj ] is a subgraph of G[C i , C bj ]. Since G[C j , C bj ] is biconnected, G[C i , C bj ] is also biconnected. On the other hand, b i is the smallest value such that G[C i , C bi ] is biconnected. Thus, b i ≤ b j , contradicts the assumption.
Using Lemma 3, we can set b_j initially as b_{j-1} instead of j. This reduces the number of biconnectivity checks to at most O(|E|). The algorithm then repeatedly verifies the biconnectivity of the graph G[C_j, C_{b_j}] and increases b_j until G[C_j, C_{b_j}] is a biconnected graph. Since a biconnected graph is a connected graph without articulation vertices, the graph's biconnectivity can be checked in O(|V| + |E|) by Tarjan's algorithm [START_REF] Tarjan | Depth-first search and linear graph algorithms[END_REF]. In total, the complexity of Algorithm 1 is O(|E|^2).
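Since the computational section reports that the biconnectivity tests were carried out with the NetworkX package, a compact Python rendering of this lower-bound heuristic could look as follows. It follows the monotonicity argument of Lemma 3 but does not reproduce Algorithm 1 line by line; all names are ours.

import networkx as nx

def btsp_lower_bound(V, E, c):
    # Sketch of the lower-bound heuristic of Section 4 (names are ours).
    costs = sorted(set(c[e] for e in E))                  # C_1 < ... < C_p
    p = len(costs)
    edges_of = {cc: [e for e in E if c[e] == cc] for cc in costs}

    b = [None] * p   # b[j]: smallest index with G[C_j, C_{b[j]}] biconnected
    right = 0
    for j in range(p):
        right = max(right, j)
        # subgraph restricted to edge costs in [C_j, C_right]
        G = nx.Graph()
        G.add_nodes_from(V)
        for k in range(j, right + 1):
            G.add_edges_from(edges_of[costs[k]])
        while right < p and not nx.is_biconnected(G):
            right += 1
            if right < p:
                G.add_edges_from(edges_of[costs[right]])
        if right == p:
            # no biconnected subgraph starts at C_j; by Lemma 3, none starts later
            break
        b[j] = right

    # gamma* = min_i gamma(C_i) = length of the shortest interval [C_j, C_{b[j]}]
    intervals = [costs[b[j]] - costs[j] for j in range(p) if b[j] is not None]
    return min(intervals) if intervals else 0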
Local search algorithm to improve the upper bound
To improve the upper bound of the branch-and-cut algorithm, we develop a local search algorithm for the BTSP, called k-balanced, based on k-opt algorithms for the TSP [START_REF] Shen | Computer solutions of the traveling salesman problem[END_REF][START_REF] Helsgaun | General k-opt submoves for the lin-kernighan tsp heuristic[END_REF]. The algorithm takes a graph G = (V, E) with edge costs c and an initial tour as input and returns an improved tour with a smaller max-min distance. We use k-balanced to provide a good feasible solution at the beginning of the branch-and-cut algorithm and enhance the incumbent solutions during the branch-and-cut.
The intuition of k-balanced is to repeatedly perform k-exchanges (k-opt moves) to improve the current tour. A k-exchange replaces k edges in the current tour with k edges in such a way that a tour with a smaller max-min distance is achieved. Algorithm 2 sketches a generic version of k-balanced. In the following, we describe in detail the algorithm.
Algorithm 2 Generic k-balanced
Input: A tour H of G and a fixed number k.
Output: A tour with a smaller max-min distance.
1: improved ← True
2: while improved do
3:   improved ← False
4:   Select (F, l′, u′) where F ⊂ H and [l′, u′] ⊊ [l_H, u_H].
5:   EC(F, l′, u′) ← {(i, j) ∈ E | i, j ∈ V(F) ∧ (l′ ≤ c_(i,j) ≤ u′)}.
6:   if exists a k-subset F̄ ⊂ EC(F, l′, u′) such that (H \ F) ∪ F̄ is a tour then
7:     H ← (H \ F) ∪ F̄.
8:     improved ← True
9:   end if
10: end while
11: return H.
Given a tour H of G, at each iteration, k-balanced constructs two edge sets, F = {f_1, . . . , f_k} and F̄ = {f̄_1, . . . , f̄_k}, such that H′ = (H \ F) ∪ F̄ is a new tour with a smaller max-min distance. We call the edges of F out-edges and the edges of F̄ in-edges.
The max-min distance of H′ is smaller than that of H if and only if all edge costs of H′ belong to an interval shorter than [l_H, u_H]. Due to this fact, the out-edge set F must contain all edges with either the maximum edge cost or the minimum edge cost in H, and the in-edge set F̄ only comprises edges with costs belonging to a range [l′, u′] such that u′ - l′ < u_H - l_H. In order to avoid searching all possible intervals [l′, u′], we simply consider intervals [l′, u′] ⊊ [l_H, u_H].
We first describe a way to construct the in-edge set F̄ given a triple (F, l′, u′) where F ⊂ H and [l′, u′] ⊊ [l_H, u_H]. Let EC(F, l′, u′) = {(i, j) ∈ E | i, j ∈ V(F) ∧ (l′ ≤ c_(i,j) ≤ u′)} be the set of edges whose end-vertices are in V(F) with costs between l′ and u′. By its definition, EC(F, l′, u′) is precisely the set of edges that can be used to complete a tour from H \ F; in particular, F̄ ⊂ EC(F, l′, u′). To construct F̄, we solve the problem of completing a Hamiltonian cycle from H \ F with only edges in EC(F, l′, u′). With k fixed, we can solve the same problem on G′, a compressed version of G with at most 2k vertices. The construction of F̄ is thus cheap since it is independent of the size of G. Figure 1 illustrates this idea.
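As an illustration of this completion step, the sketch below builds EC(F, l′, u′) and searches for a k-subset F̄ that closes H \ F back into a tour. For simplicity it enumerates k-subsets by brute force, whereas the paper solves the completion problem by integer programming on the compressed graph G′ of Figure 1; the names used here are our own.

from itertools import combinations
import networkx as nx

def candidate_edges(E, c, F, l_prime, u_prime):
    # EC(F, l', u'): edges with both endpoints in V(F) and cost in [l', u']
    VF = {v for e in F for v in e}
    return [e for e in E
            if e[0] in VF and e[1] in VF and l_prime <= c[e] <= u_prime]

def find_in_edges(V, E, c, H, F, l_prime, u_prime, k):
    # Try to find a k-subset F_bar of EC(F, l', u') turning (H \ F) + F_bar
    # into a tour; brute force stands in for the IP used in the paper.
    base = [e for e in H if e not in F]
    EC = candidate_edges(E, c, F, l_prime, u_prime)
    for F_bar in combinations(EC, k):
        G = nx.Graph()
        G.add_nodes_from(V)
        G.add_edges_from(base)
        G.add_edges_from(F_bar)
        # a spanning, connected graph in which every vertex has degree 2
        # is a single Hamiltonian cycle
        if all(d == 2 for _, d in G.degree()) and nx.is_connected(G):
            return list(F_bar)
    return None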
We now present rules to select (F, l ′ , u ′ ). We create three variants of k-balanced corresponding to three selection rules for (F, l ′ , u ′ ): k-balanced min, k-balanced max, and k-balanced extreme. Table 1 summarizes the three variants.
Algorithm 3 describes the selection rule of (F, l′, u′) for k-balanced min/max. In these variants, we select F in such a way as to maximize the cardinality of EC(F, l′, u′). We call this rule the maximum candidate cardinality principle (MCCP). In particular, for k-balanced min, we set (l′, u′) = (l_H + 1, u_H) and initialize F with all min-cost edges. At step i, an edge f_i in H \ F is added to the current F if it increases the cardinality of EC(F, l′, u′) the most. More precisely, f_i is the edge that has the most incident edges having one end-vertex in V(F) with costs between l′ and u′. The selection procedure is repeated until the cardinality of F equals k. This selection rule is applied similarly for k-balanced max with two modifications: F is initially the set of all max-cost edges, and l′, u′ respectively equal l_H and u_H - 1. Such a way to select (F, l′, u′) offers the largest possible cardinality of EC(F, l′, u′) and thus increases the probability of F̄'s existence. However, it decreases the max-min distance slowly at each iteration (the gain can be only 1 per k-exchange).
On the other hand, k-balanced extreme prioritizes dropping the max-min distance as fast as possible. While k-balanced min/max chooses edges to remove, the removal rule of k-balanced extreme is cost-based. Let d(c e , H) := min(|l H -c e |, |u H -c e |) be a distance from a cost c e to the edge costs of H. We choose F as the set of k edges with the smallest distance d(c, H). Then, (l ′ , u ′ ) equals (min e∈H\F c e , max e∈H\F c e ). This selection method can reduce the max-min distance substantially. However, it also decreases the cardinality of EC(F, l ′ , u ′ ) and thus decreases the possibility of finding the in-edge set F . Algorithm 4 gives the formal description of the rule.
Algorithm 3 Selection rule for k-balanced min/max
Input: A graph G = (V, E), a tour H, a constant k, and an extreme type ET.
Output: (F, l′, u′) where F ⊂ H and [l′, u′] ⊊ [l_H, u_H].
1: if ET is min then
2:   F ← {e ∈ H | c_e = l_H}, l′ ← l_H + 1, u′ ← u_H
3: else if ET is max then
4:   F ← {e ∈ H | c_e = u_H}, l′ ← l_H, u′ ← u_H - 1
5: end if
6: while |F| < k do
7:   f ← arg max_{e=(i,j) ∈ H\F} | δ({i, j}) ∩ δ(F) ∩ {e′ ∈ E | l′ ≤ c_e′ ≤ u′} |
8:   F ← F ∪ {f}
9: end while
10: return (F, l′, u′)
Algorithm 4 Selection rule for k-balanced extreme
Input: A graph G = (V, E), a tour H, and a constant k.
Output: (F, l′, u′) where F ⊂ H and [l′, u′] ⊊ [l_H, u_H].
1: F ← ∅
2: while |F| < k do
3:   removed_cost ← arg min_{c_e ∈ C(H\F)} d(c_e, H)
4:   F ← F ∪ {e ∈ H | c_e = removed_cost}
5: end while
6: l′ ← min_{e ∈ H\F} c_e
7: u′ ← max_{e ∈ H\F} c_e
8: return (F, l′, u′)
Notice that in all variants of k-balanced, we only consider one subset F for finding a k-exchange at each iteration. Although this setting can omit high-quality k-exchanges, it allows the algorithm to be launched with many random initial tours and values of k within an acceptable amount of CPU time. Thus, we can still obtain reasonable feasible solutions. To further improve the algorithm, when the number of min-cost edges or max-cost edges is at most 3, we search for 3-opt moves over all valid edge triples of the tour.
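For concreteness, the selection rule of Algorithm 4 could be written in Python as follows. This small sketch, with our own naming, only returns the triple (F, l′, u′) and leaves the search for F̄ to a completion routine such as the one sketched earlier; it assumes k < |H|.

def select_extreme(H, c, k):
    # Selection rule of k-balanced extreme (Algorithm 4); assumes k < |H|.
    l_H = min(c[e] for e in H)
    u_H = max(c[e] for e in H)
    dist = lambda ce: min(abs(l_H - ce), abs(u_H - ce))   # d(c_e, H)

    F = []
    remaining = list(H)
    while len(F) < k and remaining:
        removed_cost = min((c[e] for e in remaining), key=dist)
        F.extend(e for e in remaining if c[e] == removed_cost)
        remaining = [e for e in remaining if c[e] != removed_cost]

    l_prime = min(c[e] for e in remaining)
    u_prime = max(c[e] for e in remaining)
    return F, l_prime, u_prime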
Branch-and-cut algorithm
In this section, we describe a branch-and-cut algorithm for solving exactly the BTSP. It contains mechanisms to locate the largest and smallest edge costs (i.e., local bounding cuts, edge elimination, and variable fixing) and algorithms to improve the lower and upper bounds.
The first step is to perform Algorithms 1 and 2 to yield a lower bound and an upper bound to start the branch-and-cut algorithm. These bounds are also used to eliminate edges and reduce the formulation's size. Details are given in Section 6.1.
After the initialization steps, the algorithm constructs a search tree (a.k.a branch-and-bound tree) whose root node is the LP relaxation of (M IP -BT SP ) without subtour elimination constraints. When an integer solution is found, violated subtour constraints are found and added to the formulation. If this solution satisfies all subtour constraints and has the best objective value, it is called the incumbent solution. When obtaining a new incumbent solution, Algorithm 2 is called to enhance this solution and decrease the upper bound of the branch-and-cut algorithm. At nodes in which the solutions to the subproblems are fractional, local bounding cuts and subtour constraints are generated following the separation strategies presented in Section 6.3. To accelerate exploring the nodes, we integrate into the branch-and-cut algorithm several variable fixing techniques, which are described in Section 6.2. Other fundamental components, such as node and variable selections, follow the default rules of the commercial solver CPLEX 12.10.
The algorithm is sketched as follows:
Step 0: (Initialization)
0.1 Run Algorithms 1 and 2 to get a lower bound of the BTSP and an initial feasible solution (x^0, l^0, u^0), respectively.
0.2 (Edge elimination) Eliminate edges based on (x^0, l^0, u^0) following Section 6.1.
0.3 Let N be the node set of the branch-and-bound tree and (x, l, u) be the current incumbent solution. Initialize N by the LP relaxation of (MIP-BTSP) without (2c) and (x, l, u) by (x^0, l^0, u^0).
Step 1: (Node selection) If N is empty, then return (x, l, u) and terminate. Otherwise, take out a subproblem P from N.
Step 2: Solve P. If P is infeasible, go to Step 1. Otherwise, let (x*, l*, u*) be an optimal solution to P.
Step 3: If u* - l* ≥ u - l, go to Step 1.
Step 5: (Cut generation) Generate violated valid inequalities by the separation strategies in Section 6.3 and fix variables by the heuristics introduced in Section 6.2.
Step 6: (Variable selection) Choose a fractional variable to branch. Add the two resulting subproblems to N and go to Step 1.
Edge elimination
To reduce the formulation's size and accelerate solving the LP relaxations, we eliminate edges that can not occur in the optimal tour of the BTSP. Remember that the branch-and-cut algorithm aims to improve the incumbent solution more and more. Thus, if we can prove that the occurrence of an edge leads to a tour worse than the incumbent tour, we can remove this edge from the formulation.
Let H 0 be the initial tour found by Algorithm 2. As proven in Lemma 1, if a tour contains an edge with cost C i , its max-min distance is at least the length of the shortest biconnected interval compatible with C i . Then, edges with costs C i satisfying γ(C i ) > u H 0 -l H 0 can not be a part of the optimal tour; otherwise, the max-min distance of this tour will be greater than u H 0 -l H 0 . By this observation, we can remove edges e ∈ E such that γ(c e ) > u H 0 -l H 0 .
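Assuming the values γ(C_i) have been tabulated by the lower-bound routine, this preprocessing filter reduces to a one-line test; the sketch below is only illustrative.

def eliminate_edges(E, c, gamma, ub_gap):
    """Keep only edges that may appear in a tour better than the initial one.

    gamma: dict mapping a cost value C_i to gamma(C_i); ub_gap = u_H0 - l_H0.
    """
    return [e for e in E if gamma[c[e]] <= ub_gap]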
Variable fixing
Besides eliminating edges at the beginning, we also fix variables during the branch-and-cut algorithm to decrease the number of variables to be controlled and tighten the LP relaxations. Naturally, variables that cannot help to improve the incumbent solution should be fixed to 0. To fix variables, we add the inequalities corresponding to the fixing of the variables as cutting planes. This section proposes two heuristics to determine variables that can be fixed to 0: one based on the biconnected intervals and one based on fixed costs at nodes. Throughout this section, we denote by (x, l, u) the current incumbent solution of the search tree.
Biconnected-interval-based variable fixing
Using the same arguments as in Section 6.1, edges with costs C i such that γ(C i ) ≥ u -l can not appear in solutions that are better than the incumbent solution. Thus, such edges can be permanently fixed to 0 in the remaining nodes of the branch-and-bound tree. In particular, when a new incumbent solution (x, l, u) is found, we add the following inequalities to the formulation
x_e = 0     ∀e ∈ E : γ(c_e) ≥ u - l.     (4)
Obviously, the inequalities (4) are valid for the remainder of the search tree.
Fixed-costs-based variable fixing
The second heuristic to fix variables is due to the fact that each node of the search tree is associated with two disjoint edge sets F 0 and F 1 where F 0 , F 1 consist of edges that have been fixed to 0 and 1, respectively. Given a node ⟨F 0 , F 1 ⟩, we respectively denote by M i C(F1) and M s C(F1) the minimum and maximum of C(F 1 ). Let H ′ be a tour that has the max-min distance smaller than the incumbent solution's one and is found by the node or its descendants. Obviously, H ′ only comprises the edges of F 1 and edges with costs in (M s C(F1) -(u -l), M i C(F1) + (u -l)). The remaining edges, which do not satisfy the above cost condition, can be fixed to 0. The inequalities corresponding to the fixing of these variables are
x e = 0, ∀e ∈ E : c e / ∈ (M s C(F1) -(u -l), M i C(F1) + (u -l)) (5)
Since the validity of inequalities ( 5) depends on fixed costs at the node, these inequalities are only valid for the considered node and its descendants.
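The corresponding node-local filter can be sketched as follows; how the resulting equalities x_e = 0 are attached to the node again depends on the solver's callback interface, and the names here are illustrative.

def edges_fixable_at_node(E, c, F1, incumbent_gap):
    """Edges that can be fixed to 0 at a node <F0, F1> by inequalities (5).

    incumbent_gap = u - l for the current incumbent solution (x, l, u).
    """
    if not F1:
        return []
    Mi = min(c[e] for e in F1)   # smallest cost already forced into the tour
    Ms = max(c[e] for e in F1)   # largest cost already forced into the tour
    lo = Ms - incumbent_gap
    hi = Mi + incumbent_gap
    return [e for e in E if e not in F1 and not (lo < c[e] < hi)]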
Separation algorithms and strategies
An efficient branch-and-cut algorithm relies on good separation algorithms and deft separation strategies. We propose here separation procedures and strategies for subtour constraints and local bounding cuts. We first denote by (x * , u * , l * ) a fractional solution at a node of the branch-and-bound tree.
Subtour elimination constraints
Recall that subtour elimination inequalities have the form e∈δ(S) x e ≥ 2 where S ⊂ V . To find subtour constraints violated by x * , one can construct a graph
G * = (V, E * ) with edge set E * = {e ∈ E | x * e > 0}. A cost associated with e ∈ E * is x * e .
By this setting, a violated subtour constraint is a cut whose weight is less than 2 in G*. Such a cut can be found via a Gomory-Hu tree [START_REF] Ralph | Multi-terminal network flows[END_REF] of G*, built from |V| - 1 max-flow computations.
Since the subtour separation problem is computationally expensive to solve and may yield no cutting planes, we generate subtour inequalities every 100 nodes instead of at every node of the search tree.
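One possible rendering of this separation step uses the Gomory-Hu tree routine available in NetworkX: any tree edge of weight smaller than 2 yields a violated subtour constraint. The sketch below is ours and is not taken from the authors' implementation.

import networkx as nx

def violated_subtour_cuts(V, E, x_star, eps=1e-6):
    # Return vertex sets S whose subtour constraint is violated by x*.
    G = nx.Graph()
    G.add_nodes_from(V)
    for (i, j) in E:
        if x_star[(i, j)] > eps:
            G.add_edge(i, j, capacity=x_star[(i, j)])

    if not nx.is_connected(G):
        # every connected component already gives a cut of weight 0 < 2
        return [set(S) for S in nx.connected_components(G) if len(S) < len(V)]

    T = nx.gomory_hu_tree(G, capacity="capacity")
    cuts = []
    for (i, j, w) in list(T.edges(data="weight")):
        if w < 2 - eps:
            # removing this tree edge splits V into S and V \ S with cut value w
            T.remove_edge(i, j)
            cuts.append(set(nx.node_connected_component(T, i)))
            T.add_edge(i, j, weight=w)
    return cuts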
Local bounding cuts
At a node of the branch-and-bound tree, one can generate at most O(|E|) local bounding cuts. If we generate all possible local bounding cuts at every node, the subproblems will be enormous and very hard to solve. Thus, we only generate local bounding cuts with variable x e such that x * e > 0 and M e < M i C(F1) . In addition, since the local bounding cuts are mainly for the optimality phase, we only generate them when the MIP relative gap is less than 0.5 at every 10 nodes.
Computational experiments
In this section, we conduct some experiments to assess the efficiency of our branch-and-cut algorithm. All the experiments are conducted on a PC Intel Core i7-10700 CPU 2.9GHz and with 64 GB RAM. The algorithm is implemented in Python using CPLEX 12.10 with default setting and one solver thread. The CPU time limit for exploring the branch-and-bound tree is 10800 seconds (3 hours) per instance. For the testbed, we use the same TSPLIB instances [START_REF] Reinelt | Tsplib-a traveling salesman problem library[END_REF] from 14 to 493 vertices as [START_REF] Larusic | The balanced traveling salesmanproblem[END_REF].
The biconnectivity verification problem in Algorithm 1 has been solved using the NetworkX package [START_REF] Aric | Exploring network structure, dynamics, and function using networkx[END_REF]. For the k-balanced algorithms, since the test instances are complete graphs, we use permutations of {1, . . . , |V|} to initialize tours. The problem of completing a Hamiltonian cycle to find k-exchanges is solved by integer programming. To find a good upper bound, we run k-balanced with 10 random tours; at each iteration, we launch k-balanced extreme with k in {0, 10, . . . , K} and 3-balanced (if possible). To enhance the incumbent solutions during the branch-and-cut algorithm, we run k-balanced min and k-balanced max with k = K and 3-balanced. The value of K is defined in Table 2.
Graph size (|V|)   |V| < 50   50 ≤ |V| < 100   100 ≤ |V| < 200   |V| ≥ 200
K                  0          30               50                100
Table 2 The value of K corresponding to the graph size.
We first selected 12 instances from the test set to analyze the impact of the ingredients of our algorithm. The initial set comprises four small-sized instances (gr21, hk48, eil76, gr96), four medium-sized instances (pr136, si175, d198, tsp225) and four large-sized instances (a280, lin318, pcb442, d493). The first experiment in Section 7.1 aims at comparing our branch-and-cut algorithm to the commercial solver CPLEX 12.10. Then, Section 7.2 analyzes the impact of the components: local bounding cuts, Algorithm 1 to find a lower bound and Algorithm 2 to improve the upper bound. Finally, in Section 7.3, the entire testbed's results are shown with a comparison to the results of the double-threshold-based algorithms in [START_REF] Larusic | The balanced traveling salesmanproblem[END_REF].
The effectiveness of the proposed branch-and-cut algorithm
In the first experiment, we compare our algorithm with the commercial solver CPLEX for solving the formulation (M IP -BT SP ) specified in Section 2.
Table 3 reports the results of the two algorithms on the initial test set. Column "Size" indicates the number of vertices of the instances, which are equal to the numbers in the instances' names. The results of each algorithm in the table contain the objective value (labeled "Obj"), the running time in seconds (labeled "Time(s)"), and the number of nodes in the search tree (labeled "Nodes"). Notice that the running time includes the time spent on the initialization steps and on the search tree exploration. Instances whose running times are marked with an asterisk (*) are instances that cannot be solved to optimality within the CPU time limit, and their reported objective value is the best one found so far. Numerical results illustrate that our branch-and-cut algorithm outperforms CPLEX. Indeed, our algorithm can rapidly solve all 12 instances within the time limit, whereas CPLEX can solve only 7 out of 12 cases. In detail, CPLEX fails to prove optimality for instances si175 and d198, and fails to find the optimal solutions for instances lin318, pcb442 and d493. Among the 12 instances, there is only one instance (gr21) on which our algorithm performs slower; for the rest, our algorithm solves the problems 4 times faster on average than CPLEX. Moreover, our algorithm's average tree size is 23 times smaller than that of CPLEX.
The computational results in Table 4 show that all components play important roles in the branch-and-cut algorithm. Excluding one of the components from the algorithm substantially raises the running time and prevents the algorithm from solving several instances to optimality within the time limit. When using all components, the running time decreases by a factor of 3. We can order the effectiveness of the components as follows: Upper bound > Lower bound > Local cuts. The upper bound component yields the most improvement in CPU time (3.2 times faster), then the lower bound component (2.7 times) and the valid inequalities (1.6 times).
For a deeper analysis, we present in Table 5 the lower and upper bounds obtained by our algorithm and CPLEX. It can be seen that the lower and upper bounds found by our algorithm are much sharper than those of CPLEX. Furthermore, the time our algorithm spends finding the upper bounds is, on average, also smaller than that of CPLEX. In Table 6, column "DT-based algorithm" reports the results of the DT-based algorithms provided by [START_REF] Larusic | The balanced traveling salesmanproblem[END_REF]. Subcolumn "Obj" represents the best objective value found by the MDT or IB algorithm. Subcolumn "Opt?" indicates whether the solution is provably optimal, namely that the lower bound equals the objective value. Subcolumn "Time" gives the total time for calculating the lower bound and solving the instance by the MDT or IB algorithm. Notice that the running time of the DT-based algorithms as reported in [START_REF] Larusic | The balanced traveling salesmanproblem[END_REF] and recopied in Table 6 is measured with experiment settings differing from ours, i.e., the algorithms are coded in the C programming language and tested on a PC with a 3.40 GHz Pentium 4 CPU and 2 GB of RAM, and the time limit is 18000 seconds. We present the running time here not for comparison purposes but for reference only. As reported in [START_REF] Larusic | The balanced traveling salesmanproblem[END_REF], the DT-based algorithms converged to solutions within 10% of optimality estimated based on lower bound values, of which 27 solutions are provably optimal. The found solutions are the best solutions that can be found by the algorithms without regard to the CPU time limit, except for the instance gr431. Table 6 shows that our algorithm can solve to optimality 63 out of 65 instances within the time limit (10800 seconds), of which 36 instances are solved to optimality for the first time. For 14 of the 65 problems, mainly large-sized instances, our algorithm obtains solutions better than the DT-based algorithms. Although the two instances fl417 and pr439 cannot be solved optimally within the time limit, their best objective values so far are significantly smaller than those of the DT-based algorithms.
Conclusion
In this paper, we proposed a branch-and-cut algorithm for solving exactly the BTSP. We strengthened the branch-and-cut algorithm by local bounding cuts, edge elimination, and variable fixing. We also developed heuristics to improve the lower and upper bounds of the algorithm. Several experiments on TSPLIB instances with fewer than 500 vertices were conducted. For 63 out of 65 instances we obtained optimal solutions, and for 14 of the 65 instances, mainly large-sized ones, our algorithm provided solutions with smaller objective values compared with the previous work in the literature [START_REF] Larusic | The balanced traveling salesmanproblem[END_REF]. For solving exactly large-scale instances with thousands of vertices, more mechanisms for tightening the lower and upper bounds would be needed. Interesting directions for future research would be the investigation of new classes of local cuts and the improvement of the k-balanced algorithm.
Fig. 1 Illustration of a 3-opt move in 3-balanced max. (1.a) represents a tour H whose largest and smallest edge costs are 8 and 3, respectively. We remove all edges with max-cost 8 (f_1, f_2, f_3) from H and set (l′, u′) = (l_H, u_H - 1) = (3, 7). (1.b) illustrates the remainder H \ F of the tour. The dashed lines are the edges of EC(F, l′, u′), i.e., edges with both endpoints in V(F) and costs in [3, 7]. (1.c) shows a compressed version G′ of G, in which the paths of H \ F are considered as edges. The problem of reconnecting H in G is equivalent to the one in G′. (1.d) shows the resulting tour with a smaller max-min distance, i.e., 3.
7.2 Impact of the local cuts, lower bound and upper bound components
In this section, we aim to analyze the effectiveness of the three key components: local bounding cuts, the lower bound algorithm, and the upper bound algorithm. Four algorithm variants are created for this purpose. The first setting Full corresponds to the full version, which uses all components. The setting Full x represents the version excluding the component x, e.g., the setting Full Local cuts is the version without local bounding cuts.
7.3 Comparison to the double-threshold-based algorithms
Finally, we present the results of the branch-and-cut algorithm on the entire testbed with a comparison to the double-threshold-based (DT-based) algorithms introduced in [START_REF] Larusic | The balanced traveling salesmanproblem[END_REF], i.e., the modified double-threshold (MDT) and iterative bottleneck (IB) algorithms.
Table 1 Selection rules of (F, l′, u′)
Table 3 Comparison between the two algorithms on 12 TSPLIB instances
Table 4 Computational results of the algorithm variants
Instance Full Full Local cuts Full Lower bound Full Upper bound
Time(s) Nodes Time(s) Nodes Time(s) Nodes Time(s) Nodes
gr21 0.6 0 0.5 0 0.6 15 0.6 668
hk48 4.3 157 3.8 280 3.8 213 16.5 5544
eil76 6.2 390 471.3 116330 1427.2 179590 251.0 45520
gr96 93.9 1130 57.1 6835 110.2 9586 151.1 11161
pr136 62.5 1126 94.0 4691 78.6 3085 161.8 9320
si175 150.6 3854 3169.5 103012 10806.9 * 144123 3579.2 77106
d198 2,424.5 16892 1537.8 61366 10810.3 * 99761 10824.9 * 41125
tsp225 135.0 682 991.6 22267 3096.0 37827 4232.0 31270
a280 196.8 481 10826.9 * 52096 10825.6 * 147408 10074.2 47577
lin318 499.3 1591 461.8 1476 1014.8 6832 10835.7 * 45920
pcb442 9,013.8 1592 10899.8 * 16840 10858.9 * 3401 10847.4 * 7621
d493 4,114.4 7399 9118.0 14640 6568.6 4381 10862.8 * 3100
Average 1,391.8 2,941.2 3,136.0 33,319.4 4,633.5 53,018.5 5,153.1 27,161.0
Instance Our B&C CPLEX
LB LB Time UB UB Time LB LB Time UB UB Time
gr21 65 0.03 120 0.48 0 0.00 714 0.06
hk48 133 0.53 189 2.42 0 0.01 2612 0.06
eil76 2 0.06 5 1.23 0 0.01 60 0.04
gr96 281 6.30 561 5.04 0 0.02 5864 0.41
pr136 103 2.00 1149 3.23 0 0.06 12657 0.2
si175 5 0.82 21 5.70 0 0.06 303 1.92
d198 830 24.17 1355 9.23 0 0.08 2757 2.67
tsp225 6 1.57 21 14.76 0 0.48 494 16.78
a280 3 1.65 16 23.08 0 1.23 171 50.59
lin318 31 34.42 133 41.28 0 0.91 2929 45.89
pcb442 26 45.71 161 54.82 0 16.82 3790 3106.86
d493 34 57.87 1592 246.71 0 13.17 2947 1051.14
Average 132.18 15.92 473.00 37.05 0 2.99 3144.00 388.78
Table 5 Lower and upper bounds provided by the two algorithms
Table 6 Numerical results of the Branch-and-Cut algorithm on 65 TSPLIB instances. Instances with the objective value in bold are solved to optimality for the first time, and instances with objective values marked by ↓ are ones for which our algorithm produces better solutions.
Instance DT-based algorithm [9] Our B&C
LB Obj Time Opt? LB UB Obj Time Nodes
burma14 120 134 0.2 120 134 134 0.3 0
ulysses16 837 868 0.4 173 868 868 0.5 0
gr17 94 119 0.1 80 129 119 0.5 8
gr21 110 115 0.1 65 120 115 0.6 0
ulysses22 837 868 1.7 157 868 868 0.6 0
gr24 33 33 0.1 yes 33 45 33 0.7 0
fri26 21 21 0.1 yes 21 25 21 0.5 0
bayg29 23 29 0.3 23 34 29 0.8 17
bays29 36 38 0.3 36 49 38 1.9 642
dantzig42 13 13 0.2 yes 13 21 13 1.7 140
swiss42 14 14 0.4 yes 14 32 14 1.7 179
att48 156 192 14.1 133 223 190 ↓ 3.9 303
gr48 46 46 2.3 yes 46 96 46 2.9 173
hk48 138 156 9.8 133 189 156 4.3 157
eil51 3 3 0.3 yes 3 6 3 1.5 10
berlin52 139 151 11.5 113 151 149 ↓ 5.5 573
brazil58 912 1125 19.8 912 1124 1097 ↓ 7.7 264
st70 5 5 1.9 yes 5 6 5 1.9 59
eil76 2 2 1.1 yes 2 5 2 6.2 390
pr76 498 522 25.5 498 1015 522 8.6 186
gr96 281 314 941.1 281 561 314 93.9 1130
rat99 5 5 3.1 yes 5 9 5 9.2 333
kroA100 137 137 93.2 yes 137 463 137 83 1200
kroB100 129 145 111.1 129 471 145 65.7 917
kroC100 120 133 136.0 120 509 133 72.7 2500
kroD100 140 140 67.2 yes 137 269 140 422 4811
kroE100 137 139 173.5 137 452 139 60.9 865
rd100 43 43 23.9 yes 43 53 43 10.3 205
eil101 2 2 2.4 yes 2 3 2 3.5 12
lin105 95 100 217.4 95 183 100 26.9 221
pr107 877 877 84.4 yes 53 3645 877 25.2 1007
gr120 27 31 50.4 27 94 31 67.9 1174
pr124 364 411 500.4 364 731 406 ↓ 93.2 949
bier127 2915 3084 493.8 874 3459 2925 ↓ 29.8 106
ch130 18 22 36.5 17 60 22 56.7 827
pr136 103 126 58.8 103 1149 126 62.5 1126
gr137 403 428 3,239.7 354 825 424 ↓ 256.3 2647
pr144 259 259 347.8 yes 259 449 259 43 333
ch150 17 17 18.6 yes 17 33 17 196.9 520
kroA150 89 91 330.4 89 452 91 122.2 1279
kroB150 103 109 356.0 100 454 109 83.7 708
pr152 59 59 230.5 yes 59 378 59 63.3 1326
u159 142 142 111.0 yes 135 822 142 1933.8 42815
si175 7 7 0.0 yes 5 21 7 150.6 3854
brg180 0 0 0.7 yes 0 0 0 2.8 0
rat195 4 4 16.7 yes 4 16 4 499.6 3920
d198 1105 1140 391.8 830 1355 1122 ↓ 2424.5 16892
kroA200 71 76 660.3 71 599 76 1607.7 2050
kroB200 81 82 620.9 81 522 82 1242.7 4070
gr202 778 927 4,813.1 69 933 787 ↓ 289.2 241
ts225 0 21 50.9 0 696 21 503.1 6148
tsp225 6 6 88.7 yes 6 21 6 135 682
pr226 450 504 1,575.3 450 704 504 123.5 0
gr229 675 742 14,936.8 622 1660 706 ↓ 849.3 230
gil262 3 3 69.5 yes 3 7 3 99.5 110
pr264 238 415 3,132.9 238 3255 340 ↓ 7386.6 5589
a280 3 3 49.5 yes 3 16 3 196.8 481
pr299 89 89 1,173.6 yes 89 363 89 4258.6 476
lin318 31 31 1,442.0 yes 31 133 31 499.3 1591
rd400 11 11 491.7 yes 11 17 11 243.3 171
fl417 199 317 2,318.3 82 359 229 ↓ 10931.2 * 93800
gr431 1943 2230 42,966.3 * 502 2876 1962 ↓ 6555.5 11805
pr439 810 1620 5,973.9 256 2583 994 ↓ 11254.4 * 36687
pcb442 26 27 1,302.6 26 161 27 9013.8 1592
d493 1191 1459 9,416.0 34 1592 1193 ↓ 4114.4 7399 |
04102756 | en | [
"sdv.mhep"
] | 2024/03/04 16:41:20 | 2023 | https://dumas.ccsd.cnrs.fr/dumas-04102756/file/Th%C3%A8se%20GUILMOTEAU%20Thomas.pdf | M M Bacin Franck -Begue
René-Jean -Beytout Jean -Boire
Jean- Paul -Bommelaer Gilles -Boucher
Jean-Louis -Cano Noël - Cassagnes
Jean -Catilina Pierre -Chamoux
Jean -Chipponi Jacques - Chollet
Philippe -Citron Bernard -Clement Gilles -Dastugue
Bernard -Dauplat Jacques -Dechelotte
Pierre -Demeocq François -De Riberolles
Charles -Deteix Patrice -Escande
Georges -Mme Fonck
Yvette -M Gentou
Claude -Mme Glanddier
Phyllis - Mm Irthum
Bernard -Jacquetin Bernard -Mm Laveran
Henri -Lesourd Bruno -Levai
Jean-Luc -Mondie Jean
Michel -Philippe Pierre - Planche
Jean -Mme Rigal
Danièle -Mm Rozan
Pierre -Sirot Jacques -Ribal
Jean-Pierre -Souteyrand Pierre -Tanguy
Alain -Terver Sylvain -Thieblot
Philippe -Tournilhac Michel -Vanneuville Guy - Viallet
Jean-François -Mme Veyre
Annie Professeurs Emerites
M M Aumaitre
Olivier -Avan Paul -Bazin
Jean-Etienne -Caillaud Denis -Dapoigny Michel -Dubray
Claude -Durif Franck -Eschalier
Jean-Louis -Labbe André -Mme Lafeuille
Hélène -Mm Lemery Didier -Lusson
Jean-René -Pouly Jean-Luc
Exceptionnelle M Classe
Philippe Vago
Cytogénétique M Histologie-Embryologie
Louis Boyer
M Canis
Michel Gynécologie-Obstétrique
Mme Penault-Llorca
Frédérique Anatomie
Cytologie Pathologiques
M Bignon
Yves Jean Cancérologie
Biologique M Boirie
Yves Nutrition
Humaine M Clavelou
Pierre Neurologie
M Gilain
Laurent O R L M Lemaire
Jean-Jacques Neurochirurgie
M Camilleri
Lionel Chirurgie Thoracique
M Cardio-Vasculaire
Pierre-Michel Llorca
Adultes M Psychiatrie D'
Denis Pezet
Digestive M Chirurgie
Bertrand Souweine
Médicale M Réanimation
Stéphane Chirurgie Boisgard
Orthopédique
Mme Traumatologie
Martine Duclos
M Physiologie
Jeannot Schmidt
Urgence M Médecine D'
Marc Berger
M Hématologie
Jean-Marc Garcier
M Soubrier
Martin Rhumatologie
M Abergel
Armando Hépatologie
Mme Barthelemy
Isabelle Chirurgie Maxillo-Faciale
M Ruivard
Marc Médecine
Interne Mme Jalenques
Isabelle Psychiatrie D'
Adultes M Mom
Thierry Oto-Rhino-Laryngologie
M Coudeyre
M. D'INCAN Michel Dermatologie -Vénéréologie
M Gerbaud
Laurent Epidémiologie
Mme Pickering
Gisèle Pharmacologie
Clinique M Sapin-Defour
M Verrelle
Pierre Radiothérapie
Clinique M Tauveron
Igor Endocrinologie
Maladies Métaboliques
M Richard
Ruddy Physiologie
M Bay
Jacques-Olivier Cancérologie
Mme Godfraind
Catherine Anatomie
M Laurichesse
Henri Maladies
Infectieuses Et
Tropicales M Tournilhac
Olivier Hématologie
M Chiambaretta
Frédéric Ophtalmologie
M Filaire
Marc Anatomie
Chirurgie Thoracique
M Gallot
Denis Gynécologie-Obstétrique
M Guy
Laurent Urologie
M Traore
Ousmane Hygiène
Hospitalière M Andre
Interne M Bonnet
Richard Bactériologie
Virologie M Cachin
Florent Biophysique
Médecine Nucléaire
M Costes
Frédéric Physiologie
M Futier
Emmanuel Anesthésiologie-Réanimation
Mme Heng
Anne-Elisabeth Néphrologie
M Motreff
Pascal Cardiologie
Clinique M Rabischong
Benoît Gynécologie
Obstétrique M Chabrot
M Descamps
Stéphane Chirurgie Orthopédique
Mme Traumatologique
Cécile Henquell
Virologie M Bactériologie
Christophe Pomel
Générale M Cancérologie -Chirurgie
Fabien Thaveau
Vasculaire M Chirurgie
Georges Brousse
Mme Creveaux
M Faict
Justyna Pédiatrie
M Cornelis
François Génétique
M Lesens
Olivier Maladies
Tropicales M Authier
Nicolas Pharmacologie
Médicale M Lautrette
Alexandre Néphrologie
Réanimation Médicale
M Eschalier
Romain Cardiologie
M Merlin
Etienne Pédiatrie
Mme Tournadre
Anne Rhumatologie
M Durando
Xavier Cancérologie
M Dutheil
Frédéric Médecine
Santé Au Travail
Mme Fantini
Maria Livia
Neurologie M Sakka
Laurent Anatomie -Neurochirurgie
M Bourdel
Nicolas Gynécologie-Obstétrique
M Guieze
Romain Hématologie
M Poincloux
Laurent Gastroentérologie
M Souteyrand
Géraud Cardiologie
M Evrard
Bertrand Immunologie
M Poirier
Philippe Parasitologie
Mycologie Mme
Pham Dang
Nathalie Chirurgie
Maxillo-Faciale Stomatologie
Mme Sarret
Catherine Pédiatrie
M Bouvier
Damien Biochimie
Moléculaire M Biologie
Anthony Buisson
Mme Gastroentérologie
Lucie Cassagnes
M Gagniere
Johan Chirurgie
Viscérale Et Digestive
M Jabaudon-Gandet
Matthieu Anesthésiologie-Réanimation
Médecine Péri-Opératoire
M Lebreton
Aurélien Hématologie
M Moisset
Xavier Neurologie
M Samalin
Ludovic Psychiatrie D'
Adultes M Biau
Julian Radiothérapie
M Lachal
Jonathan Pédopsychiatrie
Classe Normale
M Bailly
Jean-Luc Bactériologie
Virologie Mme
Aubel Corinne
Oncologie Moléculaire
Mme Guillet
Christelle Nutrition
Humaine M Bidet
Yannick Oncogénétique
M Dalmasso
Guillaume Bactériologie
M Pizon
Frank Santé
Publique M Soler Cédric
M Giraudet
M Lolignier
Maitres De
Mme Eschalier Bénédicte
Mme Richard Amélie
Monsieur Le Professeur
Laurent Poincloux
PROFESSEURS DES UNIVERSITES-PRATICIENS HOSPITALIERS
Professor Armando ABERGEL, President of the Jury: thank you for agreeing to chair this thesis jury. Thank you for your wise advice, given throughout my residency. Please accept the expression of my full consideration and deep respect.
For such patients, biliary drainage is the key point of the best supportive care strategy enhancing survival and life quality, notably by improving jaundice, poor appetite and general weakness.
Non-surgical approaches are now preferred for management of unresectable malignant biliary obstruction and endoscopic drainage is deemed less invasive and more physiological in comparison to surgical and percutaneous approaches [START_REF] Webb | Endoscopic Management of Malignant Bile Duct Strictures[END_REF].
Nevertheless, it should be considered that many factors impact the success of malignant hilar biliary obstruction (MHBO) drainage.
LIVER VOLUME
Assessment of liver volume requires pre-operative imaging data and is key information to consider before scheduling biliary drainage for MHBO. Both computed tomography (CT) scan and magnetic resonance imaging (MRI) are available for hepatic volumetry, with a strong positive correlation between them. CT remains the preferred imaging technique due to its availability, cost and image quality (3) (Figure 1).
Before 2010, drainage of only one hepatic duct was recommended, usually the right hepatic duct, if pre or per-endoscopic retrograde cholangiopancreatography (ERCP) data showed that it drains more than 25% of total liver volume (4).
In (Figure 2).
PERCUTANEOUS OR ENDOSCOPIC DRAINAGE
The position of percutaneous and endoscopic drainage remains unclear for unresectable MHBO management. Percutaneous transhepatic biliary drainage (PTBD) offers greater technical feasibility and a higher success rate in comparison with endoscopic drainage, and stent patency is superior if internal PTBD with a metallic stent is performed (8,9) (Figure 3). mortality. However, this meta-analysis also reported a higher rate of bleeding complications in the PTBD arm and a decrease in quality of life of patients undergoing external PTBD [START_REF] Dumonceau | Endoscopic biliary stenting: indications, choice of stents, and results: European Society of Gastrointestinal Endoscopy (ESGE) Clinical Guideline -Updated October 2017[END_REF][START_REF] Moole | Endoscopic versus Percutaneous Biliary Drainage in Palliation of Advanced Malignant Hilar Obstruction: A Meta-Analysis and Systematic Review[END_REF]. Internal PTBD with metallic stent implantation sometimes requires a two-step procedure, increasing infectious and bleeding complications [START_REF] Rerknimitr | Asia-Pacific consensus recommendations for endoscopic and interventional management of hilar cholangiocarcinoma[END_REF]. On the other hand, a one-step procedure has been described since at least 2006, with a higher risk of biliary leakage than the two-step procedure; furthermore, the catheter has to be left in place at the insertion site for correct hemostasis and can only be removed 2 or 3 days after the procedure [START_REF] Yoshida | One-step palliative treatment method for obstructive jaundice caused by unresectable malignancies by percutaneous transhepatic insertion of an expandable metallic stent[END_REF].
In addition, a systematic review published in 2019 aimed to compare the incidence of seeding metastasis (SM) after pre-operative endoscopic or percutaneous treatment of MBO, and showed a significantly higher rate of SM after percutaneous drainage (22% vs 10.5%, p<0.00001) (13).
More recently, several studies have reasserted the position of endoscopic biliary drainage (EBD) as a safe and effective way to manage Bismuth III & IV strictures.
The emergence of endoscopic ultrasound biliary drainage (EUS-BD) is a true revolution in the management of biliary strictures after ERCP failure, after partial (unilateral) ERCP drainage, or when ERCP is impossible (surgically altered anatomy). In 2021, Kongkam et al. showed that the combination of ERCP + EUS-BD is an interesting alternative to PTBD alone or to PTBD + ERCP, with a significantly lower recurrent biliary obstruction (RBO) rate in the ERCP + EUS-BD arm, a similar complication rate and no difference in mortality (14) (Figure 4).
The latest ESGE 2022 guidelines now recommend EUS-BD as a second-line therapy after ERCP failure, over ERCP + PTBD, given its similar technical success with fewer post-procedural adverse events and a longer median time to RBO [START_REF] Van Der Merwe | Therapeutic endoscopic ultrasound: European Society of Gastrointestinal Endoscopy (ESGE) Guideline[END_REF][START_REF] Vanella | EUS-guided intrahepatic biliary drainage: a large retrospective series and subgroup comparison between percutaneous drainage in hilar stenoses or postsurgical anatomy[END_REF]. Improvement of quality of life is also important to note in such palliative situations.
A 2020 literature review on biliary stenting for MHBO seems to favor the endoscopic approach if an experienced endoscopist is available, reminding us of the local specificities of each hospital and of the importance of multidisciplinary discussion in these challenging situations [START_REF] Lee | Biliary stenting for hilar malignant biliary obstruction[END_REF].
UNILATERAL OR BILATERAL DRAINAGE
Several studies tried to compare unilateral and bilateral biliary drainage in MHBO (Figure 5a & 5b).
Single stenting is clearly recommended for Bismuth I strictures, as stenting of such strictures is technically similar to that of distal biliary obstructions (Dumonceau et al.; Rerknimitr et al.).
Bilateral metallic stenting is a technical challenge because of the difficulty of inserting and deploying a contralateral second stent; the emergence of new endoscopic devices has improved the technical and clinical success of bilateral endoscopic drainage.
In a 2009 retrospective study, Naitoh et al. showed similar technical and clinical success, with similar early and late complications, for unilateral and bilateral drainage, but bilateral metallic stenting provided significantly better stent patency, reducing reintervention for recurrent biliary obstruction. These results were confirmed in a 2017 randomized clinical trial that enrolled 133 patients to compare unilateral and bilateral metallic biliary stenting for unresectable MHBO and showed similar technical success with a lower reintervention rate and prolonged stent patency in the bilateral stenting arm (Lee et al.).
Two recent meta-analyses confirmed that bilateral and unilateral stenting for unresectable MHBO are similar in terms of clinical success and safety profile. Technical success was significantly higher in the unilateral group (97% vs 89%, p=0.003) (21), and significantly longer stent patency was observed in the bilateral stenting arm (HR 1.28, p=0.01) (Yang et al.).
STENT TYPE
While the optimal strategy of biliary drainage in unresectable MHBO is still debated, the choice of a self-expandable metallic stent (SEMS) is no longer contested, since its superiority has been shown in several randomized clinical trials and multicenter studies (Figures 6 & 7).
SEMS have been shown to be superior to plastic stents in this indication, providing significantly higher clinical success, better stent patency and fewer reinterventions for stent dysfunction (Xia et al.; Sangchan et al.; Mukai et al.).
Furthermore, the American Society for Gastrointestinal Endoscopy (ASGE) reversed its stance concerning the use of plastic stents in patients with a short life expectancy (<3 months) and, since 2021, recommends the use of SEMS for all patients with MHBO in a palliative situation (Qumseya et al.; Pfau et al.).
STENT IN STENT OR SIDE BY SIDE (SEQUENTIAL AND SIMULTANEOUS PROCEDURES)
Recent studies have reported that endoscopic bilateral drainage is now a first-line option for the management of unresectable MHBO, given its lower cholangitis rate and physiological advantages (Sherman et al.). Two main techniques have been developed over the past years: the "side-by-side" and "stent-in-stent" methods.
The procedure begins with the initial stenting of one intrahepatic duct, followed by placement of a second stent in another intrahepatic duct: parallel to the first stent (i.e "side-by-side" technique), or by crossing through the mesh of the initial stent (i.e "stent-in-stent" technique)(28) (Figure 8).
Overall technical success rates of these two methods range from 73.3% to 100% (Lee et al., 2017), and no difference has been shown regarding clinical success, adverse events or stent patency duration (Lee et al., 2020).
One of the limiting factors for successful stent-in-stent (SIS) placement is the difficulty of inserting the second metallic stent through the mesh of the first stent; dedicated SEMS with a wide-open central mesh have been designed to facilitate this step (Figures 9a & 9b).
Before 2010, the side-by-side (SBS) technique required a "sequential" procedure: placement of a first stent, followed by placement of a second stent. Second stent placement is frequently difficult to achieve because of the resistance of the second delivery system against the first deployed stent (Lee et al.) and the radial forces exerted on the bile duct wall, resulting in impaction of the delivery system of the second stent (Hookey et al.).
The development of a new type of uncovered biliary metallic stent, carried by an ultra-thin (6Fr) delivery system (Zilver®; Cook Medical, Ireland), permitted the rise of a new type of bilateral biliary drainage procedure: the "simultaneous side-by-side" technique (Figure 10).
The thin design of the new delivery system allows two stents to be inserted simultaneously through the same 4.2mm working channel of a duodenoscope. The two stents are sequentially introduced over left and right guidewires and then carefully released across the strictures simultaneously (Chennat et al.). In 2013, Law & Baron confirmed the safety, feasibility and effectiveness of simultaneous bilateral stenting using this new delivery system.
Some concerns exist regarding these new types of stents. Because of their specific wide-open mesh design, such stents may favor tumor ingrowth, which might shorten stent patency and increase the need for biliary reintervention (Chen et al.; Heo et al.). It is important to note that biliary reintervention may be harder after the stent-in-stent technique: it is difficult to catheterize the stent mesh in the hilar region again (Inoue et al.) and to achieve bilateral biliary recanalization. Contrary to side-by-side drainage, where the distal tips of both stents are visible in the duodenal lumen, recanalization of the crossing stent must be achieved by catheterizing it through the main stent (which is the only stent visible in the duodenal lumen).
Side-by-side stenting may increase post-procedural complications, particularly cholangitis, due to the risk of portal vein occlusion caused by overexpansion of the common bile duct (Moon et al.).
In order to reduce such complications and improve clinical success, the use of a SEMS carried by a 6Fr delivery system may now be preferred, with no reduction in stent patency (Kawakubo et al.) and a higher success rate when reintervention for recurrent biliary obstruction is needed.
However, in a prospective randomized multicentric study, Lee et al. found no difference between the two deployment techniques in terms of technical and clinical success, adverse event rate, stent patency or survival. Latest guidelines recommend endoscopic biliary drainage as a first-line option, using preferentially uncovered metallic stents over plastic stents (Dumonceau et al.; Rerknimitr et al.; Xia et al.; Sangchan et al.; Mukai et al.). Bilateral drainage should be preferred to unilateral drainage in high-grade malignant hilar strictures because it provides longer stent patency and a lower reintervention rate for recurrent biliary obstruction (RBO), with similar clinical outcomes and adverse events (Naitoh et al.; Lee et al.; Meybodi et al.; Yang et al.). If unilateral drainage is performed, drainage of at least 50% of the total liver volume must be obtained (Vienne et al.). Bilateral drainage can be performed using either the side-by-side or the stent-in-stent technique, with no difference regarding technical and clinical success, stent patency or survival (Lee et al.; Naitoh et al.). Nevertheless, bilateral biliary drainage using metallic stents is still a technical challenge, with failed procedures occurring even for trained operators.
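As an illustration of the 50% liver-volume criterion cited above, the drained fraction can be estimated from CT volumetry before choosing between unilateral and bilateral stenting. The short sketch below is a hypothetical helper, not part of the study protocol or of any cited guideline; the segment names and volumes are illustrative assumptions only.

```python
# Hypothetical pre-ERCP volumetry check (illustrative values, not patient data).
def drained_fraction(segment_volumes_ml, drained_segments):
    """Fraction of total liver volume drained by the planned stent(s).

    segment_volumes_ml: dict mapping Couinaud segment -> volume in mL (from CT volumetry).
    drained_segments: set of segments expected to be drained by the planned stent(s).
    """
    total = sum(segment_volumes_ml.values())
    drained = sum(v for s, v in segment_volumes_ml.items() if s in drained_segments)
    return drained / total

volumes = {"II": 150, "III": 160, "IVa": 120, "IVb": 110,
           "V": 210, "VI": 200, "VII": 230, "VIII": 250}   # assumed segmental volumes
left_ducts = {"II", "III", "IVa", "IVb"}                    # segments reached by a left-sided stent

if drained_fraction(volumes, left_ducts) >= 0.5:
    print("Unilateral (left) drainage reaches >= 50% of the liver volume")
else:
    print("Consider bilateral drainage")
```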
The emergence of new ultra-thin (6Fr or smaller) delivery systems has, over the past few years, enabled simultaneous side-by-side metallic stent deployment. To date, few data have been reported in the literature regarding simultaneous bilateral biliary stenting with these new devices; only a small series by Inoue et al. reported higher technical success and a shorter procedure time for simultaneous drainage compared with the sequential side-by-side procedure (Law et al.; Kawakubo et al.; Inoue et al.).
In this study, we aimed to compare simultaneous side-by-side versus sequential ("side by side" and "stent in stent") bilateral biliary drainage in a large monocentric series.
METHODS
Patients
We investigated all patients who underwent bilateral biliary metallic stenting during ERCP for MHBO at Clermont-Ferrand University Hospital Centre between January 1st 2010 and January 1st 2023. Unresectable status was assessed according to the general medical condition and the locoregional or metastatic extent, after a medico-surgical multidisciplinary discussion.
The study was approved by local Ethics Committee (IRB00013412, "CHU de Clermont Ferrand IRB #1", IRB number 2023-CF003) with compliance to the French policy of individual data protection and was performed in accordance with the principles of the Declaration of Helsinki.
A large fistulotomy was performed when transpapillary catheterism was unsuccessful. In some cases, sphincterotomies had been previously performed.
Sequential "side by side" technique
Once deep biliary catheterism was achieved, multiple guidewires (0.035 inch straight or angle tip hydrophilic guidewire (JagWire ® or DreamWire ® , Boston Scientific Co, USA)) were inserted whenever possible in both left and right hepatic ducts. Contrast agent was injected selectively only after a guidewire was passed through the hilar stricture. Following the insertion of the guidewires into the main left and right intrahepatic bile ducts, both strictures were dilated using 6 or 8mm diameter dilation balloons (Hurricane ® , Boston Scientific Co, USA), followed by bilateral SEMS placement (sequential "side by side" or "stent in stent" technique and simultaneous technique).
After dilation of both strictures, a first SEMS (10mm diameter, 8, 10 or 12cm length, 8Fr delivery shaft catheter, uncovered Wallflex Biliary RX Stent ® , Boston Scientific Co, USA) (Figure 11b) was placed, usually in the left biliary duct because of a more challenging stent placement due to the sharp angulation between the left and main bile ducts, compared with right bile duct. The second SEMS was next inserted and placed alongside the first SEMS over the other guidewire and then deployed parallel to the first SEMS.
Sequential "stent in stent" technique
We used a Y-shaped stent with a wide wire mesh design (either M-Hilar Bonastent ® , Mi-Tech, South Korea or Y-Type Niti-S Stent ® , Taewoong Medical, South Korea) (Figure 9) to perform the "stent in stent" technique (or "through the mesh"). These stents have a thin delivery catheter of 7Fr allowing to cross the wide wire mesh and an 8mm diameter once they are deployed. For this technique, a firstly bilateral guidewire catheterism is not mandatory. After deployment of the first stent across the hilar stricture, the guidewire left across the primary stent was carefully withdrawn, without pulling it back completely, and was then inserted into the undrained contralateral hepatic duct through the central wide mesh of the primary stent. Another uncovered SEMS was then introduced over the guidewire and deployed in the contralateral hepatic duct.
Simultaneous "side by side" technique
For this technique, we used SEMS with an ultra-thin delivery system that allows simultaneous SBS bilateral hilar stenting through a 4.2mm working channel duodenoscope. We first used a laser-cut SEMS, the aixstent® (Leufen Medical GmbH, Berlin, Germany), carried by a 5Fr delivery system, from 2017 to 2020, then the Niti-S [M-Type]® (TaeWoong Medical, Seoul, Korea), carried by a 6Fr delivery system, from 2020 to 2023 (easier to use in our experience) (Figure 11a). Because of the ultra-thin delivery system, only 0.025-inch guidewires could be used (either the straight VisiGlide® (Olympus, Japan) or the angle-tip JagWire® Revolution (Boston Scientific Co, USA) when selective insertion of the guidewire into an intrahepatic duct was difficult) to achieve catheterism of the main left and right bile ducts. Biliary strictures were dilated using a 4, 6 or 8mm dilation balloon (Hurricane® RX, Boston Scientific, USA), followed by simultaneous insertion of the two SEMS delivery systems, each pushed over its guidewire. The two SEMS were then simultaneously deployed across the hilar stenosis in a SBS configuration.
This procedure required the assistance of two endoscopy nurses. All SEMS had an 8mm diameter, with a varying length from 10 to 12cm, according to the length of the stricture and the need for all SEMS to be positioned above the duodenal papilla. Furthermore, alignment of the distal stent ends was attempted. Nevertheless, if needed, a third distal stent could be inserted in the distal tip of one of the initial SEMS to achieve duodenal lumen expansion.
Outcomes and definitions
The primary outcome for this study was to determine technical success rate for each procedure (sequential or simultaneous bilateral stenting).
Secondary outcomes included clinical success, procedure duration (in min), early adverse event rate and stent-related adverse event rate, recurrent biliary obstruction (RBO) rate and time to RBO (or stent patency, in days).
Technical success was defined as the accurate positioning of two metallic stents across the left and right hilar strictures ("side-by-side" (SBS) in the simultaneous group; "side-by-side" or "stent-in-stent" (SIS) in the sequential group).
Clinical success was defined as a decrease of at least 50% in total bilirubin (in µmol/L) at day 7 (D7) compared with day 0 (D0).
Adverse events were classified as early when they occurred intraprocedurally or within one week after the procedure, and were graded according to the new AGREE (Adverse event GastRointEstinal Endoscopy) classification (44) (Figure 12).
Stent dysfunction was defined as recurrent biliary obstruction.
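For clarity, the outcome definitions above can be expressed as simple decision rules. The snippet below is only a schematic restatement in Python; the field names are illustrative assumptions and not taken from the actual study database.

```python
# Schematic restatement of the study definitions (illustrative field names).
def technical_success(left_stent_deployed: bool, right_stent_deployed: bool) -> bool:
    # Accurate positioning of one metallic stent across each of the left and right strictures.
    return left_stent_deployed and right_stent_deployed

def clinical_success(bilirubin_d0_umol_l: float, bilirubin_d7_umol_l: float) -> bool:
    # Decrease of at least 50% in total bilirubin at D7 compared with D0.
    return bilirubin_d7_umol_l <= 0.5 * bilirubin_d0_umol_l

def early_adverse_event(days_after_procedure: int) -> bool:
    # Intraprocedural (day 0) or within one week after the procedure.
    return 0 <= days_after_procedure <= 7

print(clinical_success(250.0, 110.0))  # True: 110 is less than half of 250
```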
Statistical analysis
Study data were collected and managed using REDCap electronic data capture tools hosted at the CHU de Clermont-Ferrand, France. Statistical analysis was performed using STATA 15 (Stata Corp LLC, Texas, USA). Categorical variables are expressed as counts and percentages.
Continuous variables are expressed as means or medians (± IQR). Group differences were evaluated using Fisher's exact or Chi-square tests for categorical variables, and Wilcoxon, Mann-Whitney or Kruskal-Wallis tests for continuous variables.
Survival time was estimated using the Kaplan-Meier method. Univariate and multivariate analyses were performed using a logistic regression model.
Significance was defined as p < 0.05.
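The analysis itself was run in STATA 15, as stated above; the following sketch only illustrates, with common Python libraries, how the same comparisons (Fisher/chi-square, Mann-Whitney, Kaplan-Meier, logistic regression) could be reproduced. The CSV export, column names and covariates are assumptions made for the example, not the actual REDCap schema.

```python
# Illustrative re-implementation of the comparisons described above (assumed column names).
import pandas as pd
from scipy import stats
from lifelines import KaplanMeierFitter
import statsmodels.api as sm

df = pd.read_csv("mhbo_cohort.csv")                      # hypothetical export of the study data
simultaneous = df[df["group"] == "simultaneous"]
sequential = df[df["group"] == "sequential"]

# Categorical outcome (e.g. technical success): Fisher's exact test on the 2x2 table.
table = pd.crosstab(df["group"], df["technical_success"])
odds_ratio, p_categorical = stats.fisher_exact(table)

# Continuous outcome (e.g. procedure duration): Mann-Whitney U test.
u_stat, p_duration = stats.mannwhitneyu(simultaneous["duration_min"].dropna(),
                                        sequential["duration_min"].dropna())

# Time to recurrent biliary obstruction: Kaplan-Meier estimate, shown here for one group.
kmf = KaplanMeierFitter()
kmf.fit(simultaneous["time_to_rbo_days"], event_observed=simultaneous["rbo_event"])

# Multivariate analysis of technical success: logistic regression (assumed covariates).
covariates = df[["bismuth_iv", "metastatic_obstruction", "bilirubin_d0"]]
X = sm.add_constant(covariates)
logit = sm.Logit(df["technical_success"], X, missing="drop").fit()
print(logit.summary())
```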
RESULTS
Patient characteristics
We identified 146 patients who underwent bilateral drainage for MHBO between 2010 and 2023. Patient characteristics are presented in Table 1; we observed no significant differences between the two groups.
Technical success
Technical success was achieved in 94% of patients in the simultaneous group and in 77% of patients in the sequential group (p=0.008). In the sequential group, technical success was significantly more often achieved with the SIS technique (95%) than with the SBS technique (57%) (p=0.001). A total of 58 patients had technical success in the simultaneous group: 21 with the aixstent BDH® device (Leufen Medical GmbH, Berlin, Germany) and 37 with the Niti-S [M-Type]® device (TaeWoong Medical, Seoul, Rep. of Korea).
Technical failure was associated with a higher risk of death, although not significantly (HR 1.35, p=0.22).
Baseline bilirubin level did not appear to impact technical success. In univariate and multivariate analyses, technical success was less often achieved for Bismuth & Corlette type IV strictures and for metastatic hilar obstruction (Table 2).
Patient outcomes are presented in Table 3.
Clinical success
Clinical success was observed in 75% of patients in simultaneous group and in 72% of patients in sequential group. There was no significant difference observed concerning clinical success between SBS and SIS techniques in the sequential group (76% and 69% of success respectively) (p=0.823).
Procedure duration
Median procedure time was 80 minutes in sequential group and 72 minutes in simultaneous group (p=0.92). Median procedure time was not different between sequential SBS and SIS techniques, 75 and 80 minutes respectively (p=0.75).
Early adverse events related to the procedure
Early adverse events were observed in 31% of patients in the sequential group and 21% in the simultaneous group, with no significant difference (p=0.205). Complication rates were similar between the SBS and SIS techniques in the sequential group (34% and 29%, respectively; p=0.39).
Early adverse events included 13 cases of pancreatitis (no patient required ICU admission or endoscopic/surgical intervention), 16 cases of cholangitis (2 patients admitted to the ICU, 4 deaths), 6 hemorrhagic complications (due to sphincterotomy or to iatrogenic duodenal ulcers related to the distal tip of the metal stents, all medically managed) and 1 duodenal perforation (in the sequential SBS group, surgically managed). Cholangitis was more often observed in the sequential group (16% vs 6%, p=0.07), with no difference between the SBS and SIS techniques (17% and 16%, respectively; p=0.19). Early adverse event rates are represented in Figure 14.
Among adverse events, most were graded AGREE II (64% of all AE). We observed 3 fatal complications (AGREE V) in the sequential group and 1 fatal complication in the simultaneous group; all were infectious complications due to cholangitis.
Technical and clinical failures were associated with a higher rate of AGREE V complications (9.5% and 7.4%, respectively; p=0.02). Patients with technical success suffered less frequently from infectious complications (OR = 0.25 [0.08-0.78], p = 0.016), and technical failure exposed patients to a higher risk of fatal sepsis (AGREE V) (OR 5.89 [0.78-44.4], p = 0.085). We did not report any procedure-related death, except for patients who suffered cholangitis with multivisceral failure due to septic shock (4 patients, 3% of the overall cohort).
Repartition of adverse event severity is represented in Figure 15.
Recurrent biliary obstruction (RBO)
RBO was observed in 13 patients in simultaneous group and 9 patients in sequential group (22% and 18% respectively, p=0.049). In sequential group, RBO occurred in 1 patient (5%) in SBS group and in 8 patients (20%) in SIS group.
Mean time to RBO was 144 and 112 days in simultaneous and sequential groups respectively.
Only one patient experienced RBO in the sequential SBS group, after 182 days. Mean time to RBO in the sequential SIS group was 103 days.
After RBO, all patients underwent endoscopic reintervention, except 2 patients in the simultaneous group (15%, p=0.49) whose general condition was not compatible with general anesthesia.
Technical and clinical success of reintervention were obtained in 82%, 75% and 100% of cases in the simultaneous, "stent-in-stent" and sequential "side-by-side" groups, respectively.
Reintervention seemed to be more successful when "side-by-side" bilateral drainage (either simultaneous or sequential) had been performed, compared with the "stent-in-stent" technique (83% versus 75%, p = 0.53).
DISCUSSION
Complete biliary drainage in case of MHBO frequently requires bilateral drainage, a technically challenging endoscopic procedure.
The Zilver 635® device (Cook Medical, Ireland), carried by a 6Fr delivery system, was initially thought to allow parallel insertion of two stents through a duodenoscope working channel to achieve simultaneous bilateral biliary stenting, but in our experience with this device, simultaneous bilateral stent placement in MHBO was unsuccessful due to severe friction between the two parallel devices, resulting in poor pushability of the delivery systems.
Law & Baron reported the use of this device in MHBO to perform bilateral stent deployment with a SIS or SBS approach, with 71% technical success, a rate far from what we observed in our center, whether with the SBS or the SIS technique.
Moreover, this high technical success might be explained by the proportion of Bismuth & Corlette type II patients treated in that study (close to 50%).
The emergence of the new aixstent® BDH (Leufen Medical GmbH, Berlin, Germany) carried by a 5Fr delivery system & Niti-S [M-Type]® (TaeWoong Medical, Seoul, Korea) carried by a 6Fr delivery system has been a true revolution in our practice, highly increasing technical success without impacting clinical outcomes or RBO rate, compared with sequential stenting.
Our study shows that simultaneous bilateral biliary drainage, using a 5 or 6Fr delivery system, is superior to sequential drainage when a side-by-side approach is chosen. Technical success was achieved in 94% of cases, a rate similar to that of previous studies evaluating the feasibility of simultaneous side-by-side drainage, with technical success ranging from 71 to 100% (32,33,38,40). Our results concerning the technical success of sequential SBS drainage are comparable with those of Inoue et al. (71% technical success, p=0.045).
We also observed a high technical success rate of 95% for stent in stent procedure, also comparable with previously existing literature (≈ 100%).
Interestingly, we observed a higher rate of technical failure for patients who suffered hilar metastatic obstruction and for type IV of Bismuth & Corlette strictures. Baseline bilirubin rate did not appear to interfere with technical success or failure.
A high technical success rate is all the more important because it is closely related to the risk of post-procedural adverse events, particularly cholangitis, even though contrast injection into non-catheterized liver segments was carefully avoided, whatever the technique used.
We reported a lower rate of early adverse events in the simultaneous group compared with the sequential group (21% vs 31%, p=0.205). Cholangitis was more often observed in the sequential group, regardless of the side-by-side or stent-in-stent technique (Figure 16). The higher rate of cholangitis in the sequential SBS group might be explained by the higher rate of technical failure; however, we found no explanation for the higher rate in the sequential SIS group. A recent meta-analysis evaluating SIS and SBS bilateral drainage found no difference in terms of AE between the two techniques (risk difference -0.09, p=0.07) (De Souza et al.), despite the fact that some authors suggest a higher rate of cholangitis with "side-by-side" techniques due to a higher incidence of portal vein occlusion or obstruction of one of the two stents (Naitoh et al.; Chen et al.).
That observation might also explain the lower RBO rate we observed in our study (18% and 22%, p=0.049), knowing that the usual RBO rate ranges from 18.2% to 52.9% in the studies that compared SIS and SBS, with a median stent patency of 118 to 262 days (Chen et al.). Mean time to RBO was 144 days in the simultaneous group and 103 days in the "stent-in-stent" group, a difference that might be explained by the large-cell mesh design of the stents used in the SIS technique, favoring tumor ingrowth (Chen et al.; Heo et al.).
Reintervention after RBO seemed to be more successful with SBS stenting (sequential and simultaneous combined) than with SIS stenting (83% vs 75%, p=0.53), a non-significant difference that might be explained by easier access to both stents with SBS drainage than with the SIS technique (two stent tips visible in the duodenal lumen with the SBS technique versus one with the SIS technique).
To date, this study is the largest comparing simultaneous and sequential bilateral biliary drainage. Patients were recruited over a long period of 13 years, and procedures were performed throughout by only 2 trained operators. Most reinterventions for RBO were performed in our tertiary center. These results are all the more valuable as both groups were well matched, reducing the risk of selection bias.
The limitations of this study are its single-center retrospective design and a potential lack of power, with missing data concerning in particular clinical success and procedure duration. It is important to note that the only previous study comparing simultaneous and sequential bilateral biliary drainage was also a retrospective monocentric study, with a smaller sample of 34 patients (40). Furthermore, due to the long recruitment period, the technical success of the sequential technique might have been negatively impacted (procedures performed from 2010 to 2017, compared with the simultaneous technique performed from 2017 to 2023).
CONCLUSION
In conclusion, simultaneous side by side endoscopic bilateral metallic stent placement using a new SEMS device with an ultra-thin (5 or 6Fr) delivery system is technically easier and as efficient as sequential bilateral stenting in unresectable MHBO to achieve bilateral drainage and can be useful to avoid the risk of a failed second stent placement. However, we reported a similar technical and clinical success in simultaneous SBS and sequential SIS groups with a higher rate of infectious complications in sequential group, even after successful SIS placement.
Technical failure was significantly associated with fatal infectious complications. RBO rate was similar in simultaneous and sequential SIS groups, but with a shorter time to RBO in SIS group.
We failed to show a difference in terms of procedure duration between the two techniques.
Both simultaneous SBS and sequential SIS are valuable options for palliative endoscopic drainage in MHBO. Further prospective multicentric trials are needed to investigate the place of the simultaneous SBS technique as a first-line treatment in unresectable MHBO compared with the SIS technique.
I will give my care free of charge to the needy and will never demand a fee beyond my work. Admitted inside people's homes, my eyes will not see what happens there, my tongue will keep silent the secrets entrusted to me, and my profession will not be used to corrupt morals or to favor crime.
Pierre CLAVELOU Doyen -Directeur
Respectful and grateful towards my MASTERS, I will pass on to their children the instruction I received from their fathers.
May MEN grant me their esteem if I am faithful to my promises. May I be covered with OPPROBRIUM and despised by my colleagues if I fail them.
DIRECT COMPARISON OF SIMULTANEOUS AND SEQUENTIAL ENDOSCOPIC METALLIC BILATERAL STENTING IN MALIGNANT HILAR BILIARY OBSTRUCTION: A SINGLE CENTER SUPERIORITY STUDY
Background
Endoscopic bilateral biliary drainage is a first-line palliative treatment for unresectable MHBO but remains technically challenging. The emergence of new SEMS carried by an ultra-thin (6Fr or smaller) delivery system now permits simultaneous bilateral stent placement. To date, only limited data comparing this new method with conventional sequential bilateral stenting have been reported. We conducted a monocentric retrospective study to evaluate a possible superiority of simultaneous "side-by-side" (SBS) biliary drainage.
Methods
We identified 135 patients who benefited from bilateral drainage using uncovered SEMS between 2010 and 2023. Among them, 62 benefited from simultaneous SBS bilateral drainage between 2017 and 2023, and 73 benefited from sequential bilateral drainage (38 using "stent in stent" (SIS) technique and 35 using SBS technique, between 2010 and 2017).
Results
Technical success was significantly more observed in simultaneous drainage compared to sequential drainage (94% vs 75%, p=0.008). However, simultaneous SBS drainage and sequential SIS drainage had a similar technical success (94% vs 95%). We observed no differences regarding clinical success, procedure duration and RBO rate. Stent patency was shorter in SIS group compared with simultaneous group (103 vs 144 days). Early adverse events (AE) were more frequent in sequential group (31% vs 21%, p = 0.205), with no differences regarding SIS or SBS technique. Technical failure was associated with a higher rate of infectious fatal AE (9.5% vs 1.7%, p=0.02). Reintervention after RBO seems to be more successful after using SBS rather than SIS techniques (83% vs 75%, p=0.53).
Conclusion
Simultaneous side by side endoscopic bilateral metallic stent placement using an ultra-thin delivery system is technically easier and as efficient as sequential bilateral stenting in unresectable MHBO to achieve bilateral drainage. Stent in stent procedure remains a good option in unresectable MHBO. Further prospective multicentric trials are needed to investigate the place of simultaneous SBS technique as a first-line treatment compared to SIS technique.
Keywords : MHBO -ERCP -SEMS -SIMULTANEOUS DRAINAGE -SIDE-BY-SIDE -STENT-IN-STENT
UFR DE MÉDECINE ET DES PROFESSIONS PARAMÉDICALES
Thesis for the French State Diploma of Doctor of Medicine (Thèse d'exercice pour le diplôme d'état de docteur en médecine)
by GUILMOTEAU Thomas
Presented and publicly defended on 27 April 2023
DIRECT COMPARISON OF SIMULTANEOUS AND SEQUENTIAL ENDOSCOPIC METALLIC BILATERAL STENTING IN MALIGNANT HILAR BILIARY OBSTRUCTION: A SINGLE CENTER SUPERIORITY STUDY
Thesis supervisor: Professor POINCLOUX Laurent, CHU de Clermont-Ferrand (hepato-gastroenterology), UFR de Médecine et des Professions Paramédicales
Jury president: Professor ABERGEL Armando, CHU de Clermont-Ferrand (hepato-gastroenterology), UFR de Médecine et des Professions Paramédicales
Jury members: Professor GAGNIERE Johan, CHU de Clermont-Ferrand (visceral surgery), UFR de Médecine et des Professions Paramédicales; Dr JARY Marine, CHU de Clermont-Ferrand (medical oncology); Dr MAGNIN Benoît, CHU de Clermont-Ferrand (radiology and medical imaging)
Figure: Usual liver volume distribution.
Figure: Flowchart.
Conventional sequential SBS placement was performed at our institution between 2010 and 2017. Since 2017, simultaneous SBS placement has been performed in patients with unresectable MHBO requiring bilateral drainage. The patient cohort was divided into those who underwent sequential SBS or SIS placement between 2010 and 2017 (sequential group) and those who underwent simultaneous SBS placement between 2017 and 2023 (simultaneous group). The two groups were compared retrospectively. Exclusion criteria were absence of jaundice, resectable obstruction, minors, pregnant women, and patients under limited judicial protection. Malignancy of the hilar biliary obstruction was proven by histological or cytological analysis (EUS-FNA or FNB, ERCP bile duct brushing, single-operator cholangioscopy biopsy (SpyGlass®, Boston Scientific Corporation, USA) or percutaneous biopsy) whenever possible and recommended. Due to the poor diagnostic yield of cytological brushing and the difficulty of obtaining direct endoscopic biopsies in cholangiocarcinoma, histological/cytological proof was not systematically achieved; the diagnosis was therefore sometimes made on the basis of medical history, biological and imaging findings.
( 9 ,
9 5% and 7,4% respectively, p=0.02). Patients who benefited technical success less suffered from infectious complications (OR = 0.25 [0.08-0.78], p = 0.016), and technical failure was more exposing patients to fatal sepsis (AGREE V) (OR 5.89 [0.78-44.4], p=0.085).
FIGURES
Figure 1:
Figure 2:
Figure 3: Schematic representation of internal-external percutaneous transhepatic biliary drainage in 4 steps, for the management of a distal biliary obstacle. First step: insertion of a catheter into the intrahepatic bile duct, followed by placement of a guidewire through the obstacle. The catheter is removed, the guidewire left in place, and a plastic tube is passed over the guidewire. The guidewire is removed and the external tip of the tube is stitched to the skin. From the UW Health website, 2019.
Figure 4: 4a: Malignant hilar biliary obstruction with dilated intrahepatic bile ducts. 4b: After failure of bilateral stent placement during ERCP, the left intrahepatic bile duct remains undrained; hepaticogastrostomy was performed using EUS-BD. 4c: After failure of bilateral stent placement during ERCP, the right intrahepatic bile duct remains undrained; hepaticoduodenostomy was performed using EUS-BD. 4d: After failure of stent placement during ERCP, both right and left intrahepatic bile ducts remain undrained; hepaticoduodenostomy associated with hepaticogastrostomy was performed using EUS-BD. From Kongkam et al. (14).
Figure 5a: Post-operative x-ray showing "stent-in-stent" (left) and "side-by-side" (right) bilateral endoscopic drainage.
Figure 7: An uncovered self-expandable metallic stent (SEMS) (Niti-S Biliary Uncovered Stent®, TaeWoong Medical, Seoul, Rep. of Korea). Delivery system: 7Fr. Diameter: 10mm.
Figure 8: Schematic representation of SBS (left) and SIS (right) biliary stenting. From De Souza GMV et al. (45).
Figure 9a: Niti-S Biliary Uncovered Metallic Stent (Y-Type)® (TaeWoong Medical, Seoul, Rep. of Korea) with central wide-open mesh. Delivery system: 8Fr. Diameter: 10mm.
Figure 9b: BONASTENT M-HILAR® (Standard Sci-Tech Inc., Seoul, Rep. of Korea). Delivery system: 7Fr. Diameter: 10mm.
Figure 10: Zilver 635® Biliary Self-Expanding Stent (Cook Medical, Ireland). Delivery system: 6Fr. Diameter: 8mm.
Figure 11a: Stents used in our study (simultaneous group). Above: Niti-S [M-Type]® (TaeWoong Medical, Seoul, Korea), delivery system 6Fr. Below: laser-cut aixstent® BDH (Leufen Medical GmbH, Berlin, Germany), delivery system 5Fr.
Figure 11b: Uncovered WallFlex Biliary RX Stent® (Boston Scientific Co, USA) used in the sequential group.
Figure 14: Early adverse event rates.
Figure 15: Distribution of severity according to the AGREE classification among patients who suffered an adverse event.
Figure 16: Distribution of severity according to the AGREE classification among patients with cholangitis.
Figure 17: Survival time curves by Kaplan-Meier analysis. We observed no significant difference in overall survival between groups (p=0.24).
CONTENTS
BACKGROUND
INTRODUCTION
LIVER VOLUME
PERCUTANEOUS OR ENDOSCOPIC STENTING
UNILATERAL OR BILATERAL DRAINAGE
STENT TYPE
STENT IN STENT OR SIDE BY SIDE (SEQUENTIAL AND SIMULTANEOUS PROCEDURES)
ARTICLE
INTRODUCTION
METHODS
Patients
Data collection
Procedures
Outcomes and definitions
Statistical analysis
RESULTS
Patient characteristics
Technical success
Clinical success
Procedure duration
Early adverse events related to the procedure
Recurrent biliary obstruction (RBO)
DISCUSSION
CONCLUSION
REFERENCES
TABLES
FIGURES
TABLE AND FIGURE CONTENTS
Table 1: Patient characteristics
Table 2: Univariate and multivariate analyses for technical success
Table 3: Patient outcomes
MRI : Magnetic Resonance Imaging
PTBD : Percutaneous Transhepatic Biliary Drainage
INTRODUCTION
Complete biliary drainage of malignant hepatic hilar strictures, which frequently requires bilateral drainage, is still a technical challenge. Malignant causes of hepatic hilar obstruction commonly include peri-hilar cholangiocarcinoma (Klatskin's tumor), gallbladder adenocarcinoma with hilar extension, central liver metastases (mostly from colonic or breast cancer), hepatocellular carcinoma with endoluminal biliary invasion, and hilar lymphadenopathy (1).
The only curative treatment for malignant biliary disease remains surgery; however, regardless of tumor histology, less than 30% of patients are suitable for curative resection because of locoregional/metastatic disease, poor ECOG-PS or comorbidities.
RCT : Randomized Clinical Trial
RBO : Recurrent Biliary Obstruction
SBS : Side-by-side
SEMS : Self Expandable Metallic Stent
SFAR : Société Française d'Anesthésie et de Réanimation
SIS : Stent-in-stent
SM : Seeding Metastasis
ABBREVIATIONS
AE : Adverse Event
AGREE : Adverse event GastRointEstinal Endoscopy
CT : Computed Tomography
EBD : Endoscopic Biliary Drainage
ECOG-PS : Eastern Cooperative Oncology Group - Performance Status
ERCP : Endoscopic Retrograde Cholangiopancreatography
ESGE : European Society of Gastrointestinal Endoscopy
EUS : Endoscopic Ultrasound
EUS-BD : Endoscopic Ultrasound - Biliary Drainage
FNA : Fine Needle Aspiration
FNB : Fine Needle Biopsy
Fr : French (= 0.33 mm)
MHBO : Malignant Hilar Biliary Obstruction
In 2017, Inoue et al. compared the technical success of sequential (side-by-side or stent-in-stent) and simultaneous bilateral biliary drainage, and showed a significantly higher technical success rate and a shorter intervention time in favor of simultaneous drainage using a 5.7Fr delivery system, compared with the sequential technique (regardless of the use of the side-by-side or stent-in-stent approach).
ARTICLE
DIRECT COMPARISON OF SIMULTANEOUS AND SEQUENTIAL ENDOSCOPIC
METALLIC BILATERAL STENTING IN MALIGNANT HILAR BILIARY
OBSTRUCTION: A SINGLE CENTER SUPERIORITY STUDY
INTRODUCTION
Management of malignant hilar biliary obstruction (MHBO) remains a challenging situation today. Most obstructions are unresectable, due to the patient's general condition or to local/metastatic tumor extension (41). The latest guidelines recommend endoscopic biliary drainage as the first-line palliative option.
Technical failure was associated with a higher rate of fatal cholangitis (OR 5.89 [0.78-44.4], p = 0.085), reminding us of the absolute necessity of adequate drainage of all opacified biliary areas, using, if necessary in case of ERCP failure, alternative techniques such as PTBD or EUS-BD. Furthermore, Vienne et al. showed that drainage of an atrophic biliary area is useless and favors cholangitis, supporting the need for hepatic volumetry assessment prior to ERCP in order to reduce post-procedural cholangitis.
Kongkam et al. compared the combination of ERCP + EUS-BD with PTBD for unresectable MHBO and showed a high technical success rate with a similar clinical success rate and a significantly lower RBO rate at 3 and 6 months (14). Vanella et al. confirmed the superiority of EUS-BD over PTBD after failed ERCP and proposed EUS-BD as the first option after failure of retrograde stenting. These results are supported by the latest 2022 ESGE recommendations, which now suggest hepaticogastrostomy using EUS-BD after ERCP or PTBD failure (van der Merwe et al.). Paik et al. reported a clinical success rate of 69% to 97% with metallic stents, results that are comparable to our findings. They also support the fact that, despite the use of narrower stents in the simultaneous group (8mm versus 10mm diameter), we did not observe a decrease in clinical success or stent patency.
Clinical success was defined in our study as a decrease of at least 50% of the bilirubin level at day 7. Recent ESGE recommendations suggest that clinical success should be defined as a decrease of 50-75% of the bilirubin level after 2 to 4 weeks. We observed a similar clinical success rate (72 to 75%, p=0.82) between the sequential and simultaneous groups, a result that might have been better with a different definition of clinical success, such as that recommended by the ESGE (47). However, due to the retrospective collection of data in our study, missing data would have been too important with such a definition.
We expected a shorter procedure time in the simultaneous group, since Inoue et al. and Kawakubo et al. showed that simultaneous drainage shortens procedure time compared with the sequential procedure (22 and 25 minutes for bilateral stenting in their studies, respectively) (38,40). In our study, the median procedure time was not different between the simultaneous and sequential procedures (80 and 72 minutes, p=0.92), but was close to the simultaneous procedure times found by Chennat et al. and Law & Baron (64 and 75 minutes, respectively) (32,33). We suffered from a lack of data concerning procedure duration, explained by the possibility for the same patient to undergo both diagnostic EUS and therapeutic ERCP during the same procedure. We excluded from the procedure duration analysis all patients who underwent the two procedures at the same time, in order not to interfere with the data for ERCP alone. Procedure time could sometimes include anesthetic induction and/or orotracheal extubation, which probably increased procedure times in our study.
We observed a shorter median survival time compared with other studies (61 days in the sequential group and 67 days in the simultaneous group), especially in the sequential stent-in-stent group (43 days), which might be explained by the higher proportion of metastatic disease in this population (Kaplan-Meier curves for survival time are represented in Figure 17). In comparison, median survival time ranged from 146 to 381 days in the meta-analysis by Chen et al. (5 studies comparing SIS and SBS, 250 patients).
Table 3: Patient outcomes.
Acknowledgements
HIPPOCRATIC OATH (Conseil national de l'ordre des médecins)
At the moment of being admitted to practice medicine, I promise and swear to be faithful to the laws of honor and probity.
My first concern will be to restore, preserve or promote health in all its dimensions, physical and mental, individual and social.
I will respect all persons, their autonomy and their will, without any discrimination according to their condition or convictions. I will intervene to protect them if they are weakened, vulnerable or threatened in their integrity or dignity. Even under constraint, I will not use my knowledge against the laws of humanity.
I will inform patients of the decisions envisaged, of their reasons and of their consequences.
I will never betray their trust and will not exploit the power inherited from circumstances to force consciences.
I will give my care to the needy and to anyone who asks for it. I will not allow myself to be influenced by the thirst for gain or the pursuit of glory.
Admitted into the intimacy of persons, I will keep silent the secrets entrusted to me. Received inside homes, I will respect the secrets of households and my conduct will not serve to corrupt morals. I will do everything to relieve suffering. I will not unreasonably prolong agony. I will never deliberately cause death.
I will preserve the independence necessary to the accomplishment of my mission. I will not undertake anything beyond my competence. I will maintain and perfect my skills to best provide the services that will be asked of me.
I will bring my help to my colleagues and to their families in adversity.
May men and my colleagues grant me their esteem if I am faithful to my promises; may I be dishonored and despised if I fail them.
Data collection
We retrospectively collected clinical data from patient charts, including demographic information, type of procedure (sequential or simultaneous), type of stent and delivery-system catheter used, date of procedure, tumor type and biliary extent according to the Bismuth & Corlette classification, procedure duration (in min), post-operative chemotherapy status, total bilirubin level (µmol/L) at D0 and D7, early adverse events related to the procedure (D0-D7), late stent-related complications (RBO, i.e. stent patency and/or dysfunction), type and success of reintervention after RBO, and time until death or last follow-up.
Procedures
All patients underwent cross-sectional imaging (computed tomography or magnetic resonance imaging) to evaluate hepatic volumetry and the biliary extension of the malignant hilar obstruction, and to assess surgical resectability. All procedures were performed using carbon dioxide insufflation with a 4.2mm working channel duodenoscope (TJF-160® and TJF-190®, Olympus Medical Systems Corp., Tokyo, Japan).
All patients underwent complete biliary sphincterotomy when deep biliary catheterism (either transpapillary or using the "double guidewire" technique) was achieved, or a large fistulotomy if transpapillary catheterism was unsuccessful.
00410290 | en | [phys.cond.cm-sm, phys.phys.phys-bio-ph] | 2024/03/04 16:41:20 | 2009 | https://ens-lyon.hal.science/ensl-00410290/file/EPL-DNA-vsF2.pdf
Santiago Cuesta-Lopez, Dimitar Angelov, Michel Peyrard
Adding a New Dimension
Keywords: 87.15.Hp - Dynamics and conformational changes; 82.39.Pj - Nucleic acids, DNA and RNA bases; 87.15.Cc - Folding: thermodynamics, statistical mechanics, models, and pathways
DNA melting, i.e. the separation of the two strands of the double helix, and its reverse process hybridisation are ubiquitous in biology, in vivo for instance for DNA transcription in the reading of the genetic code of a gene, as well as in vitro in biological laboratories for PCR (Polymerase Chain Reaction) or the use of DNA microarrays. This is why DNA melting has been extensively studied even in the early days of DNA structural studies [1]. An approximate understanding of the melting curves of long DNA segments, with thousands of base pairs, can be provided by simple statistical physics models, using empirical parameters because, at this large scale, the subtle effects of the base pair sequence are smoothed out. Understanding the fluctuations and melting of short DNA fragments of a few tens of base pairs with a high degree of heterogeneity is much more challenging. And it is also very important because this size is the scale at which the genetic code can be resolved. This would have some significant biological consequences to unravel the processes by which specific binding sites are recognised by proteins, drugs, mutagens and other molecules. This would also have a lot of practical importance in the design of the PCR primers which are used everyday in most of the biological laboratories [2].
The two kind of base pairs which exist in the DNA double helix have different thermal stability, the AT pair, bound by two hydrogen bonds, being weaker than the GC pair bound by three hydrogen bonds. This explains why the melting curve of a heterogeneous DNA sequence, which shows the fraction of open pairs as a function of temperature, can exhibit complex features. Those curves are easy to record experimentally because the UV absorbance of a DNA solution increases drastically when the bases are unstacked, which is (a) Mailing Address: [email protected] the case in the broken regions of the molecule. But such a curve only provides an integral information on the open fraction of base pairs. Getting more local information requires involved methods. Using a clever choice of sequences such that single strands can form hairpins, and a combination of heating and quenching, Montrichok et al. [3,4] managed to get some data on the melting process of short DNA sequences, detecting whether they open at one end or by starting with an open bubble in the centre. The kinetics of proton-deuterium exchange for the protons involved in the hydrogen bonds within pairs, coupled with NMR studies to detect the location of the exchanged protons, can also provide partial information on the spatial aspect of DNA fluctuations, at the expense of heavy experiments [5]. Another approach relies on special molecular constructs which attach a fluorophore and a quencher to DNA to detect its local opening at a particular site [6,7]. Only one position can be monitored, and one cannot exclude local perturbations of the fluctuations by the large residues attached to the DNA.
Due to their importance, the statistical and dynamical properties of DNA fluctuations and their relation to biological functions have been the subject of many theoretical studies [8][9][10]. These studies raised a debate on the role of statistical properties and dynamical phenomena in connection to biological function. But the validity of those theoretical approaches can only be tested if one can compare their predictions to measurements of the local fluctuations of the molecule. Moreover the study of dynamical and conformational phenomena in DNA requires a method not only able to give precise local information, but also able to provide coupled information on events along the chain. In this letter we present an original method that can provide a mapping of the strength of the fluctuations of the double helix as a function of their position along the sequence. This adds a new dimension, space, to the traditional melting curves. Our approach does not require special molecular constructs like those using dyes or fluorophores. Instead it uses DNA itself to report on its internal state and gives a snapshot of the opening of DNA at each guanine site at once. The method relies on the oxidative chemistry of the guanine bases G, and their propensity to be ionised by a two-step resonance excitation [11] from a strong UV laser pulse. Two guanine modifications, oxazolone and 8-oxo-7,8-dihydro-2'-deoxyguanosine (8-oxodG), have been identified as the major one-electron oxidative DNA lesions. Their formation depends on the local DNA conformation and on the charge-transfer efficiency, which is affected by local fluctuations [12][13][14]. While oxazolone is the unique product resulting from one-electron oxidation of the free 2'-deoxyguanosine, 8-oxodG appears as soon as the nucleoside is incorporated in a helical structure. Hence the measurement of the relative yield of these photoproducts at each G site (labelled R_Fpg/R_pip due to the method used for its detection [START_REF]fragments of length l are those which were cleaved at a guanine situated at distance l from the marked end. The ratio R_Fpg/R_pip of the number of radioactively labeled fragments of length l produced by Fpg cleavage or by piperidine cleavage indicates the probability of closing for this particular guanine, which is unambiguously identified by its distance from the marked end[END_REF]) tells us whether this G was in a helical structure (closed) or whether it was open when the molecule was hit by the laser pulse. As the experiment is not performed on a single molecule but in solution, the results are obtained on a statistical ensemble and they give a signal representative of the probability that each G site is closed at the temperature of the study. Standard biological methods can be used to measure the relative yield of the production of oxazolone and 8-oxodG [START_REF]fragments of length l are those which were cleaved at a guanine situated at distance l from the marked end. The ratio R_Fpg/R_pip of the number of radioactively labeled fragments of length l produced by Fpg cleavage or by piperidine cleavage indicates the probability of closing for this particular guanine, which is unambiguously identified by its distance from the marked end[END_REF].
However, it is important to notice that the value of R_Fpg/R_pip should not be considered as a quantitative measure of the local closing probability, because it is also affected by the configuration of the DNA molecule near the probe, which depends slightly on the sequence and influences the charge transfer. Only the temperature dependence of this ratio for a given probe can be analysed quantitatively [16].
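To make the bookkeeping behind this signal concrete, the short Python sketch below turns hypothetical band intensities from the two cleavage lanes into a per-guanine ratio; the probe positions and counts are invented for illustration, and the real quantification of the radiolabelled fragments involves additional normalisation steps.

```python
import numpy as np

# Hypothetical band intensities (arbitrary units) for radioactively labelled
# fragments of length l; a fragment of length l corresponds to cleavage at the
# guanine located l bases from the labelled end.
guanine_positions = [7, 18, 25, 31]                          # assumed probe positions
fpg_counts = {7: 820.0, 18: 310.0, 25: 540.0, 31: 660.0}     # Fpg cleavage lane
pip_counts = {7: 1000.0, 18: 990.0, 25: 1010.0, 31: 980.0}   # piperidine cleavage lane

def closing_signal(fpg, pip, positions):
    """Ratio R_Fpg/R_pip for each guanine position.

    The piperidine lane reports all oxidised guanines, the Fpg lane only the
    8-oxodG fraction formed when the guanine sits in a closed helical
    environment, so the ratio tracks the closing probability of that guanine
    (only its temperature dependence is quantitatively meaningful)."""
    return {l: fpg[l] / pip[l] for l in positions}

for l, r in sorted(closing_signal(fpg_counts, pip_counts, guanine_positions).items()):
    print(f"guanine at position {l:2d}:  R_Fpg/R_pip = {r:.2f}")
```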
By splitting the sample into several aliquots, the measurement can be performed at different temperatures, which allows us to produce a set of melting curves for each guanine in the sequence. Figure 1 shows the results of a UV laser irradiation analysis for two guanines, labelled S1-G1 and S1-G2, belonging to a test sequence S1 (details of the sequence are provided in the figure caption and in Figure 3). It demonstrates how our method reports two complementary curves at once for the same single DNA sequence, adding a valuable information of the state of the system. The results that it provides changes the view in which DNA denaturation can be studied, adding the spatial correlations to the notion of local conformation (melted or packed helix).
In order to quantitatively analyse the measurements shown in Fig. 1 we have fitted the experimental curves by the function
f(T) = A - B_1 tanh[C_1 (T - T_1)] - B_2 tanh[C_2 (T - T_2)],
selected according to the shape of the curves, particularly that of the probe S1-G1. Once the optimal parameters are determined, we plot df(T)/dT for each probe (right part of Fig. 1), to highlight the fine structure of each melting curve. Figure 1 clearly shows that, at the level of probe S1-G1, the melting occurs in two steps, with a precursor at T_2 = 45.7 °C while the full melting is achieved at 55.0 °C. Although a slight precursor effect can be detected for probe S1-G2, it is very weak and hardly visible on the figure. The fit of the data indicates that it occurs at T'_2 = 51.8 °C, very close to the full melting detected at T'_1 = 54.4 °C for this probe. Interestingly, these results show that the conformational melting can follow different paths in various parts of the same short sequence.
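A minimal illustration of this fitting step, assuming the (temperature, ratio) data points for one probe are stored in NumPy arrays, is sketched below with scipy.optimize.curve_fit; the synthetic data and initial guesses are placeholders, not the experimental values.

```python
import numpy as np
from scipy.optimize import curve_fit

def f(T, A, B1, C1, T1, B2, C2, T2):
    """Double-tanh profile fitted to the R_Fpg/R_pip melting data."""
    return A - B1 * np.tanh(C1 * (T - T1)) - B2 * np.tanh(C2 * (T - T2))

def dfdT(T, A, B1, C1, T1, B2, C2, T2):
    """Analytic derivative df/dT, used to expose the precursor and main
    melting transitions of each probe (the quantity shown in the insets)."""
    return (-B1 * C1 / np.cosh(C1 * (T - T1)) ** 2
            - B2 * C2 / np.cosh(C2 * (T - T2)) ** 2)

# Synthetic data standing in for one probe (roughly mimicking S1-G1).
rng = np.random.default_rng(0)
T_data = np.linspace(25.0, 65.0, 17)
R_data = f(T_data, 0.55, 0.25, 0.4, 55.0, 0.10, 0.6, 45.7) + 0.01 * rng.normal(size=T_data.size)

# Initial guesses: a main transition near 54 C and a weaker precursor near 46 C.
p0 = (0.5, 0.2, 0.3, 54.0, 0.1, 0.5, 46.0)
popt, _ = curve_fit(f, T_data, R_data, p0=p0, maxfev=20000)
print("fitted transition temperatures:", round(popt[3], 1), "and", round(popt[6], 1), "deg C")

T_fine = np.linspace(25.0, 65.0, 400)
derivative = dfdT(T_fine, *popt)   # plot this against T_fine to reproduce the insets of Fig. 1
```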
For longer sequences such as sequence S2 (Fig. 3), the existence of several guanine probes, located in some interesting domains, allows us to obtain a collection of snapshots for the local state of the system that can be combined to build a three-dimensional melting profile like the one shown in Figure 2. This 3D plot shows the ratio R_Fpg/R_pip for a particular DNA sequence containing four guanines (S2-G1 to S2-G4) that are monitored as probes.
The study as a function of temperature provides a melting profile for each probe, together with a view of the spatial correlations along the full sequence. In order to get a more precise insight on the origin of the two-step melting detected at the level of probe S1-G1, and to relate it to the effect of bubble nucleation in DNA, we have studied several artificial sequences specially designed and synthesised [START_REF]DNA Oligos have been HPLC-RP purified and artificially synthesized by Eurogentec[END_REF] to investigate possible non-local effects of the fluctuations. They are shown on Figure 3. Sequences S1 and S2 contain a 10 base-pair-long AT-rich fragment which is analogous to the "TATA-box', a motif that exists in the transcription-initiation regions of the genes of various species. As the AT base pairs bound by only two hydrogen bonds are weaker than the GC pairs, these segments are expected to exhibit large fluctuations even at biological temperature because they are closer to their melting temperature.
All sequences S1, S2 and S3 contain different guanines labeled Sn-Gx, where n refers to the sequence and x to the particular guanine. They are spread along the strand, and according to the basis of our method, act as probes informing us about the local structural state at each temperature. As mentioned above the level of the R F pg /R pip signal that we record depends on the structure in the vicinity of the guanine of interest. This is why, in order to allow a quantitative comparison between different sequences, we have selected guanines with the same environment. For instance guanines S1-G1, S2-G1, S3-G1 are all parts of the sequence CGA, and guanines S1-G2 and S2-G2 are part of the sequence AGT.
The major difference between sequences S1 and S2 lies in the length of the domain that separates probe G1 from the AT-rich region. This "buffer region", which is an heterogeneous domain with AT and GC pairs, has been extended from 7 base pairs in sequence S1 to 13 base pairs in sequence S2. In sequence S3 we have eliminated the large "TATA box" to keep only short AT-rich domains around probe S3-G1. In all cases these short DNA molecules are terminated by GC rich domains which act as clamps to prevent large fluctuations of the free ends of the molecules, and hold the two strands together even when we heat the sample up to 60 • C. [18].
The variation versus temperature of the ratios R F pg /R pip for all the guanine probes studied in sequences S1, S2, S3, are summarised in Fig. 4. The comparison of the various curves gives some clues to understand the differences between the two melting curves of probes S1-G1 and S1-G2 discussed above, and points out some interesting features of DNA fluctuations, which can be revealed by an experimental method able to record local melting profiles. When they are analysed in the context of the particular sequences that we studied, the curves suggest three important properties of DNA fluctuations: i) a large AT-rich domain undergoes very large fluctuations, even at room or biological temperature, and therefore tends to easily form an "open bubble".
ii) there is a minimum size of the AT-rich domain that allows the formation of such a bubble.
iii) the influence of such a bubble does not only affect its immediate vicinity, but extends to some distance.
Let us see how these statements are supported by our results.
As our method relies on the ionisation of the guanines, we do not directly measure the opening of the AT pairs but their fluctuations can be inferred from their influence on the adjacent guanines. Although we stressed that quantitative comparisons cannot be made between different probes because the signal that we record depends on the local structure of DNA, the very small value of R F pg /R pip for probe S1-G2 is nevertheless a strong indication that the closing probability of this guanine, which lies next to a series of 10 AT pairs is very low even at room temperature. This can be understood as an effect of the strong tendency of the large AT rich region to open into transient bubbles, called "premelting phenomena", starting at physiological temperatures [START_REF] Erfurth | [END_REF][20][21][22], which certainly affects the base pair which is right next to it. Probe S3-G1 is also surrounded by AT-rich regions, but its closing probability deduced from the corresponding value of R F pg /R pip , shown on the top panel of Fig. 4, appears to be much higher that for probe S1-G2. This indicates that the fluctuations of the five-base-long AT regions which are next to probe S3-G1 are not sufficient to form open bubbles that would promote the opening of this probe. This is in agreement with the existence of a minimum size needed to allow the formation of a bubble [3].
The striking point is that the large fluctuations of the TATA box do not only perturb the adjacent geometry but also induce conformational changes that distort the closed packed helicoidal structure in regions distant from the bubble nucleation segment, giving rise to premelting intermediate structural states that coexist in consonance with the nucleated bubbles. This shows up in the two-step denaturation that we observed for probe S1-G1, as discussed above. Although this probe is 7 base-pairs away from the TATA box, Fig. 1 shows that precursor effects appear well below the full denaturation of this probe. Those precursors are also visible in Fig. 4. Note again how the melting at the level of probe S1-G1 differs from a simple sigmoidal curve, particularly in the temperature range from 37 • C to 52 • C. It is tempting to assign them to the influence of the fluctuations of the TATA, which grow when the temperature is increased and might influence the opening structure and fluctuations of the double helix even rather far away. But, to confirm such an assignment control experiments are necessary. Their results are shown on the bottom panel of Fig. 4 which shows the thermal variation of the ratio R F pg /R pip for several guanine probes in sequence S2.
In this sequence, the TATA box is present, as in sequence S1, but the length of the buffer region that separates it from probe S2-G1, which has the same local structure as probe S1-G1, has been increased. Moreover this buffer region is strengthened because it contains several GC pairs, including two adjacent ones at the site of probe S2-G3. In this sequence the existence of the large fluctuations of the TATA box is attested by the very low closing probability of probe S2-G2, similar to what is observed for probe S1-G2. But, contrary to what was observed for probe S1-G1, the denaturation of probe S2-G1 does not show any significant precursor effect. Similar one-step transitions are also observed for probes S2-G3 and S2-G4. In summary, these experiments clearly suggest that thermal fluctuations, which are stronger in AT-rich tracks, induce bubble nucleation phenomena and structural changes that affect not only the local geometry and dynamics of DNA in the breathing portion, but also extend to some distance along the helix. How this happens is highly dominated by the characteristics of the fragment sequence as well as the length of the sections involved.
Those results are only accessible because we have introduced a method which is able to provide a spatial information that was not available until now for the study of local melting in short DNA fragments. Further studies are certainly necessary to confirm and precise the non-local effect of the fluctuations of large AT-rich regions that we have presented in this letter. They become possible with the "three-dimensional melting curves" that can now be measured.
Finally, we would like to emphasise the importance of taking into account the structural modifications induced by bubble dynamics in terms of DNA-protein binding interactions, Transcription Factor recognition or DNA-drug binding. Further studies of local fluctuations in DNA may be of significant importance in the analysis of these biological phenomena.
We would like to thank the program CIBLE of Région Rhône-Alpes which supported this work.
Fig. 1: Local melting curves for different sections of a DNA sequence (S1). Symbols represent the thermal evolution of the ratio R_Fpg/R_pip exhibited by two guanines G1 and G2, used as probes located along the molecule. This ratio reports on the closing probability of each guanine at a given temperature. The lines are the fitting curves described in the paper and the insets plot the derivative of the fitting curve for each probe. They highlight the fine structure of the melting curves and point out the existence of different local conformational transitions along the DNA sequence.
Fig. 2: Adding a new dimension to the study of the melting profile of a short DNA fragment: the figure displays a 3D view of the spatially resolved ratio R_Fpg/R_pip, showing the closing probability for the different guanine probes placed along the longer DNA sequence S2.
Fig. 3: Sequences of the DNA fragments investigated in this study. S1 and S2 are artificial sequences containing a large TATA box and guanines on the 5'-3' strand, used as probes. S3 is a shorter control sequence that eliminates the TATA motif. All sequences are completed by GC-rich terminal domains (marked as dotted boxes) to stabilise them and ensure proper closing of these short DNA helices.
Fig. 4: The top panel shows the melting profiles for the probes studied in sequences S1 and S3. Sigmoidal non-linear fits (dotted lines) have been used to describe the evolution of probes S1-G1 and S3-G1. Both fits have been performed with the same functional form. While the response of S3-G1 agrees with a one-step standard melting transition, the result for S1-G1 suggests a two-step transition, with a premelting region, clearly shown in Fig. 1. The bottom panel shows the ratio R_Fpg/R_pip for the guanine probes of sequence S2 versus temperature. For probe S2-G1, the premelting effect has been suppressed due to the existence of a longer, more stable intermediate buffer region.
00410291 | en | [phys.qphy, math.math-pr, phys.mphy, math.math-mp] | 2024/03/04 16:41:20 | 2009 | https://hal.science/hal-00410291/file/Random2.pdf
Ion Nechita (email: [email protected]), Clément Pellegrini
Quantum Trajectories in Random Environment: the Statistical Model for a Heat Bath
, where the Gibbs model of a heat bath has been studied. It is shown that the statistical model of a heat bath provides a clear physical interpretation in terms of emissions and absorptions of photons. Our approach yields models of random environment and unravelings of stochastic master equations. The equations are rigorously obtained as solutions of martingale problems using the convergence of Markov generators.
Introduction
The theory of Quantum Trajectories consists in studying the evolution of the state of an open quantum system undergoing continuous indirect measurement. The most basic physical setting consists of a small system, which is the open system, in contact with an environment. Usually, in quantum optics and quantum communication, the measurement is indirectly performed on the environment [START_REF] Barchielli | Direct and heterodyne detection and other applications of quantum stochastic calculus to quantum optics[END_REF][START_REF] Barchielli | Quantum Trajectories and Measurements in Continuous Time The Diffusive Case[END_REF][START_REF] Barchielli | On a class of stochastic differential equations used in quantum optics[END_REF][START_REF] Barchielli | Alberto Quantum stochastic calculus, measurements continuous in time, and heterodyne detection in quantum optics[END_REF][START_REF] Breuer | Francesco The theory of open quantum systems[END_REF][START_REF] Wiseman | J Milburn interpretation of quantum jump and diffusion processes illustrated on the Bloch sphere[END_REF][START_REF] Wiseman | Quantum trajectories and feedback Ph[END_REF]. In this framework, the reduced time evolution of the small system, obtained by tracing over the degrees of freedom of the environment, is described by stochastic differential equations called stochastic Schrödinger equations or stochastic Master equations. The solutions of these equations are called Continuous Quantum Trajectories. In the literature, two generic types of equations are usually considered 1. Diffusive equations
dρ_t = L(ρ_t) dt + ( C ρ_t + ρ_t C⋆ - Tr[ ρ_t (C + C⋆) ] ρ_t ) dW_t,    (1)
where (W t ) t≥0 is a one dimensional Brownian motion.
Jump equations
dρ_t = L(ρ_t) dt + ( C ρ_t C⋆ / Tr[ C ρ_t C⋆ ] - ρ_t ) ( dÑ_t - Tr[ C ρ_t C⋆ ] dt ),    (2)
where ( Ñt ) t≥0 is a counting process with stochastic intensity t → t 0 Tr C ρ s C ⋆ ds. Physically, equation [START_REF] Attal | Quantum noises[END_REF] describes photon detection models called heterodyne or homodyne detection [START_REF] Barchielli | Direct and heterodyne detection and other applications of quantum stochastic calculus to quantum optics[END_REF][START_REF] Barchielli | Quantum Trajectories and Measurements in Continuous Time The Diffusive Case[END_REF][START_REF] Wiseman | J Milburn interpretation of quantum jump and diffusion processes illustrated on the Bloch sphere[END_REF][START_REF] Wiseman | Quantum trajectories and feedback Ph[END_REF]. The equation [START_REF] Attal | Quantum Noises Book to appear[END_REF] relates direct photon detection model [START_REF] Barchielli | Direct and heterodyne detection and other applications of quantum stochastic calculus to quantum optics[END_REF][START_REF] Wiseman | J Milburn interpretation of quantum jump and diffusion processes illustrated on the Bloch sphere[END_REF][START_REF] Wiseman | Quantum trajectories and feedback Ph[END_REF]. The driving noise depends then on the type of measurement. Mathematically, a rigorous approach for justifying these equations is based on the theory of Quantum Stochastic Calculus [START_REF] Barchielli | Alberto Continual measurements in quantum mechanics and quantum stochastic calculus[END_REF][START_REF] Barchielli | Constructing quantum measurement processes via classical stochastic calculus[END_REF][START_REF] Bouten | Hans Stochastic Schrödinger equations[END_REF][START_REF] Parthasarathy | An introduction to quantum stochastic calculus[END_REF]. In such a physical setup, the action of the environment (described usually by a Fock space) on the small system is modeled by quantum noises [START_REF] Attal | Quantum noises[END_REF][START_REF] Attal | Quantum Noises Book to appear[END_REF][START_REF] Gardiner | Quantum noise. A handbook of Markovian and non-Markovian quantum stochastic methods with applications to quantum optics[END_REF]. The evolution is then described by the so-called Quantum Stochastic Differential Equations [START_REF] Attal | Quantum noises[END_REF][START_REF] Attal | Quantum Noises Book to appear[END_REF][START_REF] Parthasarathy | An introduction to quantum stochastic calculus[END_REF][START_REF] Fagnola | Quantum stochastic differential equations and dilation of completely positive semigroups[END_REF]. Next, by using the quantum filtering [START_REF] Belavkin | Quantum stochastic calculus and quantum nonlinear filtering[END_REF][START_REF] Bouten | A discrete invitation to quantum filtering and feedback control To appear[END_REF][START_REF] Bouten | An introduction to quantum filtering[END_REF] technique, one can derive the stochastic Schrödinger equations by taking into account the indirect observations. 
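As a purely numerical illustration of equation (1), the following sketch integrates one diffusive quantum trajectory for a qubit with a simple Euler-Maruyama scheme. The Lindblad form assumed here for L, as well as the choices of H and C, are illustrative assumptions and are not taken from a specific model of this paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative two-level choices (not taken from the paper).
H = np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)   # system Hamiltonian
C = np.array([[0.0, 1.0], [0.0, 0.0]], dtype=complex)    # measurement operator

def lindblad(rho):
    """Assumed Lindblad form for L, with the single operator C."""
    Cd = C.conj().T
    return (-1j * (H @ rho - rho @ H)
            + C @ rho @ Cd - 0.5 * (Cd @ C @ rho + rho @ Cd @ C))

def diffusive_step(rho, dt):
    """One Euler-Maruyama step of the diffusive equation (1)."""
    dW = np.sqrt(dt) * rng.normal()
    m = np.trace(rho @ (C + C.conj().T)).real
    rho = rho + lindblad(rho) * dt + (C @ rho + rho @ C.conj().T - m * rho) * dW
    rho = 0.5 * (rho + rho.conj().T)      # crude guard against numerical drift
    return rho / np.trace(rho).real

rho = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)
dt, n_steps = 1e-3, 5000
for _ in range(n_steps):
    rho = diffusive_step(rho, dt)
print("one diffusive trajectory at t =", dt * n_steps, ":\n", np.round(rho, 3))
```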
Another approach, not directly connected with quantum stochastic calculus, consists in using instrumental operator process and notion of a posteriori state [START_REF] Barchielli | Direct and heterodyne detection and other applications of quantum stochastic calculus to quantum optics[END_REF][START_REF] Barchielli | Quantum Trajectories and Measurements in Continuous Time The Diffusive Case[END_REF][START_REF] Barchielli | Alberto Quantum stochastic calculus, measurements continuous in time, and heterodyne detection in quantum optics[END_REF][START_REF] Barchielli | Giancarlo Instruments and mutual entropies in quantum information[END_REF][START_REF] Barchielli | Instrumental processes, entropies, information in quantum continual measurements[END_REF][START_REF] Mora | Basic Properties of Non-linear Stochastic Schrödinger Equations Driven by Brownian MOotions[END_REF].
In this work, we shall use a different approach, introduced recently by the second author in [START_REF] Pellegrini | Uniqueness and Approximation of Stochastic Schrödinger Equation: the diffusive case[END_REF][START_REF] Pellegrini | Existence, uniqueness and approximation for stochastic Schrödinger equation: the Poisson case[END_REF][START_REF] Pellegrini | Markov Chains Approximation of Jump-Diffusion Quantum Trajectories[END_REF]. This discrete-time model of indirect measurement, called Quantum Repeated Measurements is based on the model of Quantum Repeated Interactions [START_REF] Attal | Alain The Langevin equation for a quantum heat bath[END_REF][START_REF] Attal | Yan From repeated to continuous quantum interactions[END_REF][START_REF] Attal | Yan From (n + 1)-level atom chains to n-dimensional noises[END_REF] introduced by S. Attal and Y. Pautrat. The setup is the following: a small system H is in contact with an infinite chain, ∞ k=1 E k , of identical and independent quantum systems, that is E k = E for all k. The elements of the chain interact with the small system, one after the other, each interaction having a duration τ > 0. After each interaction, a quantum measurement is performed on the element of the chain that has just been in contact with the small system. Each measurement involves a random perturbation of the state of the small system, the randomness being given by the outcome of the corresponding quantum measurement. The complete evolution of the state of the small system is described by a Markov chain depending on the time parameter τ . This Markov chain is called a Discrete Quantum Trajectory. By rescaling the intensity of the interaction between the small system and the elements of the chain in terms of τ , it has been shown in [START_REF] Pellegrini | Uniqueness and Approximation of Stochastic Schrödinger Equation: the diffusive case[END_REF][START_REF] Pellegrini | Existence, uniqueness and approximation for stochastic Schrödinger equation: the Poisson case[END_REF] that the solutions of equations (1,2) can be obtained as limits of the discrete quantum trajectories when the time step τ goes to zero.
In [START_REF] Pellegrini | Uniqueness and Approximation of Stochastic Schrödinger Equation: the diffusive case[END_REF][START_REF] Pellegrini | Existence, uniqueness and approximation for stochastic Schrödinger equation: the Poisson case[END_REF], the author investigated the case when the reference state of each element of the chain is the ground state (this corresponds also to models at zero temperature). This setup was generalized in [START_REF] Attal | Stochastic Master Equations for a Heat Bath[END_REF], where Gibbs states with positive temperature were considered and the corresponding equations were derived. In the present work, we go beyond this generalization and study the statistical model for the temperature state of the chain. More precisely, the initial state of the elements of the chain is a statistical mixture of ground and excited states. It is important to notice that both the Gibbs model as well as the ground state model are deterministic. Let us stress that, in the case where no measurement is performed after each interaction, both the Gibbs and the statistical model give rise to the same deterministic limit evolution. This limit behavior confirms the idea that a mixed quantum state and a probabilistic mixture of pure states represent the same physical reality. Quite surprisingly, we show that, when adding measurement, the limit stochastic differential equations are of different nature: for the Gibbs model the only possible limit evolutions are deterministic or diffusive, whereas for the statistical model jump evolutions becomes a possibility. Furthermore, the Gibbs model limit equations involve at most one random noise, whereas two driving noises may appear at the limit when considering in the statistical model.
The article is structured as follows. In Section 1, we introduce the different discrete models of quantum repeated interactions and measurements. In our approach, we present the statistical model of the thermal state as the result of a quantum measurement applied to each element of the chain before each interaction. Next, we describe the random evolution of the open system by deriving discrete stochastic equations. In Section 2, we investigate the continuous time models obtained as limits of the discrete models when the time-step parameter goes to zero. We recall the results of [START_REF] Attal | Stochastic Master Equations for a Heat Bath[END_REF] related to the thermal Gibbs model and we describe the new continuous models related to the thermal statistical model. Section 3 is devoted to the analysis of the different models. The qualitative differences between the continuous time evolutions are illustrated by concrete examples. Within these examples, it is shown that the statistical approach provides clear physical interpretations which cannot be reached when considering the Gibbs model. We show that the model at zero temperature (each element of the chain in its ground state) can be recovered from the statistical model; however, this is not possible with the Gibbs model. Moreover, we show that considering the statistical model allows one to obtain unravelings of heat master equations with a measurement interpretation. Section 4 contains the proofs of the convergence of the discrete time model to the continuous model. Such results are based on Markov chain approximation techniques using the notion of convergence of Markov generators and martingale problems.
Quantum Repeated Interactions and Discrete Quantum Trajectories
In this section we present the mathematical model of quantum repeated measurements. In the first subsection we briefly recall the model of quantum repeated interactions [START_REF] Attal | Yan From repeated to continuous quantum interactions[END_REF] and in the second subsection we describe three different situations of indirect quantum measurements, in which environment particles are measured before and/or after each interaction. Discrete evolution equations are obtained in each case.
Quantum Repeated Interactions Model without Measurement
Let us introduce here the mathematical framework of quantum repeated interactions. We consider a small system H in contact with an infinite chain of identical and independent quantum systems. Each piece of the chain is represented by a Hilbert space E. Each copy of E interacts, one after the other, with the small system H during a time τ. Note that all the Hilbert spaces we consider are complex and finite dimensional. We start with the simpler task of describing a single interaction between the small system H and one piece of the environment E. Let ρ denote the state of H and let σ be the state of E. A state is a positive self-adjoint operator of trace one; in Quantum Information Theory such operators are also called density matrices. The coupled system is described by the tensor product H ⊗ E and the initial state is in product form ρ ⊗ σ. The evolution of the coupled system is given by a total Hamiltonian acting on H ⊗ E
H tot = H 0 ⊗ I + I ⊗ H + H int ,
where the operators H 0 and H are the free Hamiltonians of the systems H and E respectively, and the operator H int is the interaction Hamiltonian. The operator H tot gives rise to an unitary operator
U = exp(-iτ H tot ),
where τ represents the time of interaction. After the interaction, in the Schrödinger picture, the final state of the coupled system is
µ = U (ρ ⊗ σ)U ⋆ .
In order to describe all the repeated interactions, we need to describe an infinite number of quantum systems. The Hilbert space of all possible states is given by the countable tensor product
Γ = H ⊗ ∞ k=1 E k = H ⊗ Φ,
where E k ≃ E for all k ≥ 1. If {e 0 , e 1 , . . . , e K } denotes an orthonormal basis of E ≃ C K+1 , the orthonormal basis of Φ = ∞ k=1 E k is constructed with respect to the stabilizing sequence e ⊗N * 0 (we shall not develop the explicit construction of the countable tensor product since we do not need it in the rest of the paper; we refer the interested reader to [START_REF] Attal | Yan From repeated to continuous quantum interactions[END_REF] for the complete details).
Let us now describe the interaction between H and the k-th piece of environment E k , from the point of view of the global Hilbert space Γ. The quantum interaction is given by an unitary operator U k which acts like the operator U on the tensor product H ⊗ E k and like the identity operator on the rest of the space Γ. In the Schrödinger picture, a state η of Γ evolves as a closed system, by unitary conjugation
η -→ U k η U ⋆ k .
Therefore, the whole procedure up to time k can be described by an unitary operator V k defined recursively by
V_{k+1} = U_{k+1} V_k,    V_0 = I    (3)
In more concrete terms we consider the initial state µ = ρ ⊗ ∞ k=1 σ for the small system coupled with the chain (notice that all the elements of the chain are initially in the same state σ k = σ). After k interactions, the reference state is given by
µ k = V k µ V ⋆ k .
Since we are interested only in the evolution of the small system H, we discard the environment Φ. The reduced dynamics of the small system is then given by the partial trace on the degrees of freedom of the environment. If α denotes a state on Γ, we denote by Tr Φ [α] the partial trace of α on H with respect to the environment space Φ = ∞ k=1 E k . We recall the definition of the partial trace operation.
Definition-Theorem 1 Let H and K be two Hilbert spaces. For all state α on H ⊗ K, there exists a unique state on H denoted by Tr K [α] which satisfies
Tr[ Tr_K[α] X ] = Tr[ α (X ⊗ I_K) ],
for all X ∈ B(H). The state Tr K [α] is called the partial trace of α on H with respect to K.
With this notation, the evolution of the state of the small system is given by
ρ k = Tr Φ µ k . (4)
The reduced dynamics of (ρ k ) is entirely described by the following proposition [START_REF] Attal | Yan From repeated to continuous quantum interactions[END_REF][START_REF] Nechita | Random repeated quantum interactions and random invariant states[END_REF].
Proposition 1 The sequence of states (ρ k ) k defined in equation ( 4) satisfies the recurrence relation
ρ_{k+1} = Tr_E[ U (ρ_k ⊗ σ) U⋆ ].
Furthermore, the application
L : B(H) → B(H),   X ↦ Tr_E[ U (X ⊗ σ) U⋆ ]
defines a trace preserving completely positive map (or a quantum channel) and the state of the small system after k interactions is given by
ρ_k = L^k(ρ_0).    (5)
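Proposition 1 can be checked numerically with a few lines of code: build the interaction unitary, define the channel L by conjugation and partial trace, and iterate it. The Hamiltonians, interaction time and environment state below are arbitrary illustrative choices, not parameters taken from the paper.

```python
import numpy as np
from scipy.linalg import expm

dH = dE = 2
tau = 0.1                                                  # interaction time

# Illustrative total Hamiltonian on H (x) E (kron ordering: system first).
H0 = np.diag([0.0, 1.0]).astype(complex)
HE = np.diag([0.0, 1.0]).astype(complex)
sm = np.array([[0.0, 1.0], [0.0, 0.0]], dtype=complex)     # lowering operator
H_int = np.kron(sm, sm.conj().T) + np.kron(sm.conj().T, sm)
U = expm(-1j * tau * (np.kron(H0, np.eye(dE)) + np.kron(np.eye(dH), HE) + H_int))

sigma = np.diag([0.8, 0.2]).astype(complex)                # state of one chain element

def ptrace_env(X):
    """Partial trace over the environment factor of an operator on H (x) E."""
    return np.einsum('ikjk->ij', X.reshape(dH, dE, dH, dE))

def channel(rho):
    """L(rho) = Tr_E[ U (rho (x) sigma) U* ], the map of Proposition 1."""
    return ptrace_env(U @ np.kron(rho, sigma) @ U.conj().T)

rho = np.array([[1.0, 0.0], [0.0, 0.0]], dtype=complex)
for _ in range(50):                                        # rho_k = L^k(rho_0)
    rho = channel(rho)
print("reduced state after 50 interactions:\n", np.round(rho, 3))
print("trace preserved:", np.isclose(np.trace(rho).real, 1.0))
```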
Quantum Repeated Interactions with Measurement
In this section we introduce Quantum Measurement in the model of quantum repeated interactions and we show how equation ( 5) is modified by the different observations. We shall study three different situations of indirect measurement, as follows:
1. The first model concerns "quantum repeated measurements" before each interaction.
It means that we perform a measurement of an observable on each copy of E before the interaction with H. We call such a setup "Random Environment" (we shall explain the terminology choice later on).
2. The second model concerns "quantum repeated measurements" after each interaction.
It means that we perform a measurement of an observable on each copy of E after the interaction with H. We call such a setup "Usual Indirect Quantum Measurement".
3. The third setup is a combination of the two previous models. Two quantum measurements (of possibly different observables) are performed on each copy of E, one before and one after each interaction with H. Such a setup is called "Indirect Quantum Measurement in Random Environment"
In all the cases, the measurement is called indirect because the small system is not directly observed, the measurement being performed on an auxiliary system (an element of the chain) which interacted previously with the system. The main purpose of this work is to study and analyze the three different limit behaviors obtained when the interaction time τ goes to zero (see Section 2). Let us mention that the second setup has been studied in detail in [START_REF] Pellegrini | Uniqueness and Approximation of Stochastic Schrödinger Equation: the diffusive case[END_REF][START_REF] Pellegrini | Existence, uniqueness and approximation for stochastic Schrödinger equation: the Poisson case[END_REF][START_REF] Pellegrini | Markov Chains Approximation of Jump-Diffusion Quantum Trajectories[END_REF]. We chose to describe in great detail the more general case of the third model, since the other two models can be easily recovered from the third one, by choosing to measure the trivial observable I.
Indirect Quantum Measurement in Random Environment
In order to make the computations more easy to follow, we shall focus on the case where the environment is a chain of qubits (two-dimensional quantum systems). Mathematically, this is to say that E = C 2 .
Let us start by making more precise the physical model for one copy of E. To this end, we consider {e 0 , e 1 } an orthonormal basis of E, which diagonalizes the Hamiltonian
H = γ 0 0 0 γ 1 ,
where we suppose that γ 0 < γ 1 . The reference state σ of the environment corresponds to a Gibbs thermal state at positive temperature, that is
σ = e^{-βH} / Tr[ e^{-βH} ],   with β = 1/(KT),    (6)
where T corresponds to a finite strictly positive temperature and K is a constant. In the basis {e 0 , e 1 }, σ is diagonal
σ = p|e 0 e 0 | + (1 -p)|e 1 e 1 |,
with p = e^{-βγ_0} / ( e^{-βγ_0} + e^{-βγ_1} ). Notice that since β > 0, we have 0 < p < 1.
We are now in position to describe the measurement before the interaction. We consider a diagonal observable A of E of the form
A = λ 0 |e 0 e 0 | + λ 1 |e 1 e 1 |.
The extension of the observable A to an observable of H ⊗ E is I ⊗A. According to the axioms of Quantum Mechanics, the outcome of the measurement of the observable I ⊗A is an element of its spectrum, the result being random. If the initial state (before the interaction) is ρ ⊗ σ, we shall observe the eigenvalue λ i with probability
P[λ_i is observed] = Tr[ (ρ ⊗ σ)(I ⊗ P_i) ] = Tr[σ P_i],   i = 0, 1, where P_i = |e_i⟩⟨e_i| are the eigenprojectors of A. It is straightforward to see that in this case P[λ_0 is observed] = p = 1 - P[λ_1 is observed].
Furthermore, according to the wave packet reduction principle, if the eigenvalue λ i is observed, the initial state ρ ⊗ σ is modified and becomes
µ_i^1 = (I ⊗ P_i)(ρ ⊗ σ)(I ⊗ P_i) / Tr[ (ρ ⊗ σ)(I ⊗ P_i) ] = ρ ⊗ ( P_i σ P_i / Tr[σ P_i] ).    (7)
This naturally defines a random variable µ^1 with values in the set of states on H ⊗ E. More precisely, the state µ^1 takes the value µ_0^1 = ρ ⊗ |e_0⟩⟨e_0| with probability Tr[(ρ ⊗ σ)(I ⊗ |e_0⟩⟨e_0|)] = p and the value µ_1^1 = ρ ⊗ |e_1⟩⟨e_1| with probability 1 - p.
Remark 1 Since both the initial state of the system and the observable measured have product form, only the state of E is modified by the measurement before the interaction. Instead of describing the evolution of the coupled system, we could have considered that the state of E is a random variable σ 1 i where σ 1 i is either |e 0 e 0 | with probability p either |e 1 e 1 | with probability 1p. This is the statistical model for a thermal state and its random character justifies the name "Random environment". In conclusion, we could have replaced from the start the setup (Gibbs state + Quantum measurement) with the probabilistic setup Random environment, the results being identical. We shall give more details and comments on this point of view in the Section 3.
We now move on to describe the second measurement, which is performed after the interaction. In this case we consider an arbitrary (not necessarily diagonal in the basis {e 0 , e 1 }) observable B of E which admits a spectral decomposition
B = α 0 Q 0 + α 1 Q 1 ,
where Q j corresponds to the eigenprojector associated with the eigenvalue α j . Let µ 1 be the random state after the first measurement. After the interaction, the state on H ⊗ E is
η 1 i = U µ 1 i U ⋆ , i = 0, 1.
Now, assuming that the measurement of the observable A (before the interaction) has given the result λ i , the probability of observing the eigenvalue α j of B is given by
P[α_j is observed] = Tr[ η_i^1 (I ⊗ Q_j) ].
and the state after the measurement becomes
θ_{i,j}^1 = (I ⊗ Q_j) η_i^1 (I ⊗ Q_j) / Tr[ η_i^1 (I ⊗ Q_j) ].
The random state θ 1 (which takes one of the values θ 1 i,j ) on H ⊗ E describes the random result of the two indirect measurements which were performed before and after the interaction.
Having described the interaction between the small system H and one copy of E, we look now at the repeated procedure on the whole system Γ. The probability space underlying the outcomes of the repeated quantum measurements before and after each interaction is given by Ω = (Σ A × Σ B ) N ⋆ , where Σ A = {0, 1} corresponds to the index of the eigenvalues of the observable A and Σ B = {0, 1} for the ones of B. On Ω, we consider the usual cylinder σ-algebra Λ generated by the cylinder sets
Λ (i 1 ,j 1 ),...,(i k ,j k ) = {(ω, ϕ) ∈ (Σ A × Σ B ) N ⋆ | ω 1 = i 1 , . . . , ω k = i k , ϕ 1 = j 1 , . . . , ϕ k = j k }.
Now, we shall define a probability measure describing the results of the repeated quantum measurements. To this end, we introduce the following notation. For an operator Z on E j , we note Z (j) the extension of Z as an operator on Γ, which acts as Z on the j-th copy of E and as the identity on H and on the other copies of E:
Z (j) = I ⊗ j-1 p=1 I ⊗Z ⊗ p≥j+1 I .
Furthermore, for all k ≥ 1 and {(i 1 , j 1 ), . . . ,
(i k , j k )} ∈ (Σ A × Σ B ) k , we put μk (i 1 , j 1 ), . . . , (i k , j k ) = k s=1 Q (s) js V k k s=1 P (s) is µ k s=1 P (s) is V ⋆ k k s=1 Q (s) js , (8)
where P i and Q j are the respective eigenprojectors of A and B and µ = ρ ⊗ ∞ k=1 σ k , with σ k = σ = p|e 0 e 0 | + (1p)|e 1 e 1 | for all k ∈ N ⋆ , is the initial state on Γ. Notice that the products in the previous equation need not to be ordered, since two operators X (i) and Y (j) commute whenever i = j. In the same vein, the following important commutation relation
Q (k) i k U k P (k) i k . . . Q (1) i 1 U 1 P (1) i 1 = k s=1 Q (s) js V k k s=1 P (s) is ,
shows that the operator μk (i 1 , j 1 ), . . . , (i k , j k ) in Eq. ( 8) is actually the non normalized state of the global system after the observation of eigenvalues λ i 1 , . . . , λ i k for k first measurements of A and α j 1 , . . . , α j k for the k first measurements of the observable B.
We have now all the elements needed to define a probability measure on the cylinder algebra Λ by
P[Λ (i 1 ,j 1 ),...,(i k ,j k ) ] = Tr[μ k (i 1 , j 1 ), . . . , (i k , j k ) ].
This probability measure satisfies the Kolmogorov Consistency Criterion, hence we can extend it to the whole σ-algebra Λ to the unique probability measure P with these finite dimensional marginals.
The global random evolution on Γ is then described by the random sequence (ρ k ) ρk :
Ω -→ B(Γ) (ω, ϕ) -→ ρk (ω, ϕ) = μ((ω 1 , ϕ 1 ), . . . , (ω k , ϕ k )) Tr[μ k ((ω 1 , ϕ 1 ), . . . , (ω k , ϕ k ))]
This random sequence describes the random modification involved by the result of measurement before and after the interactions. In order to recover the measurement setup only before or only after the interactions, one has just to delete the projector P (j)
i j or Q (j)
i j in equation [START_REF] Barchielli | Quantum Trajectories and Measurements in Continuous Time The Diffusive Case[END_REF]. The reduced evolution of the small system is obtained by the partial trace operation:
ρ k ω, ϕ = Tr Φ ρk ω, ϕ (9)
for all (ω, ϕ) ∈ Ω and all k ∈ N ⋆ . The random sequence (ρ k ) k≥1 is called a Discrete Quantum Trajectory. It describes the random modification of the small system undergoing the sequence of successive measurements.
Remark 2
The dynamics of the sequence of states ρ k can be seen as a random walk in random environment dynamics in the following way. Assume that all the elements of the chain are measured before the first interaction; the results of this procedure define a random environment in which the small system will evolve. All the randomness coming from the measurement before each interaction is now contained in the environment ω. Given a fixed value of the environment ω, the small system interacts repeatedly with the chain (whose states depend on ω) and the random results of the repeated measurement of the second observable B are encoded in ϕ. In this way, the global evolution of ρ can be seen as a random walk (where random modifications of the states are due to the second measurement) in a random environment (generated by the measurements before each interaction).
Discrete Evolution Equations
In this section, using the Markov property of the discrete quantum trajectories, we obtain discrete evolution equations which are random perturbation of the Master equation [START_REF] Attal | Yan From (n + 1)-level atom chains to n-dimensional noises[END_REF] given in Proposition 1. The Markov property of the random sequence (ρ k ) k is expressed as follows.
Proposition 2 The random sequence of states (ρ k ) k on H defined by the formula ( 9) is a Markov chain on (Ω, Λ, P). More precisely, we have the following random evolution equation
ρ_{k+1}(ω, ϕ) = Σ_{i,j ∈ {0,1}} ( G_ij(ρ_k(ω, ϕ)) / Tr[ G_ij(ρ_k(ω, ϕ)) ] ) 1_ij^{k+1}(ω, ϕ),    (10)
where
G_ij(ρ) = Tr_E[ (I ⊗ Q_j) U ( (I ⊗ P_i)(ρ ⊗ σ)(I ⊗ P_i) ) U⋆ (I ⊗ Q_j) ]
and
1 k+1 ij (ω, ϕ) = 1 ij (ω k+1 , ϕ k+1 ) = 1 i (ω k+1 )1 j (ϕ k+1 ) for all (ω, ϕ) ∈ (Σ A × Σ B ) N ⋆ .
Equation (10) is called a Discrete Stochastic Master Equation. In order to make equation (10) more explicit and to compute the partial trace, we introduce a suitable basis of H ⊗ E, namely {e_0 ⊗ e_0, e_1 ⊗ e_0, e_0 ⊗ e_1, e_1 ⊗ e_1}. In this basis, the unitary operator U can be written in block form as
U = ( L_00  L_01 ; L_10  L_11 ),
where L ij are operators in M 2 (C). We shall treat two different situations, depending on the form of the observable B that is being measured after each interaction. On one hand we consider the case where the observable B of E is diagonal in the basis (e 0 , e 1 ) and on the other hand we consider the case where B is non diagonal. Let us start with the case where the observable B is diagonal in the basis {e 0 , e 1 }, that is
B = α 0 |e 0 e 0 | + α 1 |e 1 e 1 |.
In this case, equation (10) becomes
ρ_{k+1}(ω, ϕ) = [ L_00 ρ_k(ω, ϕ) L_00⋆ / Tr[L_00 ρ_k(ω, ϕ) L_00⋆] ] 1_00(ω_{k+1}, ϕ_{k+1}) + [ L_10 ρ_k(ω, ϕ) L_10⋆ / Tr[L_10 ρ_k(ω, ϕ) L_10⋆] ] 1_01(ω_{k+1}, ϕ_{k+1}) + [ L_01 ρ_k(ω, ϕ) L_01⋆ / Tr[L_01 ρ_k(ω, ϕ) L_01⋆] ] 1_10(ω_{k+1}, ϕ_{k+1}) + [ L_11 ρ_k(ω, ϕ) L_11⋆ / Tr[L_11 ρ_k(ω, ϕ) L_11⋆] ] 1_11(ω_{k+1}, ϕ_{k+1}).    (11)
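A possible numerical sketch of one step of this discrete quantum trajectory (both observables diagonal in {e_0, e_1}) is given below: the pair of outcomes (i, j) is sampled with probability Tr[G_ij(ρ_k)] and the state is updated accordingly. The interaction unitary and the value of p are illustrative, and the block extraction assumes the basis ordering of equation (11); indexing conventions may have to be adapted.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
dE = 2

# Illustrative interaction unitary and thermal parameter (not the paper's values).
sm = np.array([[0.0, 1.0], [0.0, 0.0]], dtype=complex)
H_tot = (np.kron(np.diag([0.0, 1.0]), np.eye(2)) + np.kron(np.eye(2), np.diag([0.0, 1.0]))
         + np.kron(sm, sm.conj().T) + np.kron(sm.conj().T, sm))
U = expm(-1j * 0.1 * H_tot)
p = 0.7                                     # probability of finding e_0 before the interaction

def block(j, i):
    """Operator <e_j|_E U |e_i>_E acting on the system H."""
    return U[j::dE, i::dE]

def trajectory_step(rho):
    """One step of the discrete stochastic master equation (10)-(11)
    with both observables diagonal in {e_0, e_1}."""
    probs, states = [], []
    for i in range(2):                      # outcome of the measurement before
        p_i = p if i == 0 else 1.0 - p
        for j in range(2):                  # outcome of the measurement after
            K = block(j, i)
            G = p_i * K @ rho @ K.conj().T  # non-normalised G_ij(rho)
            probs.append(np.trace(G).real)
            states.append(G)
    probs = np.array(probs)
    idx = rng.choice(4, p=probs / probs.sum())   # probs already sum to one (unitarity)
    return states[idx] / probs[idx]

rho = np.array([[0.6, 0.1], [0.1, 0.4]], dtype=complex)
for _ in range(200):
    rho = trajectory_step(rho)
print("state after 200 measured interactions:\n", np.round(rho, 3))
```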
Usually, a stochastic Master equation appears as a random perturbation of the Master equation (see equations [START_REF] Attal | Quantum noises[END_REF][START_REF] Attal | Quantum Noises Book to appear[END_REF] in the Introduction). Moreover, the noises driving the equations are centered, that is of zero mean (this is the case of the Brownian motion and the counting process compensated with the stochastic intensity in equations (1, 2)). In order to obtain a similar description in the discrete case, we introduce the following random variables
X_k(ω, ϕ) = [ 1_10(ω_k, ϕ_k) + 1_11(ω_k, ϕ_k) - (1 - p) ] / √(p(1 - p)),    k ∈ N⋆.    (12)
Now, we rewrite equation (11) in terms of the random variables X_k, 1_01 and 1_10:
ρ k+1 = p(L 00 ρ k L ⋆ 00 + L 10 ρ k L ⋆ 10 ) + (1 -p)(L 01 ρ k L ⋆ 01 + L 11 ρ k L ⋆ 11 ) 1 + -p(1 -p) (L 00 ρ k L ⋆ 00 + L 10 ρ k L ⋆ 10 ) + p(1 -p) (L 11 ρ k L ⋆ 11 + L 01 ρ k L ⋆ 01 ) X k+1 + - L 00 ρ k L ⋆ 00 Tr[L 00 ρ k L ⋆ 00 ] + L 10 ρ k L ⋆ 10 Tr[L 10 ρ k L ⋆ 10 ] (1 01 -p Tr[L 10 ρ k L 10 ]) + - L 11 ρ k L ⋆ 11 Tr[L 11 ρ k L ⋆ 11 ] + L 01 ρ k L ⋆ 01 Tr[L 10 ρ k L ⋆ 10 ] (1 10 -(1 -p) Tr[L 01 ρ k L 01 ]). ( 13
)
It is important to stress out that the last three terms in the previous equation have mean zero:
E X k = E 1 01 -p Tr[L 10 ρ k L 10 ] = E 1 10 -(1 -p) Tr[L 01 ρ k L 01 ] = 0.
Moreover, recall that the discrete evolution of Proposition 1, without measurement, is given by
ρ_{k+1} = L(ρ_k) = p ( L_00 ρ_k L_00⋆ + L_10 ρ_k L_10⋆ ) + (1 - p) ( L_01 ρ_k L_01⋆ + L_11 ρ_k L_11⋆ ).    (14)
As a consequence, the discrete stochastic master equation ( 13) is written as a perturbation of the discrete Master equation ( 14).
Remark 3
In this expression, one can see that the random variable X k depends only on the outcome of the measurement before the interaction (we sum over the two possible results of the measurement after the interaction). In other words, it means that the random variables X k describe essentially the perturbation of the measurement before the interaction. On the other hand, the random variables 1 01 and 1 10 , conditionally on the result of the first measurement, describe the perturbation involved by the measurement after the interaction. Hence, each term of the equation ( 13) that is linked with either X k , 1 01 or 1 10 expresses how the deterministic part ( 14) is modified by the results of the different measurements.
We now analyze the second case, where the observable B is non-diagonal in the basis {e 0 , e 1 }. We write B = α 0 Q 0 + α 1 Q 1 , where the eigenprojectors Q i are written in the {e 0 , e 1 } basis Q i = (q i kl ) 0≤k,l≤1 . In this case, the operators appearing in equation ( 10) are given by
G 0i (ρ) = q i 00 L 00 ρL ⋆ 00 + q i 10 L 00 ρL ⋆ 10 + q i 01 L 10 ρL ⋆ 00 + q i 11 L 10 ρL ⋆ 10 G 1i (ρ) = q i 00 L 01 ρL ⋆ 01 + q i 10 L 01 ρL ⋆ 11 + q i 01 L 11 ρL ⋆ 01 + q i 11 L 11 ρL ⋆ 11 .
As before, in order to obtain the expression of the discrete Master equation as a perturbation of the deterministic Master equation, we introduce the following random variables
X k+1 = 1 k+1 10 + 1 k+1 11 -(1 -p) p(1 -p) Y 0 k+1 = 1 k+1 01 -p Tr G 01 (ρ k ) p Tr G 01 (ρ k ) 1 -p Tr G 01 (ρ k ) Y 1 k+1 = 1 k+1 10 -(1 -p) Tr G 10 (ρ k ) (1 -p) Tr G 10 (ρ k ) 1 -(1 -p) Tr G 10 (ρ k ) . ( 15
)
In terms of these random variables, we get
ρ k+1 = L(ρ k )1 + -p(1 -p) G 00 (ρ k ) + G 01 (ρ k ) + p(1 -p) G 11 (ρ k ) + G 10 (ρ k ) X k+1 + p Tr[G 01 (ρ k )](1 -p Tr[G 01 (ρ k )]) - G 00 (ρ k ) Tr G 00 (ρ k ) + G 01 (ρ k ) Tr G 01 (ρ k ) Y 0 k+1 + (1 -p) Tr[G 10 (ρ k )](1 -(1 -p) Tr[G 10 (ρ k )]) - G 11 (ρ k ) Tr G 11 (ρ k ) + G 10 (ρ k ) Tr G 10 (ρ k ) Y 1 k+1 . ( 16
)
Remark 4 As it was the case in equation ( 13), the discrete random variables X k and Y i k , i = 0, 1 are centered. As before, the variables X k represent the perturbation produced by the measurement before the interaction and, given the result of this measurement, the variables Y i k describe the perturbation generated by the measurement of the second observable. The particular choices made for Y i k will be justified when we shall consider the continuous models.
They will appear as discrete analogs of the noises which drive the continuous stochastic Master equations (W t and Ñt in equations (1, 2)).
The above general framework concerns the combination of the two measurements, one before and one after each interaction. Let us present the corresponding equations when only one type of measurement (before or after each interaction) is performed.
We start by looking at the case where a measurement is only performed before the interaction (we called this kind of setup "Random environment"). Since measuring an observable on an element of the chain (which has not yet interacted) does not alter the state of the little system H, only the reference state of each copy of E is random. The completely positive evolution operators describing the two possibilities for the state after the interaction are given by
R i (ρ) = Tr E [U (ρ ⊗ |e i e i |)U ⋆ ] (17) = G i0 (ρ) + G i1 (ρ), (18)
for i = 0, 1. Let 1 k i be the random variable which is equal to 1 if we observe the eigenvalue λ i at the k-th step, and 0 otherwise. We can describe the evolution of the small system H by the following equation
ρ k+1 = R 0 (ρ k )1 k+1 0 + R 1 (ρ k )1 k+1 1 .
As before, we introduce
X_{k+1} = [ 1_1^{k+1} - (1 - p) ] / √(p(1 - p)),    k ∈ N.
With this notation, the evolution equation becomes
ρ_{k+1} = L(ρ_k) + √(p(1 - p)) [ - ( L_00 ρ_k L_00⋆ + L_10 ρ_k L_10⋆ ) + ( L_11 ρ_k L_11⋆ + L_01 ρ_k L_01⋆ ) ] X_{k+1}.    (19)
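The content of equation (19), and in particular the fact that averaging over the random environment reproduces the master equation of Proposition 1, can be illustrated by a short Monte-Carlo experiment; the unitary and the parameter p below are again illustrative choices.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)

# Illustrative unitary and probability p of the environment being found in e_0.
sm = np.array([[0.0, 1.0], [0.0, 0.0]], dtype=complex)
H_tot = (np.kron(np.diag([0.0, 1.0]), np.eye(2)) + np.kron(np.eye(2), np.diag([0.0, 1.0]))
         + np.kron(sm, sm.conj().T) + np.kron(sm.conj().T, sm))
U = expm(-1j * 0.1 * H_tot)
p = 0.7

def ptrace_env(X):
    return np.einsum('ikjk->ij', X.reshape(2, 2, 2, 2))

def R(rho, i):
    """R_i(rho) = Tr_E[ U (rho (x) |e_i><e_i|) U* ], equation (17)."""
    e = np.zeros((2, 2), dtype=complex)
    e[i, i] = 1.0
    return ptrace_env(U @ np.kron(rho, e) @ U.conj().T)

rho0 = np.array([[1.0, 0.0], [0.0, 0.0]], dtype=complex)
n_steps, n_traj = 30, 2000

# Monte-Carlo average over the random environment ...
avg = np.zeros((2, 2), dtype=complex)
for _ in range(n_traj):
    rho = rho0.copy()
    for _ in range(n_steps):
        rho = R(rho, 0) if rng.random() < p else R(rho, 1)
    avg += rho / n_traj

# ... compared with the deterministic master-equation iterates L^k(rho_0).
det = rho0.copy()
for _ in range(n_steps):
    det = p * R(det, 0) + (1.0 - p) * R(det, 1)
print("max deviation between the average trajectory and L^k(rho_0):",
      float(np.max(np.abs(avg - det))))
```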
The opposite case, where a measurement is only performed after the interaction, is treated in great detail in [START_REF] Pellegrini | Uniqueness and Approximation of Stochastic Schrödinger Equation: the diffusive case[END_REF][START_REF] Pellegrini | Existence, uniqueness and approximation for stochastic Schrödinger equation: the Poisson case[END_REF] when p = 0 (ground states) and in [START_REF] Attal | Stochastic Master Equations for a Heat Bath[END_REF] for 0 < p < 1 (Gibbs states).
Let us recall briefly the main steps needed to obtain the appropriate equations. Consider the observable
B = α 1 Q 1 + α 2 Q 2 , with Q i = (q i kl ) 0≤k,l≤1
. The two possible non normalized states on H that can be obtained after the measurement are defined via the action of the operators
F i (ρ) = Tr E [I ⊗ Q i U (p|e 0 e 0 | + (1 -p)|e 1 e 1 |)U ⋆ I ⊗ Q i ] (20) = pG 0i (ρ) + (1 -p)G 1i (ρ), (21)
for i = 0, 1. The discrete evolution equation is then given by
ρ k+1 = F 0 (ρ k ) Tr[F 0 (ρ k )] 1 k+1 0 + F 1 (ρ k ) Tr[F 1 (ρ k )] 1 k+1 1 . (22)
Again, we introduce the random variables X k defined by
X k+1 = 1 k+1 i -Tr[F 1 (ρ k )] Tr[F 0 (ρ k )] Tr[F 1 (ρ k )] .
In terms of these centered random variables, we get
ρ k+1 = L(ρ k )1 + - Tr F 1 (ρ k ) Tr F 0 (ρ k ) F 0 (ρ k ) + Tr F 0 (ρ k ) Tr F 1 (ρ k ) F 1 (ρ k ) X k+1 . ( 23
)
2 Continuous Time Models of Quantum Trajectories
In this section, we present the continuous versions of the discrete equations [START_REF] Barchielli | On a class of stochastic differential equations used in quantum optics[END_REF][START_REF] Belavkin | Quantum stochastic calculus and quantum nonlinear filtering[END_REF][START_REF] Bouten | An introduction to quantum filtering[END_REF][START_REF] Nechita | Random repeated quantum interactions and random invariant states[END_REF]. We start by introducing asymptotic assumptions for the interaction unitaries in terms of the time parameter τ . Next, we implement these assumptions in the different equations [START_REF] Barchielli | On a class of stochastic differential equations used in quantum optics[END_REF][START_REF] Belavkin | Quantum stochastic calculus and quantum nonlinear filtering[END_REF][START_REF] Bouten | An introduction to quantum filtering[END_REF][START_REF] Nechita | Random repeated quantum interactions and random invariant states[END_REF] and we obtain stochastic differential equations as limits when the time step τ goes to 0.
Let us present the asymptotic assumption for the interaction with τ = 1/n. In terms of the parameter n we can write the unitary operator U as
U(n) = exp( −(i/n) H_tot ) = ( L_{00}(n)  L_{01}(n) ; L_{10}(n)  L_{11}(n) ).  (24)
Let us recall that the discrete dynamic of quantum repeated interactions is given by
V k = U k • • • U 1 .
In [START_REF] Attal | Alain The Langevin equation for a quantum heat bath[END_REF][START_REF] Attal | Yan From repeated to continuous quantum interactions[END_REF], it is shown that the coefficients L_ij(n) must be properly rescaled in order to obtain a non-trivial limit for V_[nt]. With this rescaling, it is shown in these references that the operator V_[nt] converges, as n goes to infinity, to an operator Ṽ_t which satisfies a quantum Langevin equation. Translated into our context of a two-level atom in contact with a spin chain, we put
L_{00}(n) = I + (1/n) W + o(1/n),    L_{01}(n) = (1/√n) S + o(1/√n),
L_{10}(n) = (1/√n) T + o(1/√n),      L_{11}(n) = I + (1/n) Z + o(1/n).  (25)
In terms of the total Hamiltonian, it is shown in [START_REF] Attal | Yan From repeated to continuous quantum interactions[END_REF] that a typical Hamiltonian H_tot giving rise to such an asymptotic behaviour can be described as
H_tot = H_0 ⊗ I + I ⊗ ( γ_0  0 ; 0  γ_1 ) + √n ( C ⊗ ( 0  0 ; 1  0 ) + C^⋆ ⊗ ( 0  1 ; 0  0 ) ).
Hence, for the operators W, S, T and Z we get
W = -H 0 -γ 0 I - 1 2 C ⋆ C Z = -H 0 -γ 1 I - 1 2 CC ⋆ S = T ⋆ = -iC (26)
In the rest of the paper, we shall write all the results in terms of the operators H_0 and C. Now, we are in a position to investigate the asymptotic behavior of the different equations (13), (16), (19) and (23) and to introduce the continuous models. The mathematical arguments used to obtain the continuous models are developed in Section 4. Before presenting the main result concerning the model with measurement, we treat the simpler model obtained by taking the limit n → ∞ in the equation (5) of Proposition 1.
Continuous Quantum Repeated Interactions without Measurement
In this section, by applying the asymptotic assumption, we show that the limit evolution obtained from the quantum repeated interactions model is a Lindblad evolution (also called Markovian evolution [START_REF] Breuer | Francesco The theory of open quantum systems[END_REF]). This result has been stated and proved in [START_REF] Attal | Yan From repeated to continuous quantum interactions[END_REF]. We recall it here since the more general situations treated in the current work build upon these considerations. The discrete Master equation ( 5) of Proposition 1 in our context is expressed as follows
ρ k+1 = L(ρ k ) = p L 00 ρ k L ⋆ 00 + L 10 ρ k L ⋆ 10 + (1 -p) L 01 ρ k L ⋆ 01 + L 11 ρ k L ⋆ 11 . (27)
Plugging in the asymptotic assumptions (25), we get (the o(1) terms below are understood with respect to the parameter n)
ρ k+1 = ρ k + 1 n p -i[H 0 , ρ] - 1 2 {C ⋆ C, ρ} + CρC ⋆ + (1 -p) -i[H 0 , ρ] - 1 2 {CC ⋆ , ρ} + C ⋆ ρC + •(1) , (28)
where [X, Y ] = XY -Y X and {X, Y } = XY + Y X are the usual commutator and anticommutator. The following theorem is obtained by taking the limit n → ∞ in the previous equation.
Theorem 1 (Limit Model for Quantum Repeated Interactions without Measurement) Let (ρ [nt] ) be the family of states defined from the sequence (ρ k ) describing quantum repeated interactions. We have
lim n→∞ ρ [nt] -ρ t = 0,
where (ρ t ) is the solution of the Master equation
dρ t = L(ρ t )dt,
with the Lindblad operator L given by
L(ρ) = p -i[H 0 , ρ] - 1 2 {C ⋆ C, ρ} + CρC ⋆ + (1 -p) -i[H 0 , ρ] - 1 2 {CC ⋆ , ρ} + C ⋆ ρC . (29)
The operator L appearing in equation ( 29) is the usual Lindblad operator describing the evolution of a system in contact with a heat bath at positive temperature T [START_REF] Attal | Alain The Langevin equation for a quantum heat bath[END_REF]; let us recall that the parameter p can be expressed in terms of the temperature T as in equation ( 6).
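As a sanity check of Theorem 1, one may integrate the master equation dρ_t = L(ρ_t)dt with a simple explicit Euler scheme and verify, for instance, that the trace is preserved and that the populations relax towards equilibrium values dictated by p. The operators H_0, C and the numerical parameters below are illustrative choices of ours; this is only a sketch.

```python
import numpy as np

def lindblad(rho, H0, C, p):
    """Lindblad generator L of equation (29)."""
    def D(A, r):            # dissipator: A r A* - (1/2){A*A, r}
        return A @ r @ A.conj().T - 0.5 * (A.conj().T @ A @ r + r @ A.conj().T @ A)
    drift = -1j * (H0 @ rho - rho @ H0)
    return drift + p * D(C, rho) + (1 - p) * D(C.conj().T, rho)

H0 = np.array([[1.0, 0], [0, -1.0]], dtype=complex)
C  = np.array([[0, 1.0], [0, 0]], dtype=complex)      # illustrative coupling
p, dt, T = 0.7, 1e-3, 20.0
rho = np.array([[0.2, 0.3], [0.3, 0.8]], dtype=complex)
for _ in range(int(T / dt)):                          # explicit Euler step
    rho = rho + dt * lindblad(rho, H0, C, p)
print("trace:", np.trace(rho).real, " populations:", np.diag(rho).real)
```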
Continuous Quantum Repeated Interactions with Measurement
In this section, we present the different continuous models obtained as limits of the discrete quantum repeated measurement models described by the equations (13), (16), (19) and (23).
Although continuous quantum trajectories have been extensively studied by the second author in [START_REF] Pellegrini | Uniqueness and Approximation of Stochastic Schrödinger Equation: the diffusive case[END_REF][START_REF] Pellegrini | Existence, uniqueness and approximation for stochastic Schrödinger equation: the Poisson case[END_REF][START_REF] Pellegrini | Markov Chains Approximation of Jump-Diffusion Quantum Trajectories[END_REF], the result concerning the combination of the two kinds of measurement is new and the stochastic differential equations appearing at the limit have, to our knowledge, never been considered in the literature. The comparison between the different limiting behaviors is particularly interesting and will be discussed in detail in Section 3.
The "Random environment" setup
In Section 1.2.1, we have seen that the evolution of the little system in presence of measurement before each interaction is described by the following equation:
ρ_{k+1} = L(ρ_k) + [ −√(p(1−p)) ( L_{00} ρ_k L^⋆_{00} + L_{10} ρ_k L^⋆_{10} ) + √(p(1−p)) ( L_{11} ρ_k L^⋆_{11} + L_{01} ρ_k L^⋆_{01} ) ] X_{k+1}.  (30)
Using the asymptotic condition for the operators L_ij(n), we get the expression
ρ_{k+1} = ρ_k + (1/n) ( L(ρ_k) + o(1) ) + (1/n) ( K(ρ_k) + o(1) ) X_{k+1},  (31)
where the expression of L is the same as in Theorem 1. The exact expression of K is not needed because the corresponding terms disappear in the limit. From equation (31), we want to derive a discrete stochastic differential equation. To this end, we define the following stochastic processes
ρ n (t) = ρ [nt] , V n (t) = [nt] n , W n (t) = 1 √ n [nt]-1 k=0 X k+1 (32)
Next, by writing
ρ [nt] = ρ 0 + [nt]-1 k=0 ρ k+1 -ρ k
and by using equation (31) and the definition (32) of the stochastic processes, we can write
ρ n (t) = ρ 0 + t 0 L(ρ n (s-))dV n (s) + t 0 1 √ n E(ρ n (s-))dW n (s) + ε n (t), ( 33
)
where ε_n(t) gathers all the o(·) terms. Equation (33) then appears as a discrete stochastic differential equation whose solution is the process (ρ_n(t))_t.
In order to obtain the final convergence result, we shall use the following proposition concerning the limit behavior of the process (W n (t)).
Proposition 3 Let (W_n(t)) be the process defined by formula (32). We have the following convergence result
W n (t) ⇒ W t ,
where ⇒ denotes the convergence in distribution for stochastic processes and (W t ) t≥0 is a standard Brownian motion.
Proof: In this case, the random variables (X_{k+1}) are independent and identically distributed. Furthermore, they are centered with unit variance. As a consequence, the convergence result is a direct application of Donsker's theorem [START_REF] Billingsley | Convergence of probability measures[END_REF][START_REF] Jacod | Limit theorems for stochastic processes, volume 288 of Grund lehren der Mathematischen Wissenschaften[END_REF][START_REF] Protter | Stochastic integration and differential equations[END_REF].
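The content of Proposition 3 can also be observed numerically: the rescaled random walk built from the centered, normalised variables X_k has, for large n, the statistics of a Brownian motion at time 1. A small illustrative script (the value of p is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, npaths = 10_000, 0.7, 2_000
# Bernoulli outcomes 1_1^{k+1} (value 1 with probability 1-p), then centered and normalised
ones = (rng.random((npaths, n)) < 1 - p).astype(float)
X = (ones - (1 - p)) / np.sqrt(p * (1 - p))
W1 = X.sum(axis=1) / np.sqrt(n)        # W_n(1) over many independent paths
print("empirical mean and variance of W_n(1):", W1.mean(), W1.var())   # close to 0 and 1
```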
Using Proposition 3, we can now take the limit n → ∞ in equation (33).
Theorem 2 (Limit Model for Random Environment) The stochastic process (ρ n (t)), describing the evolution of the small system in contact with a random environment, converges in distribution to the solution of the Master equation
dρ t = L(ρ t )dt,
where Lindblad generator L is given in equation ( 29).
This theorem is a straightforward application of a well-known theorem of Kurtz and Protter [START_REF] Kurtz | Wong-Zakai corrections, random evolutions, and simulation schemes for SDEs[END_REF][START_REF] Kurtz | Weak limit theorems for stochastic integrals and stochastic differential equations[END_REF] concerning the convergence of stochastic differential equations. Without the factor 1/√n, the process (W_n(t)) converges to a Brownian motion, so that equation (33) would converge to a diffusive stochastic differential equation. Since the factor 1/√n goes to zero, the random diffusive part disappears in the limit. The fact that we recover the deterministic Lindblad evolution for a heat bath will be discussed in Section 3.
The next subsection contains the description of the continuous model when a measurement is performed after each interaction.
Usual indirect Quantum Measurement
In [START_REF] Pellegrini | Uniqueness and Approximation of Stochastic Schrödinger Equation: the diffusive case[END_REF][START_REF] Pellegrini | Existence, uniqueness and approximation for stochastic Schrödinger equation: the Poisson case[END_REF], it is shown that discrete quantum trajectories for p = 0 converge (when n goes to infinity) to solutions of the classical stochastic Master equations (1, 2). These models are at zero temperature. The result for positive temperature (0 < p < 1) is treated in [START_REF] Attal | Stochastic Master Equations for a Heat Bath[END_REF]. In this section, we just recall the result of [START_REF] Attal | Stochastic Master Equations for a Heat Bath[END_REF] corresponding to the limit models obtained from equation (23).
As mentioned in Section 1.2.1, the final stochastic differential equations depend on the form of the observable.
1. If B = α_0 Q_0 + α_1 Q_1 is a diagonal observable, with Q_i = (q^i_{kl})_{0≤k,l≤1}, we have q^0_{00} = q^1_{11} = 1 and all the other coefficients are equal to zero. Hence, we obtain the following asymptotic expression for the equation (23):
ρ_{k+1} − ρ_k = (1/n) ( L(ρ_k) + o(1) ) + (1/n) ( N(ρ_k) + o(1) ) X_{k+1}.  (34)
For the random variables (X_k), we have
X_{k+1}(0) = −√( Tr[F_1(ρ_k)] / Tr[F_0(ρ_k)] )   with probability  p + (1/n) h(ρ_k) + o(1),
X_{k+1}(1) =  √( Tr[F_0(ρ_k)] / Tr[F_1(ρ_k)] )   with probability  1 − p + (1/n) g(ρ_k) + o(1).  (35)
In (34) and (35), the exact expressions of N, h and g are not needed for the final result. The expression of L corresponds to the Lindblad operator of Proposition 1.
2. The other case concerns an observable B which is not diagonal. We then have 0 < q^0_{00} < 1 and 0 < q^1_{11} < 1. The final result is essentially the same for all non-diagonal observables B. Hence, we just focus on the symmetric case where B is of the form
B = α_0 ( 1/2  1/2 ; 1/2  1/2 ) + α_1 ( 1/2  −1/2 ; −1/2  1/2 ).
Thus, in asymptotic form, the equation ( 23) becomes
ρ k+1 -ρ k = 1 n L(ρ k ) + •(1) + 1 √ n G(ρ k ) + •(1) X k+1 , ( 36
)
where G is defined on the set of states by
G(ρ) = -p(Cρ + ρC ⋆ ) + (1 -p)(C ⋆ ρ + ρC) + Tr p(Cρ + ρC ⋆ ) + (1 -p)(C ⋆ ρ + ρC) ρ
The random variables X_{k+1} evolve as
X_{k+1}(0) = −√( Tr[F_1(ρ_k)] / Tr[F_0(ρ_k)] )   with probability  1/2 + (1/√n) f(ρ_k),
X_{k+1}(1) =  √( Tr[F_0(ρ_k)] / Tr[F_1(ρ_k)] )   with probability  1/2 + (1/√n) m(ρ_k).  (37)
Again, the exact expressions of f and m are not necessary for the final result.
We define
ρ_n(t) = ρ_{[nt]},   V_n(t) = [nt]/n,   W_n(t) = (1/√n) Σ_{k=0}^{[nt]−1} X_{k+1}.
Depending on which type of observable we consider, we obtain two different discrete stochastic differential equations.
1. In the case of a diagonal observable we have
ρ n (t) = ρ 0 + [nt]-1 k=0 1 n (L(ρ k ) + •(1)) + 1 √ n (N (ρ k ) + •(1)) 1 √ n X k+1 = ρ 0 + t 0 L(ρ n (s-)dV n (s) + t 0 1 √ n N (ρ n (s-))dW n (s) + ε n (t).
2. In the same way, in the non diagonal case we obtain
ρ n (t) = ρ 0 + [nt]-1 k=0 1 n (L(ρ k ) + •(1)) + (G(ρ k ) + •(1)) 1 √ n X k+1 = ρ 0 + t 0 L(ρ n (s-)dV n (s) + t 0 G(ρ n (s-))dW n (s) + ε n (t).
In these equations, the terms ε_n(t) gather the o(·) terms. The final results are collected in the following theorem (see [START_REF] Attal | Stochastic Master Equations for a Heat Bath[END_REF] for a complete proof).
Theorem 3 (Limit Model for Usual Indirect Quantum Measurement) Let B be a diagonal observable. Let (ρ n (t)) be the stochastic process defined from the discrete quantum trajectory describing the quantum repeated measurement of B. This stochastic process converges in distribution to the solution of the master equation
dρ t = L(ρ t )dt.
Let B be a non-diagonal observable. Let (ρ n (t)) be the stochastic process defined from the discrete quantum trajectory describing the quantum repeated measurement of B. This stochastic process converges in distribution to the solution of the stochastic differential equation
dρ t = L(ρ t )dt + G(ρ t )dW t where (W t ) is a standard Brownian motion.
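For illustration, the diffusive stochastic master equation can be simulated with an Euler–Maruyama scheme. For concreteness the sketch below takes the zero-temperature case p = 1, for which the diffusive coefficient reduces to Cρ + ρC^⋆ − Tr[ρ(C + C^⋆)]ρ (equation (54) below); the operators H_0 and C and all numerical parameters are again illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
H0 = np.array([[1.0, 0], [0, -1.0]], dtype=complex)
C  = np.array([[0, 1.0], [0, 0]], dtype=complex)        # illustrative coupling
dt, nsteps = 1e-4, 50_000

def L1(rho):        # zero-temperature Lindblad generator (p = 1)
    com = -1j * (H0 @ rho - rho @ H0)
    return com + C @ rho @ C.conj().T - 0.5 * (C.conj().T @ C @ rho + rho @ C.conj().T @ C)

def G1(rho):        # diffusive coefficient at p = 1, cf. equation (54)
    A = C @ rho + rho @ C.conj().T
    return A - np.trace(rho @ (C + C.conj().T)) * rho

rho = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)
for _ in range(nsteps):
    dW = np.sqrt(dt) * rng.standard_normal()
    # Euler–Maruyama step; trace is preserved exactly, positivity only approximately
    rho = rho + dt * L1(rho) + dW * G1(rho)
    rho = 0.5 * (rho + rho.conj().T)                    # keep Hermiticity numerically
print("trace:", np.trace(rho).real)
```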
It is important to notice that for a diagonal observable we end up with a Master equation without random terms. In [START_REF] Pellegrini | Existence, uniqueness and approximation for stochastic Schrödinger equation: the Poisson case[END_REF], at zero temperature, it is shown that the limit evolution is described by a jump stochastic differential equation. Similar evolutions for diagonal observables will be recovered when we consider both measurements. The discussion in Section 3 will revolve around these results.
Continuous Model of Usual Indirect Quantum Measurement in Random Environment
This section contains the main result of the article. To our knowledge, a random environment model has never been considered before in the setup of indirect quantum measurement (neither in the discrete nor in the continuous case). We treat separately the cases of a diagonal observable and of a non-diagonal observable. We show that for a diagonal observable we recover an evolution involving random jump times. The limit evolution is, however, different from the one obtained in [START_REF] Pellegrini | Existence, uniqueness and approximation for stochastic Schrödinger equation: the Poisson case[END_REF].
Let us start with the non diagonal case. As in Section 2.2.2, we focus on the case
B = α_0 ( 1/2  1/2 ; 1/2  1/2 ) + α_1 ( 1/2  −1/2 ; −1/2  1/2 ).
In this situation, the asymptotic form of the equation (16) is given by
ρ_{k+1} = ρ_k + (1/n) ( L(ρ_k) + o(1) ) + (1/n) ( K(ρ_k) + o(1) ) X_{k+1}
          + (1/√n) [ −√(1 − (1−p)²) ( Cρ_k + ρ_k C^⋆ − Tr[ρ_k (C + C^⋆)] ρ_k ) ] Y^0_{k+1}
          + (1/√n) [ √(1 − p²) ( C^⋆ρ_k + ρ_k C − Tr[ρ_k (C^⋆ + C)] ρ_k ) ] Y^1_{k+1}.  (38)
From equation (38), we want to derive a discrete stochastic differential equation. To this aim, we define the processes
ρ n (t) = ρ [nt] , V n (t) = [nt] n , W n (t) = 1 √ n [nt]-1 k=0 X k+1 , W 0 n (t) = - 1 √ n [nt]-1 k=0 Y 0 k+1 , W 1 n (t) = 1 √ n [nt]-1 k=0 Y 1 k+1 ,
and the operators
Q(ρ) = 1 -(1 -p) 2 Cρ + ρC ⋆ -Tr[ρ(C + C ⋆ )]ρ (39)
W(ρ) = (1 -p 2 ) C ⋆ ρ + ρC -Tr[ρ(C ⋆ + C)]ρ . ( 40
)
This way, the process (ρ n (t)) satisfies the following discrete stochastic differential equation
ρ n (t) = t 0 L(ρ n (s-)dV n (s) + t 0 1 √ n K(ρ n (s-))dW n (s) + t 0 Q(ρ n (s-))dW 0 n (s) + t 0 W(ρ n (s-))dW 1 n (s) + ε n (t).
Heuristically, if we assume that
(W n (t), W 0 n (t), W 1 n (t)) =⇒ (W t , W 1 t , W 2 t )
, where the processes (W t ) and (W 1 t ) and (W 2 t ) are independent Brownian motions, the following theorem becomes natural (the rigorous proof is presented in Section 4).
Theorem 4 (Limit Model for Indirect Quantum Measurement of non-diagonal observables in Random environment) Let (ρ n (t)) be the stochastic process defined from the discrete quantum trajectory (ρ k ) which describes the repeated measurement of a nondiagonal observable in random environment. Then the process (ρ n (t)) converges in distribution to the solution of the stochastic differential equation
ρ t = ρ 0 + t 0 L(ρ s )ds + t 0 Q(ρ s )dW 1 s + t 0 W(ρ s )dW 2 s , (41)
where (W^1_t) and (W^2_t) are two independent Brownian motions. It is important to notice that we get two Brownian motions in the limit, whereas in Theorem 3 there is only one Brownian motion. We have already described a situation (the random environment alone) where the random noise disappears altogether.
Let us now deal with the diagonal case. In asymptotic form, the equation (13) becomes
ρ_{k+1} = ρ_k + (1/n) ( L(ρ_k) + o(1) ) + (1/n) ( E(ρ_k) + o(1) ) X_{k+1}
          + ( Cρ_k C^⋆ / Tr[Cρ_k C^⋆] − ρ_k + o(1) ) ( 1_{01} − (p/n) ( Tr[Cρ_k C^⋆] + o(1) ) )
          + ( C^⋆ρ_k C / Tr[C^⋆ρ_k C] − ρ_k + o(1) ) ( 1_{10} − ((1−p)/n) ( Tr[C^⋆ρ_k C] + o(1) ) ).  (42)
Such an equation can be written in the following way
ρ_{k+1} = ρ_k + (1/n) [ L(ρ_k) + p ( −Cρ_k C^⋆ + Tr[Cρ_k C^⋆] ρ_k ) + (1−p) ( −C^⋆ρ_k C + Tr[C^⋆ρ_k C] ρ_k ) + o(1) ]
          + (1/n) ( E(ρ_k) + o(1) ) X_{k+1}
          + ( Cρ_k C^⋆ / Tr[Cρ_k C^⋆] − ρ_k + o(1) ) 1_{01}
          + ( C^⋆ρ_k C / Tr[C^⋆ρ_k C] − ρ_k + o(1) ) 1_{10}
In order to define the discrete stochastic differential equation, we need to introduce the operator
T(ρ) = L(ρ) + p ( −CρC^⋆ + Tr[CρC^⋆] ρ ) + (1−p) ( −C^⋆ρC + Tr[C^⋆ρC] ρ )
and the following processes
ρ n (t) = ρ [nt] (43) V n (t) = [nt] n , W n (t) = 1 √ n [nt]-1 k=0 X k+1 (44) Ñ 1 n (t) = [nt]-1 k=0 1 01 Ñ 2 n (t) = [nt]-1 k=0 1 10 (45)
We obtain a discrete stochastic differential equation
ρ_n(t) = ρ_0 + ∫_0^t T(ρ_n(s−)) dV_n(s) + ∫_0^t (1/√n) E(ρ_n(s−)) dW_n(s)
         + ∫_0^t ( Cρ_n(s−)C^⋆ / Tr[Cρ_n(s−)C^⋆] − ρ_n(s−) ) dÑ^1_n(s)
         + ∫_0^t ( C^⋆ρ_n(s−)C / Tr[C^⋆ρ_n(s−)C] − ρ_n(s−) ) dÑ^2_n(s).  (46)
Let us motivate briefly what follows concerning the convergence of ( Ñ 1 n (t)) and ( Ñ 2 n (t)) (this will be rigorously justified in Section 3). Let us deal with ( Ñ 1 n (t)) for example. By definition of 1 01 , we have
1_{01}(ω_{k+1}, ϕ_{k+1}) = 1  with probability  (1/n) ( p Tr[Cρ_k C^⋆] + o(1) ),
1_{01}(ω_{k+1}, ϕ_{k+1}) = 0  with probability  1 − (1/n) ( p Tr[Cρ_k C^⋆] + o(1) ).  (47)
Hence, for large n, the random variable 1_{01} takes the value 1 with a small probability and the value 0 with a high probability. This behavior is typical of a classical Poisson process [START_REF] Brown | Some Poisson approximations using compensators[END_REF][START_REF] Brémaud | Point processes and queues[END_REF].
Heuristically we can consider a counting process ( Ñ 1 t ) as the continuous limit of ( Ñ 1 n (t)). Since a counting process is entirely determined by its intensity ( [START_REF] Brémaud | Point processes and queues[END_REF][START_REF] Jacod | Calcul stochastique et problèmes de martingales[END_REF]), we can guess its intensity by computing E[ Ñ 1 n (t)]. We have
E[ Ñ 1 n (t)] = [nt]-1 k=0 1 n E p Tr[Cρ k C ⋆ ] + •(1) = t 0 E p Tr[Cρ n (s-)C ⋆ ] dV n (s) + εn (t) (48)
Assuming that the processes ( Ñ 1 n (t)) and (ρ n (t)) converge, we get
E[ Ñ 1 t ] = t 0 E p Tr[Cρ s-C ⋆ ] ds.
We thus define the limit process ( Ñ 1 t ) as a counting process with stochastic intensity t → t 0 p Tr[Cρ s-C ⋆ ]ds. In the same way, we assume that ( Ñ 2 n (t)) converges to a counting process ( Ñ 2 t ) with stochastic intensity t → t 0 (1p) Tr[C ⋆ ρ s-C]ds. The limit stochastic differential equation would then be
dρ t = T (ρ t-)dt + Cρ t-C ⋆ Tr[Cρ t-C ⋆ ] -ρ t-d Ñ 1 t + C ⋆ ρ t-C Tr[C ⋆ ρ t-C] -ρ t-d Ñ 2 t . (49)
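The heuristic equation (49) also suggests a direct simulation scheme, essentially a continuous-time rereading of the discrete model: on each small interval of length dt, draw Bernoulli variables with the state-dependent intensities p Tr[Cρ C^⋆] dt and (1−p) Tr[C^⋆ρ C] dt and apply the corresponding jump when one of them fires. The sketch below is illustrative (same assumed operators H_0 and C as in the earlier sketches; the Bernoulli approximation of the counting processes is ours).

```python
import numpy as np

rng = np.random.default_rng(3)
H0 = np.array([[1.0, 0], [0, -1.0]], dtype=complex)
C  = np.array([[0, 1.0], [0, 0]], dtype=complex)        # illustrative coupling
p, dt, nsteps = 0.7, 1e-4, 100_000

def T_op(rho):
    """Drift T of equation (49): Lindblad part plus compensation of the two jump terms."""
    com = -1j * (H0 @ rho - rho @ H0)
    Dm = C @ rho @ C.conj().T - 0.5 * (C.conj().T @ C @ rho + rho @ C.conj().T @ C)
    Dp = C.conj().T @ rho @ C - 0.5 * (C @ C.conj().T @ rho + rho @ C @ C.conj().T)
    L = com + p * Dm + (1 - p) * Dp
    comp = (p * (np.trace(C @ rho @ C.conj().T) * rho - C @ rho @ C.conj().T)
            + (1 - p) * (np.trace(C.conj().T @ rho @ C) * rho - C.conj().T @ rho @ C))
    return L + comp

rho, jumps = np.diag([0.5, 0.5]).astype(complex), [0, 0]
for _ in range(nsteps):
    i1 = np.trace(C @ rho @ C.conj().T).real             # Tr[C rho C*]
    i2 = np.trace(C.conj().T @ rho @ C).real             # Tr[C* rho C]
    u = rng.random()
    if u < p * i1 * dt:                                   # jump of type 1
        rho = C @ rho @ C.conj().T / i1; jumps[0] += 1
    elif u < p * i1 * dt + (1 - p) * i2 * dt:             # jump of type 2
        rho = C.conj().T @ rho @ C / i2; jumps[1] += 1
    else:                                                 # continuous evolution between jumps
        rho = rho + dt * T_op(rho)
print("number of jumps of each type:", jumps, " trace:", np.trace(rho).real)
```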
From a mathematical point of view, the way equation (49) is defined is not completely rigorous, because the definition of the driving processes depends on the solution (ρ_t) (usually, in order to define solutions of a stochastic differential equation, one needs to specify the driving processes beforehand).
A rigorous way to introduce this equation consists in defining it in terms of two Poisson Point processes N 1 and N 2 on R 2 which are mutually independent (see [START_REF] Pellegrini | Existence, uniqueness and approximation for stochastic Schrödinger equation: the Poisson case[END_REF][START_REF] Jacod | Quelques remarques sur un nouveau type déquations di?érentielles stochastiques[END_REF]). More precisely, we consider the stochastic differential equation
ρ t = ρ 0 + t 0 T (ρ s-)ds + t 0 R Cρ s-C ⋆ Tr Cρ s-C ⋆ -ρ s-1 0<x<p Tr[Cρ s-C ⋆ ] N 1 (ds, dx) + t 0 R C ⋆ ρ s-C Tr C ⋆ ρ s-C -ρ s-1 0<x<(1-p) Tr[C ⋆ ρ s-C] N 2 (ds, dx) (50)
This allows us to write the equation in an intrinsic way and, if (50) admits a solution, we can define the processes
Ñ 1 t = t 0 R 1 0<x<p Tr[Cρ s-C ⋆ ] N 1 (ds, dx) and Ñ 2 t = t 0 R 1 0<x<(1-p) Tr[C ⋆ ρ s-C] N 2 (ds, dx). (51)
We can now state the convergence theorem in this context.
Theorem 5 (Limit Model for Indirect Quantum Measurement of diagonal observables in Random environment) Let N 1 and N 2 be two independent Poisson point processes on R 2 defined on a probability space (Ω, F, P). Let (ρ n (t)) be the process defined from the discrete quantum trajectory (ρ k ) which describes the measurement of a diagonal observable A in a random environment. The stochastic process (ρ n (t)) converges in distribution to the solution of the stochastic differential equation
ρ t = ρ 0 + t 0 T (ρ s-)ds + t 0 R Cρ s-C ⋆ Tr Cρ s-C ⋆ -ρ s-1 0<x<p Tr[Cρ s-C ⋆ ] N 1 (ds, dx) + t 0 R C ⋆ ρ s-C Tr C ⋆ ρ s-C -ρ s-1 0<x<(1-p) Tr[C ⋆ ρ s-C] N 2 (ds, dx) (52)
Table 1: Different models and the corresponding continuous behavior.
                 | No measurement |    Before      |     After      | Before & After
                 |  T=0     T>0   |  T=0     T>0   |  T=0     T>0   |  T=0     T>0
B diagonal       |  L_1     L_p   |  L_1     L_p   |  1J      L_p   |  1J      2J
B non-diagonal   |  L_1     L_p   |  L_1     L_p   |  1D      1D    |  1D      2D
(L_p denotes the deterministic Master equation at temperature parameter p; nJ a jump equation driven by n counting processes; nD a diffusive equation driven by n Brownian motions.)
Remark 5 It is not obvious that the stochastic differential equations (41) and (52) admit a unique solution (even in the diffusive case, the coefficients are not Lipschitz, and the jump term can vanish). The uniqueness question is treated in [START_REF] Pellegrini | Uniqueness and Approximation of Stochastic Schrödinger Equation: the diffusive case[END_REF][START_REF] Pellegrini | Existence, uniqueness and approximation for stochastic Schrödinger equation: the Poisson case[END_REF][START_REF] Jacod | Quelques remarques sur un nouveau type d'équations différentielles stochastiques[END_REF][START_REF] Jacod | Calcul stochastique et problèmes de martingales[END_REF].
Let us stress at this point that in this article we have focused on the particular case E = C². This case allows us to consider observables with two different eigenvalues. In [START_REF] Pellegrini | Markov Chains Approximation of Jump-Diffusion Quantum Trajectories[END_REF][START_REF] Attal | Stochastic Master Equations for a Heat Bath[END_REF], situations with more than two eigenvalues are considered, but only when measurements are performed after the interactions; the statistical model (random environment) is not treated there. In this article, our aim was to compare the situations with and without measurement before the interaction, in order to emphasize the behaviors appearing in the case of a random environment. The situation that we have treated is sufficiently insightful to point out the differences between the statistical model and the Gibbs model. Higher dimensions can easily be treated by adapting the presentation of this article and the results of [START_REF] Pellegrini | Markov Chains Approximation of Jump-Diffusion Quantum Trajectories[END_REF][START_REF] Attal | Stochastic Master Equations for a Heat Bath[END_REF]; the continuous evolutions then involve a mixture of jump and diffusion evolutions (see also [START_REF] Barchielli | Quantum Trajectories and Measurements in Continuous Time The Diffusive Case[END_REF][START_REF] Mora | Basic Properties of Non-linear Stochastic Schrödinger Equations Driven by Brownian Motions[END_REF][START_REF] Barchielli | On stochastic differential equations and semigroups of probability operators in quantum probability[END_REF] for other references on such types of equations).
In the following section, we compare the different continuous stochastic Master equations obtained in the different models of environment.
Discussion
The different models we have considered and the limiting continuous equations that govern the dynamics are summed up in Table 1. Each cell of the table contains the type of evolution equation in the zero temperature case (T = 0) and in the positive temperature case (T > 0). Hence, in what follows, the parameter p, until now supposed constant, will be allowed to vary. Continuous Master equation evolutions are denoted by L_p, where p is the parameter related to the temperature (T = 0 corresponds to p = 1). In these terms, the two differential equations at T = 0 are given by
dρ t = L 1 (ρ t-)dt + Cρ t-C ⋆ Tr Cρ t-C ⋆ -ρ t- d Ñt -Tr Cρ t-C ⋆ dt , (53)
where ( Ñt ) is a counting process with stochastic intensity t 0 Tr Cρ s-C ⋆ ds and
dρ t = L 1 (ρ t-)dt + Cρ t + ρ t C ⋆ -Tr ρ t (C + C ⋆ ) ρ t dW t , (54)
where (W t ) is a Brownian motion. Note that when no measurement is performed after the interaction (the "No measurement" and "Before" columns), the type of the observable B is irrelevant. Moreover, at zero temperature, the measurement before the interaction is irrelevant, since the state of the system to be measured is an eigenstate of the observable. Hence the last two columns contain identical information in the case T = 0.
The discussion that follows is meant to provide insight about this table and on the different limit behaviors that appear. We shall try, as much as possible, to provide physical explanations for the similarities and differences between the different models treated in the present work.
Gibbs vs. Statistical models at near zero temperatures
In order to emphasize the differences between the statistical model and the Gibbs model, we investigate the stochastic equations when the parameter p goes to 1, that is, when the temperature goes to zero (this fact is related to the assumption made on γ_0 and γ_1 in the description of the free Hamiltonian of E). In particular, we show that we can recover the zero temperature case from the statistical model by considering the limit p → 1, while this is not the case in the Gibbs model. This can be seen in the case of a diagonal observable. At zero temperature, for a diagonal observable, the continuous model is given by the jump equation (53). In the Gibbs model, for a diagonal observable, we get only the master equation dρ_t = L_p(ρ_t)dt. It is then obvious that we do not recover equation (53) when we consider the limit p → 1. Concerning the statistical model, i.e. the random environment, the limit equation is given by
dρ t = L p (ρ t-)dt + Cρ t-C ⋆ Tr Cρ t-C ⋆ -ρ t- d Ñ 1 t -p Tr Cρ t-C ⋆ dt + C ⋆ ρ t-C Tr C ⋆ ρ t-C -ρ t- d Ñ 2 t -(1 -p) Tr C ⋆ ρ t-C dt , (55)
where (Ñ^1_t) is a counting process with stochastic intensity ∫_0^t p Tr[Cρ_{s−}C^⋆] ds and (Ñ^2_t) is a counting process with stochastic intensity ∫_0^t (1−p) Tr[C^⋆ρ_{s−}C] ds. Heuristically, if we consider the limit p = 1, we get a counting process (Ñ^2_t) with intensity equal to zero, while (Ñ^1_t) is a counting process with stochastic intensity ∫_0^t Tr[Cρ_{s−}C^⋆] ds. As a consequence, almost surely, Ñ^2_t = 0 for all t. Hence, we recover equation (53) in the limit p = 1 (this result can be rigorously proved by considering the limit p = 1 in the Markov generator, see Section 4). Let us notice that the limit p = 1 in the diffusive evolution allows us to recover the model at zero temperature for the diffusive evolution in both models (statistical and Gibbs).
Gibbs vs. Statistical Models: absorption and emission interpretation
In the preceding section, we have seen that the Gibbs model and the Statistical model give rather different continuous evolution equations, especially in the case where a diagonal observable is measured. We are now going to provide a more complete interpretation of the Table 1. To this end, we shall concentrate on the special case where H = E and
C = ( 0  0 ; 1  0 ).
This particular choice of the Hamiltonian is known as the dipole-type interaction model; it has the property that the interaction between the small system and each copy of the chain is symmetric. This will allow us to give an interpretation of the evolution of the small system in terms of emissions and absorptions of photons. In such a setup, we shall clearly identify and explain the differences between the two models (Gibbs and Statistical).
Let us start by commenting on the similarities between these models. If no measurement is performed after each interaction, we have seen that the limit evolution is the same in both models. In particular, the randomness generated by the measurement in the Statistical model disappears in the limit and we get a classical Master equation.
The models become different when one considers a measurement after each interaction. As in the previous section, the differences are more significant in the case where the measured observable B is diagonal. In order to illustrate the differences between the two models, we start by describing the trajectories of the solutions of the jump equations and by explaining the occurrence of jumps.
At zero temperature, the evolution equation ( 53) can be re-written as
dρ t = S 1 (ρ t-)dt + Cρ t-C ⋆ Tr Cρ t-C ⋆ -ρ t-d Ñt (56)
by regrouping the dt terms. The solution of such a stochastic differential equation can be described in the following manner. Let (T_n)_n be the jump times of the counting process (Ñ_t), that is, T_n = inf{t : Ñ_t = n}. We then have
ρ t = t 0 S 1 (ρ s-)ds + ∞ k=0 Cρ T k -C ⋆ Tr Cρ T k -C ⋆ -ρ T k -1 T k ≤t . (57)
This expression is rigorously justified in [START_REF] Pellegrini | Existence, uniqueness and approximation for stochastic Schrödinger equation: the Poisson case[END_REF]. What this means is that in the time intervals between the jumps, the solution satisfies the ordinary differential equation dρ t = S 1 (ρ t-)dt and at jump times its discontinuity is given by
ρ T k = ρ T k -+ Cρ T k -C ⋆ Tr Cρ T k -C ⋆ -ρ T k -= Cρ T k -C ⋆ Tr Cρ T k -C ⋆ . (58)
In a similar fashion, the solution of equation (55) satisfies
ρ t = t 0 S p (ρ s-)ds + ∞ k=0 Cρ T 1 k -C ⋆ Tr Cρ T 1 k -C ⋆ -ρ T 1 k -1 T 1 k ≤t + ∞ k=0 C ⋆ ρ T 2 k -C Tr C ⋆ ρ T 2 k -C -ρ T 2 k -1 T 2 k ≤t
where, for i = 1, 2, the times (T^i_k) are the jump times of the processes (Ñ^i_t). Depending on the type of the jump, the discontinuity of the solution is given by
ρ T 1 k = Cρ T 1 k -C ⋆ Tr Cρ T 1 k -C ⋆ or ρ T 2 k = C ⋆ ρ T 2 k -C Tr C ⋆ ρ T 2 k -C . ( 59
)
Figure 1: Experimental setup (the small system H, the chain copies E_k, and the two measuring devices A and B).
Remark 6 Since the two Poisson point processes N^1 and N^2 are independent, on the probability space (Ω, F, P) supporting these two processes, we have
P[ {ω ∈ Ω : ∃k ∈ N, T^1_k(ω) = T^2_k(ω)} ] = 0.
This means that a jump of type 1 cannot occur at the same time as a jump of type 2. We shall see later on that this condition is also physically relevant.
An explicit computation with the particular value of C we considered gives
CρC ⋆ Tr CρC ⋆ = 1 0 0 0 = |e 0 e 0 | and C ⋆ ρC Tr C ⋆ ρC = 0 0 0 1 = |e 1 e 1 |, (60)
for all states ρ. In the setup with two possible jumps, depending on the type of jump, the state of the small system after the jump is either the ground state or the excited state. This has a clear interpretation in terms of the emission and absorption of photons. At zero temperature, it is well known that equation (56) describes a photon counting experiment [START_REF] Barchielli | On a class of stochastic differential equations used in quantum optics[END_REF][START_REF] Barchielli | Direct and heterodyne detection and other applications of quantum stochastic calculus to quantum optics[END_REF] and that a jump corresponds to the emission of a photon, which is detected by the measuring apparatus. In the case where two kinds of jumps can occur, the same interpretation remains valid for type 1 jumps (emission of a photon). After such an emission, the state of the small system is projected onto the ground state |e_0⟩⟨e_0|. Type 2 jumps are characterized by the fact that the state of the small system jumps to the excited state |e_1⟩⟨e_1|; this corresponds to the absorption of a photon by the small system, which explains its excitation. Note that the impossibility of simultaneous jumps of the two types (see the above remark) is physically justified by the fact that the small system cannot absorb and emit a photon at the same time.
This interpretation has a clear meaning in the discrete model. Let us consider the experimental setup in Figure 1. This setup, with two measuring devices, corresponds to the Statistical model.
• At zero temperature, each copy of E is in the ground state |e 0 e 0 |. In this case, the first measurement device will never click and only the result of the second apparatus is relevant. If, at the step k + 1, the second apparatus does not click, the state of the small system is given by
ρ k+1 = L 00 ρ k L ⋆ 00 / Tr[L 00 ρ k L ⋆ 00 ].
In the asymptotic regime, we get ρ_{k+1} = ρ_k + (1/n) S_1(ρ_k) + o(1), which is an approximation of a continuous evolution. On the other hand, if the second apparatus clicks, then the evolution is given by
ρ k+1 = L 10 ρ k L ⋆ 10 / Tr[L 10 ρ k L ⋆ 10 ] = Cρ k C ⋆ / Tr[Cρ k C ⋆ ] + •(1)
, which corresponds to the emission of a photon. This corresponds to a jump, as indicated by the result of the measurement.
• At positive temperature, both devices can click. If the first apparatus does not click, the state of E before the interaction is |e_0⟩⟨e_0| and we have the same interpretation of the second measurement as before. In the other case, a click of the first apparatus implies that the state of E is |e_1⟩⟨e_1|. The interpretation of the second measurement is then the following. If we have a click, the evolution is continuous, and the absence of a click corresponds to a jump of the form
C^⋆ρ_k C / Tr[C^⋆ρ_k C] + o(1)
(absorption of a photon). Let us stress that this is the inverse of the situation where no click occurs at the first measurement. The different cases are summarized in Table 2.
Table 2: Physical interpretation of measurements.
App. A \ App. B |  No click   |  Click
No click        |  Continuous |  Emission
Click           |  Absorption |  Continuous
We are now in a position to explain the difference between the Gibbs and the Statistical models. In the Statistical model, the first measurement allows us to identify clearly whether the small system absorbs or emits a photon. If we consider the same experiment without the first measurement device, we obtain the Gibbs model. In this setup, the information provided by the second apparatus is not sufficient to distinguish between a continuous evolution, an absorption or an emission. Indeed, as pointed out in the above description, in order to know the exact variation of the state of the small system, it is necessary to know whether the state of E is |e_0⟩⟨e_0| or |e_1⟩⟨e_1| before the interaction.
Unraveling
In order to conclude Section 3, we shall investigate an important physical feature called unraveling. This concept is related to the possibility of describing the stochastic master equations in terms of pure states. More precisely, an important category of stochastic master equations preserves the property of being valued in the set of pure states: if the initial state is pure, then the state of the small system remains pure at all times. This property is of great importance for numerical simulations; indeed, fewer parameters are needed to describe a pure state than an arbitrary density matrix (for a K-dimensional Hilbert space, a pure state is "equivalent" to a vector, that is, 2K − 1 real parameters, whereas a density matrix requires K² real coordinates). Since the expectation of the solution of a stochastic master equation reproduces the solution of the master equation, by averaging a large number of simulations of the stochastic master equation we obtain a simulation of the master equation, and the pure state property yields an important computational gain. This technique is called the Monte Carlo Wave Function Method.
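To illustrate the Monte Carlo wave function idea in the present discrete setting, one can propagate many pure-state trajectories with the update rule (62) derived below and average the projectors |ψ_k⟩⟨ψ_k|: the empirical mean then approximates the solution of the master equation. The sketch below is ours and purely illustrative (same assumed H_0 and C as in the earlier sketches; the vectors theta are the eigenvectors of the symmetric non-diagonal observable B).

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(4)
n, p, nsteps, ntraj = 200, 0.7, 200, 500
H0 = np.array([[1.0, 0], [0, -1.0]], dtype=complex)
C  = np.array([[0, 1.0], [0, 0]], dtype=complex)                    # illustrative coupling
sm = np.array([[0, 0], [1.0, 0]], dtype=complex)
U = expm(-1j / n * (np.kron(H0, np.eye(2))
                    + np.sqrt(n) * (np.kron(C, sm) + np.kron(C.conj().T, sm.conj().T))))
e = [np.array([1.0, 0], dtype=complex), np.array([0, 1.0], dtype=complex)]   # basis of E
theta = [np.array([1, 1]) / np.sqrt(2), np.array([1, -1]) / np.sqrt(2)]      # eigenvectors of B

def step(psi):
    """One pure-state step: environment measured before, observable B measured after."""
    j = 0 if rng.random() < p else 1                 # state of the fresh copy of E
    big = (U @ np.kron(psi, e[j])).reshape(2, 2)     # vector in H ⊗ E, indexed (H, E)
    amps = [big @ th.conj() for th in theta]         # (I ⊗ <theta_i|) applied to the E factor
    probs = [np.vdot(a, a).real for a in amps]
    i = 0 if rng.random() < probs[0] / sum(probs) else 1
    return amps[i] / np.sqrt(probs[i])

avg = np.zeros((2, 2), dtype=complex)
for _ in range(ntraj):
    psi = np.array([1.0, 0], dtype=complex)
    for _ in range(nsteps):
        psi = step(psi)
    avg += np.outer(psi, psi.conj())
print("empirical mean of |psi><psi| after", nsteps, "steps:\n", avg / ntraj)
```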
When a stochastic Master equation preserves the property of being pure-state valued, one says that it gives an unraveling of the master equation (or that it unravels the master equation). In this setup, one can write a stochastic differential equation for vectors in the underlying Hilbert space; this equation is called the stochastic Schrödinger equation. In this subsection, we want to show that the continuous models obtained as limits of the repeated measurements before and after the interaction give rise to an unraveling of the master equation for a heat bath, whereas the unraveling property is not satisfied if we consider the measurement only after the interaction. Let us stress that, at zero temperature, this property has already been established in [START_REF] Pellegrini | Uniqueness and Approximation of Stochastic Schrödinger Equation: the diffusive case[END_REF][START_REF] Pellegrini | Existence, uniqueness and approximation for stochastic Schrödinger equation: the Poisson case[END_REF] (the author does not refer to unraveling, but he shows that the stochastic master equations (53) and (54) preserve the property of being valued in the set of pure states).
In order to obtain the expression of the stochastic Schrödinger equation for the heat bath, we show that the quantum trajectories can be expressed in terms of pure states. To this end, we show that for all k, there exists a norm 1 vector
ψ k ∈ H such that ρ k = |ψ k ψ k |.
Next, by considering the process (ψ^{(n)}_t) and its convergence when n goes to infinity, we get a stochastic differential equation for norm-one vectors in H. Depending on the form of the observable, we obtain two types of equations, which are equivalent to (53) and (54) (the equivalence is characterized by the fact that, from a solution (ψ_t) of an equation for vectors, the process (|ψ_t⟩⟨ψ_t|) satisfies the corresponding stochastic master equation).
We proceed by induction. Let us suppose that there exists ψ_k such that ρ_k = |ψ_k⟩⟨ψ_k|. Let Q_i be one of the eigenprojectors of the observable B which is measured after the interaction. Since Q_i is a one-dimensional projector, there exists a norm-one vector ϑ_i such that
Q i = |ϑ i ϑ i |.
For j ∈ {0, 1}, the transitions between ρ k+1 and ρ k are given by the non normalized operators
ξ_{k+1}(ji) = Tr_E[ (I ⊗ Q_i) U (ρ_k ⊗ |e_j⟩⟨e_j|) U^⋆ (I ⊗ Q_i) ],   (i, j) ∈ {0, 1}²,
and, writing U = Σ_{p,l} L_{pl} ⊗ |e_p⟩⟨e_l|, we have
ξ_{k+1}(ji) = Tr_E[ (I ⊗ |ϑ_i⟩⟨ϑ_i|) U ( |ψ_k⟩⟨ψ_k| ⊗ |e_j⟩⟨e_j| ) U^⋆ (I ⊗ |ϑ_i⟩⟨ϑ_i|) ]
           = Tr_E[ Σ_{p,u} L_{pj}|ψ_k⟩⟨ψ_k|L^⋆_{uj} ⊗ |ϑ_i⟩⟨ϑ_i|e_p⟩⟨e_u|ϑ_i⟩⟨ϑ_i| ]
           = | Σ_p ⟨e_p ; ϑ_i⟩ L_{pj}ψ_k ⟩⟨ Σ_u ⟨e_u ; ϑ_i⟩ L_{uj}ψ_k |.  (61)
We can then define
ψ_{k+1}(ji) = ( Σ_p ⟨e_p ; ϑ_i⟩ L_{pj} ψ_k ) / ‖ Σ_p ⟨e_p ; ϑ_i⟩ L_{pj} ψ_k ‖ · 1_{ji},  (62)
which describes the evolution of the wave function of H. This equation is equivalent to the discrete stochastic Master equation in the sense that, almost surely (with respect to P), |ψ_k⟩⟨ψ_k| = ρ_k for all k.
Let us stress that, here, the normalizing factor ‖ Σ_p ⟨e_p ; ϑ_i⟩ L_{pj} ψ_k ‖ appearing in the quotient is not the probability of the outcome; the probability of the outcome is ‖ Σ_p ⟨e_p ; ϑ_i⟩ L_{pj} ψ_k ‖². Now, we can investigate the continuous limit of this equation by applying the asymptotic assumptions described in Section 2. Depending on the form of the observable B, we obtain two different kinds of equations:
• A jump equation (B diagonal)
ψ t = ψ 0 + t 0 F p (ψ s-)ds + t 0 R Cψ s- √ µ s- -ψ s-1 0<x<pµ s-N 1 (dx, ds) + t 0 R C ⋆ ψ s- √ ν s- -ψ s-1 0<x<(1-p)ν s-N 2 (dx, ds), (63)
where
F p (ψ s-) = p -iH 0 - 1 2 C ⋆ C + µ s-I + √ µ s-C ψ s- +(1 -p) -iH 0 - 1 2 CC ⋆ + ν s-I + √ ν s-C ⋆ ψ s- (64)
and µ s-= ψ s-, C ⋆ Cψ s-and ν s-= ψ s-, CC ⋆ ψ s-.
• A diffusive equation (B non diagonal)
ψ t = ψ 0 + t 0 G p (ψ s )ds + t 0 1 -(1 -p 2 )(C -κ s I)ψ s dW 1 s + t 0 (1 -p 2 )(C ⋆ -ζ s I)ψ s dW 2 s , (65)
where
G p (ψ s ) = p -iH 0 - 1 2 C ⋆ C -2κ s C + κ 2 s I ψ s +(1 -p) -iH 0 - 1 2 CC ⋆ -2ζ s C ⋆ + ζ 2 s I ψ s , (66)
with κ s = Re( ψ s , Cψ s ) and ζ s = Re( ψ s , C ⋆ ψ s ).
By applying the Itô rules in stochastic calculus, we can make the following observation which establishes the connection between the equations for vectors and the equations for states. Let (ψ t ) be the solution of equation (63) (respectively (65)), then almost surely |ψ t ψ t | = ρ t , for all t ≥ 0, where (ρ t ) is the solution of (52) in Theorem 5 (respectively (41) in Theorem 4). Such considerations are the continuous equivalent of the remark following equation (62).
In other words, the evolution of the system H in the setup with both measurements can be described in terms of pure states. A key property for unraveling is that
d E[ |ψ_t⟩⟨ψ_t| ] = L_p( E[ |ψ_t⟩⟨ψ_t| ] ) dt,  (67)
for any solutions of (63) or (65).
4 Proofs of Theorems 4 and 5
The last section of the paper is devoted to showing that discrete quantum trajectories in random environment converge to solutions of stochastic differential equations (41, 52). We proceed in the following way.
In a first step, we rigorously justify the form of the stochastic differential equations provided in Theorems 4 and 5. Starting from the description of discrete quantum trajectories in terms of Markov chains, we can define the so-called discrete Markov generators of these Markov chains. These generators naturally depend on the parameter n governing the interaction length. When n goes to infinity, the limit of the discrete Markov generators gives rise to infinitesimal generators. Next, these limit generators can be naturally associated with martingale problems [START_REF] Jacod | Calcul stochastique et problèmes de martingales[END_REF][START_REF] Ethier | Markov processes[END_REF]. The solutions of the martingale problems associated with these generators (see Definition 1 below) can then be expressed in terms of solutions of particular stochastic differential equations. We show that the appropriate equations are the same as the ones in Theorems 4 and 5. This justifies the heuristic presentation of (41, 52) in Section 2.2.3.
This first step actually provides the convergence of the finite-dimensional laws of the discrete quantum trajectories to those of the continuous ones. Finally, in a second step, we prove the full convergence in distribution by showing that the discrete quantum trajectories satisfy the tightness property (see [START_REF] Billingsley | Convergence of probability measures[END_REF][START_REF] Jacod | Limit theorems for stochastic processes, volume 288 of Grund lehren der Mathematischen Wissenschaften[END_REF]).
Convergence of Markov Generators and Martingale problems
Let us start by defining the infinitesimal generator of a discrete quantum trajectory. Let B = α 0 Q 0 + α 1 Q 1 be an observable where Q i = (q i kl ). Let (ρ k ) be any quantum trajectory describing the measurement of the observable A in a random environment with initial state ρ 0 . Using the Markov property (Proposition 2 in Section 1.2.2) of (ρ k ) on (Ω, C, P), we can consider the process (ρ n (t)) which satisfies
P[ρ n (0) = ρ] = 1 P ρ n (t) = ρ k k n ≤ t < k + 1 n = 1 for all k P ρ k+1 ∈ B | ρ k = ρ = Π(ρ, B) for all Borel sets B, (68)
where Π is the transition function of the Markov chain (ρ k ). More precisely, the transition function Π is defined, for all Borel sets B, by
Π(ρ, B) = p Tr[G 00 (ρ)]δ h 0 (ρ) (B) + p Tr[G 01 (ρ)]δ h 1 (ρ) (B) +(1 -p) Tr[G 10 (ρ)]δ g 0 (ρ) (B) + (1 -p) Tr[G 11 (ρ)]δ g 1 (ρ) (B),
where, for i = 0, 1, we recall that
G 0i (ρ) = q i 00 L 00 ρL ⋆ 00 + q i 10 L 00 ρL ⋆ 10 + q i 01 L 10 ρL ⋆ 00 + q i 11 L 10 ρL ⋆ 10 G 1i (ρ) = q i 00 L 01 ρL ⋆ 01 + q i 10 L 01 ρL ⋆ 11 + q i 01 L 11 ρL ⋆ 01 + q i 11 L 11 ρL ⋆ 11 h i (ρ) = G 0i (ρ) Tr[G 0i (ρ)]
, and
g i (ρ) = G 1i (ρ) Tr[G 1i (ρ)] . (69)
It is worth noticing that the transition function Π is defined on the set of states. The discrete Markov generator of the Markov process (ρ n (t)) is defined as
A_n f(ρ) = n ∫_E ( f(µ) − f(ρ) ) Π(ρ, dµ),
where E denotes the set of states and f is any function of class C 2 with compact support.
The set of such functions is denoted by C 2 c (E). In our situation, for all f ∈ C 2 c (E), we have
A_n f(ρ) = n [ p Tr[G_{00}(ρ)] ( f(h_0(ρ)) − f(ρ) ) + p Tr[G_{01}(ρ)] ( f(h_1(ρ)) − f(ρ) )
             + (1−p) Tr[G_{10}(ρ)] ( f(g_0(ρ)) − f(ρ) ) + (1−p) Tr[G_{11}(ρ)] ( f(g_1(ρ)) − f(ρ) ) ].  (70)
Now, we can implement the asymptotic assumptions (25) introduced at the beginning of Section 2 and consider the limit of A_n when n goes to infinity. As in Section 2.2.3, the result splits into two parts depending on the form of the observable B.
Proposition 4 Let A_n be the infinitesimal generator of the discrete quantum trajectory describing the measurement of a diagonal observable. We have, for all f ∈ C²_c(E),
lim_{n→∞} sup_{ρ∈E} | A_n f(ρ) − A_j f(ρ) | = 0,  (71)
where A j is an infinitesimal generator defined, for all f ∈ C 2 c (E), by
A j f (ρ) = D ρ f L(ρ) + f CρC ⋆ Tr[CρC ⋆ ] -f (ρ) -D ρ f CρC ⋆ Tr[CρC ⋆ ] -ρ Tr CρC ⋆ + f C ⋆ ρC Tr[C ⋆ ρC] -f (ρ) -D ρ f C ⋆ ρC Tr[C ⋆ ρC] -ρ Tr C ⋆ ρC . ( 72
)
Let A_n be the infinitesimal generator of the discrete quantum trajectory describing the measurement of the non-diagonal observable
B = α_0 ( 1/2  1/2 ; 1/2  1/2 ) + α_1 ( 1/2  −1/2 ; −1/2  1/2 ).
We have, for all f ∈ C²_c(E),
lim_{n→∞} sup_{ρ∈E} | A_n f(ρ) − A_d f(ρ) | = 0,  (73)
where A d is an infinitesimal generator defined, for all f ∈ C 2 c (E), by
A d f (ρ) = D ρ f L(ρ) + 1 2 D 2 ρ f (Q(ρ), Q(ρ)) + 1 2 D 2 ρ f (W(ρ), W(ρ)),
where Q and W are defined by the expressions (39) and (40).
We do not provide the proof of this proposition (similar computations are presented in great detail in [START_REF] Pellegrini | Markov Chains Approximation of Jump-Diffusion Quantum Trajectories[END_REF]). Now, we can introduce the martingale problems associated with the limit generators of Proposition 4. To this aim, we denote by (F^µ_t) the filtration generated by a process (µ_t), where F^µ_t = σ(µ_s, s ≤ t) for all t ≥ 0.
Definition 1 Let (Ω, F, P) be a probability space. Let i ∈ {j, d} and let ρ_0 be a state.
A solution of the martingale problem (A_i, ρ_0) is a process (ρ^i_t) such that, for all f ∈ C²_c, the process (M^i_t(f)) defined by
M^i_t(f) = f(ρ^i_t) − f(ρ_0) − ∫_0^t A_i f(ρ^i_s) ds
is an (F^{ρ^i}_t)-martingale with respect to P.
Usually, solutions of stochastic differential equations are used to solve martingale problems [START_REF] Jacod | Calcul stochastique et problèmes de martingales[END_REF][START_REF] Ethier | Markov processes[END_REF]. In our context, we recover the stochastic differential equations (41, 52) introduced in Theorems 4 and 5. Let us start with the non-diagonal case.
Theorem 6 (Solution of the Martingale Problem for a Non-Diagonal Observable) Let (Ω, F, P) be a probability space which supports two independent Brownian motions (W^1_t) and (W^2_t). Let A_d be the infinitesimal generator corresponding to the discrete quantum trajectory describing the measurement of
B = λ_0 ( 1/2  1/2 ; 1/2  1/2 ) + λ_1 ( 1/2  −1/2 ; −1/2  1/2 ).
Let ρ_0 be any state. The solution of the martingale problem associated with (A_d, ρ_0) is given by the solution of the following stochastic differential equation
ρ t = ρ 0 + t 0 L(ρ s )ds + t 0 Q(ρ s )dW 1 s + t 0 W(ρ s )dW 2 s . (74)
The equivalent theorem in the diagonal observable case is expressed as follows.
Theorem 7 (Solution of the Martingale Problem for a Diagonal Observable) Let (Ω, F, P) be a probability space which supports two independent Poisson point processes N^1 and N^2. Let A_j be the infinitesimal generator corresponding to the discrete quantum trajectory describing the measurement of a diagonal observable. Let ρ_0 be any state. The solution of the martingale problem associated with (A_j, ρ_0) is given by the solution of the following stochastic differential equation
ρ t = ρ 0 + t 0 T (ρ s-)ds + t 0 R Cρ s-C ⋆ Tr Cρ s-C ⋆ -ρ s-1 0<x<p Tr[Cρ s-C ⋆ ] N 1 (ds, dx) + t 0 R C ⋆ ρ s-C Tr C ⋆ ρ s-C -ρ s-1 0<x<(1-p) Tr[C ⋆ ρ s-C] N 2 (ds, dx). (75)
These theorems can be proved by using Itô stochastic calculus (see [START_REF] Pellegrini | Markov Chains Approximation of Jump-Diffusion Quantum Trajectories[END_REF] for explicit computations).
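To give the flavour of the argument in the diffusive case, here is a brief and purely formal sketch of the Itô computation behind Theorem 6 (localisation and integrability details are omitted; this is only an outline, not a substitute for the cited computations). For f ∈ C²_c and (ρ_t) the solution of (74), the Itô formula gives
df(ρ_t) = D_{ρ_t}f( L(ρ_t) ) dt + D_{ρ_t}f( Q(ρ_t) ) dW^1_t + D_{ρ_t}f( W(ρ_t) ) dW^2_t + ½ D²_{ρ_t}f( Q(ρ_t), Q(ρ_t) ) dt + ½ D²_{ρ_t}f( W(ρ_t), W(ρ_t) ) dt,
using the rules dW^1_t dW^1_t = dW^2_t dW^2_t = dt and dW^1_t dW^2_t = dt dW^i_t = 0. The dt terms reproduce exactly A_d f(ρ_t), so that
f(ρ_t) − f(ρ_0) − ∫_0^t A_d f(ρ_s) ds = ∫_0^t D_{ρ_s}f( Q(ρ_s) ) dW^1_s + ∫_0^t D_{ρ_s}f( W(ρ_s) ) dW^2_s,
a stochastic integral against Brownian motions with bounded integrands (f is smooth with compact support and the set of states is compact), hence a martingale. This is precisely the property required in Definition 1; the jump case of Theorem 7 is handled analogously with the Itô formula for Poisson random measures.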
In order to complete the study of the limit infinitesimal generators, we state a uniqueness result for the solutions of the martingale problems. This result is moreover essential for the proof of the final convergence theorem.
Proposition 5 Let ρ_0 be a state and let A_i, i ∈ {j, d}, be one of the generators defined in Proposition 4. The martingale problem (A_i, ρ_0) admits a unique solution in distribution; that is, two solutions of the martingale problem (A_i, ρ_0) have the same law.
This proposition is actually a consequence of the uniqueness of the solution of the stochastic differential equation associated with A_i. A complete reference about Markov generators and martingale problems (uniqueness, existence) is [START_REF] Ethier | Markov processes[END_REF].
The next section contains the final convergence result.
Tightness Property and Convergence Result
We prove that the discrete quantum trajectories (ρ_n(t)) have the tightness property (also called relative compactness for stochastic processes). Next, we show that the convergence result for the Markov generators (Proposition 4) implies the convergence of the finite-dimensional laws. The tightness property and the convergence of the finite-dimensional laws then imply the convergence in distribution of the stochastic processes [START_REF] Billingsley | Convergence of probability measures[END_REF].
Concerning the tightness property, we have the following result.
Proposition 6 (Tightness) Let (ρ_n(t)) be any quantum trajectory describing the repeated quantum measurement of an observable A (diagonal or not). There exists a constant Z such that, for all t_1 < t < t_2,
E[ ‖ρ_n(t_2) − ρ_n(t)‖² ‖ρ_n(t) − ρ_n(t_1)‖² ] ≤ Z (t_2 − t_1)².  (76)
As a consequence, the sequence of discrete processes (ρ n (t)) is tight.
In order to see that property (76) implies tightness, the reader may consult [START_REF] Billingsley | Convergence of probability measures[END_REF]. Before proving Proposition 6, we need the following lemma.
Lemma 1 Let (ρ_k) be the Markov chain describing the discrete quantum trajectory defined by the repeated quantum measurement of an observable A. Let M^{(n)}_r = σ{ρ_j, j ≤ r}, and let (r, l) ∈ N² be such that r < l. Then there exists a constant K_A such that
E[ ‖ρ_l − ρ_r‖² | M^{(n)}_r ] ≤ K_A (l − r)/n.
Proof: We only treat the case where B is diagonal (a similar reasoning yields the non-diagonal case). Let us start with the term E[ ‖ρ_l − ρ_r‖² | M^{(n)}_{l−1} ]. We have
E[ ‖ρ_l − ρ_r‖² | M^{(n)}_{l−1} ] = E[ ‖ Σ_i φ_i(ρ_{l−1}) 1^{l+1}_{0i} + Σ_i θ_i(ρ_{l−1}) 1^{l+1}_{1i} − ρ_r ‖² | M^{(n)}_{l−1} ]
= E[ Σ_i ‖φ_i(ρ_{l−1}) − ρ_r‖² p Tr[G_{0i}(ρ_{l−1})] | M^{(n)}_{l−1} ] + E[ Σ_i ‖θ_i(ρ_{l−1}) − ρ_r‖² (1−p) Tr[G_{1i}(ρ_{l−1})] | M^{(n)}_{l−1} ].  (77)
With the asymptotic description of φ_i and θ_i, the first term on the right-hand side of (77) satisfies
E[ Σ_i ‖φ_i(ρ_{l−1}) − ρ_r‖² p Tr[G_{0i}(ρ_{l−1})] | M^{(n)}_{l−1} ]
≤ p E[ ‖ρ_{l−1} − ρ_r‖² | M^{(n)}_{l−1} ]
  + (1/n) E[ ‖ (1/n) L_0(ρ_{l−1}) + o(1) ‖² p ( 1 − (1/n)(Tr[Cρ_{l−1}C^⋆] + o(1)) ) | M^{(n)}_{l−1} ]
  + (1/n) E[ ‖ f_2(ρ_{l−1}) − ρ_r ‖² ( Tr[Cρ_{l−1}C^⋆] + o(1) ) | M^{(n)}_{l−1} ].
Since the discrete quantum trajectory (ρ_k) takes values in the set of states, which is compact, and since the function ρ ↦ f_2(ρ) ( Tr[CρC^⋆] + o(1) ) defined on the set of states is continuous, there exists a constant Z_1 such that, almost surely,
E[ Σ_i ‖φ_i(ρ_{l−1}) − ρ_r‖² p Tr[G_{0i}(ρ_{l−1})] | M^{(n)}_{l−1} ] ≤ p E[ ‖ρ_{l−1} − ρ_r‖² | M^{(n)}_{l−1} ] + Z_1/n.  (78)
In the same way, there exists a constant Z_2 such that
E[ Σ_i ‖θ_i(ρ_{l−1}) − ρ_r‖² (1−p) Tr[G_{1i}(ρ_{l−1})] | M^{(n)}_{l−1} ] ≤ (1−p) E[ ‖ρ_{l−1} − ρ_r‖² | M^{(n)}_{l−1} ] + Z_2/n.  (79)
Finally, for an appropriate constant Z, we have almost surely
E[ ‖ρ_l − ρ_r‖² | M^{(n)}_{l−1} ] ≤ E[ ‖ρ_{l−1} − ρ_r‖² | M^{(n)}_{l−1} ] + Z/n.  (80)
As a consequence, remarking that E[ ‖ρ_l − ρ_r‖² | M^{(n)}_r ] = E[ E[ ‖ρ_l − ρ_r‖² | M^{(n)}_{l−1} ] | M^{(n)}_r ], we obtain by induction
E[ ‖ρ_l − ρ_r‖² | M^{(n)}_r ] ≤ K_A (l − r)/n.
In the non-diagonal case, the computation and estimation are similar and the Lemma holds.
Proposition 6 follows from this lemma.
Proof: (Proposition 6) Thanks to Lemma 1, for all quantum trajectories (ρ n (t)), we have:
E ρ n (t 2 ) -ρ n (t) 2 ρ n (t) -ρ n (t 1 ) 2 = E E ρ n ([nt 2 ]) -ρ n ([nt]) 2 /M (n) [nt] ρ n ([nt]) -ρ n ([nt 1 ]) 2 ≤ K A ([nt 2 ] -[nt]) n E E ρ n ([nt]) -ρ n ([nt 1 ]) 2 /M n [nt 1 ] ≤ K A ([nt 2 ] -[nt]) n K A ([nt] -[nt 1 ]) n ≤ Z A (t 2 -t 1 ) 2 ,
with Z A = 4(K A ) 2 and the result follows.
Since the tightness property holds, it remains to prove that the finite dimensional laws converge. This result follows from the following proposition.
Proposition 7 Let ρ_0 be a state. Let (ρ_n(t)) be a quantum trajectory describing a repeated quantum measurement of an observable A, and let A_i, i ∈ {j, d}, be the associated Markov generator. We have
lim_{n→∞} E[ ( f(ρ_n(t + s)) − f(ρ_n(t)) − ∫_t^{t+s} A_i f(ρ_n(u)) du ) Π_{i=1}^m θ_i(ρ_n(t_i)) ] = 0  (81)
for all m ≥ 0, all 0 ≤ t_1 < t_2 < . . . < t_m ≤ t < t + s, all functions (θ_i)_{i=1,...,m} and all f in C²_c.
Proof: Let (ρ_n(t)) be a discrete quantum trajectory and A_i the associated generator. Let (F^{(n)}_t) denote the natural filtration of the process (ρ_n(t)), that is, F^{(n)}_t = σ{ρ_n(s), s ≤ t} = M^{(n)}_{[nt]}.
For m ≥ 0, 0 ≤ t_1 < t_2 < . . . < t_m ≤ t < t + s and f, θ_1, . . . , θ_m ∈ C²_c, we have
E[ ( f(ρ_n(t + s)) − f(ρ_n(t)) − ∫_t^{t+s} A_i f(ρ_n(u)) du ) Π_{i=1}^m θ_i(ρ_n(t_i)) ]
  = E[ E[ f(ρ_n(t + s)) − f(ρ_n(t)) − ∫_t^{t+s} A_i f(ρ_n(u)) du | F^n_t ] Π_{i=1}^m θ_i(ρ_n(t_i)) ].  (82)
Let us now estimate the term E[ f(ρ_n(t + s)) − f(ρ_n(t)) − ∫_t^{t+s} A_i f(ρ_n(u)) du | F^n_t ]. To this end, from the definition of the infinitesimal generators, we notice that the discrete process defined for all n by
f(ρ_n(k/n)) − f(ρ_0) − Σ_{j=0}^{k−1} (1/n) A_n f(ρ_n(j/n))  (83)
is an (F^n_{k/n})-martingale (this is the discrete analogue of a solution of a martingale problem). Now, assuming r/n ≤ t < (r + 1)/n and l/n ≤ t + s < (l + 1)/n, we have F^n_t = F^n_{r/n}. The random states ρ_n(t) and ρ_n(t + s) then satisfy ρ_n(t) = ρ_n(r/n) and ρ_n(t + s) = ρ_n(l/n). The martingale property of (83) then implies
E[ f(ρ_n(t + s)) − f(ρ_n(t)) | F^n_t ] = E[ f(ρ_n(l/n)) − f(ρ_n(r/n)) | F^n_{r/n} ] = E[ Σ_{j=r}^{l−1} (1/n) A_n f(ρ_n(j/n)) | F^n_{r/n} ].
Comparing this Riemann sum with the integral ∫_t^{t+s} A_i f(ρ_n(u)) du and using the uniform convergence of A_n f towards A_i f (Proposition 4), we obtain (81).
We finish by showing that Propositions 6 and 7 imply the convergence in distribution. Indeed, the tightness property, which is equivalent to relative compactness for the Skorohod topology [START_REF] Jacod | Limit theorems for stochastic processes, volume 288 of Grund lehren der Mathematischen Wissenschaften[END_REF][START_REF] Billingsley | Convergence of probability measures[END_REF], implies that every converging subsequence of (ρ_n(t)) converges in distribution to a solution of the martingale problem (A_i, ρ_0). In other words, if (Y_t) is a limit process of a subsequence of (ρ_n(t)), Proposition 7 implies that
E[ ( f(Y_{t+s}) − f(Y_t) − ∫_t^{t+s} A_i f(Y_u) du ) Π_{i=1}^m θ_i(Y_{t_i}) ] = 0,  (87)
for all m ≥ 0, all 0 ≤ t_1 < t_2 < . . . < t_m ≤ t < t + s, all functions (θ_i)_{i=1,...,m} and all f in C²_c. As a consequence, (Y_t) is a Markov process (with respect to its natural filtration (F^Y_t)) which is also a solution of the martingale problem (A_i, ρ_0). Now, the uniqueness of the solution of the martingale problem (Proposition 5) allows us to conclude that the discrete quantum trajectory converges in distribution to the solution of the martingale problem.
Aymen Laadhari
email: [email protected]
Ahmad Deeb
A Finite Element Approach For Modeling Biomembranes In Incompressible Power-Law Flow
We present a numerical method to model the dynamics of inextensible biomembranes in a quasi-Newtonian incompressible flow, which better describes hemorheology in the small vasculature. We consider a level set model for the fluid-membrane coupling, while the local inextensibility condition is relaxed by introducing a penalty term. The penalty method is straightforward to implement from any Navier-Stokes/level set solver and allows substantial computational savings over a mixed formulation. A standard Galerkin finite element framework is used with an arbitrarily high order polynomial approximation for better accuracy in computing the bending force. The PDE system is solved using a partitioned strongly coupled scheme based on Crank-Nicolson time integration. Numerical experiments are provided to validate and assess the main features of the method.
INTRODUCTION
This paper is concerned with the numerical study of the time-dependent dynamics of biomembranes in a surrounding Newtonian or non-Newtonian flow. The coupled fluid-membrane problem is highly nonlinear and computationally expensive to solve.
Blood is a very complex fluid. Its rheology at the macroscopic scale depends both on the individual dynamics of its embedded entities and on their fluid-structure interactions at the microscopic level. Red blood cells, referred to as RBCs, represent its main cellular component; they are responsible for the supply of oxygen and the capture of carbon dioxide. In the laboratory, giant unilamellar vesicles (diameter ≈ 10 µm) are biomimetic artificial liquid drops used in vitro and in silico to study RBCs. Understanding the dynamics of RBCs in flow remains a difficult problem in computational physics and at the theoretical level as well, which has led to a growing interest over the past two decades. In the published literature, several works have covered the areas of experimental biology [START_REF] Song | Characterization of stress-strain behaviour of red blood cells (RBCs), part II: response of malaria-infected RBCs[END_REF], theoretical biology [START_REF] Safran | Statistical Thermodynamics of Surfaces, Interfaces and Membranes[END_REF], physics [START_REF] Kaoui | Lateral migration of a two-dimensional vesicle in unbounded poiseuille flow[END_REF][START_REF] Keller | Motion of a tank-treading ellipsoidal particle in a shear flow[END_REF][START_REF] Choi | Fluctuations of red blood cell membranes: The role of the cytoskeleton[END_REF] and applied mathematics [START_REF] Dziuk | Computational parametric Willmore flow[END_REF][START_REF] Barrett | Numerical computations of the dynamics of fluidic membranes and vesicles[END_REF].
From a mechanical continuum perspective, Canham [START_REF] Canham | The minimum energy of bending as a possible explanation of the biconcave shape of the human red blood cell[END_REF], Helfrich [START_REF] Helfrich | Elastic properties of lipid bilayers: theory and possible experiments[END_REF] and Evans [START_REF] Evans | Bending resistance and chemically induced moments in membrane bilayers[END_REF] independently introduced in the early 1970s a model to describe the mechanics of lipid bilayer membranes, where cellular deformations are driven by the principal curvatures. This results in a highly nonlinear membrane force with respect to shape, see a mathematical derivation for a generalized energy functional based on shape optimization in [START_REF] Laadhari | On the equilibrium equation for a generalized biological membrane energy by using a shape optimization approach[END_REF].
Different methods have been developed to study the dynamics of biomembranes in a Newtonian flow. We can distinguish the level set method [START_REF] Cottet | Eulerian formulation and Level-Set models for incompressible fluid-structure interaction[END_REF][START_REF] Laadhari | Computing the dynamics of biomembranes by combining conservative level set and adaptive finite element methods[END_REF][START_REF] Doyeux | Simulation of two-fluid flows using a finite element/level set method. Application to bubbles and vesicle dynamics[END_REF], the phase field method [START_REF] Du | A phase field approach in the numerical study of the elastic bending energy for vesicle membranes[END_REF], the immersed boundary method [START_REF] Hu | An immersed boundary method for simulating the dynamics of three-dimensional axisymmetric vesicles in Navier-Stokes flows[END_REF], the boundary integral method [START_REF] Rahimian | Dynamic simulation of locally inextensible vesicles suspended in an arbitrary two-dimensional domain, a boundary integral method[END_REF], the parametric finite elements [START_REF] Barrett | Numerical computations of the dynamics of fluidic membranes and vesicles[END_REF], and the lattice Boltzmann method [START_REF] Kaoui | Two-dimensional vesicle dynamics under shear flow: Effect of confinement[END_REF]. From a numerical point of view, iterative and fully explicit decoupling strategies for the membrane-fluid problem are the most used techniques [START_REF] Doyeux | Simulation of two-fluid flows using a finite element/level set method. Application to bubbles and vesicle dynamics[END_REF][START_REF] Salac | Reynolds number effects on lipid vesicles[END_REF]. An explicit treatment of the bending force usually leads to numerical instability problems and severe time step limitations, depending on the local mesh size and bending stiffness. However, only few works devised semi-implicit [START_REF] Barrett | Numerical computations of the dynamics of fluidic membranes and vesicles[END_REF] or fully implicit time integration schemes [START_REF] Laadhari | Computing the dynamics of biomembranes by combining conservative level set and adaptive finite element methods[END_REF][START_REF] Laadhari | Fully implicit methodology for the dynamics of biomembranes and capillary interfaces by combining the level set and newton methods[END_REF]. Although stability is improved, a high computational burden is generally obtained with implicit strategies. Other interesting decoupling strategies can be found in [START_REF] Valizadeh | Isogeometric analysis of hydrodynamics of vesicles using a monolithic phase-field approach[END_REF][START_REF] Laadhari | Implicit finite element methodology for the numerical modeling of incompressible two-fluid flows with moving hyperelastic interface[END_REF][START_REF] Laadhari | An operator splitting strategy for fluid-structure interaction problems with thin elastic structures in an incompressible newtonian flow[END_REF][START_REF] Torres-Sánchez | Modelling fluid deformable surfaces with an emphasis on biological interfaces[END_REF].
While blood behaves like a Newtonian fluid in larger-diameter arteries at high shear rates, it exhibits non-Newtonian behavior in small-diameter arteries with low shear rates at the microscopic scale [START_REF] Cokelet | The rheology of human blood-measurement near and at zero shear rate[END_REF]. Non-Newtonian rheology is mainly due to polymerization and to the underlying mechanisms leading to the activation and deactivation of platelets, as well as to the interactions between different microscopic entities. Blood viscosity tends to increase at low shear rates as RBCs aggregate into rouleaux (roller-shaped stacks). The Casson, Power-Law, and Quemada models are the most widely used generalised Newtonian rheologies for blood [START_REF] Neofytou | Non-newtonian flow instability in a channel with a sudden expansion[END_REF][START_REF] Copley | Apparent viscosity and wall adherence of blood systems[END_REF]. To our knowledge, such models have not yet been studied for the current problem. In this work, we consider a quasi-Newtonian power-law model to describe the hemorheology.
The aim of this paper is to study the dynamics of biomembranes in a complex non-Newtonian incompressible viscous flow. In order to keep a reasonable computational cost compared to a fully mixed formulation, we design a penalty method to account for the local inextensibility of the membrane. Various higher-order finite element approximations are used to better approximate the bending force. We present a set of numerical examples to validate and show the main features of the method.
MATHEMATICAL SETTING
Membrane model
The deformations of the membrane allow minimizing the Canham-Helfrich-Evans [START_REF] Canham | The minimum energy of bending as a possible explanation of the biconcave shape of the human red blood cell[END_REF][START_REF] Helfrich | Elastic properties of lipid bilayers: theory and possible experiments[END_REF] bending energy while preserving the local inextensibility of the membrane. Let H be the mean curvature, corresponding to the sum of the principal curvatures on the membrane. In the two-dimensional case, the membrane minimizes the bending energy given by:
J(Ω) = (k_b/2) ∫_{∂Ω} (H(Ω))² ds,   (1)
where k_b ≈ 10^{-20}/10^{-19} kg m² s^{-2} is the bending rigidity modulus. The energy is a variant of the Willmore energy [START_REF] Willmore | Riemannian geometry[END_REF]. Let T be the final time of the experiment. For any time t ∈ [0, T], Ω(t) ⊂ R^d, d = 2, 3, is the interior domain of the membrane Γ(t) = ∂Ω(t), assumed Lipschitz continuous. The membrane is embedded in the domain Λ, which is large enough so that Γ(t) ∩ ∂Λ = ∅, see Fig. 1. Hereafter, the dependence of Ω and Γ upon t is dropped to alleviate notations.
For a membrane with fixed topology, the Gauss-Bonnet theorem [START_REF] Feng | Finite element modeling of lipid bilayer membranes[END_REF] states that the energy term weighted by k_g is constant and can be ignored. The spontaneous curvature helps describe the asymmetry of phospholipid bilayers at rest, e.g. when different chemical environments exist on either side of the membrane. We assume H_0 = 0. Let n and ν be the outward unit normal vectors on Γ(t) and on ∂Λ, respectively. We introduce the surface gradient
∇_s • = (Id − n ⊗ n) ∇•, the surface divergence div_s • = tr(∇_s •) and the surface Laplacian Δ_s • = div_s(∇_s •),
where Id is the identity tensor. The expression and derivation of the bending force using shape optimization tools can be found in [START_REF] Laadhari | On the equilibrium equation for a generalized biological membrane energy by using a shape optimization approach[END_REF].
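As an illustrative check (ours, not taken from the paper): for a circular membrane of radius R in two dimensions, H = 1/R and |∂Ω| = 2πR, so (1) gives J = (k_b/2)(1/R²)(2πR) = π k_b / R. The bending energy alone therefore decreases as the circle grows, which is why the incompressibility and inextensibility constraints recalled below are essential to obtain nontrivial equilibrium shapes.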
Membrane deformations are subject to specific constraints. Fluid incompressibility is assumed, that is div u = 0 in Λ. In addition, RBCs are phospholipid bilayers with local membrane inextensibility. This corresponds to a zero surface divergence, i.e. div_s u = 0 on Γ, which helps preserve the local perimeter. Global perimeter conservation follows from Reynolds' lemma [START_REF] Laadhari | Computing the dynamics of biomembranes by combining conservative level set and adaptive finite element methods[END_REF]. As a consequence, a saddle point formulation results in a membrane surface force that balances the jump in the hydrodynamic stress tensor and appears in the right-hand side of (3f).
Level set description
The motion of the membrane is followed implicitly in a level set framework as the zero level set of a function ϕ. For t ∈ ]0, T[, ϕ is initialized by a signed distance ϕ_0 to Γ(0) and satisfies the transport equation (3a), with u the advection vector and ϕ = ϕ_b on the upstream boundary Σ⁻ = {x ∈ ∂Λ : u • ν(x) < 0}. Geometric quantities such as n = ∇ϕ/|∇ϕ|, H = div_s n and the bending force are expressed in terms of ϕ and are then extended to the entire computational domain Λ. Over time, a redistancing problem is solved to maintain the signed distance property lost by advection [START_REF] Laadhari | Fully implicit finite element method for the modeling of free surface flows with surface tension effect[END_REF]. Indeed, a gradient of ϕ that is too large or too small close to Γ deteriorates the accurate computation of the surface terms. Let ε be a regularization parameter. We introduce the regularized Heaviside H_ε and Dirac δ_ε functions:
H_ε(ϕ) = 0 when ϕ < −ε;  H_ε(ϕ) = (1/2)(1 + ϕ/ε + (1/π) sin(πϕ/ε)) when |ϕ| ≤ ε;  H_ε(ϕ) = 1 otherwise;  and δ_ε(ϕ) = dH_ε/dϕ (ϕ).
Given a function ζ defined on Γ and its extension to Λ (still denoted ζ), surface integrals are approximated as follows:
∫_Γ ζ(x) ds ≈ ∫_Λ |∇ϕ| δ_ε(ϕ) ζ(x) dx.
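For illustration, here is a minimal NumPy sketch of these regularized functions (the function names and the vectorized form are our own; the paper's FEniCSx implementation is not reproduced here):

import numpy as np

def heaviside_reg(phi, eps):
    # H_eps: 0 for phi < -eps, smooth ramp on [-eps, eps], 1 for phi > eps
    ramp = 0.5 * (1.0 + phi / eps + np.sin(np.pi * phi / eps) / np.pi)
    return np.where(phi < -eps, 0.0, np.where(phi > eps, 1.0, ramp))

def dirac_reg(phi, eps):
    # delta_eps = dH_eps/dphi, supported on |phi| <= eps
    bump = (1.0 + np.cos(np.pi * phi / eps)) / (2.0 * eps)
    return np.where(np.abs(phi) <= eps, bump, 0.0)

With these, the surface integral of a quantity ζ over Γ is approximated by integrating |∇ϕ| δ_ε(ϕ) ζ over the whole domain Λ, as in the formula above.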
Governing equations
We assume constant densities ρ_i and ρ_o inside and outside of the membrane, respectively. Let us introduce the fluid velocity u and the pressure p, the latter being a Lagrange multiplier corresponding to the incompressibility constraint on Λ. Analogously, a position-dependent surface tension λ helps impose the local inextensibility constraint on Γ.
Let D(u) = (∇u + ∇u^T)/2 be the shear strain rate tensor, so that the fluid Cauchy stress tensor is σ = T − pI, where T is the stress deviator. The normal stress jump [σn]^+_− = σ^+ n − σ^− n on Γ describes the interactions of the membrane with the surrounding fluid [START_REF] Laadhari | Fully implicit methodology for the dynamics of biomembranes and capillary interfaces by combining the level set and newton methods[END_REF], while the stress discontinuity is calibrated by (3f). For a simple shear flow, u_b prescribes the shear on Σ_D ⊂ ∂Λ, while natural boundary conditions are prescribed on Σ_N ⊂ ∂Λ.
We assume a quasi-Newtonian power-law model [START_REF] Neofytou | Non-newtonian flow instability in a channel with a sudden expansion[END_REF] where the nonlinear constitutive equation expresses the stress deviator with a power-law viscosity function as
T = 2η(|D(u)|²) D(u),  with  η(γ) = K γ^{(υ−1)/2}  for all γ ∈ R,   (2)
where υ > 0 and K are the power-law index and the consistency index, respectively. According to [START_REF] Walburn | A constitutive equation for whole human blood[END_REF], υ = 0.7755 < 1 (i.e. a shear-thinning fluid) and K = 14.67 × 10^{-3} Pa s for normal blood samples, values obtained using a multiple regression technique. The Newtonian case υ = 1 corresponds to a linear stress-strain relationship that reduces the viscosity function to the constant η(γ) = K. By analogy with the Newtonian case, K = µ_i and K = µ_o stand for the values of the consistency index in the intra- and extra-membrane domains, respectively. We perform a dimensionless analysis. Let U be the maximum velocity on Σ_D and D the diameter of a circle having the same membrane perimeter. We consider the dimensionless Reynolds number Re = ρ_o U D µ_o^{-1}, which expresses the ratio between the inertial and viscous forces, and the capillary number Ca = µ_o D² U k_b^{-1}, which compares the flow force to the bending resistance of the membrane. Furthermore, the parameter β = µ_i/µ_o represents the ratio of consistency indices and corresponds to the viscosity ratio with respect to the extracellular viscosity in the Newtonian case. The regularized dimensionless viscosity function is:
µ_ε(ϕ) |D(u)|^{υ−1} = (H_ε(ϕ) + β (1 − H_ε(ϕ))) |D(u)|^{υ−1}.
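A small Python sketch of this regularized power-law viscosity follows (our own illustrative code; the floor gamma_min guarding against a vanishing shear rate is our addition and is not mentioned in the paper):

import numpy as np

def mu_eps_powerlaw(phi, Du_norm_sq, eps, beta, upsilon=0.7755, gamma_min=1e-12):
    # dimensionless regularized viscosity (H_eps(phi) + beta*(1 - H_eps(phi))) * |D(u)|^(upsilon - 1),
    # written with gamma = |D(u)|^2; the small floor avoids a singularity as gamma -> 0 when upsilon < 1
    gamma = np.maximum(Du_norm_sq, gamma_min)
    H = heaviside_reg(phi, eps)  # regularized Heaviside from the sketch above
    return (H + beta * (1.0 - H)) * gamma ** ((upsilon - 1.0) / 2.0)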
Following [START_REF] Laadhari | Fully implicit methodology for the dynamics of biomembranes and capillary interfaces by combining the level set and newton methods[END_REF], we choose ρ_i = ρ_o. Let σ_ε stand for the regularized Cauchy stress tensor. The dimensionless reduced area Ξ_2d = 4π|Ω|/|Γ|² ∈ ]0, 1] compares the area of the interior domain to that of a circle with the same perimeter. The dimensionless coupled problem writes: find ϕ, u, p and λ such that
∂_t ϕ + u·∇ϕ = 0 in ]0, T[ × Λ   (3a)
Re (∂_t u + u·∇u) − div σ_ε(D(u), p, ϕ) = 0 in ]0, T[ × (Λ\∂Ω)   (3b)
div u = 0 in ]0, T[ × Λ   (3c)
div_s u = 0 on ]0, T[ × ∂Ω   (3d)
[u]^+_− = 0 on ]0, T[ × ∂Ω   (3e)
[σ_ε n]^+_− = ∇_s λ − λ H n + (2Ca)^{-1} (2Δ_s H + H³) n on ]0, T[ × ∂Ω   (3f)
ϕ = ϕ_b on ]0, T[ × Σ⁻   (3g)
u = u_b on ]0, T[ × Σ_D   (3h)
σ·ν = 0 on ]0, T[ × Σ_N   (3i)
ϕ(0) = ϕ_0 in Λ   (3j)
u(0) = u_0 in Λ.   (3k)
Let ε_λ = 10^{-8} be the penalty parameter. To make the method straightforward to implement from any level set / Navier-Stokes solver and to considerably reduce the size of the linear system to be solved, the inextensibility constraint is relaxed by introducing a penalty term. Indeed, the corresponding minimization problem is approximated by another minimization problem in which the local inextensibility constraint on the velocity (3d) is penalized. See an analogous penalty method for other applications in [START_REF] Janela | A penalty method for the simulation of fluid -rigid body interaction[END_REF].
To overcome instability problems when solving the level set equation using the standard Galerkin method, there are a variety of stabilization methods such as the streamline diffusion method, the subgrid viscosity method and the Streamline Upwind Petrov-Galerkin (SUPG) method used in this work. The latter introduces a stabilization term by adding a diffusion in the streamline direction.
We introduce the functional spaces of admissible velocity u, pressure p and level set ϕ:
V(u_b) = {v ∈ H¹(Λ)^d : v = u_b on Σ_D},  Q = {q ∈ L²(Λ) : ∫_Ω q = 0},  X(ϕ_b) = {ψ ∈ W^{1,∞}(Λ) ∩ H¹(Λ) : ψ = ϕ_b on Σ⁻}.
To reduce the derivation order of ϕ when evaluating the bending force, we use the Green formula on a closed surface. See e.g. [START_REF] Laadhari | Computing the dynamics of biomembranes by combining conservative level set and adaptive finite element methods[END_REF]. Testing with appropriate test functions and integrating (3b) over Ω and Λ\Ω separately, the variational problem writes:
Find u ∈ C⁰(]0, T[, L²(Λ)^d) ∩ L²(]0, T[, V(u_b)), p ∈ L²(]0, T[, Q), and ϕ ∈ C⁰(]0, T[, L²(Λ)) ∩ L²(]0, T[, X(ϕ_b)) such that
Re ∫_Λ (∂u/∂t + u·∇u)·v + ∫_Λ 2µ_ε(ϕ)|D(u)|^{υ−1} D(u):D(v) + (1/ε_λ) ∫_Λ div_s(u) div_s(v) |∇ϕ| δ_ε(ϕ) − ∫_Λ p div v + (1/(2Ca)) ∫_Λ δ_ε(ϕ)|∇ϕ| (2∇_s H · ∇_s(n·v) − H³ n·v) = ∫_{Σ_N} σν·v,  ∀v ∈ V(0),   (4a)
∫_Λ q div u = 0,  ∀q ∈ Q,   (4b)
∫_Λ (∂ϕ/∂t) ψ + ∫_Λ (u·∇ϕ) ψ + ∫_Λ ξ(τ; ϕ, ψ) = 0,  ∀ψ ∈ X(0).   (4c)
Here, ξ (τ; ϕ, ψ) stands for the SUPG stabilisation term and τ is a stabilization parameter defined element wise to control the amount of diffusion.
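As an illustration of how the penalty term in (4a) can be assembled, here is a hedged UFL-style sketch (the helper div_s, the argument names and the use of a precomputed weight |∇ϕ| δ_ε(ϕ) are our own conventions; the paper's actual FEniCSx code is not reproduced):

import ufl

def div_s(v, n, dim=2):
    # surface divergence div_s v = tr((I - n (x) n) grad(v))
    P = ufl.Identity(dim) - ufl.outer(n, n)
    return ufl.tr(ufl.dot(P, ufl.grad(v)))

def penalty_form(u, v, n, surface_weight, eps_lambda=1.0e-8):
    # (1/eps_lambda) * div_s(u) * div_s(v) * |grad(phi)| * delta_eps(phi) dx,
    # where surface_weight stands for |grad(phi)| * delta_eps(phi)
    return (1.0 / eps_lambda) * div_s(u, n) * div_s(v, n) * surface_weight * ufl.dx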
NUMERICAL APPROACH
The interval [0, T] is divided into N sub-intervals [t_n, t_{n+1}), with 0 ≤ n ≤ N−1, of constant step Δt. For n > 0, u^n, p^n and ϕ^n are computed by induction to approximate u, p and ϕ at t_n. We use the Crank-Nicolson scheme for the time discretization of (3a) and (3b), without the need to bootstrap the initial conditions. This scheme was chosen because it is simple to implement and is a second-order one-step integrator. The discretized (3a) writes
ϕ^{n+1} = ϕ^n − (Δt/2) (u·∇ϕ^{n+1} + u·∇ϕ^n)  in Λ.
For the spatial discretization, we consider a partition T_h of Λ consisting of geometrically conformal open simplicial elements K. We define the mesh size as the diameter of the largest mesh element, h = max_{K ∈ T_h} h_K. We consider a Taylor-Hood finite element approximation for u and p. After using a surface Green's transformation, the evaluation of the Canham-Helfrich-Evans force requires a third-order derivative in ϕ, which induces numerical oscillations when using lower-order polynomial approximations. To avoid introducing additional mixed variables and additional equations as in [START_REF] Laadhari | Computing the dynamics of biomembranes by combining conservative level set and adaptive finite element methods[END_REF], higher-degree polynomials are considered for the discretization of ϕ because the bending force requires its fourth-order derivatives. For the SUPG method, the streamline diffusion parameter is chosen numerically proportional to the local mesh size, that is, τ_K = C h_K / max{|u|_{0,∞,K}, tol/h_K}, where C is a scaling constant and tol/h_K helps to avoid division by zero. To overcome the instability problems induced by an explicit decoupling, we consider a partitioned implicit strategy based on a fixed-point algorithm, as detailed in Alg. 1.
Algorithm 1 Fluid-membrane coupling
1: n = 0: let ϕ⁰ and u⁰ be given
2: for n = 0, . . . , N − 1 do
3:   Initialize u^{n+1,0} = u^n, ϕ^{n+1,0} = ϕ^n
4:   while e^k ≥ 10^{-6} do
5:     Compute ϕ^{n+1,k+1} using u^{n+1,k}
6:     Compute u^{n+1,k+1}, p^{n+1,k+1} using ϕ^{n+1,k+1}
7:     Compute the error e^k = |u^{n+1,k+1} − u^{n+1,k}|_{1,2,Λ} / |u^{n+1,k}|_{0,2,Λ} + |ϕ^{n+1,k+1} − ϕ^{n+1,k}|_{0,2,Λ} / |ϕ^{n+1,k}|_{0,2,Λ}
8:   end while
9:   Update u^{n+1} = u^{n+1,k+1}, ϕ^{n+1} = ϕ^{n+1,k+1}
10: end for
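A schematic Python version of this loop is sketched below (the solver callables solve_levelset and solve_flow, the array-based norms and the maximum iteration count are our own assumptions; the paper uses the norms of step 7 and FEniCSx solvers instead):

import numpy as np

def coupled_time_step(u_n, phi_n, solve_levelset, solve_flow, tol=1e-6, max_iter=50):
    # one time step of the partitioned strongly coupled scheme of Algorithm 1
    u_k, phi_k = u_n, phi_n
    p_k = None
    for _ in range(max_iter):
        phi_next = solve_levelset(u_k)       # step 5: transport phi with the current velocity
        u_next, p_k = solve_flow(phi_next)   # step 6: update velocity and pressure with the new phi
        err = (np.linalg.norm(u_next - u_k) / max(np.linalg.norm(u_k), 1e-14)
               + np.linalg.norm(phi_next - phi_k) / max(np.linalg.norm(phi_k), 1e-14))
        u_k, phi_k = u_next, phi_next
        if err < tol:                         # stopping test of step 4
            break
    return u_k, p_k, phi_k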
Example 1: Reversible Vortex - Grid convergence.
Simulations were performed using FEniCSx [START_REF] Alnaes | The FEniCS project version 1.5[END_REF]. To evaluate the capability of the level set solver for high-order finite elements, needed afterwards for an accurate assessment of the highly nonlinear bending force, we consider a reversible vortex test case featuring large deformations of the interface. The computational domain is Λ = [0, 1]². A circular interface of radius R = 0.15, initially centered at (0.7, 0.7), is stretched into thin filaments which are coiled like a starfish by a vortex flow field. The deformations are periodic and the stretching of the membrane unravels before the interface regains its circular shape after one period at t = T. The maximal deformation occurs at t = T/2; we take ψ = 3 and T = 1 in the numerical computations. Similar 2D and 3D test cases are widely used to test interface tracking methods. We follow LeVeque's test [START_REF] Leveque | High-resolution conservative algorithms for advection in incompressible flow[END_REF] (Example 9.5) and consider a velocity field at x = (x, y)^T ∈ Λ given by u(t, x) = ( −2 sin(ψπx)² sin(ψπy) cos(ψπy) cos(πt/T), 2 sin(ψπy)² sin(ψπx) cos(ψπx) cos(πt/T) )^T.
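For reference, the velocity field transcribed above can be coded directly (a NumPy sketch; the variable names are ours):

import numpy as np

def vortex_velocity(t, x, y, psi=3.0, T=1.0):
    # reversible LeVeque-type vortex used in Example 1; the factor cos(pi*t/T) reverses the flow at t = T/2
    s = np.cos(np.pi * t / T)
    u = -2.0 * np.sin(psi * np.pi * x) ** 2 * np.sin(psi * np.pi * y) * np.cos(psi * np.pi * y) * s
    v = 2.0 * np.sin(psi * np.pi * y) ** 2 * np.sin(psi * np.pi * x) * np.cos(psi * np.pi * x) * s
    return u, v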
The spatial accuracy of the finite element numerical approximations is studied by computing the errors in the L²(Λ) norm on successively refined meshes with respect to an exact reference solution π_h ϕ at t = T, where π_h represents the Lagrange interpolation operator. Errors are calculated after one stretching period. For k the degree of the polynomial approximation, the time step Δt = h^k is chosen small enough not to significantly influence the overall accuracy. Fig. 2 reports the convergence of the calculated errors with respect to the mesh size for several polynomial finite element approximations. Convergence rates are also displayed, showing for instance an almost second-order accuracy for k = 1 and fifth-order accuracy for k = 4.
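A convergence rate such as those reported in Fig. 2 can be estimated, for instance, as the least-squares slope of log(error) versus log(h) over the refined meshes (an illustrative recipe, not necessarily the exact post-processing used in the paper):

import numpy as np

def observed_order(h, err):
    # slope of log(err) vs log(h); a value close to k+1 is consistent with the rates reported for degree-k elements
    h = np.asarray(h, dtype=float)
    err = np.asarray(err, dtype=float)
    return np.polyfit(np.log(h), np.log(err), 1)[0]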
Example 2: Dynamics of the biomembrane in Newtonian and quasi-Newtonian flows.
We first proceed to a quantitative validation against some experimental and numerical results available in the literature in the case of a purely Newtonian flow. We set υ = 1, a viscosity contrast β = 1, Ca = 10² and Re = 9 × 10^{-3}. More details on the physiological values of the Reynolds number at the level of RBCs are available in [START_REF] Salac | Reynolds number effects on lipid vesicles[END_REF]. The membrane follows a tank-treading type of motion, called TT, in which it reaches a steady state characterized by a fixed angle of inclination; the surrounding fluid continues its rotation tangentially to the membrane. We consider different values of the reduced area Ξ_2d ∈ [0.6, 1] and calculate the angle of inclination at equilibrium θ*. Fig. 3 and Fig. 4 plot the change in θ*/π against Ξ_2d in both the Newtonian and quasi-Newtonian cases for different values of the viscosity ratio β. The results are compared with those of Kraus et al. [START_REF] Kraus | Fluid vesicles in shear flow[END_REF], Zhao et al. [START_REF] Zhao | The dynamics of a vesicle in simple shear flow[END_REF], Salac et al. [START_REF] Salac | Reynolds number effects on lipid vesicles[END_REF] and Laadhari et al. [START_REF] Laadhari | Fully implicit methodology for the dynamics of biomembranes and capillary interfaces by combining the level set and newton methods[END_REF], showing good overall consistency. However, note that the values obtained with υ = 0.7755 fit slightly better than those of the Newtonian model, which are a little higher than the other curves. This cannot be confirmed conclusively, given the different dimensionless settings, such as Re and Ca, used in the different experiments. An in-depth study is in progress and will be the subject of a forthcoming work. Simulations are now performed using a different ratio β = 2.7 in the non-Newtonian case. We calculate the angle of inclination θ*/π and compare with some numerical [START_REF] Zhao | The dynamics of a vesicle in simple shear flow[END_REF] and experimental [START_REF] Kantsler | Orientation and dynamics of a vesicle in tank-treading motion in shear[END_REF] results available only for larger reduced areas. Fig. 3 (right) shows close but slightly higher equilibrium angles when the shape of the membrane becomes close to a circle. The deviations may be mainly due to the non-Newtonian model, but also to the different confinement levels and boundary conditions used in the different works.
According to an experimental systematic study on individual red blood cells in a simple shear flow, a change in dynamics occurs when the viscosity ratio exceeds a critical value depending on the reduced area [START_REF] Fischer | The red cell as a fluid droplet: Tank tread-like motion of the human erythrocyte membrane in shear flow[END_REF]. This is the tumbling regime, denoted TB, which is characterized by the periodic rotation of the membrane around its axis. A well-known empirical model was developed by Keller and Skalak [4]. This dynamics was obtained in our simulations with the non-Newtonian model; see Fig. 5 and Fig. 6 for snapshots of the TT and TB dynamics obtained with the same set of parameters but with β = 1 and β = 10, respectively.
CONCLUSION
We have presented in this paper a relatively simple method for simulating the dynamics of an individual red blood cell, or inextensible biological membrane in general, in a surrounding incompressible non-Newtonian flow that better describes the hemorheology in small capillaries. We validated our framework using high-order finite element approximations in the case of a membrane in a simple shear flow. Simulations have shown that the method is capable of capturing the basic cellular dynamics, namely the well-known tank treading and tumbling motions. This is part of a larger ongoing work to explore the dynamics of red blood cells in small capillaries, while accounting for cell elasticity [START_REF] Gizzi | A three-dimensional continuum model of active contraction in single cardiomyocytes[END_REF] in non-Newtonian surrounding flow.
FIGURE 1: Sketch of the membrane Γ embedded into a computational domain Λ, while Ω is the inner region.
FIGURE 2: Reversible vortex. (Left) Snapshots showing the interface deformations at t ∈ {0, 0.25, 0.57, 0.75, 0.875, 1} with = 0.01. (Right) Spatial convergence in the L² norm for high-order finite element approximations.
FIGURE 3: TT regime: Change in θ*/π with respect to Ξ_2d for a viscosity ratio β = 1. Comparisons with results from [START_REF] Salac | Reynolds number effects on lipid vesicles[END_REF] and [START_REF] Laadhari | Fully implicit methodology for the dynamics of biomembranes and capillary interfaces by combining the level set and newton methods[END_REF] (Re = 10^{-3}, Ca = 100), [START_REF] Zhao | The dynamics of a vesicle in simple shear flow[END_REF] (Ca = 9) and [START_REF] Kraus | Fluid vesicles in shear flow[END_REF] (Ca = 10).
FIGURE 4: TT regime: Change in θ*/π with respect to Ξ_2d for β = 2.7. Comparison of the non-Newtonian model with results in [START_REF] Laadhari | Fully implicit methodology for the dynamics of biomembranes and capillary interfaces by combining the level set and newton methods[END_REF][START_REF] Zhao | The dynamics of a vesicle in simple shear flow[END_REF] and measurements in [START_REF] Kantsler | Orientation and dynamics of a vesicle in tank-treading motion in shear[END_REF].
FIGURE 5: TT regime for Ξ_2d = 0.68, β = 1, Ca = 4 × 10^4 and Re = 9 × 10^{-3} at t ∈ {0, 0.125, 0.25, 0.5, 1, 2}.
FIGURE 6: Snapshots showing a membrane in the TB regime for Ξ_2d = 0.68, β = 10, Ca = 4 × 10^4 and Re = 9 × 10^{-3}, at times t ∈ {0, 0.13, 0.25, 0.5, 1.25, 2, 2.25, 2.38, 3, 4, 4.5, 4.7}, respectively.
ACKNOWLEDGMENTS
The authors acknowledge financial support from KUST through the grant FSU-2021-027. |
04100878 | en | [
"sdv.spee",
"spi.other"
] | 2024/03/04 16:41:20 | 2023 | https://hal.science/hal-04100878/file/Systematic%20Reviews%20of%20Systematic%20Quantitative%2C%20Qualitative%2C%20and%20Mixed%20Studies%20Reviews.pdf | Geneviève Rouleau
email: [email protected]
Quan Nha Hong
Navdeep Kaur
Marie-Pierre Gagnon
José Côté
Julien Bouix-Picasso
Pierre Pluye
Systematic Reviews of Systematic Quantitative, Qualitative, and Mixed Studies Reviews in Healthcare Research: How to Assess the Methodological Quality of Included Reviews?
Keywords: methodological quality assessment, systematic reviews of systematic reviews, systematic mixed studies reviews, systematic qualitative reviews, systematic quantitative reviews
Background
More and more healthcare researchers are interested in combining qualitative, quantitative and mixed methods studies and their evidence into systematic reviews. The healthcare field of research has a long tradition in producing systematic reviews [START_REF] Hong | Systematic reviews: A brief historical overview[END_REF].With the number of published systematic reviews increasing each and every day, researchers are now interested in combining many systematic reviews into one single document, i.e. systematic reviews of systematic reviews (SRSRs) [START_REF] Hartling | A Descriptive Analysis of Overviews of Reviews Published between 2000 and 2011[END_REF][START_REF] Page | Epidemiology and Reporting Characteristics of Systematic Reviews of Biomedical Research: A Cross-Sectional Study[END_REF][START_REF] Pieper | Overviews of reviews often have limited rigor: A systematic review[END_REF][START_REF] Pollock | What guidance is available for researchers conducting overviews of reviews of healthcare interventions? A scoping review and qualitative metasummary[END_REF]. SRSRs provide a wide picture of the topic under consideration; they are "one-stop shopping for decision makers" (Hartling et al., 2014 p. 488) because all systematic reviews are synthesized together. SRSRs can be fine-tuned to focus on particular populations and/or interventions; they can combine various outcomes, and they are efficient [START_REF] Aromataris | Summarizing systematic reviews: Methodological development, conduct and reporting of an umbrella review approach[END_REF][START_REF] Becker | Chapter 22: Overviews of reviews[END_REF][START_REF] Hartling | Systematic reviews, overviews of reviews and comparative effectiveness reviews: A discussion of approaches to knowledge synthesis. Evidence-Based Child Health[END_REF].
There are challenges in conducting SRSRs including : the overlapping between systematic reviews (primary studies appearing in more than one review, leading to potential duplication of some findings in SRSRs); the misalignment between the scope of the systematic reviews and SRSRs question(s); the variable quality of reporting and methodological quality within systematic reviews; the lack of granularity in reported information; and the assessment of methodological quality of included systematic reviews [START_REF] Ballard | Risk of bias in overviews of reviews: A scoping review of methodological guidance and four-item checklist[END_REF]Lunny et al., 2016;[START_REF] Pieper | Overviews of reviews often have limited rigor: A systematic review[END_REF][START_REF] Pollock | What guidance is available for researchers conducting overviews of reviews of healthcare interventions? A scoping review and qualitative metasummary[END_REF]. One of the key steps when conducting SRSRs is to assess the methodological quality/risk of bias in the included systematic reviews (Lunny et al., 2018). We will focus only on the methodological quality at the systematic reviews level because we are interested in the tools required to assess the methodological quality when various types of systematic reviews are included in an SRSR. This paper builds on the literature on SRSRs and methodological quality (e.g., Jimenez et al., 2018a;Lunny et al., 2017Lunny et al., , 2018;;[START_REF] Shea | AMSTAR 2: A critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both[END_REF]Whiting et al., 2016a) and is based on our experiences of conducting systematic reviews of systematic qualitative, quantitative, and mixed studies reviews [START_REF] Rouleau | Impacts of information and communication technologies on nursing care: An overview of systematic reviews (protocol)[END_REF][START_REF] Rouleau | Effects of E-Learning in a Continuing Education Context on Nursing Care: Systematic Review of Systematic Qualitative, Quantitative, and Mixed-Studies Reviews[END_REF][START_REF] Rouleau | Effects of e-learning in a continuing education context on nursing care: A review of systematic qualitative, quantitative and mixed studies reviews (protocol)[END_REF]Rouleau, Gagnon, Côté, Payne-Gagnon, Hudson, & Dubois, 2017) and of undertaking methodological, empirical, and conceptual works in the field of systematic reviews and mixed methods [START_REF] Hong | Convergent and sequential synthesis designs: Implications for conducting and reporting systematic reviews of qualitative and quantitative evidence[END_REF]Hong & Pluye, 2018a;[START_REF] Pluye | Opening-up the definition of systematic literature review: The plurality of worldviews, methodologies and methods for reviews and syntheses[END_REF].
From these sources of knowledge, we observed the following issues: (a) there has been no attempt to provide a simple and clear typology, at the tertiary level of research, to describe SRSRs that include systematic quantitative, qualitative, and/or mixed studies reviews; and (b) there is little explicit guidance for assessing the methodological quality of the systematic qualitative and mixed studies reviews included in SRSRs, compared to quantitative systematic reviews.
Practical guidance is explicit for authors of SRSRs who assess methodological quality in systematic quantitative reviews that include randomized and non-randomized designs [START_REF] Shea | AMSTAR is a reliable and valid measurement tool to assess the methodological quality of systematic reviews[END_REF][START_REF] Shea | AMSTAR 2: A critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both[END_REF]Whiting et al., 2016a). In fact, three critical appraisal tools are commonly used: A Measurement Tool to Assess Systematic Reviews (AMSTAR, [START_REF] Shea | AMSTAR is a reliable and valid measurement tool to assess the methodological quality of systematic reviews[END_REF] and AMSTAR 2 [START_REF] Shea | AMSTAR 2: A critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both[END_REF]; as well as Risk of Bias in Systematic Reviews (ROBIS, Whiting, 2016b;Whiting et al., 2016a). Recently, one tool was published for assessing methodological quality of mixed studies reviews: the Mixed Methods Systematic Reviews appraisal tool (MMSR, Jimenez et al., 2018b, 2018a). These developments in systematic mixed studies reviews are still in their infancy. To our knowledge, no tool exists for assessing the methodological quality in systematic qualitative reviews: that is why we claim that guidance is needed for different types of systematic reviews included in SRSRs.
Aim and scope
This methodological discussion paper will address the aforementioned knowledge gaps.
The aim of this paper is twofold. The first is to describe a typology of SRSRs including the combination of various systematic reviews (quantitative, qualitative, and mixed studies reviews) and their corresponding evidence (quantitative and qualitative). Here the focus is not to describe in detail all types of SRSR methodologies and synthesis approaches but rather to define what is meant by SRSR and their different types in a simple manner. The second aim is to explore all criteria pertaining to three critical appraisal tools (AMSTAR 2, ROBIS, and MMSR) and their potential applicability and limitations for assessing methodological quality of systematic qualitative and mixed studies reviews. We will also propose recommendations for all criteria used in these tools that could be adapted to match with qualitative and mixed studies reviews. This paper does not focus on the quality of how well an SRSR is conducted (the reporting quality), appraisals which would involve other levels of assessment and guidance (M. [START_REF] Pollock | Preferred Reporting Items for Overviews of Reviews (PRIOR): A protocol for development of a reporting guideline for overviews of reviews of healthcare interventions[END_REF].
Levels of research related to three types of systematic review of systematic reviews
SRSR, considered as the tertiary level of research [START_REF] Biondi-Zoccai | Introduction[END_REF], refers to a review of reviews guided by a clearly formulated question that uses systematic and explicit methods to identify, select, and critically appraise relevant systematic reviews and to collect and synthesize findings pertaining to these reviews (adapted from [START_REF] Booth | Systematic Approaches to a Successful Literature Review[END_REF]. It differs from the secondary level of research [START_REF] Biondi-Zoccai | Introduction[END_REF] referring to a review guided by a clearly formulated question that uses systematic and explicit methods to identify, select, and critically appraise relevant research and to collect and analyse findings deriving from primary studies. One of the differences between SRSRs and systematic reviews is the level of synthesis: the former synthesizes findings from systematic reviewswhile the latter synthesizes the findings from primary research studies (Gough, Oliver, et al., 2012;Smith et al., 2011). The primary level of research (Biondi-Zoccai, 2016) refers to research based on observation, experiment, or simulation rather than on reasoning or theory alone [START_REF] Abbott | The Causal Devolution[END_REF][START_REF] Porta | A dictionary of epidemiology[END_REF]. Each level of research contains and produces their own set of data. In a SRSRs, data are the findings of included systematic reviews [START_REF] Hong | Convergent and sequential synthesis designs: Implications for conducting and reporting systematic reviews of qualitative and quantitative evidence[END_REF]. In a systematic review, data are the findings of included primary studies. For Gough, Thomas and Olivier (2019), a review, whether narrative or numerical, is systematic as long as it meets the tenets of research, i.e., rigor and transparency. Other systematicrelated adjectives are used, including explicit, reproducible, comprehensive, detailed, and transparent [START_REF] Krnic Martinic | Definition of a systematic review used in overviews of systematic reviews, meta-epidemiological studies and textbooks[END_REF]. These adjectives are applicable to primary studies, reviews of studies, and review of reviews [START_REF] Gough | Clarifying differences between reviews within evidence ecosystems[END_REF].
SRSRs can differ depending on the systematic reviews included. We are proposing the following three types of SRSRs (Table 1 and Figure 1) using descriptive typology, partially informed by [START_REF] Hong | Convergent and sequential synthesis designs: Implications for conducting and reporting systematic reviews of qualitative and quantitative evidence[END_REF]: 1) Systematic review of systematic qualitative reviews; 2) Systematic review of systematic quantitative reviews; 3) Systematic review of systematic quantitative, qualitative, and mixed studies reviews. Many reasons justify the use of a typology: a) to define and illustrate the main concept of this paper, i.e. SRSRs, and to highlight a novel type of SRSRs: Systematic review of quantitative, systematic qualitative and mixed studies reviews; b) to understand at a glance (with the figure 1) the included type of systematic review pertaining to each type of SRSRs to lay the groundwork of the assessment of methodological quality; c) to help readers distinguish the levels of research;
and d) to understand where the challenges are encountered. Currently, the existing critical appraisal tools to assess the quality of systematic reviews were mainly developed for systematic quantitative reviews. However, the typology clearly shows that there exist other types of systematic reviews (systematic qualitative reviews and systematic mixed studies reviews) for which there is a need to adapt existing tools or develop new ones.
The assessment of methodological quality in systematic reviews as a key step for authors of SRSRs
For authors of SRSRs, the judgment about methodological quality refers to a concern about how well the included systematic reviews were conducted [START_REF] Pieper | Overviews of reviews often have limited rigor: A systematic review[END_REF]. The terms risk of bias, threats to validity, critical appraisal, and quality assessment are also commonly used to refer to methodological quality. Risk of bias can be defined as the systematic flaws in the design or conduct of a systematic review that can bias the results of both the systematic review and the SRSR and that can occur at all steps of the systematic review process (Whiting et al., 2016a). The assessment of the methodological quality of included systematic reviews is a critical step when undertaking SRSRs. The results of such an assessment, i.e. low, moderate and high quality, are an important element for interpreting findings and drawing conclusions from an SRSR [START_REF] Pollock | What guidance is available for researchers conducting overviews of reviews of healthcare interventions? A scoping review and qualitative metasummary[END_REF].
Methodological quality is considered as one of the three dimensions of quality, together with conceptual and reporting quality (Hong & Pluye, 2018a). Conceptual quality is associated with a clarity of concept/construct and a clear understanding of a concept and a phenomenon given the depth of description provided. Reporting quality concerns the extent, if any, to which a paper offers sufficient details and information about the design, conduct, and analysis of the primary study, systematic reviews, or SRSRs (adapted from [START_REF] Huwiler-Müntener | Quality of Reporting of Randomized Trials as a Measure of Methodologic Quality[END_REF]. The Enhancing the QUAlity and Transparency Of Health
Research Network (EQUATOR Network, 2019) website proposes more than 400 reporting guidelines and checklists to support authors in improving transparency in the reporting of various types of health research (studies, systematic reviews and SRSRs). All reporting guidelines that have been published [START_REF] Bougioukas | Preferred reporting items for overviews of systematic reviews including harms checklist: A pilot tool to be used for balanced reporting of benefits and harms[END_REF][START_REF] Bougioukas | Reporting guidelines on how to write a complete and transparent abstract for overviews of systematic reviews of health care interventions[END_REF] or are being developed at the tertiary level of research (M. [START_REF] Pollock | Preferred Reporting Items for Overviews of Reviews (PRIOR): A protocol for development of a reporting guideline for overviews of reviews of healthcare interventions[END_REF] are for systematic reviews of quantitative systematic reviews. There are reporting guidelines available for authors conducting qualitative syntheses methodologies, such as the eMERGe [START_REF] France | Improving reporting of meta-ethnography: The eMERGe reporting guidance[END_REF] and the ENTREQ (Tong et al., 2012). However, these reporting guidelines for systematic reviews of quantitative systematic reviews and for qualitative syntheses methodologies do not capture the dimension of critical appraisal that is of interest in this paper, namely, the methodological quality of systematic reviews included in a SRSRs.
Assessing such methodological quality requires determining how to perform this endeavour, i.e. selecting the appropriate tool to do so (Lunny et al., 2018). We faced challenges when we assessed and compared the quality of systematic quantitative, qualitative and mixed studies reviews included in SRSRs (Rouleau, Gagnon, Côté, Payne-Gagnon, Hudson, & Dubois, 2017). These challenges were due to the lack of unequivocal practical guidance and of adequate tools to assess the quality of systematic qualitative and mixed studies reviews.
Two examples supporting challenges in assessing the methodological quality in systematic quantitative, qualitative, and mixed studies reviews
We present here two examples in which we conducted SRSRs. These examples illustrate the challenges we experienced in assessing systematic quantitative, qualitative and mixed studies reviews included in an SRSR. These challenges were the "cue to action" that initiated a reflection about how we can leverage existing tools to approach the assessment of the methodological quality of the various types of systematic reviews included in an SRSR.
The first example targets an SRSR aimed at summarizing the effects of information and communication technologies on nursing care. We included 22 systematic reviews that were of different types: mixed studies reviews (n=12); quantitative reviews (n=9); and qualitative reviews (n=1) (Rouleau, Gagnon, Côté, Payne-Gagnon, Hudson, & Dubois, 2017). We used the AMSTAR [START_REF] Shea | Development of AMSTAR: a measurement tool to assess the methodological quality of systematic reviews[END_REF][START_REF] Shea | AMSTAR is a reliable and valid measurement tool to assess the methodological quality of systematic reviews[END_REF] to assess the methodological quality of systematic qualitative, quantitative, and mixed studies reviews, even though the AMSTAR was only designed for systematic reviews including randomized controlled trials. At that time, no other tool was designed to assess the methodological quality of systematic qualitative and mixed studies reviews. Consequently, some criteria of the AMSTAR did not fit the specificities of systematic qualitative and mixed studies reviews. Thus, we slightly adapted some of these criteria. However, this process was still disadvantageous for systematic qualitative and mixed studies reviews as they started with a lower score (i.e., low methodological quality). The systematic reviews with the highest methodological quality, represented by an AMSTAR score of 9 or higher out of 11, were mostly the systematic quantitative reviews (except one mixed studies review).
In another SRSR, we used two tools to assess the methodological quality and risk of bias in systematic quantitative and mixed studies reviews: the ROBIS (Whiting et al., 2016a) and the AMSTAR 2 [START_REF] Shea | AMSTAR 2: A critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both[END_REF]. The AMSTAR 2 and the ROBIS were applied to 22
SRs, including 11 systematic quantitative reviews and 11 systematic mixed studies reviews [START_REF] Rouleau | Effects of E-Learning in a Continuing Education Context on Nursing Care: Systematic Review of Systematic Qualitative, Quantitative, and Mixed-Studies Reviews[END_REF]. Out of 22 SRs, a total of 9 (41%) were at low risk of bias; 8 (36%)
were at high risk of bias; and 5 (23%) had an unclear risk of bias. Again, all the systematic mixed studies reviews assessed with ROBIS had an unclear or a high risk of bias and 10/11 scored a critically low or low level of confidence with AMSTAR 2. By the time we published this paper, the MMSR tool had been published (Jimenez et al., 2018a). This tool is designed to critically appraise systematic reviews that integrate quantitative and
qualitative evidence to answer various questions regarding the effectiveness of development interventions (Jimenez et al., 2018a). The descriptions of ROBIS, AMSTAR, AMSTAR 2 and MMSR are presented in Textbox 1 and their criteria are described in Table 2.
Here is the summary of what we found in our SRSRs: 1) In both of them, half of the systematic reviews included were mixed studies reviews (and one was a qualitative review); 2) This led us to suggest a new type of SRSR, namely the systematic review of systematic quantitative, qualitative and mixed studies reviews; 3) Given this novel type of SRSR, there is a need for guidance on how to assess the methodological quality of qualitative and mixed studies reviews, such guidance being still embryonic (especially for qualitative reviews) compared to quantitative reviews; 4) Some criteria pertaining to the AMSTAR, AMSTAR 2 and ROBIS were inadequate for appraising the quality of systematic qualitative and mixed studies reviews and demonstrated flaws in application.
Indeed, the criteria underpinning these tools come with their own set of epistemological and methodological assumptions and reasoning that are meaningful for systematic reviews aligned with those positions. For example, ROBIS and AMSTAR 2 tools include a criterion in which review authors make a judgment on whether appropriate methods were used for statistical combination of results if meta-analysis was performed. This criterion is well suited for quantitative reviews but is not adapted to fit with the specificities of qualitative and mixed studies reviews.
Textbox 1. Description of critical appraisal tools
The ROBIS contains 21 signaling questions divided into the following three phases: i) the assessment of relevance (optional); ii) the identification of concerns with the review process, in which bias can be introduced in four domains: study eligibility criteria; identification and selection of studies; data collection and study appraisal; and synthesis and findings; and iii) overall judgment about risk.
The AMSTAR is an 11-item tool used to assess the methodological quality of quantitative reviews using mainly randomized controlled trials (RCT) designs.
The AMSTAR 2 is a 16-item instrument that provides detailed and comprehensive assessment of SRs that include randomised or non-randomised studies of healthcare interventions.
The MMSR tool has been developed to "categorise and critically appraise systematic reviews that incorporate quantitative and qualitative evidence to answer different questions about the effectiveness of development interventions (Jimenez et al., 2018a p. 411)".
Comparison of criteria pertaining to three critical appraisal tools and their applicability for appraising qualitative and mixed studies reviews
We are now entering the main argument of our paper (i.e. the second aim of the paper). The contribution of this methodological discussion paper is to compare three critical appraisal tools and their criteria to show the gaps when applying them to qualitative and mixed studies reviews; and to make suggestions that would allow adapting those criteria to make them usable for qualitative and mixed studies reviews.
This approach was undertaken in three steps: 1) listing and regrouping all the criteria pertaining to AMSTAR 2, ROBIS, and MMSR, and classifying them according to the eight main steps of conducting a systematic review [START_REF] Becker | Chapter 22: Overviews of reviews[END_REF]Liberati, 2009); 2)
examining each criterion and making a judgment about their applicability and limitations for appraising the quality of systematic qualitative and mixed studies reviews; 3) making preliminary recommendations and adding commentaries to adapt some criteria so as to make them potentially applicable for systematic qualitative and mixed studies reviews (see Table 2). At the first step, the classification of criteria in main steps of a systematic review reflects our interest in providing guidance for authors of SRSRs who have to perform the assessment of methodological quality at the secondary level of research, and not at the tertiary one.
We discussed issues around the approach of comparing the tools and their criteria until consensus had been reached within our group, composed of qualitative, quantitative, and mixed methods researchers (mentors and trainees) who have a background in nursing, occupational therapy, public health, community health, and family medicine. Our assumptions were as follows: some criteria and their corresponding bias (e.g., unambiguousness of eligibility criteria) described in existing tools, such as ROBIS, pertain
to "generic" steps of conducting SRs (e.g., defining the review question and eligibility criteria), regardless of whether they are qualitative, quantitative, or mixed studies reviews.
In this case, the unambiguousness of eligibility criteria is applicable for systematic qualitative and mixed studies reviews.
Insert Table 2 about here
Summary of preliminary observations and recommendations
Most criteria pertaining to steps 0 through 4 (see Table 2) are applicable to qualitative and mixed studies reviews: assessment of relevance, definition of the review question and identification of eligibility criteria; application of extensive and comprehensive search strategies; identification and selection of SRs; and data extraction. However, one criterion retrieved from MMSR suggests a multicomponent review which is the mix of methods [START_REF] Gough | Clarifying differences between reviews within evidence ecosystems[END_REF] that is operationalized in the MMSR tool for answering different subquestions: "specification of a separate systematic review question for each review component such as the intervention design, the implementation processes, the participants, and the intervention/program effects (Jimenez et al., 2018a)." This criterion is then partially applicable to all types of systematic reviews depending on the review questions, and subsequently, methods and corresponding data that would or would not help explain the effectiveness of interventions/programs. Some criteria from AMSTAR 2 and the ROBIS contain vocabulary that is not adapted for qualitative and mixed studies reviews, such as pre-defined analysis, risk of bias, heterogeneity, meta-analysis, and publication bias (see the criteria pertaining to step 5, step 6, and step 8 in Table 2). We recommend renaming/adapting existing criteria to fit with the epistemological and methodological tradition of systematic qualitative and mixed studies reviews so as to include all types of systematic reviews. For example, the criterion "Justification of meta-analysis" could be renamed "Justification of the synthesis approach for the combination of results." The MMSR tool also includes criteria for integrating qualitative and quantitative evidence (see Step 7 and corresponding criteria in Table 2), which are applicable for authors interested in answering different systematic review questions about the effectiveness of programs using a logic model.
Contribution to the Field of Mixed Methods Research Methodology
Over the past few years, researchers have shown interest in conducting SRSRs. However, existing methodological guidance is predominant for conducting, interpreting, and reporting systematic reviews of systematic quantitative reviews [START_REF] Hunt | An introduction to overviews of reviews: Planning a relevant research question and objective for an overview[END_REF]Lunny et al., 2017Lunny et al., , 2018;;[START_REF] Pollock | Selecting and implementing overview methods: Implications from five exemplar overviews[END_REF][START_REF] Pollock | What guidance is available for researchers conducting overviews of reviews of healthcare interventions? A scoping review and qualitative metasummary[END_REF][START_REF] Pollock | Chapter V: Overviews of reviews[END_REF]. Tools to assess methodological quality have been developed for quantitative reviews included in a SRSRs.
This guidance is still lacking and is imperative for systematic reviews of systematic qualitative reviews (Typology 1, Table 1) and for systematic reviews including systematic quantitative, qualitative, and/or mixed studies reviews (Typology 3, Table 1), thereby highlighting the contribution of our paper to the field of mixed methods research methodology in identifying these gaps and needs for further work. Such type-3 SRSRs would be challenging to conduct, interpret, and report considering the combination of qualitative and quantitative findings involved. The step of synthesizing, presenting, and summarizing the qualitative and quantitative findings of systematic reviews would need special consideration. We strongly recommend building this reflection based on previous works [START_REF] Frantzen | Meta-integration for synthesizing data in a systematic mixed studies review: Insights from research on autism spectrum disorder[END_REF][START_REF] Hong | Convergent and sequential synthesis designs: Implications for conducting and reporting systematic reviews of qualitative and quantitative evidence[END_REF]Hong & Pluye, 2018a;[START_REF] Sandelowski | Mapping the Mixed Methods-Mixed Research Synthesis Terrain[END_REF] related to systematic mixed studies reviews in order, for example, to adapt the synthesis methods at the tertiary level of research. In other words, we have something to learn from works that have been done at the secondary level of research in combining various types of primary studies (quantitative, qualitative and mixed methods); to expand the field at the tertiary level of research, i.e. systematic reviews combining the findings of quantitative, qualitative and mixed studies reviews.
Our methodological discussion paper is original and contributes to this field for the following two reasons: First, it provides a typology centered on SRSRs including the combination of various types of systematic reviews and their corresponding mixed types of evidence. Second, it enables an analysis of common and distinctive criteria of three critical appraisal tools by making a judgment about their potential applicability to systematic qualitative and mixed studies reviews.
Our paper contributes to one of the three dimensions of quality: the methodological quality of systematic reviews included in a SRSRs, specifically of systematic quantitative, qualitative, and mixed studies reviews. The other dimensions of quality related to SRSRs (Hong & Pluye, 2018a), i.e., the reporting and conceptual dimensions, could be the subject of further research. For example, how should we report systematic reviews of systematic qualitative, quantitative, and mixed studies reviews? Based on our past experience, it is very challenging to evaluate and compare the quality of systematic reviews from different epistemological and methodological traditions. To do so clear guidance is needed in two ways. There is a need to develop or adapt a critical appraisal tool to assess the methodological quality of systematic qualitative reviews. There is also a need to explore whether the criteria of the MMSR tool (Jimenez et al., 2018a, b) could be expanded to answer research questions other than those pertaining to impacts and program evaluation.
This paper could be useful to methodological support organizations and any other research teams interested in this novel type of tertiary-level research, i.e. systematic reviews of systematic qualitative, quantitative, and mixed studies reviews.
Other authors are invited to continue the reflection so as to eventually adapt or develop a unified/integrated tool that could enable the assessment of systematic qualitative, quantitative, and mixed studies reviews on a common ground. This tool could include a set of criteria named in a generic way (e.g., pre-published protocol, justification of synthesis method) that could be applicable regardless of the types of systematic reviews in question.
A guidance document could be made available for authors of SRSRs to explain how to apply and interpret the methodological quality criteria based on the specificities of each type of systematic review, as Whiting et al. (2016b) provided to facilitate the use of the ROBIS tool. Considering that there are existing and available tools, we should use them and adapt the criteria and their rating guidance as needed.
Limitations and opportunities
In this paper, we focused on a typology of SRSRs that include various types of systematic reviews, themselves including various empirical studies with quantitative and qualitative data. We used a "classical/traditional" way of categorizing the various levels of research (Figure 1), i.e. by types of studies: empirical studies, systematic reviews, and SRSRs.
However, we considered that systematic reviews of systematic quantitative and qualitative reviews must include systematic mixed studies reviews (Table 1 and Figure 1) in order to extract and synthesize, respectively, quantitative and qualitative findings from the second level of research (systematic reviews). We are aware that recent developments suggest that we should consider not only the types of reviews/SRSRs, but also the broader context of the evidence ecosystem in which the reviews are produced and used, including a consideration of key dimensions as mentioned earlier (e.g., aims, methodologies, and structure of systematic reviews /SRSRs) [START_REF] Gough | Clarifying differences between reviews within evidence ecosystems[END_REF]Gough, Thomas, et al., 2012).
These authors suggest that "for all of the many types of review question and types of systematic reviews, there can be many levels of evidence standards for making different evidence claims based on those reviews (Gough et al., 2019, p.7)." Indeed, the quality and relevance of evidence claims can be assessed within a "dimensions of difference framework": 1) the SRSR method used in the selection and synthesis of the research evidence; 2) systematic reviews included in an SRSR; and 3) evidence produced in the SRSRs [START_REF] Gough | Evidence Standards: A dimensions of difference framework for appraising justifiable evidence claims[END_REF]. We acknowledge that this methodological discussion paper illustrates only a small portion of the complexity of SRSRs that are embedded in a dynamic and changing evidence ecosystem. Otherwise, it has covered only empirical evidence (i.e.
synthesizing research evidence from systematic reviews in SRSRs) as part of the typology and the methodological quality while many other ways of knowing exist (such as theoretical knowing and administrative and individual participant data) [START_REF] Gough | Clarifying differences between reviews within evidence ecosystems[END_REF][START_REF] Nutley | What counts as good evidence? Alliance for Useful Evidence[END_REF].
Putting together the criteria deriving from three existing critical appraisal tools was a first attempt to provide an initial structure in order to reflect on their applicability to appraise quality in systematic qualitative and mixed studies reviews. What could be done in further work is to use the template suggested in Tables 2 and 3 and to systematically analyze the systematic review variations included in an SRSR based on these key dimensions: aims or question, approaches (e.g., ontological and theoretical assumptions), methodologies, structure, and components of the systematic reviews ([START_REF] Gough | Clarifying differences between reviews within evidence ecosystems[END_REF]; Gough, Thomas, et al., 2012). This exploration should be done at the systematic review level while assessing the methodological quality of included systematic reviews for authors of SRSRs.
There are also opportunities for further work in developing reporting guidelines for systematic reviews of systematic qualitative, quantitative and mixed studies reviews by considering existing published checklists, as mentioned previously (e.g. ENTREQ, eMERGE). We also recommend expanding the work on methodological quality to other fields outside health care (e.g. the social sciences).
Conclusion
This paper has emphasized the importance of using a descriptive and simple typology for eliciting the different types of systematic reviews included in SRSRs. Having a clear and consistent typology is the foundation to extend a wider application of SRSRs, to address broad research synthesis questions, and to assess methodological quality. We have argued that researchers need to consider the combination of systematic qualitative, quantitative, and mixed studies reviews and the integration of qualitative and quantitative evidence for planning, conducting, and reporting SRSRs. For example, we considered that the effects of information and communication technologies on nursing care can be studied through quantitative and qualitative lens (Rouleau, Gagnon, Côté, Payne-Gagnon, Hudson, & Dubois, 2017). These effects may be perceived by nurses as support for their practice and provision of nursing care. Depending on the scope of SRSRs and the question(s) of interest, we have also argued that integrating many types of systematic reviews and a combination of evidence can help increase the understanding of a given phenomenon and provide a comprehensive answer to a complex review question. Finally, our work can help advance the field of assessing the methodological quality of systematic qualitative, quantitative, and mixed studies reviews on a common ground. We have no doubt that the development and validation of a consolidated critical appraisal tool for authors of SRSRs will be of great value.
Declaration of Conflicting Interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

1) Systematic review of systematic qualitative reviews
The use of systematic and explicit methods to identify, select, and critically appraise relevant systematic qualitative and mixed studies reviews and to collect and synthesize qualitative findings pertaining to these systematic reviews. Some authors also used the terms "Meta-review of systematic reviews of qualitative studies" (May et al., 2016) and "Meta-Review of Qualitative Systematic Reviews" [START_REF] Pearce | Experiences of Self-Management Support Following a Stroke: A Meta-Review of Qualitative Systematic Reviews[END_REF]. Toye et al. (2017) refer to "mega-ethnography", representing a method of synthesis rather than a type of SRSR.
2) Systematic review of systematic quantitative reviews
The use of systematic and explicit methods to identify, select, and critically appraise relevant systematic quantitative and mixed studies reviews and to collect and synthesize quantitative findings pertaining to these systematic reviews.
3) Systematic review of systematic quantitative, qualitative, and mixed studies reviews
The use of systematic and explicit methods to identify, select, and critically appraise relevant systematic quantitative, qualitative, and/or mixed studies reviews and to collect and analyse quantitative and qualitative findings pertaining to these systematic reviews.
Examples of this type of SRSR have been published [START_REF] Rouleau | Effects of E-Learning in a Continuing Education Context on Nursing Care: Systematic Review of Systematic Qualitative, Quantitative, and Mixed-Studies Reviews[END_REF][START_REF] Rouleau | Effects of e-learning in a continuing education context on nursing care: A review of systematic qualitative, quantitative and mixed studies reviews (protocol)[END_REF].
Note:
a The typology is partially informed by [START_REF] Hong | Convergent and sequential synthesis designs: Implications for conducting and reporting systematic reviews of qualitative and quantitative evidence[END_REF]. b These definitions have in part been adapted from [START_REF] Booth | Systematic Approaches to a Successful Literature Review[END_REF]: The use of systematic and explicit methods to identify, select, and critically appraise relevant systematic reviews and to collect and synthesize findings pertaining to these systematic reviews.

Assessment of risk of bias/methodological quality by two independent reviewers
- We recommend a clear and consistent use of terms.
o Bias is commonly understood to be a concept drawn from the quantitative research paradigm and to be incompatible with the philosophical underpinnings of qualitative enquiry (Jimenez et al., 2018a).
o For example, in the MMSR tool, the term "risk of bias" is used interchangeably both for systematic QUANT (criterion B.2) and QUAL reviews (criterion C.4), along with the term "quality or critical appraisal".
Synthesizing and presenting findings
Pre-defined analyses are reported (e.g. in a protocol) or departures are explained
- We recommend renaming this criterion to make it more neutral and inclusive for all types of SRs: "Planned analyses are reported (e.g., in a protocol) and differences between planned and performed analyses explained."
o The planning of pre-defined analyses or departures relies on a quantitative tradition. The word "departure" underlines that deviations from a protocol are sources of bias or errors, introduced by reviewers during the selection of analyses and analysis methods.
o As currently named, this criterion is appropriate for SRs of RCTs with meta-analysis but not for systematic QUAL reviews such as meta-ethnography.
Heterogeneity addressed in the synthesis
- We recommend adapting the term "heterogeneity" when referring to systematic QUAL reviews in order to fit with this particular epistemological and methodological tradition.
o For example, [START_REF] Pearce | Experiences of Self-Management Support Following a Stroke: A Meta-Review of Qualitative Systematic Reviews[END_REF] suggests this question: Do authors discuss convergence and divergence within the QUAL primary study findings?
Justification of meta-analysis (if applicable) for the combination of results
- We recommend renaming this criterion retrieved from the three tools to make it more neutral and inclusive for all types of SRs: "Justification of the synthesis approach for the combination of results."
o Meta-analysis is a synthesis approach used in QUANT reviews. In QUAL reviews, other approaches are used, such as meta-synthesis. In MSRs, convergent and sequential synthesis designs are used (Hong & Pluye, 2018a).
Robustness of findings with funnel plot or sensitivity analyses
- We recommend adapting the criterion to match the vocabulary and tradition of systematic QUAL reviews and appropriate methods to ensure the "robustness" of findings.
o In systematic QUAL reviews, other terms can be used to refer to the robustness of findings, such as credibility, congruity, relevance, adequacy of data, dependability, and trustworthiness (e.g. see Hong & Pluye, 2018a; [START_REF] Lewin | Using Qualitative Evidence in Decision Making for Health and Social Interventions: An Approach to Assess Confidence in Findings from Qualitative Evidence Syntheses (GRADE-CERQual)[END_REF]; [START_REF] Munn | Establishing confidence in the output of qualitative research synthesis: The ConQual approach[END_REF]).
Integrating qualitative and quantitative evidence
Use of a program theory and/or logic model
Incorporation of qualitative evidence in SR design
Analysis of intermediate and endpoint outcomes along causal chain
Use of qualitative evidence in causal chain analysis
- We recommend adapting these criteria to take into account different synthesis designs (i.e. sequential exploratory, sequential explanatory, convergent) [START_REF] Pluye | Opening-up the definition of systematic literature review: The plurality of worldviews, methodologies and methods for reviews and syntheses[END_REF] and research questions outside of the program evaluation field.
- The way these criteria are formulated is applicable for authors interested in answering different SR questions about the effectiveness of programs/interventions using a program theory/logic model and when a sequential explanatory design is used (i.e. results of the quantitative synthesis inform the qualitative synthesis).

Incorporation of qualitative evidence in other aspects of the analysis
Integration of quantitative and qualitative evidence to form conclusions and suggest implications
- These criteria are applicable for MSRs. However, they could be reformulated to also consider the incorporation of quantitative evidence, because a mixed studies review can also start with collecting qualitative evidence and then incorporate quantitative evidence (sequential exploratory).
Interpreting findings and drawing conclusions
Impact of heterogeneity on the results of SRs -As suggested earlier, the term "heterogeneity" needs to be adapted when referring to systematic QUAL reviews.
Investigation of publication bias and discussion of its likely impact on results
- We recommend developing and/or adapting specific guidance to make this criterion applicable for systematic QUAL reviews and MSRs.
o The guidance provided in ROBIS and in AMSTAR is formulated under a QUANT lens.
o The examination of publication bias and its impact is a practice mainly seen in the QUANT tradition. In health sciences, QUAL research and mixed methods are often rejected for publication given their perceived "low priority" (e.g., see [START_REF] Greenhalgh | An open letter to The BMJ editors on qualitative research[END_REF]).
o The impact of publication bias for systematic QUAL reviews and MSRs can be even higher, compared to systematic QUANT reviews. Considering the hypothesis that few QUAL studies are included in a SR, missing one or more of these QUAL studies may seriously undermine the trustworthiness of the systematic QUAL reviews or MSRs due to unpublished papers that would contain meaningful results.
Note: QUAL = qualitative; QUANT = quantitative; MSRs = mixed studies reviews; RCTs = randomized controlled trials; SRs = systematic reviews.
Figure 1. Different Levels of Research, and Typology related to Systematic Reviews of Systematic Reviews
Table 1. Three types of Systematic Reviews of Systematic Reviews
Liberati, A. (2009). The PRISMA Statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: Explanation and elaboration. PLOS Medicine, 6(7), e1000100. https://doi.org/10.1371/journal.pmed.1000100
Lunny, C., Brennan, S. E., McDonald, S., & McKenzie, J. E. (2016). Evidence map of studies evaluating methods for conducting, interpreting and reporting overviews of systematic reviews of interventions: Rationale and design. Systematic Reviews, 5. https://doi.org/10.1186/s13643-015-0178-0
Lunny, C., Brennan, S. E., McDonald, S., & McKenzie, J. E. (2017). Toward a comprehensive evidence map of overview of systematic review methods: Paper 1-purpose, eligibility, search and data extraction. Systematic Reviews, 6. https://doi.org/10.1186/s13643-017-0617-1
Lunny, C., Brennan, S. E., McDonald, S., & McKenzie, J. E. (2018). Toward a comprehensive evidence map of overview of systematic review methods: Paper 2-risk of bias assessment; synthesis, presentation and summary of the findings; and assessment of the certainty of the evidence. Systematic Reviews, 7(1), 159. https://doi.org/10.1186/s13643-018-0784-8
May, C. R., Cummings, A., Myall, M., Harvey, J., Pope, C., Griffiths, P., Roderick, P., Arber, M., Boehmer, K., Mair, F. S., & Richardson, A. (2016). Experiences of long-term life-limiting
Table 2. Perceived Applicability of Critical Appraisal Tools to Assess Methodological Quality of Systematic Qualitative and Mixed Studies Reviews
Criteria included in 8 steps of a SR a | ROBIS criteria (Whiting et al., 2016a, b) | AMSTAR 2 criteria (Shea et al., 2017) | MMSR criteria (Jimenez et al., 2018a, b) | Is the criterion perceived as applicable for systematic QUAL reviews? | Is the criterion perceived as applicable for systematic MSRs?

0. Assessing the relevance
Does the question addressed by the SR match the SRSRs question (e.g. for the PICO or the SPICE)? | Phase 1 | No | No | Yes | Yes

1. Defining the SR question and eligibility criteria for including SRs
Pre-published protocol | 1.1 | 2 | A1 | Yes | Yes
Definition of the SR questions and eligibility criteria | 1.1 | 1 | A2, A3, A4.1 | Yes | Yes
Appropriateness of eligibility criteria for SR question | 1.2 | No | A3, A4.1 | Yes | Yes
Specification of a separate SR question for each review component, including the intervention design, the implementation processes, the participants, and the intervention/program effects | No | No | A2 | Partially yes | Partially yes
Unambiguousness of eligibility criteria | 1.3 | No | No | Yes | Yes
Appropriateness of all restrictions in eligibility criteria based on SR characteristics | 1.4 | 3 | A3, A4.2 | Yes | Yes
Appropriateness of any restrictions in eligibility criteria based on sources of information | 1.5 | No | A4.2, A5 | Yes | Yes

2. Applying extensive and comprehensive search strategies using a variety of information sources
Comprehensiveness of the search strategies | 2.1, 2.3 | 4 | A4.2 | Yes | Yes
Appropriateness of restrictions based on date, publication format, or language | 2.4, 2.5 | 4 | A4.2 and A.5 | Yes | Yes

3. Identifying potentially relevant SRs and selecting the relevant ones
Screening of title/abstracts and/or full texts by two independent reviewers | 3.1 | 5 | A6 | Yes | Yes
List of all included SRs | No | No | A6 | Yes | Yes
List of all excluded SRs as well as their justifications | No | 7 | A6 | Yes | Yes

4. Extracting data
Data extraction performed by two independent reviewers | 3.1 | 6 | B1 | Yes | Yes
Use of methods to address dependency in findings at the between study level and within individual studies (e.g. multiple outcomes reported) | 3.2 | No | A7 | Yes | Yes
Use of all relevant SR results in the synthesis | 3.3 | No | D4, D5, D6 | Yes | Yes

5. Collecting and presenting data on risk of bias/methodological quality of pertinent individual studies within included SRs
Use of an appropriate technique and/or appropriate criteria by authors of SRs for assessing the risk of bias/methodological quality of pertinent individual studies within included SRs | 3.4 | 9 | B.2, C.4 | Yes | Yes
Assessment of risk of bias/methodological quality by two reviewers | 3.5 | No | B.2, C.4 | Yes | Yes

6. Synthesizing and presenting findings
Description and presentation of detailed characteristics of SRs | 3.2 | 8 | B.1, C.2 | Yes | Yes
Inclusion of all studies in the synthesis | 4.1 | No | No | Yes | Yes
Pre-defined analyses are reported (e.g. in a protocol) or departures are explained | 4.2 | No | No | Partially yes | Partially yes
Appropriateness and clarity of synthesis approach | 4.3 | No | B.3, C.5 | Yes | Yes
Heterogeneity addressed in the synthesis | 4.4 | 11, 14 | B.4 | No | Partially yes
Justification of meta-analysis (if applicable) for the combination of results | 4.5 | 11, 12 | B.4, B.5 | No | Partially yes
Robustness of findings with funnel plot or sensitivity analyses | 4.5 | 15 | B.7 | No | Partially yes

7. Integrating qualitative and quantitative evidence
Use of a program theory and/or logic model | No | No | D.1 | Partially yes | Partially yes
Table 3. Recommendations to Adapt Criteria of Critical Appraisal Tools in Assessing Methodological Quality of Systematic Qualitative and Mixed Studies Reviews
Criteria included in steps of a SR a | Recommendations

1. Defining the SR question and eligibility criteria for including SRs

Specification of a separate SR question for each review component, including the intervention design, the implementation processes, the participants, and the intervention/program effects
- If it is a multicomponent review (Gough et al., 2019) that includes broad questions and a combination of methods and data making it possible to explain the effectiveness of interventions/programs, then this criterion can be applicable for all types of SRs depending on the review questions. If the SR question(s) fall outside the program evaluation field, this criterion would need to be formulated in a more inclusive way so that other SR questions can fit in.
5. Collecting and presenting data on risk of bias/methodological quality of pertinent individual studies within included SRs

Use of an appropriate technique and/or appropriate criteria by authors of SRs for assessing the risk of bias/methodological quality of pertinent individual studies within included SRs
- We suggest avoiding the exclusive use of the term "risk of bias" and referring to "methodological quality" in order to avoid disciplinary or worldview jargon.
Smith, V., Devane, D., Begley, C. M., & Clarke, M. (2011). Methodology in conducting a systematic review of systematic reviews of healthcare interventions. BMC Medical Research Methodology, 11, 15. https://doi.org/10.1186/1471-2288-11-15
The EQUATOR Network. (2019).
a Authors of systematic reviews of systematic reviews assess the methodological quality and risks of bias in included SRs. We could also assess risks of bias/methodological quality and other dimensions of quality, such as reporting, at a systematic review of systematic reviews level, but this is beyond the scope of this methodological discussion paper.
00410303 | en | ["sdv.bbm"] | 2024/03/04 16:41:20 | 2009 | https://inserm.hal.science/inserm-00410303/file/Marty_et_al_2009_fichier_auteur.pdf | Isabelle Marty
email: [email protected]
Julien Fauré
Anne Fourest-Lieuvin
Stéphane Vassilopoulos
Sarah Oddoux
Julie Brocard
Triadin: What possible function 20 years later?
During the last 20 years, the identification of triadin function in cardiac and skeletal muscle has been the focus of numerous studies. First thought to be the missing link between the ryanodine receptor and the dihydropyridine receptor, and responsible for skeletal-type excitation-contraction coupling, the current hypothesis on triadin function has slowly evolved, and triadin is now envisaged as a regulator of calcium release, both in cardiac and skeletal muscle. Nevertheless, none of the experiments performed up to now has given a clear-cut view of what triadin really does in muscle. The problem became more complex with the identification of multiple triadin isoforms, possibly having multiple functions. Using a different approach from what has been done previously, we obtained new clues about the function of triadin. Our data point to a possible involvement of triadin in reticulum structure, in relation with the microtubule network.
Introduction
Excitation-contraction (EC) coupling, the process which results in muscle contraction after electrical stimulation, is performed by a macromolecular protein complex. The core of this complex is composed of two calcium channels, the dihydropyridines receptor (DHPR) and the ryanodine receptor (RyR). Around these two channels are organized a number of other proteins (calsequestrin, triadin, junctin, …), and an efficient muscle contraction relies on the synchronous work of all these proteins. One peculiar feature of this complex is that it is anchored in two different membranes: the T-tubule membrane, an invagination of the plasma membrane into the cytoplasm, that contains the DHPR, and the sarcoplasmic reticulum terminal cisternae that contains the RyR. In the skeletal muscle, it has been demonstrated that both channels, even anchored in two different membranes, are physically associated [START_REF] Marty | Biochemical Evidence for a complex involving Dihydropyridine Receptor and Ryanodine Receptor in Triad Junctions of Skeletal Muscle[END_REF]. The function of the calcium release complex is therefore based upon the precise organization of the triad (close apposition of two sarcoplasmic reticulum terminal cisternae with one T-tubule) and the implantation of the complex in this membrane structure. This extremely precise organization has to be maintained in a cell, the skeletal muscle fibre, which undergoes large deformations during each contraction. Triadin has first been identified as a 95kDa protein, specifically localized in the triad of the skeletal muscle, thought to connect DHPR to RyR [START_REF] Kim | Isolation of a terminal cisterna protein which may link the dihydropyridine receptor to the junctional foot protein in skeletal muscle[END_REF]. Since its identification in 1990, the specific function of triadin has been the matter of numerous studies, in cardiac as well as in skeletal muscle, the two muscles in which triadin expression has been demonstrated.
Triadin, a multiprotein family
Triadin was first believed to be a skeletal muscle specific protein, responsible of skeletal muscle EC coupling [START_REF] Caswell | Localization and partial characterization of the oligomeric disulfide-linked molecular weight 95,000 protein (triadin) which binds the ryanodine and dihydropyridine receptors in skeletal muscle triadic vesicles[END_REF]. In 1994, the presence of triadin in cardiac muscle was also demonstrated [START_REF] Peng | Structural diversity of triadin in skeletal muscle and evidence of its existence in heart[END_REF], as a 32kDa protein (theoretical MW). Nowadays, it is clear that the triadin gene can undergo multiple splicing, leading to a number of proteins. At least four isoforms are expressed in rodent skeletal muscles, named Trisk 95 (the 95kDa triadin first identified), Trisk 51, a 51kDa protein, Trisk 49 and Trisk 32, respectively 49kDa and 32kDa [START_REF] Marty | Cloning and characterization of a new isoform of skeletal muscle triadin[END_REF][START_REF] Vassilopoulos | Triadins are not triad-specific proteins: two new skeletal muscle triadins possibly involved in the architecture of sarcoplasmic reticulum[END_REF]. None of these isoforms is specific of a skeletal muscle type, and their pattern of expression is the same in fast or slow twitch muscles. Trisk 51 and Trisk 49 show an apparent molecular weight of 65 kDa in SDS-PAGE, and Trisk 32 migrates as a triplet centred on 37kDa. All the skeletal isoforms are truncated versions of Trisk 95, with a specific C-terminal end of variable length (figure 1). We have developed a number of antibodies, and depending on their epitope localization, in the common (N-terminal end) or in specific regions (C-terminal ends), these antibodies are either reactive on every triadin isoforms, or specific for one isoform (figure 2A). The antibody directed against the N-terminal end recognizes all triadins in all tissues and all species (mammalian) tested up to now, as this part is extremely conserved between all the spliced isoforms. Therefore this antibody allows the quantification of the relative expression level of the different isoforms in a given tissue. On the contrary, the antibodies specific of the Cterminal end of each isoform are only specific of a single isoform, and often specific of one species. Using antibodies against the common N-terminal part, we have observed that in rat skeletal muscle Trisk 95 and Trisk 51 are expressed at similar levels, each representing about 40% of the total triadin amount, Trisk 32 accounting for the remaining 20%. The situation is almost identical in mouse muscle. In adult human skeletal muscle, the major isoform is Trisk 51 (about 60% of all triadins), and Trisk 95 represents the remaining 40%. Trisk 32 is undetectable (figure 2B). The major cardiac triadin is CT1 [START_REF] Kobayashi | Identification of triadin 1 as the predominant triadin isoform expressed in mammalian myocardium[END_REF], but isoforms of higher molecular weight (CT2 and CT3) can be detected in cardiac muscle [START_REF] Guo | Biochemical characterization and molecular cloning of cardiac triadin[END_REF][START_REF] Kobayashi | Identification of triadin 1 as the predominant triadin isoform expressed in mammalian myocardium[END_REF][START_REF] Hong | Molecular cloning and characterization of mouse cardiac triadin isoforms[END_REF]. Trisk 32 is unambiguously identical to CT1, as demonstrated by cDNA cloning and by the reactivity of Trisk 32 specific antibodies with both skeletal and cardiac muscle. 
It is the only triadin isoform expressed in both muscles. The minor cardiac triadins detected with the anti-N-terminal antibody in heart homogenate are not recognized by the skeletal muscle specific antibodies of the same species, and are therefore cardiac muscle specific. As Trisk 32 is the only triadin isoform expressed in both muscles it could be renamed Trisk32/CT1, in order to clarify the nomenclature of the proteins, whereas the other tissue specific triadins should still be named Trisk (TRIadin Skeletal) or CT (Cardiac Triadin) respectively for the skeletal and cardiac specific triadins.
Localization of the triadin isoforms:
The question of the localization of triadin seems trivial, as this protein has been named "triadin" according to the specific triad localization of the first identified isoform, Trisk 95. Nevertheless, considering the identification of multiple isoforms, this question has to be carefully re-examined. Trisk 95 and Trisk 51 localization is restricted to the triad of skeletal muscle where they are associated with RyR. Therefore, these two triadin isoforms are full members of the calcium release complex [START_REF] Marty | Cloning and characterization of a new isoform of skeletal muscle triadin[END_REF][START_REF] Vassilopoulos | Triadins are not triad-specific proteins: two new skeletal muscle triadins possibly involved in the architecture of sarcoplasmic reticulum[END_REF]. The situation is much more complex for Trisk 32, which behaves differently in heart and in skeletal muscle. In heart, it is both colocalized and associated with the cardiac RyR. In skeletal muscle, the protein is mainly in the longitudinal sarcoplasmic reticulum [START_REF] Vassilopoulos | Triadins are not triad-specific proteins: two new skeletal muscle triadins possibly involved in the architecture of sarcoplasmic reticulum[END_REF], and therefore not associated with RyR. Interestingly, we have shown that in the longitudinal sarcoplasmic reticulum, Trisk 32 is associated with the IP3receptor. Nevertheless, a closer examination showed that in fact Trisk 32 is localized in the whole sarcoplamic reticulum, both in the terminal cisternae which are part of the triad and in the longitudinal reticulum which connects two adjacent triads. Immunoprecipitation experiments confirmed that in fact a low amount of RyR1 is associated with Trisk 32. This means that even though the majority of Trisk 32 is not at the triad, the low amount present in the triad is associated with RyR. The localization of the main triadin isoforms in skeletal muscle is schematized in figure 3.
Function of triadin : overexpression or Knock Out/Knock Down experiments:
In order to identify the function of triadin, a number of experiments were performed, based on the modification of triadin expression level, followed by the evaluation of the resulting calcium release by RyR. First, triadin (Trisk 95) was overexpressed in primary culture of rat skeletal myotubes, using adenovirus gene transfer, which resulted in a blocking of depolarization induced calcium release (Smida [START_REF] Rezgui | Triadin (TRISK 95) over-expression blocks excitation-contraction coupling in rat skeletal myotubes[END_REF]. On the other hand, experiments were performed on a calcium release complex completely or partially depleted of triadin. Mutant forms of RyR1 containing deletions in the triadin interacting domains (identified by [START_REF] Lee | Negatively charged amino acids within the intraluminal loop of ryanodine receptor are involved in the interaction with triadin[END_REF] were expressed in dyspedic myotubes, hence leading to suppression of triadin from the calcium release complex. In these myotubes, electrical depolarization was unable to induce calcium release from sarcoplasmic reticulum (Goonasekera et al, 2007), probably because of a reduction in the amplitude and kinetics of the calcium release. Similar results were obtained by reducing triadin expression with siRNA in C2C12 myotubes [START_REF] Wang | Altered stored calcium release in skeletal myotubes deficient of triadin and junctin[END_REF]. Lowering the amount of triadin resulted in a partial inhibition of the depolarization induced calcium release. The total deletion of triadin from the calcium release complex was performed by the production of a triadin KO mouse [START_REF] Shen | Triadins modulate intracellular Ca(2+) homeostasis but are not essential for excitation-contraction coupling in skeletal muscle[END_REF]. These experiments are more difficult to analyze, as compensatory reduction in calsequestrin expression is observed in the muscles of these mice. Nevertheless, in addition to the lower amount of stored calcium, a reduction in the depolarization induced calcium release was observed in myotubes which presented no reduction in calsequestrin. Therefore, the conclusion of all these experiments is that any modification in triadin expression levels, either by overexpression or by knock out/knock down, results in a common effect: reduction in depolarization-induced calcium release. Such apparent conflicting results underline the necessity of developing new approaches to understand triadin function.
Triadin expression in non muscle cells
To get insight into the function of triadin, we decided to use a different approach in order to reveal the intrinsic properties of triadin in a system devoided of its usual muscle partners. We expressed triadin in a non myogenic cell line, the COS-7 cells, and analyzed both the endoplasmic reticulum (ER) structure and the microtubule network, using specific antibodies [START_REF] Fauré | Triadin binding to the C-Terminal luminal loop of the ryanodine receptor is important for skeletal muscle excitation-contraction coupling[END_REF]. We observed that triadin induced drastic modifications of the ER structure, forming rope-like ER structures, associated with a massive reorganization of the microtubule network, the microtubules being bundled at the periphery of the cell. Such modifications were never observed when other ER-homing proteins, RyR for example, were overexpressed. In addition, these bundled microtubules are stable, resisting a depolymerization induced by nocodazole. Similar observations have already been made after overexpression of proteins involved in anchoring the ER to microtubules [START_REF] Vedrenne | Morphogenesis of the endoplasmic reticulum: beyond active membrane expansion[END_REF]. These different experiments thus point to a possible new function of triadin. Triadin could act as a protein connecting the reticulum (or the calcium release complex) to the microtubule network, and could also be involved in the stabilization of microtubules in muscle cells. This function would be a major one, since microtubules are known to depolymerize under high Ca 2+ concentration. If microtubules are indeed involved in triad organization, their depolymerization during each contraction would probably be dramatic. If triadin is involved in this structural function, it is probably not the only protein responsible for this anchoring/stabilization role, because triadin knock out mice muscles do not exhibit massive triad disorganization. Nevertheless, 30% of the triads show an abnormal orientation in the muscle of these triadin KO animals [START_REF] Shen | Triadins modulate intracellular Ca(2+) homeostasis but are not essential for excitation-contraction coupling in skeletal muscle[END_REF], and this proportion is confirmed in a triadin null mouse line we have developed in our team (unpublished data). This amount of misoriented triads was reproducibly observed in two different mouse lines, leading to a strong confirmation of the possible involvement of triadin in maintaining the triad structure. Therefore, these studies in a non muscle cell line have allowed us to unmask a specific intrinsic property of triadin, which is more difficult to visualize in muscle cells, perhaps because of other muscle proteins partially sharing similar function with triadin.
Conclusion
We have observed that triadin expression in COS-7 cells induces drastic modifications in the organisation of the endoplasmic reticulum and microtubules. Therefore our current hypothesis is that triadin could be involved in the sarcoplasmic reticulum structure in skeletal muscle, and could perform a regulatory function on RyR through this structural function. The overexpression as well as the down-regulation of triadin in skeletal muscle cells would result in local sarcoplasmic reticulum disorganization, thereby inducing an uncoupling between RyR and the facing DHPR. Nevertheless, additional studies need to be performed to confirm this hypothesis in skeletal muscle.
Figure 1: The rat skeletal muscle triadin isoforms
Figure 2: A - The rat skeletal muscle isoforms, detected by an antibody against the common region (N-ter) or the specific C-terminal region of each isoform. B - The triadin isoforms expressed in different tissues (mouse heart, human skeletal muscle, mouse skeletal muscle, rat skeletal muscle) detected with an antibody against the common N-terminal part of all triadins, therefore able to react with all the triadins and used to quantify their relative amount. (H: heart; SK: skeletal muscle)
Figure 3: Localization of the main triadin isoforms in skeletal muscle
Acknowledgments:
The work of our team was supported by grants from Association Française contre les Myopathies (AFM) and GIS-Maladies Rares. |
04103186 | en | ["phys", "sdu"] | 2024/03/04 16:41:20 | 2023 | https://hal.science/hal-04103186/file/colour%20dynamics2.pdf | Brian Craig
email: [email protected]
Elementary properties of colour charge and the fine structure constant as revealed by the dynamics of the P-sphere
Keywords: fine structure constant, dyon, Euclidean space, magnetic charge, colour charge
This paper explores the formation of three-dimensional Euclidean space and some of the fundamental characteristics of Coulomb charge and Colour charge. A periodic lattice of sites, with each site being the centre of a ball (P-sphere) possessing charge, angular momentum and tachyon-like mass, in at least P+1 dimensions, is shown to provide a good model for a short-lived 3-dimensional Euclidean space with NaCl-like cubic symmetry. Of particular interest is the derivation of the value for the fine structure constant α, which finds each ball to be a hollow hypersphere possessing the charge of a dyon, containing both magnetic charge (26 units of Dirac monopole charge, g) and Coulomb charge (e/16). The calculated value for α reproduces the measured value (1/137.035999206) to within its accuracy. Either a simple dipole of magnetic charge or a quadrupole of magnetic charge may be a precursor for this lattice. This (NaCl-like) virtual state is proposed to transform to another NaCl lattice where the lattice sites are occupied by dyons with a magnetic charge of a single Dirac monopole unit, paired with Coulomb charge. Half of the sites have a Coulomb charge of magnitude 27e, and half have charge 27⅓e. This state gives rise to interesting types of excitations of magnitude ⅔e and ⅓e. Coulomb charge and the three Colour charges are the four dimensions of a 3-sphere.
Introduction
The aim of this paper is to present qualities of the state obtained from the derivation of the value for the fine-structure constant, in the context of the initial formation of a three-dimensional space from an apparent void possessing a total energy of zero. A subsequent phase change leads to the release of free energy in the form of excitations which are assumed to possess qualities of Coulomb and Colour charge. This paper builds on two earlier papers [1,2] which introduced the basic concepts, and recent work has found a key relevant state. The state is a lattice of dyon charge, possessing five additional compact dimensions. A model for colour charge and coloured bosons is deduced for this (3+5)-dimensional space, including the Coulomb values of -1/3 and +2/3. The five compacted dimensions are assumed to relate to one for Coulomb charge (such as the electron -e), one for magnetic charge (g) and three colour charges (A, B and C). Euclidean space is described as a lattice of touching P-spheres, which are hollow in (P-1) dimensions. The angular momentum within these spheres is critical. The magnitude of charge is dictated by the Kaluza-Klein approach (Lorentz force) relating charge with speed in a compact dimension.
Rather than just the three-dimensional lattice appearing suddenly at time zero, this paper also looks at the scenario in which a finite-sized multipole, such as a dipole, quadrupole or octupole, makes the initial appearance and the lattice is built from a replication of this multipole, as an effective unit cell. In particular, the octupole of a square with two positive charges on one diagonal and two negative charges on the other diagonal can be replicated (many times) to first form the cubic NaCl-like lattice. Subsequent phase changes occur until the magnetic charge is a single unit (Dirac monopole, g). The approach needs to consider Lorentz symmetry of the observable Universe, and yet introduces the notion of crystalline symmetries for space as its own entity. To proceed in this task, we introduce an order parameter φ, where φ = 0 is Lorentz symmetry and crystalline symmetries can occur for φ ≠ 0. To succeed, we need all free particles, those which can exist in isolation, to have no measurable interaction with the proposed crystalline structure of space over the range of energies that has been used by experiments to date. Effective Lorentz symmetry is aided by the underlying structure continuously oscillating between similar, but different, structures.
The approach is based on a variation of the tachyon field method, where the sombrero is turned upside down by reversing the sign of the coefficients as well as including the sixth order, to give

V(φ) = -λ_0 + λ_1φ² - λ_2φ⁴ + λ_3φ⁶

where λ_0, λ_1, λ_2, λ_3 are all positive, and φ is a scalar parameter. The system has Lorentz symmetry at φ = 0, and this Lorentz invariant system is stable. The global minimum is φ = 0, where for small φ, V(0 ± φ) ≈ -λ_0 + μ_r²φ², and μ_r is a real mass.
However, this Lorentz-symmetric state is not the initial state. Instead, the initial state is assumed to have zero energy, does not possess Lorentz symmetry, and is a local maximum, where dV(φ)/dφ = 0 at φ = φ_T. This occurs for

φ_T² = (λ_2/3λ_3)[1 - sqrt(1 - 3λ_1λ_3/λ_2²)]

and V(φ_T) = 0 for zero total energy, which fixes λ_0 = λ_1φ_T² - λ_2φ_T⁴ + λ_3φ_T⁶. This value φ_T corresponds to a local maximum, where V(φ_T ± δφ) ≈ -μ_i²δφ², and μ_i is the imaginary tachyon mass, indicating a virtual state possessing zero total energy and a finite life-time t ≈ h/(μ_i c²). These states describe quantised 3D space. Each point is the centre of a charged dyon, possessing both a magnetic charge jg and a Coulomb (electronic) charge ne/j, for the rational fraction n/j. The charge -e corresponds to the electron charge, and the charge g is the corresponding Dirac magnetic monopole charge. Each dyon has a mass containing an imaginary part.
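As a check on the stationary-point expression above, the short SymPy sketch below (an illustration only; it assumes the sextic form of V(φ) as reconstructed here, and the helper symbols l0..l3 simply stand for λ_0..λ_3) solves dV/dφ = 0 for φ² and recovers the two roots (λ_2/3λ_3)[1 ∓ sqrt(1 - 3λ_1λ_3/λ_2²)], the smaller of which corresponds to the local maximum φ_T².

```python
# Symbolic check (SymPy) of the stationary points of the assumed sextic potential
# V(phi) = -l0 + l1*phi**2 - l2*phi**4 + l3*phi**6.
import sympy as sp

phi, l0, l1, l2, l3 = sp.symbols('phi l0 l1 l2 l3', positive=True)
u = sp.Symbol('u')                                   # u stands for phi**2
V = -l0 + l1*phi**2 - l2*phi**4 + l3*phi**6

# Factor out the trivial root at phi = 0, then solve the remaining equation in u = phi**2.
reduced = sp.cancel(sp.diff(V, phi) / (2*phi))       # l1 - 2*l2*phi**2 + 3*l3*phi**4
roots = sp.solve(reduced.subs(phi, sp.sqrt(u)), u)
for root in roots:
    print(sp.simplify(root))   # (l2 -/+ sqrt(l2**2 - 3*l1*l3))/(3*l3)
```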
The key feature is that this Tachyon state T is the state prior to the big bang and is quantised as a crystal lattice.
This state has total energy V(φ_T) = 0, and will not possess Lorentz symmetry as this state is not isotropic. Apparent Lorentz symmetry is achieved for excitations of the Coulomb component of the dyon charge following a phase change. The ground state describing the isotropic crystalline state for 3D space (φ_T) is not a single crystalline structure, but a resonance between two states, differing in Coulomb charge.
Earlier versions of this work placed on the web [1,2] had incorrectly deduced a large value for the magnetic charge for the final Euclidean state. Instead, such states are intermediate states and the final state has much lower values for the magnetic charge. (This preliminary work is not well written, with several typographical errors and clumsy explanations.) Recent work has focussed on tracing the transition from an original multipole through a series of three-dimensional structures. The multipole is the original state, which can be thought of as a small seed crystal that somehow induces replication to form the current three-dimensional structure for space. Attention is given to the formation of colour charge, to bosons that exchange colour charge, and to an explanation for the puzzling Coulomb charge components of magnitude e/3, 2e/3 and e.
Underlying Concepts for the Tachyon State
The virtual tachyon (=T) state
First of all, some simple underlying assumptions are made for the virtual tachyon (=T) state. The fundamental object of space is assumed to be the P-sphere, a spatial object of radius r and possessing P angles. This object has P+1 spatial dimensions and a hypersurface of P spatial dimensions. Three-dimensional space is assumed to be a network of touching P-spheres. A simple model for an extended object is a periodic lattice of touching spheres. The charge of the sphere is assumed to be at the centre of the sphere, and the quantum rigid rotor model is used for the angular momentum of the sphere, where mass is located on the hypersurface. The mass of each lattice point is m=mr+imi. The ratio of mr/mi is a key quality. The tachyon is a state with this ratio much less than unity. A stable state is where the ratio is much higher than unity. The absolute values of mr and mi are not known.
If P=2 then the touching spheres of the three-dimensional lattice are all hollow. If P>2 then the mass is distributed through the three-dimensional sphere and there are compactified dimensions. As the spheres touch in the three dimensional lattice, and if r describes both the radius of the P-sphere (P>2, r 2 =x 2 +y 2 +z 2 +w 2 +..) and the distance from the sphere centre to the touching point (r 2 =xTP 2 +yTP 2 +zTP 2 ), where (xTP,yTP,zTP,wTP,..) are the coordinates of the touching point, then the magnitude of any compactified dimensions (wTP) at the touching point needs to be zero. The model does assume r is the same, and the compactified dimensions are zero at the touching points. Compactified dimensions are relevant as the Kalusa-Klein concepts relate the speed of a mass (of a parton) in a compact dimension to the value of charge (of the parton).
The φ_T lattice has aether-like qualities rather than Lorentz symmetry. Each ball is a (P = N+1)-sphere (S^{N+1}) possessing angular quantum number L [1], radius r, and dyonic charge q = (ne/j, jg), where -e is the electron charge and g is the Dirac magnetic charge, with integers n and j. The Madelung constant (M) of the lattice structure (position vectors r_i) is given by

M = (r_ac/e²) Σ_{j≠i} [e_i e_j / |r_i - r_j|] exp(-|r_i - r_j|/r_o)

where the term r_o⁻¹ = m_i c/ħ is an attenuation factor, ħ is the reduced Planck constant, and c the velocity of light. The distance r_ac = 2r is that between nearest neighbours, possessing Coulomb charge e_i (and magnetic charge g_i) with the same magnitude but opposite sign, and for r_o >> r_ac the value of M is very close to the solid-state value calculated for an infinite crystal. In fact no significant attenuation is found for the proposed state for φ_T, as the solid-state value gave complete agreement. An analogy of this 3-dimensional Euclidean space is a salt crystal, where r_ac is the distance between a cation and an adjacent anion.
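To make the lattice sum concrete, the Python sketch below (an illustration only; the function name screened_madelung is ours, the nearest-neighbour distance is set to 1, the sum is truncated over an expanding cube, and r_o is taken large enough that the screening is negligible) evaluates the alternating-charge sum for a rock-salt arrangement and reproduces the textbook NaCl value of about -1.7476 with the sign convention used here.

```python
# Direct evaluation of the screened Madelung sum for a rock-salt (NaCl) arrangement
# of alternating +/- unit charges, in units of the nearest-neighbour distance r_ac.
import numpy as np

def screened_madelung(n_shells=40, r0=1e6):
    rng = np.arange(-n_shells, n_shells + 1)
    i, j, k = np.meshgrid(rng, rng, rng, indexing='ij')
    dist = np.sqrt(i**2 + j**2 + k**2).astype(float)
    dist[n_shells, n_shells, n_shells] = np.inf           # exclude the reference site
    signs = np.where((i + j + k) % 2 == 0, 1.0, -1.0)     # alternating dyon charges
    return float(np.sum(signs * np.exp(-dist / r0) / dist))

print(screened_madelung())   # ~ -1.747 (cubic truncation converges slowly)
```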
We also consider a small finite-sized multipole, such as a dipole (one positive and one negative), a quadrupole (two positive and negative in a square) or an octupole (four positive and negative in a square); the summation is then over a small number of sites to produce an effective M. If the value for r_ac is not many orders smaller than r_o, due to all the mass being distributed over a handful of poles, then M may be in need of a small reduction, by the fraction (1 - r_ac/r_o) where r_ac/r_o << 1. The multipole may be the initial structure, acting as a seed for crystal growth.
Fine structure Constant
The value for α is given by the following equation

α = e²/(4πε_0ħc)

which is the Coulomb expression for α for electronic charge, where the (Coulomb) charge of an electron is -e. On the other hand, assuming the charge unit q is instead the Dirac magnetic monopole, possessing charge g, defined as

g = (k/2α) e c

where the Dirac normalisation condition requires k to be an integer, then for k = 1,

1/(4α) = μ_0 g²/(4πħc)

which is the magnetic charge expression for α. The aim of this work is to derive an expression for ħc in terms of Kq², where q is the charge unit and K is a constant, and thereby gain an expression for α.
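As a quick numerical cross-check of the two coupling expressions above (an illustration only, using CODATA values from scipy.constants), g = ec/(2α) evaluates to about 3.3 x 10⁻⁹ A·m for k = 1, and μ_0g²/(4πħc) indeed equals 1/(4α) ≈ 34.26:

```python
# Numerical check, in SI units, of the Coulomb and magnetic-charge forms of the coupling.
from scipy.constants import e, c, hbar, epsilon_0, mu_0, pi, fine_structure

alpha = e**2 / (4 * pi * epsilon_0 * hbar * c)
print(alpha, fine_structure)                 # both ~ 7.2974e-3 = 1/137.036

g = e * c / (2 * alpha)                      # Dirac monopole charge for k = 1, in A*m
print(g)                                     # ~ 3.29e-9
print(mu_0 * g**2 / (4 * pi * hbar * c))     # ~ 34.26
print(1 / (4 * alpha))                       # ~ 34.26
```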
The third type of particle is the dyon, possessing both electronic charge and magnetic charge. The interaction between two dyons with respective charges (e_1, g_1) and (e_2, g_2) has a potential energy of the form [START_REF] Chuu | Path integral quantization of the relativistic dyonium system[END_REF]

V(r) = e_1e_2/(4πε_0 r) + μ_0 g_1g_2/(4π r)

and a magnetic vector potential A(r), for r = (x_1, x_2, x_3), of

A(r) = ħQ (x_1 x̂_2 - x_2 x̂_1)/(|r| x_⊥²)
where x_⊥ = (x_1, x_2, 0) and Q = e_1g_2 - e_2g_1. The parameter Q = 0 when e_1 = -e_2 = q_e e and g_1 = -g_2 = jg, which satisfies the Dirac condition for magnetic charge. The value for the lattice energy E_LE is given by the expression

E_LE = ½ (M/r_ac)[(q_e e)²/(4πε_0) + (μ_0/4π) j²g²] = ½ (M/r_ac) ħcα [q_e² + j²/(4α²)]

where M is the Madelung constant for the lattice array (of positively and negatively charged dyons). Note the dyon charge (q) is the Coulomb charge (q_e e) plus the magnetic charge (jg).
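To illustrate how heavily the magnetic term dominates the per-site lattice energy, the sketch below (an illustration only; the (q_e, j) pairs are arbitrary examples rather than results of this paper, M is set to the rock-salt value -1.7476, and energies are expressed in units of ħc/r_ac) evaluates the bracketed expression above for a few charge assignments:

```python
# Per-site lattice energy of a dyon lattice, in units of hbar*c/r_ac:
#   E_LE = 0.5 * M * (alpha*q_e**2 + j**2/(4*alpha))
# for example Coulomb charges q_e (in units of e) and magnetic charges j (in units of g).
from scipy.constants import fine_structure as alpha

M = -1.7476                                   # rock-salt Madelung constant (negative)

def lattice_energy(q_e, j):
    return 0.5 * M * (alpha * q_e**2 + j**2 / (4 * alpha))

for q_e, j in [(1, 0), (1, 1), (1 / 16, 26), (27, 1)]:
    print(f"q_e = {q_e}, j = {j}: E_LE = {lattice_energy(q_e, j):.4g} (hbar*c/r_ac)")
```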
The Derivation of the expression for the fine structure constant from the Dirac Expression
Each element of the three-dimensional array has the same value of tachyon-like mass-energy (m = m_r + i m_i). A tachyon possesses a non-zero imaginary part of the mass-energy, and such quantum states may have a virtual or transient existence. The lattice energy due to the interaction between these charges, the mass-energy, the rotational energy and the gravitational interaction are the same for each site, and the summation of all these energy components is assumed to be zero. The interactions within the tachyon state are limited to an extended although finite range, as dictated by m_i. The total energy per site is given by the Dirac expression

E = ((pc)² + (m c²)²)^(1/2) + V

where the potential energy is the sum V = E_LE + E_G of the lattice energy E_LE and the gravitational energy E_G. At this stage no assumption is made on the nature of q or K, giving

E_LE = ½ (K/r) q² M

where M is always negative. The Madelung constants calculated for salt crystals [START_REF] Tavernier | Clifford boundary conditions: a simple direct-sum evaluation of Madelung constants[END_REF][START_REF] Zucker | Madelung constants and lattice sums for hexagonal crystals[END_REF] provide good estimates when r_o >> r_ac. Otherwise an effective Madelung constant could be calculated as a convergent finite summation, arising from significant exponential attenuation when r_o and r_ac (= 2r) are of comparable magnitude. For the dyon, Kq² = (e²/4πε_0)[q_e² + (j/2α)²].
Gravitational terms
Newtonian gravity is assumed as the rigid lattice is uniform and all positions are fixed, and the term is calculated over an extended volume encompassing a very large number of lattice positions.
In extended three-dimensional space (3D), the gravitational energy for the transient state having a mass-energy of m is

E_G,PD(m) = -½ ρ_D ∫_0^∞ exp(-x/r_o) u(x) 4πx² dx

where ρ_D is the mass-energy density, with r_o⁻¹ = m_i c/ħ. The mass-energy density is derived from the particle mass-energy m and is given by the expression

ρ_D = D m/r³

where D accounts for the packing of the touching spheres within the lattice and takes different values for the NaCl and CsCl structures. Newtonian gravity uses u(x) = Gm/x, giving

E_G,PD(m) = -2π m² G D r_o²/r³ = -(V_G,PD/r³)(m_r/m_i + i)²

where

V_G,PD = 2πG D ħ²/c²

noting V_G,PD must be positive, and P is the number of uncompacted (spatially extended) dimensions.
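The screened self-energy integral above is elementary; the SymPy sketch below (an illustration only, written for the 3D case with ρ_D = Dm/r³) confirms that it evaluates to -2πGDm²r_o²/r³:

```python
# Symbolic evaluation (SymPy) of the screened Newtonian self-energy integral used above.
import sympy as sp

x, r0, G, m, D, r = sp.symbols('x r0 G m D r', positive=True)
rho_D = D * m / r**3                                   # mass-energy density of the lattice
E_G = -sp.Rational(1, 2) * rho_D * sp.integrate(
    sp.exp(-x / r0) * (G * m / x) * 4 * sp.pi * x**2, (x, 0, sp.oo))
print(sp.simplify(E_G))                                # -2*pi*D*G*m**2*r0**2/r**3
```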
3.2 Energy due to rotation in N+1 sphere.
The spatial quanta are assumed to be an (N+1)-sphere (P = N+1) of radius r = r_ac/2, which implies this model Euclidean space to be a system of touching spheres. The momentum squared p², where there is only angular momentum within an (N+1)-sphere, is given by the expression [START_REF] Frye | Spherical Harmonics in p Dimensions[END_REF]

⟨p²⟩ = ⟨L²⟩/r² = (ħ²/r²) L(L + N)

for a rigid rotor of radius r and angular quantum number L, which is an integer. This is the equation for a rotation about a point, which is an axis of zero dimensions. If the axis has one dimension then we have a hypercylinder with the (N+1)-sphere as the "hyper-circle" in (N+2)-dimensional space. The above equation describes this motion. As a simple example, consider three-dimensional space. A mass rotating in a 1-sphere (circle) around a line (a 1-dimensional axis) in 3-dimensional space traces out a hollow cylinder; this uses the above expression for N = 0. A 2-sphere around a point in 3-dimensional space uses the above expression for N = 1.
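For orientation, the eigenvalue L(L + N) grows with both the angular quantum number and the dimension of the sphere; the trivial tabulation below (an illustration only) prints ⟨L²⟩ in units of ħ² for small L and N:

```python
# <L^2> / hbar^2 = L*(L + N) for a rigid rotor on an (N+1)-sphere.
for N in range(4):                       # N = 0 is a circle, N = 1 an ordinary 2-sphere, ...
    print(f"N = {N}:", [L * (L + N) for L in range(6)])
```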
This will be generalised to an (N+1)-sphere rotating about an axis. The axis may have zero or more dimensions. Such dimensions of the axis will be described as passive. The dimensions of the (N+1)-sphere are described as active. Motion changes within those active dimensions. The above equation describes this rotation.
3.3 The expression for the fine structure constant
The total energy E is assumed to be zero, describing a transient state which requires neither an input nor a release of energy, giving (pc)^2 + (mc^2)^2 = (E_{LE} + E_G(m))^2 which, on substitution, becomes
\frac{(\hbar c)^2}{r^2}\, L(L+N) + (mc^2)^2 = \frac{(KMq^2)^2}{16 r^2} - \frac{Kq^2 M}{2r}\,\frac{V_G}{r^3}\left(\frac{m_r}{m_i}+i\right)^2 + \left[\frac{V_G}{r^3}\left(\frac{m_r}{m_i}+i\right)^2\right]^2
The transient state is assumed to have a value for mi that allows ro>>r. The value for mi cannot be zero as we need an imaginary contribution to the above equation, as will be shown below. However mi can have an extremely small but positive value and in such a case the transient state may become very long lived.
Multiplying both sides of the previous expression by r^6/m^2 gives the expression whose imaginary and real parts are grouped in sections 3.3.1 and 3.3.2 below. Then, if \varepsilon is of the order of unity and m_r is of the same order as, or less than, m_i, then m_i is of the order of the Planck mass (2×10^{-8} kg).
The 2-torus, or square lattice, (P = 2) gives a corresponding expression. From the quadratic in V_G the solution is
[V_{G}]_{\pm} = -\frac{Kq^2(-M)\,r^2}{4\left(\frac{m_r^2}{m_i^2}-1\right)}\left[1 \pm \sqrt{1 + \frac{16\,(m_i c^2)^2 r^2}{(Kq^2 M)^2}\left(\frac{m_r^2}{m_i^2}-1\right)}\,\right]
where V_{G,PD} must not be negative, noting that M < 0. If m_r > m_i then the term outside the bracket is negative, which requires the minus sign, as below:
V_{G}(m_r > m_i) = -\frac{Kq^2(-M)\,r^2}{4\left(\frac{m_r^2}{m_i^2}-1\right)}\left[1 - \sqrt{1 + \frac{16\,(m_i c^2)^2 r^2}{(Kq^2 M)^2}\left(\frac{m_r^2}{m_i^2}-1\right)}\,\right]
If m_i > m_r then the term outside the bracket is positive, and the solution becomes
V_{G}^{\pm}(m_r < m_i) = \frac{Kq^2(-M)\,r^2}{4\left(1-\frac{m_r^2}{m_i^2}\right)}\left[1 \pm \sqrt{1 - \frac{16\,(m_i c^2)^2 r^2}{(Kq^2 M)^2}\left(1-\frac{m_r^2}{m_i^2}\right)}\,\right],
which requires \; 1 - \frac{16\,(m_i c^2)^2 r^2}{(Kq^2 M)^2}\left(1-\frac{m_r^2}{m_i^2}\right) > 0.
Now for m_r c^2 \ll Kq^2 we have the following, noting that the term in the square brackets should be extremely small for magnetic charge (j > 0) and that r is extremely small (r \ll r_o). It is important to note that this is the tachyon case, when m_i > m_r, and it allows for short-lived transitional states.
V_{G}(m_r < m_i) \approx 2\, r^2\, Kq^2(-M)\left[\frac{m_i c^2\, r}{Kq^2 M}\right]^2
The non-tachyon case (m_i \ll m_r) is a potentially stable state, and this is a requirement for the Euclidean space. We define the residual δ in section 4 below. The initial multipole of a few spheres could not be conclusively deduced as any one particular structure or charge. For such a small system the tachyon mass may be high, leading to a very small though significant exponential attenuation of the "lattice energy", which is better described as a molecular-like potential energy. The large value for the magnetic charge is not an impediment for this state, and there are only a few spheres in total. The multipole and the extremely small value of r provide some qualitative features of a singularity. Such a multipole may act as a seed for replication and expansion to a crystal. The last sentence is quite speculative and the author regrettably is unable to expand on this.
The most likely candidate for the initial crystal state is the j=26, NaCl structure. A j=13 state is found for a small unfamiliar value of Coulomb charge. However, for reasons discussed in the next four sections, this state does not appear to be the final state. A more detailed discussion of Table 2 is presented in section 8, which includes the interesting series of states for j=1.
5. Excitations within the Rotational Field
The strong (nuclear) interaction is of the order of ħc, which is 137 times greater than that between two Coulomb charges (e) and 34 times less than that between two magnetic charges (g). We need to examine what type of change within the (N+1)-sphere may provide an interaction of that magnitude. We shall examine a shift in the parameter N and a shift in the parameter L.
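As a quick consistency check of these magnitudes (this check is not in the paper; it assumes e^2 = \alpha\hbar c and a Dirac-type magnetic charge g = e/(2\alpha), as implied by the dyon coupling quoted in section 3):
\[
\frac{\hbar c}{e^{2}} = \frac{1}{\alpha} \approx 137, \qquad g^{2} = \frac{e^{2}}{4\alpha^{2}} = \frac{\hbar c}{4\alpha} \approx 34\,\hbar c .
\]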
The equatorial grand-circle of the (N+1)-sphere is an N-sphere. The analogy is that the equator of a ball (2-sphere) of radius R has been reduced to a 1-sphere of the same radius by removing one dimension in the equation of the angular-only Laplacian. In the ball (or planet) case the dimension removed is the north-south axis. Note that this Laplacian also holds for a thin cylinder if the dimension normal to the circle is "passive", in the sense that all angular momentum is around this axis. Rewriting the equation for angular momentum squared,
\langle p^2 \rangle = \frac{1}{r^2}\langle L^2 \rangle = \frac{\hbar^2}{r^2}\, L(L+N)
for an (N+1)-sphere. The difference in rotational energy is
\langle p^2 c^2\rangle_{N}^{1/2} - \langle p^2 c^2\rangle_{N-1}^{1/2} = \frac{\hbar c}{r}\left[\sqrt{L(L+N)} - \sqrt{L(L+N-1)}\right] = \frac{\hbar c}{r}\left[\sqrt{L(L+N)} - \sqrt{L(L+N) - L}\right]
which, for large L, on expanding the square root for very small (1/L), gives
\langle pc\rangle_{N} - \langle pc\rangle_{N-1} \approx \frac{\hbar c}{2r}
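A quick numerical check of this large-L behaviour (not from the paper; the values of L and N below are arbitrary) can be run as follows.

```python
# Hypothetical check that sqrt(L(L+N)) - sqrt(L(L+N-1)) approaches 1/2
# (i.e. an energy of hbar*c / 2r) for large L, and that the combined step
# (N -> N-2, L -> L+1) falls off like 1/L, as stated in the text.
import math

def step_N(L: int, N: int) -> float:
    return math.sqrt(L * (L + N)) - math.sqrt(L * (L + N - 1))

def step_N_minus2_L_plus1(L: int, N: int) -> float:
    return math.sqrt((L + 1) * (L + N - 1)) - math.sqrt(L * (L + N))

for L in (10, 100, 10_000):
    exact = step_N_minus2_L_plus1(L, 3)
    approx = (3 - 1) / (2 * math.sqrt(L * (L + 3)))   # (N-1) / (2 sqrt(L(L+N)))
    print(L, step_N(L, 3), exact, approx)
```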
If, however, L (for the (N+1)-sphere) is increased from L to L+1, the corresponding change in rotational energy similarly decreases as 1/L. Now consider the Coulomb charge q_e on this site. If this charge (at only this site) changes by δq, then the change in lattice energy for just this site (δε) is given by
\delta\varepsilon = \frac{M\,\alpha}{2r}\, q_e\, (\delta q_{\pm 2})
noting that M is negative and that the factor of ½ results from the nearest-neighbour distance being 2r. Now if δq_{±2} has the same sign as q_e then δε is negative (as there is a boost in the ionic (or spherical) charge), and if it has a different sign, then δε is positive.
This suggests that a small increase (or decrease) in rotational energy at a site corresponds to a small boost (or reduction) in the Coulomb charge at this site. In simple classical terms the increase in speed leads to an increase in charge, which is consistent with Kaluza-Klein [START_REF] Th | Zum Unitätsproblem in der Physik Sitzungbar[END_REF][START_REF] Klein | Quanten theorie und fünfdimensionale Relativitätstheorie[END_REF]. Note that although L, which effectively describes the number of cycles (of rotational orbit) per second, has decreased from L to L-1, the additional dimensions have effectively increased the path length of each cycle, and a small increase in kinetic energy is realized. This implies that the rotational energy required to excite from (L, N) to (L+1, N-2) is supplied through an increase of q_e by δq_{-2}, and that the release of rotational energy from the de-excitation from (L, N) to (L-1, N+2) can decrease q_e by δq_{-2}. This gives the following expressions
\frac{N-1}{2\sqrt{L(L+N)}} = -M\alpha\, q_e\, \delta q_{+2} + \delta U_{-2}, \qquad \frac{N+1}{2\sqrt{(L-1)(L+N+1)}} = +M\alpha\, q_e\, \delta q_{-2} + \delta U_{+2}
Noting M < 0, the residual energy δU is expected to be small, otherwise the formation of these excitations would be excessively hot. The δU may be partially consumed in forming the apparent mass of the excitation representing a particle. The ratio of δq_{+2} to δq_{-2} is given by
\frac{\delta q_{+2}}{\delta q_{-2}} \approx -\frac{N+1}{N-1},
which is now assumed to hold exactly. Thus, if the charge (δq) has (quark-like) values like +2/3 (and -1/3), then N = 3. The value N = 3 gives the charge associated with increasing from N=3 to N=3+2 being twice that, and of opposite sign, to the charge associated with reducing from N=3 to N=1.
Note that these values of (N+2) refer to the number of dimensions of the active hyper-(N+1)-sphere, and do not include the inactive dimensions. Inactive dimensions form the hyper-axis, around which the mass rotates. The value of N=3 implies 5 active dimensions, a 4-sphere rotating about an axis. Remember that these excitations are limited to a single sphere. The cases of interacting excitations, and a high density of excitations, are discussed in section 7.
6. Colour Dynamics
This section is a tentative description of the colour charge process in light of the work above. This is speculative work, which may be revised.
Increasing the number of active dimensions by two extends the number of active dimensions from 5 to 7. The magnetic and Coulomb compact dimensions are always active. That leaves five other active dimensions. We must have at least seven dimensions, and a minimum of 8 is assumed. These consist of 3 Euclidean dimensions, one associated with Coulomb charge, one associated with magnetic charge and 3 associated with three colour charges (labelled A, B and C). All five dimensions associated with charge are assumed to be compact. The cylinder condition of the Kaluza-Klein model [START_REF] Th | Zum Unitätsproblem in der Physik Sitzungbar[END_REF][START_REF] Klein | Quanten theorie und fünfdimensionale Relativitätstheorie[END_REF] is assumed for compact dimensions.
If all three colour dimensions are active at one time, then the interaction (force) is two dimensional (within 3D Euclidean space). If only two colour dimensions are active then the interaction can be three dimensional (in 3D Euclidean space). Consider the case of two compact colour dimensions active (A, B) at a particular hypersphere, with all three Euclidean dimensions active. In a brief moment of time, we have three colours (all compact) active, during which A is cancelled by anti-A and replaced by C, with B unchanged. We now have (B, C). This has been obtained by absorbing a two-colour boson consisting of (anti-A, C). It can also be induced by emitting a two-colour boson consisting of (A, anti-C). During this moment of time only two of the Euclidean dimensions are active. A tentative process of six steps is:
1. The sphere has two colour dimensions active (A+B).
2. The sphere has three colour dimensions active (A, B, anti-A, C). One Euclidean dimension is inactive.
3. The sphere has two colour dimensions active (B+C).
4. The sphere has three colour dimensions active (B, C, anti-B, A). One Euclidean dimension is inactive.
5. The sphere has two colour dimensions active (A+C).
6. The sphere has three colour dimensions active (C, A, anti-C, B). One Euclidean dimension is inactive. Return to step 1.
Excitations arising from a reduction (by 2) in spatial dimensions from N=3 (a 4-sphere) to N=1 (a 2-sphere) are interesting: we now only have 3 active dimensions. These seem to be colour A compact, Coulomb compact, and magnetic compact. The other four (or more) dimensions must be inactive. So how can we get one compact dimension of colour charge? The only possible answer, assuming the theory is sound, is that all the Coulomb charge has vanished to become inactive, and has been transferred to be a colour charge.
We now have another process of six steps. All steps have no active spatial Euclidean dimensions. The compact magnetic charge dimension is always active.
1. The sphere has one active compact colour (A). Active Coulomb compact dimension.
2. The sphere has two active compact colour dimensions (B = A + anti-A + B). No active Coulomb compact dimension.
3. The sphere has one active compact colour (B). Active Coulomb compact dimension.
4. The sphere has two compact colour dimensions active (C = B + anti-B + C). No active Coulomb compact dimension.
5. The sphere has one compact active colour (C). Active Coulomb compact dimension.
6. The sphere has two compact colour dimensions active (A = C + anti-C + A). No active Coulomb compact dimension.
This switching of the four colour/Coulomb dimensions, from one colour active (and two inactive) plus the Coulomb dimension, to two colours active (and one inactive) with the Coulomb compact dimension inactive, is a continuous rotation of the two-dimensional hyper-equator (about an axis of 2 inactive dimensions) within the three-sphere (4 dimensions, consisting of one compact Coulomb and three compact colour-charge dimensions). When the Coulomb charge is active, the value is δq_{+2} = ±1/3. For the previous case of seven active dimensions and charge δq_{-2} = ∓2/3, there is a one-dimensional equator (a circle of fixed radius), about an axis of one dimension, that also rotates on the surface of the 3-sphere.
For this process to be valid we need the interaction of the Coulomb charge to be of the same order as the colour-charge interaction, which is ħc. Therefore, we need the magnitude of the Coulomb charge to be approximately (α)^{-1/2} e (where e is the magnitude of the electron charge). Therefore, the relative magnitude of the colour charge should be of the order of 12e. Again, we propose a Kaluza-Klein process for this charge, with the magnitude of the colour charge proportional to the speed of the rotating mass in that colour dimension.
The apparent disappearance of Coulomb charge occurs when the two-colour boson is absorbed or emitted. This step is assumed to be a very short fraction of the time.
7. Pionic Crystal
In this section excitations with pion-like properties are discussed. The term "pion" is used loosely as spinors are not employed, particularly for the two component excitations. The author uses this term associated with spin S=0, mindful that the W and Z particle (Spin=1), may be hidden.
We showed in the previous section that q_e may change to q_e + δq, where δq_{-2} (N=3 to N=1) is of the opposite sign to q_e, δq_{+2} (N=3 to N=5) is of the same sign as q_e, and the magnitude of δq_{+2} is twice that of δq_{-2}. Therefore, we may see a "charged pion" form if a positively-charged sphere is excited by δq_{+2} and a negatively-charged sphere is excited by δq_{-2}. Or we may see a "neutral pion" if the positive sphere and the negative sphere have the same change (in the value of N). These "pions" require the two components to be located within range to interact and bond.
In the limit of a dense pionic crystal, we can consider that all spheres undergo these changes. Here all spheres are assumed to have charge q_e + δq_{-2}, or N=1 (q_e - ⅓). Another system would be one where all spheres have charge q_e + δq_{+2}, and N=5 (q_e + ⅔). Now we examine excitations of a pionic crystal. Assume a crystal where all spheres have Coulomb charge ±q_e = ±(J+⅓), and N=5, where J is an integer. The change in kinetic energy, for a single hypersphere, is
\langle p^2 c^2\rangle_{N}^{1/2} - \langle p^2 c^2\rangle_{N-2}^{1/2} \approx +\frac{\hbar c}{r}\,\frac{N-1}{2\sqrt{L(L+N)}}
describing the change from N to N-2. As N=5, the new state for this single hypersphere will have N reduced to 3, and the Coulomb charge q_e will increase by ⅔, from J+⅓ to J+1.
This reduction (δq = ⅔) leaves a hole (effective charge -⅔) and a "particle" (effective charge +⅔). Similar changes may occur for the sphere of opposite charge, with δq = -⅔. Both holes and particles may hop from sphere to sphere. A particle may transform from a charge of +⅔ to one of -⅓ through the absorption of a boson of Coulomb charge +1 or the emission of a particle (an excitation that can move) of Coulomb charge -1.
One possible state for the boson is an excitation from (N=5, L) to (N=1, L-4). The relevant release of kinetic energy is the summation of the corresponding changes, which equates to a Coulomb charge of unity (δq = 1), since for N=5 the numerator (2N-4) = 6, whereas before (N-1) = 4 equated to ⅔. An excitation with a full unit of Coulomb charge may be related to a lepton or a W particle. Establishing a confident link requires considerably more work.
A resonating pionic crystal is a crystal which has two states, one regular (integer J) and the other pionic (J+⅔ or J+⅓). Such an oscillating state may assist in providing effective Lorentz symmetry as long-range periodicity may be attenuated.
8. The structure of Euclidean Space
We can summarise the above discussions into three criteria or conditions for a crystalline-like state for three-dimensional space. These are:
Condition 1. The equation for the fine structure constant (section 3.3) must be satisfied. This yields values of q_e, j, N and L. Very high accuracy is required, with a value for α to within at least eight, and preferably ten, significant figures.
Accuracy of the order of ten significant figures should be expected for the first crystalline phase. A value of N=3 should be expected for the final crystalline phase, unless a pionic crystal is found. If the latter occurs, then the appropriate values for δq and N should be expected for any state. A minor disagreement may indicate the release of free energy (for a change in crystal phase), or a very large tachyon (imaginary) mass for an initial multipole (finite size, not a crystal). The large tachyon mass may attenuate the interaction, or induce gravitational repulsion (imaginary mass) for the initial small multipole. The value of q_e is of the order of 1/√α, which implies L is of the order of
-𝑀√𝛼
which gives L of the order of 40, as well as q_e of the order of 12. This estimate is only for the final crystal state and cannot be applied to any transitional states. However, this is completely different to that indicated in earlier work, which wrongly estimated L to be of the order of 10,000. These estimates for L and q_e should be treated as a means of locating the required order of magnitude and not as quantitative. However, Condition 1 (matching the value for α) is strict, requiring high accuracy, as well as the value of N=3 and the values for δq.
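A sketch of the kind of integer search implied by Condition 1 is given below. It is not the author's code: the search ranges, the tolerance and the restriction to zero Coulomb charge are arbitrary choices, while the coupling Kq²/ħc = αq_e² + j²/(4α) and the NaCl Madelung constant are taken from the expressions quoted earlier in the text.

```python
# Hypothetical sketch: find integers L, N (and trial charges q_e, j) such that
# L(L+N) matches [(-M)/4 * (alpha*q_e**2 + j**2/(4*alpha))]**2, which follows from
# hbar*c ~ (-M) K q^2 / (4 sqrt(L(L+N))) with K q^2 = hbar*c*(alpha*q_e^2 + j^2/(4*alpha)).
import math

ALPHA = 1 / 137.035999084       # CODATA-style value of the fine structure constant
M_NACL = -1.7475645946          # Madelung constant of the NaCl structure
# For multipoles or other lattices, M_NACL should be replaced by the value from Table 1.

def target(qe: float, j: int, M: float = M_NACL, alpha: float = ALPHA) -> float:
    """Value that L(L+N) should equal for Coulomb charge q_e*e and magnetic charge j*g."""
    kq2_over_hc = alpha * qe ** 2 + j ** 2 / (4 * alpha)
    return ((-M) * kq2_over_hc / 4) ** 2

def best_L(t: float, N: int) -> tuple[int, float]:
    """Integer L minimising |L(L+N) - t|, together with the signed residual delta."""
    guess = max(1, int(round((-N + math.sqrt(N * N + 4 * t)) / 2)))
    best = min((guess - 1, guess, guess + 1), key=lambda x: abs(x * (x + N) - t))
    return best, best * (best + N) - t

if __name__ == "__main__":
    for j in range(1, 40):
        for N in range(0, 12):
            t = target(qe=0.0, j=j)           # multipoles/lattices with no Coulomb charge
            L, delta = best_L(t, N)
            if abs(delta) < 0.01:             # arbitrary tolerance for a "close match"
                print(f"j={j} N={N} L={L} residual={delta:.4f}")
```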
The final step of this initial investigation of colour dynamics is to present a possible scenario for the formation of Euclidean 3D space. The results are presented in Table 2. The first step is the formation of a multipole, for which a dipole (of two spheres) with N=0 and j=227 is the best fit. This leaves just two active dimensions, assuming one compact dimension for magnetic charge, and one for the dipole axis (quasi-Euclidean). This multipole is the seed to which other multipoles will attach and expand to form a three-dimensional crystal. Other potential multipoles are the j=39 within a quadrupole, and the j=66 within an octupole. The value of δ is of the order of 10^{-8} and N=1 for these two, compared to a smaller value of less than 10^{-9} for the dipole. All multipoles have no Coulomb charge, as such a charge would increase the value for δ.
Table 3: Residuals, δE and δ, for the NaCl lattice for several values of L, N, q_e and j. The δ terms are calculated for the most accurate published measurement for α [START_REF] Morel | Determination of the Fine Structure Constant with an accuracy of 81 parts per trillion[END_REF]. The terms in the last two columns are the residuals δE and δ.
The preferred initial crystal is the NaCl lattice with P-spheres of dyons where the magnetic charge is 26g. This crystal is attractive for two reasons. Excellent agreement for the fine structure constant is obtained for N=10. Three other sets of Coulomb charge, L and N also provide reasonable values for α and are also displayed in Table 2.
The value of L for the NaCl state is about 10,000, about three orders of magnitude too high compared with that deduced from Conditions 2 and 3. We require the number of active dimensions to be either N=1, 3 or 5, so that we have δq_1 = -2δq_2 for two excitations. The preferred lattice is NaCl, j=1, L=37 (and 38), and q_e=27, consistent with Conditions 1 to 3. This lattice is a resonating pionic lattice, N=5, oscillating between (q_e=27) and (q_e=27⅓). Note that the values of δE for these two crystal states have opposite sign and the magnitude agrees to the fifth significant figure for the energy residual δE. The magnitude of δE is greater for the q_e=27⅓ state, while the magnitude of the residual, δ, is larger for the q_e=27 state. The measure δE is best for separate systems (crystals) while δ is best for coexisting systems (crystals). This can be resolved by a resonating state with finite spatial and temporal coherence, allowing the different states to interact. Now we consider the residuals δ. These are almost unity, yet they occur for different values of q_e. Something very interesting appears to be going on. The author had assumed the wurtzite structure, without a point of inversion, as the strong candidate for the basic crystal structure. However, the NaCl lattice appears to be the structure. What can produce a shift in rotational energy of unity (times ħc/r)? We initially test L_1(L_1+N_1) = 300 ± 1, finding 43×7=301 and 23×13=299 as the prime factorisations. A value of N_1=36 (L_1=7) has an excessive number of dimensions and is dismissed, leaving N_1=10 (L_1=13) as the only potential candidate. A more complex situation is ½[L_2(L_2+N_2) + L_3(L_3+N_3)] = 300 ± 1, which is satisfied by L_2=16, N_2=3, L_3=14, N_3=7, giving 299.
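A small helper of the kind used for this factor test might look as follows (hypothetical; the ranges and tolerance are arbitrary).

```python
# Hypothetical helper: list integer pairs (L, N) with L(L+N) within +/-1 of a target
# such as 300, as in the test quoted above.
def pairs_near(target: int, tol: int = 1, n_max: int = 40):
    out = []
    for N in range(0, n_max + 1):
        for L in range(1, target + 2):
            v = L * (L + N)
            if v > target + tol:
                break          # v only grows with L for fixed N
            if abs(v - target) <= tol:
                out.append((L, N, v))
    return out

print(pairs_near(300))   # includes (13, 10, 299) and (7, 36, 301), among others
```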
We now have a lattice with all sites having the same charge q_e=27⅓, half the sites with L=16, N=3, and half with L=14, N=7. Excitations of Coulomb charge ⅔ and ±1 may also occur. The key expression, from Condition 2, is δq_± = (N∓1)/6, which defines the charge but not the mass of the excited state. This mass may vary with the values of N and q_e. For example, N+1=4 and N-5=4 show potential 2/3 states. Also q_e=27 and q_e=27⅓ (N=3 unchanged, N-1=2) show two potential 1/3 states. These may indicate the two flavours with low values of mass. The third flavour, with much higher mass, requires a broad search of potential changes in q_e and N; this is one avenue for future work. This slightly imperfect fit of pure crystals is interesting, as it will help remove long-range order. We do not want all mass to belong to a simple system (built from the well-known fermions and bosons of the Standard Model). Measurements of the cosmic background have clearly shown that (directly) unobservable dark matter is about three times more massive than observable matter (based on quarks and leptons). The model of the (N+1)-sphere can produce other excitations (or channels) than those discussed in sections 5 to 7, and the resulting particles may not interact with observable mass beyond gravitational effects.
An example of the process leading to the formation of the deduced Euclidean state (27e & 27⅓e, 1g) is: firstly, a dipole of charge j=227, with m_i > m_r; secondly, formation of the 26g NaCl structure, leading to the aforementioned deduced Euclidean state, with m_r >> m_i. The (j=39) quadrupole is an interesting alternative for the initial step (multipole), as the value of j=39 is consistent with those for the lattices. Note that the Euclidean state may have at least seven components (listed in Table 3). We require the radius of the (N+1)-sphere to be many orders less than the range of the strong nuclear force so that any observed Lorentz symmetry of excited states is insensitive to the periodic symmetry of Euclidean space.
Concluding remarks
This paper presents an interesting model for the initial formation of Euclidean three-dimensional space. The dyon charge plays a central role in binding space as an entity, qualitatively similar to the way ionic Coulomb charge binds a binary salt lattice (crystal). A phase change occurs such that the subsequent state for 3D space is still a crystalline NaCl system. This system is a set of at least seven structures, as shown in Table 3. The dynamics of the (N+1)-sphere provides an interesting basis for the magnitude of the Coulomb charge, as multiples of e/3, and a strong hint on flavour.
4. Solutions of the equations for L, N, qe and j
Determinations can only be made within the accuracy of the values of M and α. The possible values for L and N are deduced by testing momentum/dimension pairs (L and N being integers), along with the magnetic charge and Coulomb charge, in the above expression with the known value of α, and searching for a close match (at least 8 significant figures) for any of the multipole and lattice types listed in Table 1.
For L being large, an increase in N by one is an effective excitation of ħc/2r, while a reduction in N by one is a de-excitation of ħc/2r. An increase in the value of L is an excitation, requiring an energy of ħc/Lr, and a decrease of L is a de-excitation, releasing energy ħc/r. Together, a decrease of N by 2 and an increase of L by 1 gives
\Delta_{-2,+1} \approx +\frac{\hbar c}{r}\,\frac{N-1}{2\sqrt{L(L+N)}},
which describes the energy required to induce this excitation. This value decreases like 1/L. The reverse is a release of energy; replacing N with N+2 and L with L-1 gives a release of magnitude approximately
\frac{\hbar c}{r}\,\frac{N+1}{2\sqrt{(L-1)(L+N+1)}}.
Condition 2. We have δq values of ±2/3 and ∓1/3, and N=3 is deduced from the equations above. Note that this relationship is approximate and depends on the magnitude of δU. If the term (δU) is much smaller than the two other terms, then the above expression increases in accuracy.
Multiplying both sides of the previous expression by r^6/m^2 gives
\frac{(\hbar c)^2 r^4 L(L+N)\left[(m_r^2-m_i^2) - 2i m_r m_i\right]}{(m_r^2+m_i^2)^2} + c^4 r^6
= \frac{(KMq^2)^2 r^4}{16}\,\frac{(m_r^2-m_i^2) - 2i m_r m_i}{(m_r^2+m_i^2)^2} - \frac{Kq^2 r^2 M V_{G}}{2 m_i^2} + \frac{V_{G}^2}{m_i^4}\left[(m_r^2-m_i^2) + 2i m_r m_i\right]
3.3.1 Imaginary terms
Grouping imaginary terms gives
\frac{(KMq^2)^2}{16(\hbar c)^2} - L(L+N) = \left[\frac{V_{G}}{r^2\,\hbar c}\left(\frac{m_r^2}{m_i^2}+1\right)\right]^2
which are rewritten as
(\hbar c)^2\, L(L+N) = \frac{(KMq^2)^2}{16} - \left[\frac{V_{G}}{r^2}\left(\frac{m_r^2}{m_i^2}+1\right)\right]^2
noting that if the second term on the right hand side is much smaller than the first term, then
\hbar c \approx \frac{(-M)\,K q^2}{4\sqrt{L(L+N)}}
(𝐾𝑀𝑞 ) 16(ħ𝑐) -𝐿(𝐿 + 𝑁) = 𝜀 = 𝑉 , ħ𝑐 𝑚 𝑚 + 1 = 𝐺𝑚 ħ𝑐 𝑚 𝑚 + 1
as M is negative. Note for the one-dimensional circle (1-torus, or linear grid), P=1 and
Noting that r_o \gg r, this suggests a substantial reduction in the magnitude of m_i. The 3D torus, or cubic lattice, gives
𝜀 = 4𝜋 𝐺𝑚 𝑟𝑐 𝑚 𝑚 + 1 = 4𝜋 𝐺𝑚 𝑚 𝑟𝑐 𝑚 𝑚 + 1 = 2𝜋 𝑟 𝑟 𝐺𝑚 ħ𝑐 𝑚 𝑚 + 1
𝜀 = 4𝜋 ħ𝐺 𝑟 𝑐 𝑚 𝑚 + 1 = 8𝜋 ħ𝐺𝑚 𝑚 𝑟 𝑐 𝑚 𝑚 + 1 = 2𝜋 𝑟 𝑟 𝐺𝑚 ħ𝑐 𝑚 𝑚 + 1
as r_o = 2ħ/(m_i c); one can also use m_i r_o = 2ħ/c.
3.3.2 Real terms
Grouping the real terms gives
\frac{(\hbar c)^2 r^4 L(L+N)\,(m_r^2-m_i^2)}{(m_r^2+m_i^2)^2} + c^4 r^6 = \frac{(KMq^2)^2 r^4}{16}\,\frac{(m_r^2-m_i^2)}{(m_r^2+m_i^2)^2} - \frac{Kq^2 r^2 M V_{G}}{2 m_i^2} + \frac{V_{G}^2}{m_i^4}\,(m_r^2-m_i^2)
which, on using the result from the imaginary terms, simplifies to a quadratic in terms of V_G/(r^3 m_i c^2)
𝑟 = - 𝐾𝑞 𝑀 2(𝑚 𝑐 ) 𝑉 , 𝑟 (𝑚 𝑐 ) + 2 𝑚 𝑚 -1 𝑉 , 𝑟 (𝑚 𝑐 )
Table 1: Values of the Madelung constant M for spatial geometries and number of non-compactified dimensions.
Values for the Madelung constants are displayed in Table 1 above. We have the following key expression, which should be close to zero for three dimensions if the exact values of M and α are known and used. Another related quantity (δE) is an energy residual.
𝛿 = 8𝜋 ħ 𝑐 𝐾𝑞 (-𝑀) 𝑚 𝑚 + 1 𝑟 𝑟 = 𝐿(𝐿 + 𝑁) - (𝐾𝑀𝑞 ) 16(ħ𝑐)
\delta = L(L+N) - \frac{(KMq^2)^2}{16(\hbar c)^2}
Table 2: Residuals, δ, for values of L, N, q_e and j. The terms are calculated for the most accurate published measurement for α [START_REF] Morel | Determination of the Fine Structure Constant with an accuracy of 81 parts per trillion[END_REF]. The terms outside brackets are the residuals.
The multipole and lattice types tested are those listed in Table 1 [START_REF] Tavernier | Clifford boundary conditions: a simple direct-sum evaluation of Madelung constants[END_REF][START_REF] Zucker | Madelung constants and lattice sums for hexagonal crystals[END_REF]. Relevant examples are shown in Table 2. Note that there is a preference for integer values for j.
L N Lattice type j qe L(L+N) x
441113 0 dipole 227 0 194,774,816,889 -170 (8.7)
16482 1 quadrupole 39 0 283,669,806 -5.2809(183.7)
10143 10 NaCl 26 1/16 102,373,899 -0.0062 (-0.6069)
10144 8 NaCl 26 6/16 102,373,908 0.1876 (18.14)
10145 6 NaCl 26 8/16 102,373,915 0.1294 (12.36)
10146 0 NaCl 26 10/16 102,373,918 0.0569 (5.56)
2529 1 NaCl 13 147/512 6,398,370 0.00089 (1.39)
2529 1 NaCl 13 0 6,398,370 1.3286 (2,076)
2528 3 NaCl 13 0 6,398,368 -0.6714 (1,049)
04103227 | en | ["sde"] | 2024/03/04 16:41:20 | 2001 | https://hal.science/hal-04103227/file/ACRS2001.pdf
B Thierry
D Lo Seen
A V Raman
G Muthusankar
REMOTE SENSING IN MANGROVE RESEARCH -RELATIONSHIP BETWEEN VEGETATION INDICES AND DENDROMETRIC PARAMETERS: A CASE FOR CORINGA, EAST COAST OF INDIA
Keywords: Mangroves, Remote Sensing, PCQM, Vegetation Index
The mangrove forest of the Godavari estuary, Andhra Pradesh, represents the second largest area of such vegetation formations along the East Coast of India, next to the Sunderbans (West-Bengal). Although declared as Wildlife Sanctuary since 1972, this rich but fragile ecosystem has undergone serious alterations largely induced by human activities. Continuous efficient retrieval of reliable information from the mangroves is therefore necessary for conservation purposes. Satellite remote sensing is a useful source of information as it provides timely and complete coverage of the study area, complementing field surveys of higher information content but which are more difficult to carry out, especially in mangroves. The purpose of the present study is 1) to map the mangrove formations and its surroundings based on a supervised classification of remote-sensing data and 2) to analyse the potential relationships between mangrove dendrometric parameters and spectral indices extracted from satellite data. The supervised classification was carried out with an IRS-1C LISS3 image of March 1999 and was trained from ground truth data and field knowledge. Among the resulting 14 classes, 3 correspond to different mangrove signatures. The ground truth includes 128 sampling locations for which mangrove vegetation parameters like basal area and tree density have been estimated using the Point Centred Quarter Method (PCQM) on transect lines of at least 100 m. In a second stage, Vegetation Indices (VI) have been calculated at locations for which mangrove parameters were obtained from field surveys. Various statistical tools, among which scatter-plots and analyses of variance (ANOVA), have been used in order to explore the relationships that may exist between VI and mangrove parameters. The first results show that a relationship exists between VI and basal area whereas this is not the case with density. Furthermore, when spectral indices and mangrove parameters are considered altogether, it appears that only two classes of mangrove can be discriminated.
INTRODUCTION
Mangroves represent a specific ecosystem found in the intertidal zone along tropical and subtropical coastlines, and are often located near estuaries and deltas [START_REF] Spalding | World Mangrove Atlas[END_REF]. Being highly productive ecosystems and harbouring a large diversity of species adapted to these particular habitats, they are considered of utmost ecological importance. Moreover, they provide a number of direct and indirect services, ranging from protection against coastal erosion [START_REF] Pearce | An unnatural disaster -Clearing India's mangrove forests has left the coast defenceless[END_REF] to the multiple forest products usage by local population [START_REF] Blasco | The Mangroves of India[END_REF]. For the past decades however, the situation of mangrove forests has been continuously deteriorating due to an increasing human pressure resulting in conversion to agricultural lands, renovation of brackish water fisheries, prawn and shrimp farms, salt pans, urban and industrial pollution, etc. [START_REF] Clough | Mangrove Ecosystems in Australia: Structure, Function and Management[END_REF].
The mangrove forest of the Godavari estuary, Andhra Pradesh, is the second largest area of such vegetation formations along the East Coast of India, next to the Sunderbans (West-Bengal) and counts about fifteen mangrove species, among which Avicennia marina, A. officinalis and Excoecaria agallocha are the most dominant ones. Although declared as Wildlife Sanctuary since 1972, this rich but fragile ecosystem has undergone serious alterations largely induced by human activities. Continuous efficient retrieval of reliable information from the mangroves is therefore necessary for conservation purposes.
Satellite remote sensing is a useful source of information as it provides timely and complete coverage of the study area, complementing field surveys of higher information content but which are more difficult to carry out, especially in mangroves. For these reasons, studies have been carried out on mangrove ecosystems using aerial photography (e.g. [START_REF] Dahdouh-Guebas | Four decade vegetation dynamics in Sri Lankan mangroves as detected from sequential aerial photography: a case study in Galle[END_REF], optical (e.g. [START_REF] Rasolofoharinoro | A remote sensing based methodology for mangrove studies in Madagascar[END_REF], and radar (e.g. [START_REF] Mougin | Multifrequency and multipolarization radar backscattering from mangrove forests[END_REF] remote sensing data, or a combination of them (e.g. Pasqualini et al., 1999). However, the objectives of the studies differ according to what can be expected from the different types of remote sensing data. For example, mapping mangroves at the species level can be attempted with high-resolution aerial photography, whereas mapping the landscape level environmental indicators of a coastal area can generally be carried out using optical satellite images from sensors like Landsat TM, SPOT HRV or IRS LISS [START_REF] Klemas | Remote Sensing of landscape-level coastal environmental indicators[END_REF][START_REF] Ramachandran | Application of Remote Sensing and GIS to coastal wetland ecology of Tamil Nadu and Andaman and Nicobar group of islands with special reference to mangroves[END_REF]. For the estimation of mangrove forest parameters like basal area or biomass, radar remote sensing seems most promising [START_REF] Mougin | Multifrequency and multipolarization radar backscattering from mangrove forests[END_REF][START_REF] Proisy | Interpretation of polarimetric radar signatures of mangrove forests[END_REF], although appropriate configurations of frequency, polarisation and spatial resolution are currently not available on orbiting radar satellites.
In the present study, we explore the possibility of using spectral indices derived from optical satellite images to give quantitative estimates of mangrove vegetation parameters. The study area, the Coringa Forest, is one of the most closely followed mangroves in India for the last decade. Significant ground data of the area have been acquired during numerous measurement campaigns in the mangrove forests during the last five years. In a first step, the mangrove formations and its surroundings are mapped, based on a supervised classification of an IRS-1C image of March 1999. In a second step, the potential relationships between mangrove dendrometric parameters measured on the ground and spectral indices extracted from the satellite data are analysed and interpreted.
MATERIALS AND METHODS
Study area
The Godavari is the second longest river of all the Indian sub-continent. It divides into Gautami and Vasista just after the Dowlaiswaram Dam about 60 km before reaching the Bay of Bengal. The mangroves studied are located around the Gautami-Godavari estuary. The study area extends from around 82°05'E to 82°25'E and 16°30'N to 17°05'N and includes Kakinada city, Kakinada Bay, the Coringa Wildlife Sanctuary where the most important stretch of mangrove is found, and also the mangrove forest situated south of Gautami-Godavari River. Several different landscapes compose the area (Figure 1), with paddy fields and coconut tree plantations in the west and south, mangrove forests, aquaculture ponds for shrimp farming spreading into mangrove forests, saltpans, casuarinas plantations along the beach and on Hope Island, villages and urban areas (Kakinada, Yanam).
Ground data collection
The ground data used in the present study were acquired during the period 1998-2000, and consist mainly of mangrove vegetation identification, counting and dendrometry. A regular sampling grid of 1-minute latitude and longitude spacing is first used. Additional sampling points have also been taken, mainly in the mangrove area so that, out of a total of 128 sample plots visited, 83 pertained to the mangrove forest (Figure 1). Each sample plot was accessed with the help of a GPS receiver (model Garmin 45) with an estimated accuracy of 100 m due to the Selective Availability (SA) introduced by the US Department of Defence at that time, i.e. before May 2000. When any given grid node could not be reached, the sample plot was installed at the nearest accessible place and the surrounding land-use recorded.
For the mangrove sampling plots, data acquisition was done using the PCQ-Method (Point Centred Quarter Method) [START_REF] Cintrón | Methods for studying mangrove structure[END_REF]. This allows for the measurement of density, basal area, mean diameter and relative composition of forest stands. For each mangrove plot, a transect line of at least 100 m (depending on accessibility) was laid out westwards. Every ten meters along the transect, four quarters were established by drawing a line perpendicular to the transect line. Then, in each quarter, the tree nearest to the node was measured (Figure 2). Following the recommendations of [START_REF] Cintrón | Methods for studying mangrove structure[END_REF], diameter at breast height (dbh) was obtained by measuring girth at 1.3 m from the ground in the case of erect tall trees (Avicennia marina, A. officinalis, A. alba, Bruguiera gymnorrhiza, Excoecaria agallocha, Sonneratia apetala and Xylocarpus mekongensis) and above the highest established prop roots for Rhizophora trees. In the case of individuals less than 3 m high (mainly Aegiceras corniculatum, Bruguiera cylindrica, Ceriops decandra, Lumnitzera racemosa and some A. marina), the girth was measured below the lowest branch point. All the measurements made did not require any sophisticated equipment. For that reason, data like Leaf Area Index (LAI), which could have been interesting for this remote sensing study, were not acquired. The data collected were entered in a spatial database for subsequent spatial queries and analysis.
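The dendrometric quantities derived from these PCQM records can be computed with the standard point-centred-quarter estimators; the short sketch that follows is not the survey's actual processing chain, and the estimator choice, units and the numbers in the example are assumptions for illustration only.

```python
# Hypothetical sketch of the usual PCQM estimators: density from the mean
# point-to-tree distance (Cottam & Curtis type estimator) and basal area from dbh.
# Units: distances in metres, dbh in centimetres; outputs per 0.1 ha as in the text.
import math

def pcqm_summary(distances_m, dbh_cm):
    """Return (trees per 0.1 ha, basal area in m^2 per 0.1 ha) for one transect."""
    mean_d = sum(distances_m) / len(distances_m)
    density_per_ha = 10_000.0 / (mean_d ** 2)                       # trees per hectare
    mean_ba_m2 = sum(math.pi * (d / 200.0) ** 2 for d in dbh_cm) / len(dbh_cm)
    density_01ha = density_per_ha / 10.0
    basal_area_01ha = density_01ha * mean_ba_m2
    return density_01ha, basal_area_01ha

# Example with made-up measurements from one transect (nearest tree in each quarter):
d, ba = pcqm_summary(distances_m=[1.2, 0.8, 2.5, 1.6, 0.9, 1.1, 2.0, 1.4],
                     dbh_cm=[12, 8, 25, 16, 9, 11, 20, 14])
print(round(d), round(ba, 2))
```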
Satellite data analysis
The multispectral image used in this study is an IRS-1C LISS3 image of March, 8 th 1999 with 4 spectral bands (green, red, near infrared (NIR) and short-wave infrared (SWIR)). The spatial resolution (expressed as pixel size) is 23.5 m for the three visible/NIR bands and 70.5 m for the SWIR band. The image has been geocoded in UTM projection based on ground control points obtained by post-SA GPS measurements. The RMS error of the transformation was less than 15 m. One of the goals of the remote-sensing data analysis was to produce a land-use map of the mangrove forest and its surroundings. This was done in a two-step process. First, a maximum likelihood supervised classification was carried out using training areas chosen according to extensive field knowledge but without any specific reference to the grid sample points. Afterwards, the raw result of the supervised classification was checked during visual interpretation of the satellite image and field visits. Small polygons that were obviously wrong (e.g. sediment plumes into the bay were classified as fallow land) have been recoded so as to match the operator's field knowledge (Figure 3).
The second part of the study concerned the analysis of potential relationships between vegetation indices and mangrove dendrometric parameters (namely, density and basal area). Normalised Difference Vegetation Index (NDVI) was calculated from the image as the band ratio (NIR -Red)/(NIR + Red). The transect locations were then entered in the spatial database as rectangles of roughly 150 m x 75 m, which correspond to 18 pixels. This size represents a compromise between the actual dimensions of the field transects and a pixel set large enough to minimise statistical bias due to the spatial variation of pixel value. In the analysis, only sample plots (83) in mangroves were considered. Each transect location on the image was reviewed to account for the 100 m position uncertainty given by the GPS. This prevented overlapping with neighbouring categories of land-use like beach, river and barren land. The mean value of the area thus covered by one transect rectangle was then extracted from the NDVI image and the land-use map. This value was then stored for each mangrove sample plot in the spatial database along with the other available information.
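A minimal sketch of this NDVI and transect-averaging step is given below; it is not the original processing chain, the band arrays are placeholders, and the window bounds of the 150 m x 75 m rectangles are assumed to be known from the geocoding.

```python
# Hypothetical sketch: compute NDVI from the red and near-infrared bands and
# average it over a rectangular transect window of about 18 pixels.
import numpy as np

def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    red = red.astype("float64")
    nir = nir.astype("float64")
    denom = nir + red
    # Guard against division by zero; such pixels are returned as NaN.
    return np.where(denom > 0, (nir - red) / np.where(denom == 0, 1, denom), np.nan)

def mean_over_window(img: np.ndarray, row0: int, row1: int, col0: int, col1: int) -> float:
    """Mean value over a pixel window (e.g. the ~18-pixel transect rectangle)."""
    return float(np.nanmean(img[row0:row1, col0:col1]))

# Example with random data standing in for the LISS3 bands:
rng = np.random.default_rng(0)
red_band = rng.uniform(0.02, 0.2, size=(100, 100))
nir_band = rng.uniform(0.2, 0.5, size=(100, 100))
ndvi_img = ndvi(red_band, nir_band)
print(mean_over_window(ndvi_img, 40, 43, 10, 16))   # a 3 x 6 = 18 pixel window
```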
Supervised classification
The land-use map derived from the satellite image is presented in Figure 3. The classification led to 14 classes, among which three were for mangrove forest, i.e. dense, medium dense and less dense mangrove. In order to evaluate the accuracy of the land-use map, a confusion matrix was produced [START_REF] Congalton | A Review of Assessing the Accuracy of Classifications of Remotely Sensed Data[END_REF] using the 128 ground truth locations. As the classes used for ground truth data were different from those of the land-use map, a
non-bijective correspondence had to be set between them. This took the form of a lookup table as presented in Table 1. After overlaying the ground truth locations on the land-use map, the land-use classes corresponding to each of them were noted. Given the relatively small number of locations considered, each individual location was checked both on the image and land-use map to avoid residual misplacement due to SA uncertainty. Plots that fell on or very near boundaries between aquaculture ponds and paddy fields were assigned to the mixed class. The results are given in the confusion matrix shown in Table 2.
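A small sketch of how such a confusion matrix can be assembled is shown below; the class names, lookup entries and plot records are illustrative placeholders, not the study's data.

```python
# Hypothetical sketch: each ground-truth plot carries its field class; the classified
# class is read from the land-use map and first passed through a Table-1-style lookup.
import pandas as pd

lookup = {  # land-use map class -> ground-truth class (illustrative entries only)
    "Dense mangrove": "Mangrove",
    "Medium dense mangrove": "Mangrove",
    "Less dense mangrove": "Mangrove",
    "Paddy field": "Agriculture",
    "Fallow land": "Barren land",
}

plots = pd.DataFrame({
    "ground_truth": ["Mangrove", "Mangrove", "Agriculture", "Barren land"],
    "map_class":    ["Dense mangrove", "Less dense mangrove", "Paddy field", "Fallow land"],
})
plots["map_as_ground_truth"] = plots["map_class"].map(lookup)

confusion = pd.crosstab(plots["ground_truth"], plots["map_as_ground_truth"], margins=True)
accuracy = (plots["ground_truth"] == plots["map_as_ground_truth"]).mean()
print(confusion)
print(f"overall accuracy: {accuracy:.0%}")
```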
Table 1: Lookup table between land-use map classes and ground truth classes.
Table 2: Confusion matrix of supervised classification and ground truth.
The reading of the confusion matrix draws some comments. First of all, it can be noted that mangrove has been very well classified, especially since the three mangrove classes have been merged into one. This is not surprising as it is displayed with a characteristic spectral signature on the satellite image as shown in false colour composite (FCC) in Figure 1. The two points misclassified are found on barren areas within mangroves and are anyhow close to mangrove vegetation. The confusion between Agriculture and Barren land on one hand and Aquaculture (ground truth) and Barren land (supervised classification) on the other hand can be explained by the seasonal cycles of the paddies and ponds. Periodically, aquaculture ponds need to be emptied for maintenance and cleaning (at least once a year) and during that period, the dried ponds have the same spectral signature as barren land. As the image was taken on March 8 th , 1999 and the fieldwork had been carried out at different periods that spans over 3 years, it is most likely that a number of ponds were empty during the satellite overpass. A similar explanation is also valid for paddy fields: after harvest (which happens once or twice a year), the paddy fields are either left as fallow lands or used for growing leguminous plants (e.g. grams). These sparsely vegetated areas tend to have a spectral behaviour very close to that of barren lands, hence the confusion that arises from the time difference between the image acquisition date and the field visits.
Relationship between NDVI and dendrometric parameters
Here, we try to verify whether the three different mangrove classes obtained from the satellite image are substantiated by mangrove vegetation field measurements. In other terms, potential relationships between NDVI and dendrometric factors (viz. density and basal area) of mangrove forest are investigated. This has been done through qualitative analysis as well as quantitative -statistical -methods.
An efficient statistical method to analyse relationships between a qualitative factor (e.g., mangrove classes) and a quantitative factor (e.g., basal area or density) is analysis of variance (ANOVA). This method investigates, in a set of values organised in several groups, the proportion of variance that can be explained by within-group variability (called Mean Square Error or MS err ) and inter-group variability (called Mean Square Effect or MS eff ). The smaller MS err and higher MS eff , the stronger the relationship between the set of values and the groups. Yet, as shown in Figure 4, mean values for basal area of Less dense mangrove and Medium dense mangrove are not significantly different (respectively, 1.034m 2 /0.1ha and 1.031m 2 /0.1ha) to be considered as two groups with respect to basal area. The same observation also holds for density. Consequently, Less dense mangrove and Medium dense mangrove classes have been merged into one class called Less/Medium dense mangrove. With these two groups (i.e., Less/Medium dense mangrove and Dense mangrove), ANOVA gives MS eff = 97.4 and MS err = 3.3 for basal area on one hand and MS eff = 444193.8 and MS err = 68895.1 for density on the other hand. Using the F test, which measures the significance of the ratio of the two variances, we obtain F = 29.5 for basal area and F = 6.4 for density. Therefore the two mangrove classes can be considered highly related to the basal area and much less pertinent relative to density. These results are clearly illustrated on Figure 5 and Figure 6. The two scatter plots show the relationships between NDVI and basal area (Figure 5) and between NDVI and density (Figure 6) for the three classes, viz. Less, Medium dense and Dense mangrove. In both figures, Less, Medium dense and Dense mangroves are displayed in correct order, that is, with increasing mean NDVI from Less dense to Dense mangrove. This is as expected because both NDVI and the classification were derived from the same data source (satellite image). When NDVI is plotted against basal area, two zones can easily be identified along the main diagonal, one in the lower left quarter, and the other in the upper right quarter of the graph, which indicates that a relationship exists between basal area and NDVI. However, when NDVI is plotted against density, no relationship is found. This is not quite surprising because density expressed as number of individuals per unit area is not a good indicator of the 'amount' of vegetation as seen in the satellite image, as the sizes of the individuals can be very different (height up to 10 m, dbh up to 60 cm). In comparison, basal area is shown to be a better indicator of vegetation 'amount' such that Less/Medium dense mangrove may stand for lower basal area and Dense mangrove for higher basal area. These preliminary results show that at least a broad estimate of the basal area of mangrove forest can be obtained from optical satellite imagery. In fact, the relationship is more between classes of NDVI and basal area than between NDVI and basal area directly. This relationship is therefore considered not significant enough to allow basal area mapping of the mangrove forest.
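The ANOVA comparison described above can be reproduced with a standard one-way F test; the sketch below is not the original analysis and uses made-up basal-area values purely to illustrate the computation.

```python
# Hypothetical sketch of the one-way ANOVA / F test between the two (merged) mangrove
# classes and basal area, using scipy. The arrays are placeholders for per-plot values.
from scipy import stats

less_medium_dense = [0.8, 1.1, 0.9, 1.2, 1.0, 1.3]   # basal area, m^2 / 0.1 ha (made up)
dense             = [2.9, 3.4, 2.5, 3.8, 3.1, 2.7]

f_value, p_value = stats.f_oneway(less_medium_dense, dense)
print(f"F = {f_value:.1f}, p = {p_value:.4f}")
# A large F (MS_effect much larger than MS_error) indicates that the class grouping
# explains a substantial part of the basal-area variance.
```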
11   2   -   1   -   -   -   14   79%
Barren land   -   5   -   -   -   -   2   7   71%
Mangrove   -   2   81   -   -   -   -   83   98%
Mixed   -   -   -   3   1   -   -   4   75%
Plantation   -   -   -   -   -   -   -   0   N/A
Settlement   -   -   -   -   -   1   -   1   100%
CONCLUSION
A land-use map of the Godavari estuary area was made from supervised classification of an IRS-1C LISS 3 satellite image of March 8, 1999. A difficulty faced while carrying out this classification was to correctly identify some aquaculture ponds and agricultural fields while they were seen as barren lands on the day of the satellite overpass. This type of errors can however be reduced/eliminated using multi-date classification of images taken at different seasons. The mangrove areas have been presented in three classes in the land-use map corresponding to Less dense, Medium dense and Dense mangroves, according to differences in spectral characteristics revealed during the classification procedure. When compared with ground data, these differences were consistent with differences in basal area only after merging Less dense and Medium dense classes, as the basal area measurements for these two classes were not significantly different. In that way, Less/Medium dense mangroves were found to correspond to mangroves with lower basal area, and Dense mangroves, to those with higher basal area. However, no such relationship was found with measured 'density'. This observation can be understood by the fact that the term 'density' has different meanings in the classification and in ground measurements. In the classification, density stands for the 'amount' of vegetation seen by the sensor or by a person walking in the mangroves, whereas the measurements are expressed in number of individual trees per unit surface area. These mangroves being neither mono-specific nor even-aged, it is normal that a same number of trees will correspond to different 'amount' of vegetation. In that sense, basal area has a definition closer to the density used in the classification, which explains the relationship between density classes and classes of basal area. The direct relationship between spectral indices and basal area is however not considered significant enough to allow basal area mapping. A possible explanation is that spectral indices integrate variabilities in optical characteristics, amount and spatial organisation of leaves due to species and edaphic conditions that are only partially expressed in basal area. Conversely, a similar study aiming at species identification would be hampered by variabilities in vegetation densities that are not related to species composition. Nevertheless, recent and future sensors with increased spatial and spectral resolutions are bringing new hope to obtain timely and precise information necessary for managing mangrove ecosystem conservation.
Figure 1: IRS-1C image of the study area; the 128 sample plots are also represented.
Figure 6
Figure 3: Land-use map
ACKNOWLEDGMENTS
The authors wish to thank Dr. L.P. Jayatissa from the University of Ruhuna, Sri Lanka, for helpful discussions and suggestions. Research during 1998-'99 was carried out with funds provided by the Department of Ocean Development, New Delhi and later by the European Commission under Contract No. ERB IC18-CT98-0295. The authors are grateful to Dr. B.R. Subramanian, Director, DOD for encouragement. Facilities at the IFP, Pondicherry and Andhra University were utilised and we thank the authorities concerned.
04103439 | en | ["info.info-hc"] | 2024/03/04 16:41:20 | 2019 | https://hal.science/hal-04103439/file/Demo-CSCL-final-cmat.pdf
email: [email protected]
Aurélien Tabard
email: [email protected]
Christine Michel
email: [email protected]
Toccata: A Multi-Device System for Activity Scripting and Classroom Orchestration
We present Toccata, a system supporting the management of rich multi-device pedagogical activities. Activities designed with Toccata are reusable, shareable and adaptable to the situation. Teachers face numerous challenges in designing and scripting pedagogical activities that incorporate rich media and applications, combine devices, group formations, and spaces. They also face challenges in orchestrating these activities in class, especially to guiding learners, following their progress, and maintaining a coherent learning experience. Our demonstration will showcase how Toccata answers these challenges by supporting individual or collaborative activities, in class or outdoors, under diverse technical conditions, e.g., offline on mobile devices, or fully connected with Desktop computers.
Introduction
Toccata is an activity-centric system supporting orchestration of pedagogical activities for classrooms. Orchestration aims at helping teachers to create pedagogical scripts, adapt and execute them in a given context [START_REF] Dillenbourg | Design for classroom orchestration[END_REF]. Based on interviews with teachers [START_REF] Jalal | Design, Adjust and Reuse --How Teachers Script Pedagogical Activities[END_REF], we identified a set of recurring challenges for creating and conducting digital pedagogical activities in today's classrooms, such as resilience to networking problems, or support for a wide variety of devices. We developed Toccata to facilitate the management of rich multi-device pedagogical activities. In Toccata, activities are reusable, shareable and adaptable to the situation. Toccata supports tight or loose activity scripting. It lets teachers conduct digital activities in class and in more open environments. Toccata also lets teachers modify and adapt unfolding activities according to the situation in the classroom.
Teachers face numerous challenges in integrating digital tools into smooth and coherent teaching activities. Rich pedagogical activities are often fragmented in time, split into multiple sub-activities, built upon multiple media and applications, and may unfold in various locations. They can be initiated in a specific context (technical, physical or social) and continued in another, following a plan more or less strictly defined by teachers.
For example, in a vocational school, a horticulture lesson can start in class with a lecture leveraging an interactive whiteboard, continue in the school's greenhouse with individual work on tablets to inventory flowers and, finish back in the classroom with a synthesis in groups shared with others. Follow-up activities and homework can finally be conducted outside of the school on personal devices.
Combining various activities, devices, group formations, and spaces brings new pedagogical opportunities such as the scenario previously described. But organizing such scenarios is complex as teachers must be able to guide learners, and follow their progress, while maintaining a coherent learning environment. The lack of tools to support these practices makes it difficult for teachers to put rich pedagogical activities into place. And when such activities do happen, infrastructural problems (network, device set-up, content distribution) make it even more challenging.
We designed Toccata to work in schools with various technical set-ups and policies. Teachers can run activities in their classroom, on fixed computers with a reliable network, but also across several rooms or with multiple devices, even if there is no insurance of a reliable network. Toccata also supports disconnected contexts, such as in sports class or activities unfolding outside of school. We have tested Toccata in three different middle schools with highly varied situations: in classrooms over multiple sessions, in activities mixing digital and physical resources, in nomadic activities.
Our demonstration will showcase the versatility of Toccata. We will present activities created by teachers and run with Toccata such as: 1) a fact checking activity, in which students have to verify and correct wiki articles over multiple sessions; 2) a sales management activity which involves roaming, i.e. work on tablets and on a large display in the classroom, but also documentation in a greenhouse; and 3) a collaborative activity involving paper and tablets, in order to learn Agile project management. We will also discuss the underlying architecture allowing our system to work in schools with diverse technical and policy constraints, and its theoretical grounding
Toccata
Toccata is a Web-based application enabling teachers to create digital pedagogical activities and conduct them with a class. As a Web-based application, our system works with any kind of devices containing a web browser (computer, tablet, phone, video-projector linked to a computer). Toccata is developed to run in online or offline mode, with an optimal mode and two degraded modes according to network reliability.
Activity Scripting
Toccata builds upon an Activity Based [START_REF] Bardram | Activity-based computing-lessons learned and open issues[END_REF] model to represent pedagogical activities. When teachers create an activity, they can define the following components, before enactment or live:
1. A set of Instructions for students to guide them in conducting the activity.
2. A list of Sub-activities (steps), created by teachers to divide the activity according to pedagogical needs.
3. A set of Resources that can be associated with an activity or its steps; these are typically any kind of document openable with a web browser (pdf, video, audio, website, etc.). These resources are read-only in our system.
4. A set of Applications, i.e. tools allowing teachers and students to run operations on resources, such as text editors, and to control the activity flow, such as timers.
5. A list of Participants involved in the activity.
6. Notes that teachers can attach to an activity or step, to jot down things to remember or to document how the class unfolded, in order to improve future iterations of the activity.
Activity Orchestration
During the enactment phase, the teacher and the students have a similar interface. However, the teacher can take notes on the activity and has more options and actions available on the applications. S/he can edit each component of the activity during class. This run-time editing of the activity can help the teacher adapt the activity for students. For example, if a student is blocked on a particular exercise, the teacher can refine instructions and give new resources to help the student, or s/he can change the order of steps, add new exercises for a specific group, or change the visibility of steps for several groups.
Scenario
We illustrate the use of Toccata with a scenario developed in collaboration with a teacher from a vocational middle school. The activity was run in the school over a period of 90 minutes, with two groups of learners to test Toccata in-situ.
Thomas is an economics teacher in a vocational middle school. At the end of the year, he decides to create an activity to review the topics covered during the year. He creates the activity with Toccata and divides it in three steps:
1. Students watch a video of a sale situation, analyze it, and answer questions about concepts from previous course chapters presented in the video in a text editor. They can use the pdf viewer to read the previous lessons related to the video. 2. Students calculate taxes and prices using a collaborative spreadsheet. 3. Students prepare the plant catalog of the school. They visit the greenhouses of the school, take pictures of plants with the tablet and store them in their space in order to build the catalog. At the end of the first two steps, Thomas includes a correction: a student will do the exercise on the teacher environment in front of the class and project it with the classroom's video-projector. At the end of the third step, Thomas includes a class discussion: each group will present the pictures taken in the greenhouse by using the video-projector and discuss with the class how they could be used to create the sale catalog.
In the first phase, students analyzed a sales situation using external applications within Toccata. Due to an unreliable school network, external Web applications such as the collaborative text editor did not work perfectly, but resources hosted on Toccata could be properly accessed. Although the internet connection was fluctuating, students did not encounter major problems, and managed to move from one step to the next. During the second step, due to the use of an external application for the spreadsheet, changes made during the correction on the teachers' computer connected to the video-projector were not synchronized with students' activity, and students had to manually update the activity on their tablet. In the third step, students moved to a greenhouse, with no WiFi coverage. This step worked smoothly and students could grab their tablet and continue their activity as expected. In the greenhouse, they moved freely and took pictures of plants they later added to a sales catalog. When they came back to the classroom, the activity was updated on the global server and the teacher could access the pictures taken in the greenhouse.
Implementation
The Toccata architecture is built on three layers. The first layer consists of a main Web server and Web applications. The second layer is a local server running inside classrooms. The last layer is composed of the client devices running Toccata. The second layer is not mandatory, and devices running Toccata can communicate directly with the remote Web server. When the remote Web server (first layer) is not reachable, the devices synchronize with each other via a local server if they are connected to one; otherwise they run independently.
Toccata is a Progressive Web Application built with Angular, with duplicated layers and extra synchronization mechanisms. The server delivers a single-page application running in the browser. It is hosted on Firebase. Activities are stored in a CouchDB database. External applications are iframes opened inside Toccata, and local applications are Angular components.
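As an illustration of how an external application can be embedded, the sketch below shows a minimal Angular component wrapping an external tool in an iframe. The component and its input are hypothetical; only the DomSanitizer call reflects the standard Angular API for marking a resource URL as trusted.

```typescript
import { Component, Input } from '@angular/core';
import { DomSanitizer, SafeResourceUrl } from '@angular/platform-browser';

// Minimal sketch of an "external application" wrapper: the external tool
// (e.g. a collaborative spreadsheet) runs in an iframe inside the page.
// Component and input names are illustrative, not Toccata's actual code.
@Component({
  selector: 'app-external-app',
  template: `<iframe [src]="safeUrl" width="100%" height="100%"></iframe>`,
})
export class ExternalAppComponent {
  safeUrl: SafeResourceUrl | null = null;

  constructor(private sanitizer: DomSanitizer) {}

  // URL of the external tool to embed.
  @Input() set url(value: string) {
    // Angular blocks arbitrary resource URLs by default; mark this one as trusted.
    this.safeUrl = this.sanitizer.bypassSecurityTrustResourceUrl(value);
  }
}
```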
The local server acts as a WiFi hotspot; it can either connect to the Ethernet network of the school, connect to a tethering smartphone, or run without any Internet connection. The server runs on Node.js and only delivers a simple single-page application with very little logic. The activities are stored in a CouchDB database that syncs with the remote one.
Each device runs Toccata and a PouchDB instance. PouchDB allows synchronization between multiple instances of CouchDB servers. As a Progressive Web Application, Toccata offers some native-like features, such as home icons on mobile OS, and strong caching mechanisms for the data as well as for the application shell: the webpage can load even when the device is totally offline, and it will synchronize back to the server when it becomes available.
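The sketch below illustrates the kind of continuous two-way replication PouchDB provides between the in-browser database and a CouchDB endpoint. The database name and remote URL are placeholders, not Toccata's actual configuration.

```typescript
import PouchDB from 'pouchdb';

// Local, in-browser database: reads and writes keep working while offline.
const localActivities = new PouchDB('activities');

// CouchDB endpoint: either the remote server or the classroom's local server.
// The URL is a placeholder.
const remoteActivities = new PouchDB('https://example.org/db/activities');

// Continuous two-way replication; `retry` makes PouchDB resume syncing
// automatically once the network becomes available again.
localActivities
  .sync(remoteActivities, { live: true, retry: true })
  .on('change', info => console.log('replicated a batch of changes', info))
  .on('paused', () => console.log('sync idle (offline or up to date)'))
  .on('error', err => console.error('sync error', err));
```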
Toccata is open-source and available on GitLab (https://gitlab.com/lachand/Toccata). Since Toccata is built on Angular with reusable web components, anyone can reuse components of Toccata in their own project, and we think that this may lead to a better integration and assimilation of research projects in commercial systems deployed in schools. In addition, we created a demonstration page where teachers, researchers or system editors can use Toccata with a student or teacher account in order to test our system and the concepts beneath Toccata. This demonstration is available at the following page: https://demo.toccata.education
Future work
In future work, we will study the use of Toccata in different contexts, such as a collaborative activity with one device per student instead of one device per group. We will also work on a classification of tasks and on new interactions to support teachers when they have to manage an activity with several devices (attention management, support to students, etc.). Another interesting direction for future work is to move from a note-taking process (what teachers actually do) to a reflexive process to design and re-design pedagogical activities.
Conclusion
We presented Toccata, a system allowing teachers to prepare multi-device activities before class. Toccata enables teachers to edit their activity as it occurs, in order to adapt it to students' progression or to unexpected events occurring in class. Toccata has been tested in different schools and pedagogical contexts, such as a course spanning several sessions, a course mixing paper and digital activities, and a course running in class and outside class with no available network. The demo of Toccata offers participants the ability to edit and run existing activities created by real teachers, and to create their own scripts. Participants will be able to run their scripts on tablets during the demonstration.
Fig. 1: A preview of an activity in Toccata, and components of the activity
Fig. 2: (a) Tutor view of Toccata when looking at different group progression; (b) Student view of Toccata during an activity with a project management board and timer loaded |
04103532 | en | [
"shs.eco"
] | 2024/03/04 16:41:20 | 2022 | https://shs.hal.science/halshs-04103532/file/WP4.pdf | Ségal Le Guern Herry
Jeanne Bomare
email: [email protected]
Arun Advani
Pierre Cahuc
Nicolas Coeurdacier
Matt Collin
Maxime Ferrer
Niels Johannesen
Andres Knobel
Yajna Govind
Jakob Miethe
Thomas Piketty
Stefan Pollinger
Jean-Marc Robin
Andy Summers
Maxime Vaudano
Will We Ever Be Able to Track Offshore Wealth? Evidence from the Offshore Real Estate Market in the UK
Keywords: D31, H24, H26, K34
This paper provides evidence of the growing importance of real estate assets in offshore portfolios. We study the implementation of the first multilateral automatic exchange of information norm, the Common Reporting Standard (CRS), which introduces cross-border reporting requirements for financial assets but not for real estate assets. Exploiting administrative data on property purchases made by foreign companies in the UK, we show that the implementation of the CRS led to a significant increase in real estate investments from companies incorporated in the tax havens that were the most exposed to the policy. We confirm that this increase comes from company owners residing in countries committing to the new standard by identifying the residence country of a sub-sample of buyers using the Panama Papers and other leaked datasets. We estimate that between £16 and £19 billion have been invested in the UK real estate market between 2013 and 2016 in reaction to the CRS, suggesting that at the global scale between 24% and 27% of the money that fled tax havens following this policy was ultimately invested in properties.
Introduction
International macroeconomic statistics indicate that the equivalent of 8% of households' financial wealth is held in tax havens (Zucman, 2013), leading to substantial tax losses (Pellegrini et al., 2016). Since the financial crisis of 2008 and the large deficits that followed, governments have renewed the set of policies aimed at curbing offshore tax evasion, with limited efficiency (Johannesen and Zucman, 2014; Johannesen, 2014; Caruana-Galizia et al., 2016). A major policy development took place in 2014 when the OECD designed the Common Reporting Standard (CRS). 1 The CRS is a standard of automatic exchange of information (AEOI) designed to limit the possibilities for taxpayers to hold undeclared assets, as it introduces third-party reporting of foreign financial assets between participating jurisdictions (Kleven et al., 2011). It is to date the most comprehensive policy enacted to increase tax transparency, and it led to a reduction in cross-border bank deposits held in tax havens (Menkhoff and Miethe, 2019; Casi et al., 2020; O Reilly et al., 2021; Beer et al., 2019).
The CRS, however, only covers financial assets. This means that investing offshore holdings in non-financial assets constitutes an attractive strategy to dodge the reporting requirements after the implementation of the agreement. 2 In this paper we study this alternative evasion strategy by focusing on real estate assets, and analyze the ownership of undeclared properties, which we call real estate evasion. We show that the implementation of the CRS -that we also refer to as the beginning of AEOI throughout the paper -led to a substantial inflow of investments from tax havens to the UK real estate market. This suggests that there was a significant shift from financial to real estate assets when the policy was announced.

Offshore real estate matters for at least three reasons. First, despite scarce evidence on the precise amounts at stake, it is substantial (Alstadsaeter et al., 2022b). Second, it can have an effect on real estate prices, and this throughout the whole distribution of property prices, ejecting some residents out of the property market (Sá, 2016). Third, it may be used for illicit purposes, like money laundering or the avoidance of international sanctions (Collin et al., 2022; OECD, 2007).

We use administrative data on corporate foreign ownership of properties in England and Wales merged with several leaks from offshore financial institutions and with the corporate registry of Luxembourg to document how real estate serves as a new favored final destination for offshore wealth after 2014 and the enhanced efforts to crack down on financial evasion. The UK offers a particularly interesting setting to investigate this issue for three reasons. First, the real estate market in the UK, and London in particular, is highly globalized and attracts large amounts of foreign investments (Sá, 2016; Badarinza and Ramadorai, 2018), a high proportion of which comes from individuals at the top of the wealth distribution (Knight Franck, 2016). Second, it is often considered as a "safe haven" for foreign capital, meaning an asset for which demand is not curtailed in periods of political or economic uncertainty (Badarinza and Ramadorai, 2018). Third, properties in the UK can easily be acquired anonymously through companies registered in jurisdictions such as the British Virgin Islands, Jersey or Cyprus, making this form of investment particularly attractive to investors seeking secrecy (De Simone, 2015). As an illustration, about 90% of all property purchases in England and Wales involving a foreign company are made by entities incorporated in tax havens.
From a theoretical perspective, the CRS changes the trade-off faced by non-compliant taxpayers. It leads to an important increase in the expected cost of financial evasion -i.e. owning undeclared financial assets and receiving unreported income -as it substantially increases the probability of getting caught. Under this standard, participating countries have to automatically exchange information about account holders. That is, if a UK taxpayer owns an account in e.g. Switzerland, the Swiss tax administration will automatically and annually report to the British tax authorities the information linked to this account. Over 100 countries are exchanging information with each other in 2022, and this number is still growing as new jurisdictions already committed to enter the agreement. Thus, taxpayers engaged in financial evasion will have three possibilities.
They can choose to do nothing in the face of this increased detection probability, but take the risk of being caught. They can also start to comply with their reporting requirements, which means they will have to pay back taxes avoided, start paying higher taxes, potentially pay an additional penalty and sometimes face criminal charges. Finally, they can revise their evasion strategy in order to reduce their detection probability.
Usually, there are three main ways to adapt to new enforcement rules. It is possible to reorganize the way one holds assets offshore, for example by transferring the ownership of one's assets to a shell company instead of owning them directly (Johannesen, 2014; Omartian, 2017). An alternative is to switch the location of the offshore assets, toward a tax haven not participating to the new policy (Johannesen and Zucman, 2014; Casi et al., 2020). The broad scope of the CRS and the fact that it covers financial assets held both directly and indirectly greatly limit the possibilities for non-compliers to use these first two strategies. However, evaders can still restructure their offshore portfolios away from financial assets to avoid the reporting requirements.
When considering real estate evasion, ultimately, the new offshore portfolio allocation will depend on the degree of substitution between financial and real estate evasion, which is an empirical question. 3 Our paper provides evidence of a substantial shift of offshore holdings toward real estate assets following the 2014 transparency shock. We establish a causal relationship between the introduction of the CRS and a sharp increase in offshore real estate investments in the UK. We derive four key results. First, we find that offshore real estate in the UK is large. We estimate that in January 2018, foreign companies held the equivalent of £109 billion in properties in England and Wales, including £73 billion in London. When adding up real estate owned directly by foreign individuals, these figures rise to £219 billion for England and Wales, and £142 billion for London.
Second, we show that real estate investments from (shell) companies incorporated in the tax havens mostly used by CRS-adopting countries increase significantly when these countries commit to the CRS. We make an important methodological contribution to circumvent the fact that we do not know the identity of those investing in the UK through shell companies. We exploit the Panama Papers and other tax-related leaks data which provide information on the identity and the residence country of shell company owners in many tax havens worldwide, and show that individuals from different regions of the world do not use the same countries to create companies. This allows us to identify a group of tax havens particularly exposed to the CRS, because they are mainly used by individuals coming from CRS-adopting countries. Using a difference-in-differences design, we show that real estate investments coming from the most exposed tax havens are very similar in trend and level to investments from less exposed havens during the 14 years preceding the CRS, but start to diverge sharply just after.
Third, we confirm that the increase in real estate purchases following the CRS is due to an increase in the investments of individuals affected by the transparency shock. In order to do that, we match the Panama Papers and the other leaked foreign ownership datasets to our administrative data. We identify the ultimate owners of purchasing companies in almost 4% of the real estate transactions in our sample and show that owners from CRS-adopting countries invest significantly more in real estate than non-affected individuals after the transparency shock. We find that British residents account for a large proportion of those investing in the offshore real estate market in the UK after the CRS. This is coherent with the fact that we see no peak in the number of disclosures under the UK amnesty program around 2014, contrary to what happened in some other countries (see Alstadsaeter et al. (2022a) for Norway). 4 This seems to suggest that UK tax evaders decided to engage in alternative evasion strategies like real estate evasion rather than enter into compliance. This finding also indicates that a significant part of foreign real estate investments in the UK are actually made by residents wanting to hide their identity or to avoid certain property taxes, which shows that disentangling "real" foreign investments from "disguised" domestic flows is necessary in order to study the effects of foreign investments on domestic real estate markets.
Fourth, we estimate that between £16 and £19 billion have been invested in real estate in England and Wales over the 2013-2016 period because of the threat Automatic Exchange of Information constitutes for people hiding assets offshore. Translating these figures to a global effect and comparing them to estimates of the effect of the CRS on financial assets found in the literature, our results suggest that between 24% and 27% of the offshore financial wealth that left tax havens due to enhanced tax enforcement through AEOI was shifted to real estate globally. 5 This result sheds light on the growing importance of real estate as an offshore asset, and provides a new insight into the composition of offshore portfolios. Combined with the finding that the distribution of offshore wealth is very concentrated at the top of the income and wealth distribution (Alstadsaeter et al., 2019; Guyton et al., 2021; Londoño Vélez and Ávila-Mahecha, 2021; Leenders et al., 2021), it has important implications on the composition of wealth at the top, suggesting that the share of real estate has been substantially underestimated.
Our paper has major implications for the design of information exchange policies. It suggests that the CRS' efficiency at curbing tax evasion has been substantially reduced by the omission of real estate assets, leaving opportunities for non-compliant individuals to ensure they still avoid the new reporting requirements. Broadening the scope of the agreement to cover not only all jurisdictions, but also all types of assets, would fix this leaking pipeline of information exchange. The prerequisite to achieve such an ambitious policy is for governments to improve their existing real estate asset registers, in order to guarantee they systematically collect information about the identity of individuals buying properties indirectly.

This paper first contributes to a broad strand of the literature studying the amount of financial wealth held offshore (Zucman, 2013; Pellegrini et al., 2016; Vellutini et al., 2019; Henry, 2012) and its distribution across countries (Alstadsaeter et al., 2018) and within countries (Alstadsaeter et al., 2019; Guyton et al., 2021; Londoño Vélez and Ávila-Mahecha, 2021; Leenders et al., 2021). It is closest to Alstadsaeter et al. (2022b) which shows that offshore real estate is large and mainly owned by individuals at the top of the wealth distribution. Our paper also relates closely to the growing literature on the effects of policies aimed at improving tax transparency (see Slemrod (2019) for an overview), and more particularly to papers assessing the efficiency of the CRS (Menkhoff and Miethe, 2019; Casi et al., 2020; O Reilly et al., 2021; Beer et al., 2019) and of FATCA, the US policy of AEOI (De Simone et al., 2020). It confirms that non-compliant individuals adapt their behavior in response to changes in the international tax environment, and find alternative concealment strategies to continue under-reporting their assets (Johannesen, 2014; Johannesen and Zucman, 2014; Casi et al., 2020). Close to our results, De Simone et al. (2020) find that real estate prices in markets open to foreign investments increased more than in markets with investment restrictions after the implementation of FATCA, which they interpret as evidence that evaders invested more in real estate to circumvent the reporting requirements. Third, we complement empirical studies exploiting foreign ownership leaked datasets to analyze the extent, the distribution, or the structure of offshore tax evasion (Caruana-Galizia et al., 2016; Omartian, 2017; Alstadsaeter et al., 2019; Londoño Vélez and Ávila-Mahecha, 2022; Collin, 2021). Finally, our paper contributes to a small set of studies focusing on the determinants and the effects of foreign investments in the real estate market, without taking into account the key role that the international tax transparency environment can have on domestic property markets (Badarinza and Ramadorai, 2018; Sá, 2016; Cvijanovic and Spaenjers, 2020).
The rest of the paper is organized as follows. Section 2 provides some background elements on foreign ownership of real estate in the UK, and describes our data. Section 3 shows how corporate real estate investments in the UK responded to the CRS. In section 4, we identify the country of origin of a subsample of individuals buying properties through offshore entities. This allows us to estimate the distribution of offshore real estate in the UK across countries and to analyze where responses to the CRS come from. Section 5 estimates the global shifting effect from financial to real estate assets. Section 6 concludes.
Data
Overseas Companies Ownership Dataset
The offshore real estate market in the UK. The UK, and London in particular, constitutes an attractive location for global real estate investments, especially for investments from people at the top of the wealth distribution (Knight Franck, 2016). Average property prices in the UK real estate market have increased substantially, and part of this hike is due to increased foreign investments in the market since the beginning of the 2000s (Sá, 2016).
In this paper, we study a very specific segment of the property market in the UK, which we call the offshore real estate market: properties that are owned by companies incorporated in tax havens. 6 This ownership scheme is not illegal per se and some of the investors behind these companies might use an offshore intermediary for legitimate purposes. However, using a shell company allows owners to keep their identity hidden, which in turn makes it easier not to report the property to the tax administration of their home country.
Holding a property through a shell company, nonetheless, does not allow its owners to completely avoid paying taxes in the UK. Four main taxes apply to the owners of UK properties: Stamp Duty and Land Tax (SDLT) when buying the property (with a top rate of 15% from 2012 onward), Capital Gains Tax when selling it, Income Tax if the property generates rental income and Inheritance Tax in case of death of the owner. Until recent years, UK residents, non-UK residents and UK-residents non-domiciled ("non-dom") 7 have been able to decrease their liabilities with respect to the four taxes by "enveloping" UK properties with an offshore company (i.e. owning the property through a company instead of directly), the scale of the "savings" depending on each specific tax status. From 2012 onward, the UK government has progressively introduced a series of tax changes which greatly reduced the tax advantages previously enjoyed by the owners of enveloped UK properties. For each of the four taxes mentioned above, we describe in Appendix A1 the advantages of indirect ownership and give information on how they evolved since 2012.
Presentation of the dataset. In order to capture these offshore real estate investments, we exploit public data released by the British Land Registry. The Land Registry records all real estate purchases made in England and Wales by foreign companies in the Overseas Companies Ownership Dataset (OCOD). 8 The registry compiles information on the time and location of the purchase, the price paid (when available), the tenure (Freehold or Leasehold)9 and on the purchasing company (name, country of incorporation, address).
It is exhaustive, as companies are required to lodge their purchase with the Land Registry.
The OCOD suffers from two main limitations. First, when one company buys several properties at the same time, the bundle of properties is frequently recorded in the registry as one unique transaction. In these cases, we recover from the addresses the number and the location of properties that have been bought by the same company. To give an example, one address in our sample is: "24 and 26 Brompton Road and 15, 16 and 17, Knightbridgegreen, London". Here, and in similar cases, we split this observation into five distinct transactions, corresponding respectively to "24 Brompton Road, London", "26 Brompton Road, London", "15 Knightsbridegreen, London", "16 Knightsbridegreen, London" and "17 Knightsbridegreen, London".
Then, we divide the price indicated for the whole transaction by the number of properties bought at once, in this case 5. Almost 30,000 properties in our sample are sold in bundles, which represents around 20% of all the transactions.
The second limitation of our dataset is that the purchase price is only specified for 36% of the transactions. Therefore, we predict missing prices using the sample of transactions where the price is available. We use the characteristics of the purchase (type of property, location, date of the purchase, etc.) to infer the price paid, estimating an OLS model with 5-fold cross-validation. We detail our inference method in Appendix section A2. Appendix Figure 9 displays the distribution of out-of-sample predicted prices and the distribution of observed prices. They are very similar, indicating that our model closely matches the true distribution of prices. Appendix Figure 10 plots the out-of-sample predicted prices against observed prices. It confirms that our model does not systematically overestimate or underestimate prices, as observations are symmetrically distributed around the 45 degree line.
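In simplified form, the imputation can be thought of as a linear prediction of the price from observable purchase characteristics; the exact specification and covariate set are those detailed in Appendix A2, so the expression below is only an illustrative sketch (notation ours):

$$\widehat{p}_j = X_j'\hat{\beta}, \qquad \hat{\beta} = \arg\min_{\beta} \sum_{i\,:\,p_i \text{ observed}} \big(p_i - X_i'\beta\big)^2,$$

where $X_i$ collects the purchase characteristics (type of property, location, date of purchase, etc.) and the 5-fold cross-validation is used to assess the out-of-sample accuracy of the fitted model before it is applied to the transactions with missing prices.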
Descriptive statistics. The first transaction in the registry dates back to 1959, but most of the purchases take place from 2000. The registry records more than 143,000 transactions over the period 2000-2020. Panel A of figure 1 shows the evolution of the number of transactions in the dataset and their total value from 2000 to 2019. 10 The number of purchases increased slowly from 2000 to 2011, with a maximum of more than 6,000 transactions in 2007. There is a first peak of purchases in 2012 with approximately 12,000 properties bought that year. The number of purchases steadily increases in the following years and reaches almost 14,000 in 2015 before starting to decline. The aggregate value of real estate transactions follows a roughly similar evolution.11 Panel B shows that the value of purchases made by foreign companies grows steadily, from more than £2 billion in 2000 to £10 billion in 2013. It jumps to £18 billion in 2014 and reaches a peak of £22 billion in 2015. In total, the OCOD records transactions for a value of more than £190 billion.
Table 1 displays the average characteristics of the transactions in our dataset. The average price in our dataset is £1.38 million (£2.31 when only taking into account transactions with non-missing prices). There is an equal number of Freeholds and Leaseholds, and about 43% of the transactions take place in London.
However, the OCOD does not provide a lot of details about the properties bought by foreign companies.
Using the addresses of the properties purchased, we are able to match about 43% of the transactions in the OCOD to the Energy Performance Certificates (EPC) data. 12 The EPC compiles information on the type of property, its size and the number of rooms, obtained when a property is put on the market and its owners have to proceed to an energy assessment. Table 1 shows that among the residential properties that we match to the EPCs data, the average size is 104 square meters and the average number of rooms is above 4. The average size of the commercial properties reaches almost 2000 square meters, which is consistent with a much higher average price. The calculations are made after we cleaned the data according to the process detailed in section 2.1, and imputed the prices as described in appendix section A2. Prices are corrected using the UK House Price Index (HPI) computed by the Land Registry. We apply the UK HPI to all transactions in our dataset, regardless of their location.
Table 1: Characteristics of the transactions in the OCOD
Notes: This table displays the characteristics of the transactions in the OCOD. Column "All -full" provides information on the full dataset. Columns "All -no prices" and "All -with prices" show the average characteristics of properties for which the price is missing or indicated, respectively. We are able to match 35% of the bought properties to the Domestic Energy Performance Certificates (EPCs) data for residential real estate and about 8% to the Non-Domestic EPCs data for commercial properties. The EPCs dataset provides more detailed information on the property characteristics. More information on the EPCs dataset is provided in Appendix section A2. Columns "Matched to EPC -residential" and "Matched to EPC -commercial" show the average characteristics of the sample of properties matched to the domestic and non-domestic EPCs, respectively. The average price of the properties with non-missing price is £2.31 million. After inferring the missing prices, the average price in our dataset is £1.38 million. The row "Expensive London" gives the proportion of properties located in Westminster, Kensington and Chelsea, the City of London or Camden.
About 43% of the purchases take place in London. Figure 2 shows the location of the transactions across London boroughs. We divide the 33 boroughs of the city into five quintiles, from the boroughs where foreign companies make the fewest purchases (quintile 1) to the boroughs where they make the most (quintile 5). To give a sense of the importance of foreign companies in the British real estate market, appendix figure 13 in section A3 shows the evolution of the proportion of the total number of transactions (panel A) and of the total investments (panel B) in the English and Welsh real estate market coming from foreign companies.
Purchases made by foreign firms represent between 0.3% and 1.3% of the total number of real estate transactions over the 2005-2019 period (statistics for the whole UK real estate market are only available from 2005). The amounts involved in these transactions lie between 1% and 6% of the total value of the English and Welsh property market over the same period, with an important increase in 2014. This suggests that the properties bought are on average more expensive. We confirm this observation in Figure 3. More than 90% of the transactions in the registry are made by companies incorporated in tax havens. 14 Table 2 shows the top-five buyers in volume and value of purchases, separately for havens and non-havens.
Four of the five main buyers in the haven group are the havens with the strongest links to the UK: the Channel Islands (Jersey and Guernsey), the British Virgin Islands (BVI) and the Isle of Man. Companies incorporated in Jersey are the most popular vehicle of investment, and purchased more than 30,000 properties for a total value of more than £54 billion over the period 2000-2020. Purchases from non havens are mainly coming from European countries (the Netherlands,15 Germany, Sweden, France) and the United States.
Beneficial ownership data
Presentation of the dataset. The Overseas Companies Ownership Dataset provides information on the country of incorporation of companies investing in UK real estate but not on the residence of their ultimate owners. These two variables are likely going to be identical for legitimate businesses; but most shell companies incorporated in tax havens belong to foreign citizens and have no real activity in the country in which they are registered (Kristo and Thirion, 2018).
To recover information on the country of residence of the owners of companies buying in the UK, we exploit several files leaked from offshore service providers over the period 2013-2021: the Offshore Leaks, the Bahamas Leaks, the Paradise Papers, the Panama Papers and the Pandora Papers. These documents from law firms and offshore financial institutions provide data on the beneficial owners of thousands of shell companies they created or managed for their clients. Taken together, they shed light on the structure and activities of more than half a million offshore entities created between 1865 and 2018. The files have been analyzed by the International Consortium of Investigative Journalists, who published the name, address and countries of the entities' owners.
We also use the recently unveiled OpenLux data. OpenLux is not a leaked dataset, but the result of an investigation led by the French newspaper Le Monde, which scraped Luxembourg's companies registry and gathered information on more than 260,000 entities. When the country made its register of beneficial ownership public in September 2019, the journalists were able to access the details of more than 70,000 company owners.
Finally, we exploit data leaked in 2019 from the Cayman National Bank and Trust in the Isle of Man (CNBIOM), an Isle of Man subsidiary of a financial services provider based in the Cayman Islands. The CNBIOM dataset provides precise beneficial ownership information for more than 1,400 companies, most of them incorporated in the Isle of Man. 16

Descriptive statistics. Table 3 details the characteristics of each of the foreign ownership datasets we use.
The first company to appear in the leaks was incorporated in 1865, while we also have entities created as recently as 2020. In total, we have an insight into the organization of more than one million companies and into the holdings of more than 500,000 identified beneficial owners;17 more than half of these records come from the Panama Papers and the Paradise Papers.
Figure 4: Most frequent tax havens used to incorporate companies, by region of the beneficial owner(s)
Notes: This figure shows the most frequent tax havens used to incorporate companies, by region of the beneficial owner(s). It is constructed using beneficial ownership data and company data from the Bahamas Leaks, the Offshore Leaks, the Panama Papers, the Paradise Papers, OpenLux data and CNBIOM data. We display the percentages of owners from each world region in the leaks on top of the figure. To compute the percentages, we remove beneficial owners who are linked to a tax haven, and beneficial owners who are companies. The total might not add up to 100% because of rounding. Panel A pools all years of our data together, while Panel B shows the evolution of tax haven use from 2000 to 2016. The graph stops in 2016 because one of the main leak we use, the Panama Papers, was released in April 2016. As a consequence, the composition of intermediaries could be mechanically affected after this date.
Panel A of Figure 4 shows where the companies in our offshore ownership datasets are incorporated, by region of residence of the beneficial owner. 18 We display the proportion of UBOs in the leaks from each world region on top of the figures. Several factors could potentially explain this heterogeneity in tax haven use by individuals. First, residents from different countries could have specific preferences over countries in which to incorporate their shell companies, for example because tax havens offer specialized services, or cater to particular segments of the population. Omartian (2017) analyzes the Panama Papers and finds that some jurisdictions seem to be specialized in certain activities; for example, companies owned through bearer shares are often incorporated in Panama. Second, this heterogeneity could be explained by the preferences of the intermediaries used by individuals to create shell companies. These corporate service providers could be more likely to use a specific set of havens to incorporate entities, for example if they rely on their own network of actors and infrastructures built over time for the incorporation process. If, in turn, they attract residents from different parts of the world, this would lead to a heterogeneous pattern of tax haven use depending on the residence country of the individual. 20

Panel B of Figure 4 provides an insight into the dynamics of these patterns of tax haven use. It shows that some tax havens like the British Virgin Islands are less and less used to incorporate companies, while some others, like Malta, gain in importance as incorporation centers.
3 The effect of tax transparency on the demand for offshore real estate
In this section we study how the offshore real estate market in the UK reacted to the launch of automatic exchange of information among OECD countries. We provide evidence that some tax havens are likely to be more impacted by the CRS, because they are used primarily by residents from CRS-adopting countries.
Then, we show that the trend in real estate investments from these tax havens starts to diverge sharply from investments from other tax havens once the CRS is launched.
Methodology
When studying the effect of tax transparency on the UK real estate market, we are faced with two main challenges. First, the UK real estate market is highly globalized and its dynamics depends on many factors (see e.g. Poon (2017) for a review). Therefore, we need a sufficiently sharp and salient shock in tax transparency in order to precisely estimate a potential effect of tax enforcement on the amounts invested in the UK property market. Thus, we consider two sets of events affecting many countries at the same time. The second issue when studying the offshore real estate market in the UK is that we only observe the country of incorporation of the companies purchasing properties, not the country of residence of their owners. This means that we are not able to analyze directly the evolution of real estate purchases of residents from countries adopting the CRS. To circumvent this issue, we exploit the heterogeneity of tax haven use by country of residence that we documented in section 2.2. We exploit the leaks and the OpenLux data to find country patterns of tax haven use, which allows us to identify a group of tax havens that are mostly used by investors from the countries adopting the CRS in 2013 or 2014. More specifically, we compile the country of incorporation and the country of residence of the owners of the companies in the offshore ownership datasets; then for each tax haven, we compute the proportion of owners coming from each country.
For example, we find that 53% of the people creating companies in Jersey are from the United Kingdom, 7% are from South Africa, 4% are from the US, 3% from Israel etc.21
Using these figures, we construct a measure of "CRS exposure", which is equal to the proportion of company beneficial owners coming from countries adopting the CRS in 2013 or 2014. We build two groups of tax havens according to their degree of exposure:
• Highly-exposed tax havens: jurisdictions that have more than 75% of their company beneficial owners coming from early-adopting countries
• Other havens: jurisdictions that have less than 75% of their company beneficial owners coming from early-adopting countries, or tax havens that do not appear in our leaks data as hosts for shell companies.

Figure 14 (appendix section A4) displays the distribution of CRS exposure for the 52 havens in our sample and shows where the top 5 buyers of both groups are located. The group of the most exposed countries includes some of the havens investing the most in the offshore real estate market: Jersey, Guernsey, the Isle of Man and Luxembourg. The British Virgin Islands however, second investor overall both in value and volume of transactions, counts only 62% of owners residing in early-adopting countries and therefore belongs to the less exposed group, along with Gibraltar, Panama and the Cayman Islands. Twenty-three havens have a CRS exposure of zero, either because none of the individuals owning a company there come from early-adopting countries or because they do not appear in our leaks-lux data as incorporation countries.
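In formula form, the exposure measure underlying this grouping can be written as (the notation is ours):

$$\text{Exposure}_h \;=\; \frac{\#\{\text{identified beneficial owners of companies incorporated in haven } h \text{ residing in a country adopting the CRS in 2013 or 2014}\}}{\#\{\text{identified beneficial owners of companies incorporated in haven } h\}},$$

with haven $h$ classified as highly exposed when $\text{Exposure}_h > 0.75$.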
We make the hypothesis that purchases coming from tax havens highly exposed to the CRS are going to react to the early commitment waves of 2013 and 2014. Indeed, some non-reporters from the G20 and the Joint Announcement countries will want to invest part of their offshore wealth in real estate to dodge the new reporting requirements. If they do so using shell companies incorporated in their preferred tax havens, we should see an increase in purchases coming from these countries. On the other hand, we expect investments from less exposed tax havens to react less to the CRS; as they are also used by a lot of individuals from non-committing countries, their real estate investments should be less affected by the policy.
As a result, to test whether the commitment to AEOI among OECD countries had an effect on real estate purchases in the UK, we compare transactions made through highly exposed tax havens to transactions made by other havens, around the two waves of commitment of 2013 and 2014. We use a difference-in-differences setting, with highly-exposed tax havens as the treatment group and the other havens as the control group.
Our identification hypothesis is that both groups' investment trends would have evolved in the same way without the commitment to the CRS. Using less exposed havens as the control group allows us to take into account the evolution of the dynamics of the UK real estate market, as well as the tax changes faced by foreign companies buying properties throughout the period. Note that we estimate a lower-bound of the effect of the CRS, as the control group of less-exposed havens is also used by residents from early-adopting countries.
We estimate the following equation:
$$Y_{iq} = \sum_{j \neq 2013q2} \beta_j \cdot Quarter_{j=q} \cdot Treat_i + \gamma_i + \eta_q + v_{iq} \qquad (1)$$
where $Y_{iq}$ denotes the amount in million pounds invested in real estate by country $i$ in quarter $q$ (as a 3-quarter moving average), $Quarter_{j=q}$ is a quarter dummy, $Treat_i$ is a dummy equal to 1 when country $i$ is a highly-exposed haven, $\gamma_i$ is a country fixed effect, $\eta_q$ a quarter fixed effect, and $v_{iq}$ the error term. We have a balanced panel of 52 tax havens.
The difference-in-differences coefficient $\beta_q$ captures the effect of the AEOI events in quarter $q$ relative to the pre-commitment period, the second quarter of 2013. A coefficient $\beta_q$ equal to 100 means that on average, the difference between a highly-exposed haven's and another haven's real estate investments in quarter $q$ exceeds the investment difference in 2013q2 by £100 million.
Results
Before moving to the results of the formal difference-in-differences analysis, we show in Figure 5 the aggregated value of real estate investments coming from firms incorporated in highly-exposed havens and firms incorporated in the other havens. The flows of investments follow each other closely during a very long period spanning from 2000 to mid-2013, with strikingly similar levels of investments in both groups during the whole period. The two trends start to diverge sharply in the third quarter of 2013, right when the G20 countries commit to AEOI; we observe a large jump in real estate investments from highly-exposed havens that is not matched by investments from the other havens. After September 2013, the difference in the value of purchases made by highly-exposed havens and the other havens surges and reaches on average about £150 million until the end of 2017.
We observe a first increase in investments in the treatment group compared to the control group in the third quarter of 2013, and a second increase after the second quarter of 2014; this corresponds to the two major steps taken by early-adopting countries toward the implementation of AEOI.
In Appendix figure 15 (section A5), we replicate our analysis using other real estate market outcomes as our dependent variable. We study the evolution of the overall number of transactions as well as the number of transactions above £1 million, £2M, £3M, £4M and £5M, respectively. For expensive transactions, the graphs are very similar to the one in figure 6, with no significant difference between the highly-exposed havens and the other havens until the CRS is launched, and then a great divergence between the two groups that remains significant. The picture is somewhat different when looking at the total number of transactions. We observe an increasing trend in the volume of purchases from highly exposed tax havens compared to other havens slightly before 2013. This pre-trend is driven by differences in purchases of less expensive properties, as it disappears when restricting the sample to purchases of more than £1M.
To sum up our results, we use a simple static difference-in-differences model with a continuous treatment variable, and present the results in table 4. We estimate the following equation on the sample of tax havens:
$$Y_{iq} = Post_q + Post_q \cdot Exposure_i + \gamma_i + \eta_q + ER_{iq} + v_{iq} \qquad (2)$$
where $Y_{iq}$ is the outcome variable, $Post_q$ a dummy for the post-CRS period (2013q3-2016q4), $Exposure_i$ the measure of exposure to the CRS, $\gamma_i$ a country fixed effect, $\eta_q$ a quarter fixed effect, $ER_{iq}$ the exchange rate of the currency used in country $i$ at quarter $q$, and $v_{iq}$ the error term. A positive coefficient associated with $Post \times Exposure$ would indicate that the more a tax haven is affected by the CRS, the more its real estate investments increase compared to the pre-CRS period. Our cut-off period here is the second half of 2013. 22 We control for exchange rate evolution, which can influence cross-border investments.

Figure 6: Difference-in-differences coefficients, highly-exposed havens versus other havens
Notes: This figure shows the difference-in-differences coefficients comparing quarterly amounts of real estate investments from companies incorporated in highly-exposed havens to investments from companies incorporated in the other havens. The flows are normalized at their value of 2013q2. The estimation is based on the full data provided in the Land Registry OCOD.
In column (1) of Table 4, we look at the value of investments in pounds, and in column (2) at the number of transactions. In both cases, the coefficients associated with $Post \times Exposure$ are large and highly significant.
We find that the average quarterly difference in real estate investments between fully exposed tax havens (Exposure is equal to one i.e., tax havens only used by residents of CRS-adopting countries) and not exposed tax havens (Exposure is equal to zero) is higher by £180 million after the CRS compared to the reference period (2013q1 and 2013q2).
We also look at the effect of the CRS on investments in pounds (column (5)) on the subsample where prices are indicated, i.e. on the subsample for which we did not have to infer prices. The coefficients are very similar in columns (1) and (5), indicating that our results are not driven by the way we infer the missing prices.
Table 4: Static difference-in-differences results
Notes: This table shows the results of the estimation of equation 2. Columns (1) and (2) show the results for our whole sample, respectively for the value and the volume of the purchases. Columns (3) and (4) present the results for the same outcomes, scaled by their average value during the pre-CRS period. Columns (5) and (6) restrict our sample to the transactions for which the price is indicated in the OCOD, for the value of the purchases and the scaled value of the purchases.
Robustness checks
Are some extreme values driving our results? One potential concern with our results stems from the fact that a large part of the purchases made by tax havens is attributable to a few countries only. Therefore, one could fear that our estimates are driven by increased purchases made by a single tax haven. An ideal way to deal with this issue would be to simply use a log-transformation, but there are many zeros in our estimation sample and a log-specification would be misleading. Thus, in columns (3) and ( 4) of table 4 we normalize the outcome variable of each country i by scaling it by its pre-CRS period average value. 23 This standardization ensures our results are not driven by some havens which would be investing heavily throughout the whole period. For each quarter q and country i, we scale the amount invested and the number of transactions in each period q by the average quarterly value for country i between 2005 and 2010. 24 With this specification, the effect remains significant for the amounts invested, but becomes insignificant -though still positive -for the number of transactions. We also look at the effect of the CRS on scaled investments (column ( 6)) on the sub-sample where prices are indicated and the coefficient remains highly significant. Its size increases substantially compared to the coefficient in column (3) because information on prices is more often missing at the beginning of the period -and therefore between 2005 and 2010 -than in later years, which mechanically decreases the denominator of the scaled variable compared to a specification where predicted prices are included.
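Concretely, and assuming the averaging window covers the 24 quarters from 2005q1 to 2010q4 as described above, the scaled outcome used in columns (3) and (4) can be written as (notation ours):

$$\tilde{Y}_{iq} \;=\; \frac{Y_{iq}}{\overline{Y}_i}, \qquad \overline{Y}_i \;=\; \frac{1}{24} \sum_{q' = 2005q1}^{2010q4} Y_{iq'},$$

so that a value of $\tilde{Y}_{iq}$ equal to 2 means that country $i$ invested twice its 2005-2010 quarterly average in quarter $q$.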
To further check that our results are not driven by only one outlier country, we replicate our analysis on 52 sub-samples, excluding successively one different haven in each sub-sample. We also vary our sample of analysis in two major ways, first by excluding the two most important buyers in the treatment group (Jersey and Guernsey) and second by including only those tax havens that are themselves participating to the Joint Announcement in 2014. We present the results in Appendix table 13, where coefficients are obtained from the estimation of a variation of equation 1 using a single Post-CRS dummy to capture the effect of the policy.
In all of these cases, our findings stay qualitatively unchanged, although the size of the coefficient may vary.
Another concern with our specifications in pounds (equations 1 and 2) is that our positive effect could be driven by some extremely expensive properties being bought just after the CRS by highly exposed tax havens. To check this is not the case, we estimate equation 1 but winsorize the price of each property at the 0.1%, the 0.5%, the 1% and the 5% levels (both for the bottom and top tails of the distribution). The graphs obtained from the various winsorization levels are shown in appendix figure 16. The magnitude of the effect decreases with the level of winsorization, which indicates that it is partly driven by very expensive properties. However, the results remain qualitatively unchanged: the 14 years of pre-trends are still insignificant and there starts to be a statistically significant difference between the highly-exposed tax havens and the other havens immediately after September 2013. It indicates that the positive effect of the CRS on real estate investments we estimate does not come from extreme values in property prices.
Robustness of our main results to alternative specifications. Because our results are obtained by forming two groups of tax havens based on a proxy for their attractiveness for residents of CRS-adopting countries, it is straightforward to assess how changing this measure affects our results. A first concern could be that the threshold of 75% of owners from CRS-adopting countries we use to define highly-exposed tax havens drives the difference in investments between the two groups, even though we show in table 4 that our findings still hold when using a continuous measure of exposure. We show in Appendix figure 17 how our results vary when we change this threshold. In every specification, the trends of the purchases of both groups are not significantly different before the second quarter of 2013, at which point they start to diverge sharply.
The difference in the post-CRS period is significant for all thresholds but the 90% one, in which case only 5 countries are in the treatment group while all the main buyers are in the control group.
Another issue could be that the leaks and OpenLux data we exploit to compute heterogeneity in the use of tax havens are not representative of the true distribution of offshore preferences among countries. Indeed, these datasets suffer from several limitations. First, the Offshore Leaks, the Panama Papers, the Pandora Papers and the Paradise Papers data only provide information on people who used specific providers of offshore services to incorporate shell companies. 25 26 If the clients of these providers are not representative of tax haven use in their own countries, we will not capture the true attractiveness of each tax haven for residents from CRS-adopting countries. Second, we identify only a portion of the UBOs of all the companies present in the leaks-lux data. For some companies, we only have access to the identity of the directors, the managers, etc. For others, the listed owners are either other companies or individuals resident in a tax haven, and as such are likely to be nominees instead of the actual owners. To address this potential selection bias, we exploit haven use information computed with data from the Bank for International Settlements (BIS).
The BIS provides information on cross-border bank deposits on a bilateral basis, for 48 countries. Studying tax haven use by CRS-adopting countries with the BIS data draws a very similar picture to the one obtained with the leaks data. The only notable difference in group composition is that Guernsey appears to be less favored by residents from early-adopting countries according to the BIS, and as such moves to the less-exposed havens group. As a result, our key finding of a divergence in investment trends between both groups just after the commitment to the CRS remains unchanged (Appendix figure 18).
Finally, table 13 also shows that our results are robust to varying the sample of countries we consider to be tax havens, whether we use the "consensus list" compiled by Menkhoff and Miethe (2019) of the 29 countries classified as tax havens in all recent studies on tax evasion and international taxation, or the 41 countries from the list of Hines and Rice (1994). The different countries included in these lists are presented in Appendix section A8, table 19.
Missing transactions. An issue in our dataset is that the Land Registry record of overseas companies transactions starts only in November 2015. This means that all properties that are bought and then sold before this date will not appear in our data. It could be an issue if the treatment group (highly-exposed havens) sells properties relatively less frequently than the control group (other havens). In this case, we would miss more purchases from the control group than from the treatment group before 2015. In turn, this would lead us to overestimate the additional investments made by highly-exposed havens, because we would miss relatively more transactions made by the control group. 27 To check this is not the case, we exploit available data from the years 2015-2020. Over this period, we have access to the complete set of purchases and sales of properties made by offshore companies. In particular, we can observe whether a property bought from 2015 onward was sold during the 2015-2020 five-year window.
This allows us to compare the selling behavior of the control and of the treatment group. For the years 2015-2020, we compute the proportion of properties that are bought in year t and then sold one year later, 2 years later, ... and 5 years later. The results are shown in Appendix figure 19. 28 We see that the treatment group almost always sells its properties more often than the control group. The only exception is for properties sold after one year. However, the difference is very narrow, as 12.3% of properties are sold after one year for the control group, and 12.1% for the treatment group. We take this figure as evidence that we are likely to miss more purchases from the treatment group than the control group before November 2015. It means that the databreak would actually lead us to underestimate the effect of the CRS on real estate investments.
Are some countries ejected from the real estate market? Another concern one may have is that the surge in real estate investments following the CRS would lead some countries not affected by the CRS to be "ejected" from the market due to higher prices. Indeed, if higher demand for real estate from the investors affected by the CRS results in property price increases in the UK, this could divert some buyers from this marketespecially if their incentives for buying real estate are unaffected by the CRS. The extent of this effect depends on the price elasticity of real estate assets with respect to the demand. If the elasticity is high enough, real estate investments from individuals not directly hit by AEOI would in reality be affected negatively by the transparency shock. As by definition these investors mainly use tax havens from the control group, i.e. the less-exposed jurisdictions, real estate investments from this group would be negatively impacted, leading to biased estimates of the CRS effect. To assess whether some buyers were effectively ejected from the UK property market following the CRS, we simply compute for the highly-exposed havens and for the other havens the number of countries making at least one purchase during the year, for each year of our period of analysis. We plot the results in Appendix figure 21 (section A5). If buyers from the less-exposed group were massively ejected from the UK property market after the CRS, we would expect the number of countries in this group making at least one purchase a year to decrease from 2014. Reassuringly for our identification, we see on the contrary that the curve for the "other havens" group remains relatively constant throughout the whole period.
Direct evidence of asset shifting
We have shown in the previous section that real estate investments from companies incorporated in the tax havens most exposed to the CRS increased significantly once the policy was announced. One limitation is that we do not directly observe who the ultimate investors behind the corporate vehicles are, so the evidence of responses to the CRS we provide is only indirect. We therefore match the OCOD data to leaked corporate registries in order to identify the nationality of a number of company owners appearing in the property transaction records. In this section, we first describe our matching process. Second, we provide descriptive evidence on where investors buying UK real estate through offshore shell companies come from. Third, we combine our data on indirect ownership with alternative sources containing information on direct ownership of UK offshore property, in an attempt to give for the first time a comprehensive picture of who owns offshore real estate in England and Wales, and how much is owned from abroad. Fourth, we check that the responses to the CRS documented in the previous section come from individuals effectively affected by the policy. For that purpose, we show that investors from early-adopting countries increase their real estate investments substantially more after the CRS than investors from non-adopting countries.
Identifying who hides behind the shell companies buying real estate in the UK
To identify the nationality of the ultimate beneficial owner(s) (UBO) of companies buying properties in the UK, we match the property transactions data to beneficial ownership data -the Leaks, the OpenLux and the CNBIOM data presented earlier. We proceed in several steps:
1. We use standardized company names to match the beneficial ownership data to the OCOD.
2. We keep only companies for which the country of incorporation is the same in both datasets.
3. We keep companies that were active at the time of the purchase.
4. If a matched company appears to be owned by another company instead of a real person, we go one layer further and look for the owners of this second company in the leaks data. We repeat the operation four times in order to identify the "true" UBO of as many matched companies as possible.
5. We drop the "true" UBOs that still appear to be companies.
6. We drop UBOs listed as residents of tax haven countries.
7. We allocate the shares of the companies identified. If a matched company is owned by n identified owners, we allocate a 1/n share to each UBO. A schematic sketch of this matching procedure is given below.

Notes: This table shows the number of companies, of transactions and of ultimate beneficial owners we identify in the OCOD, by data source. Note that we do not identify any beneficial owners using the Bahamas Leaks, hence we do not show it in the data sources.
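The sketch below illustrates steps 4-7 (the ownership-chain traversal and the equal-split allocation), assuming steps 1-3 (name standardization, same incorporation country, activity at purchase) have already been applied upstream. All file and column names (ocod_matched.csv, ownership_links.csv, haven_list.csv, entity_key, owner_is_company, etc.) are hypothetical stand-ins for the matched data described in the text.

```python
import pandas as pd

# Hypothetical inputs: the OCOD purchases already matched on standardized names,
# incorporation country and activity dates (steps 1-3), plus a stacked ownership
# table from the leaks / OpenLux / CNBIOM data with one row per (entity, owner)
# link and a flag indicating whether the owner is itself a company.
ocod = pd.read_csv("ocod_matched.csv")                   # entity_key, price, ...
links = pd.read_csv("ownership_links.csv")               # entity_key, owner_key,
                                                         # owner_is_company, owner_country
TAX_HAVENS = set(pd.read_csv("haven_list.csv")["country"])

def ultimate_owners(entity_key, depth=0):
    """Walk the ownership chain up to four additional layers (steps 4-5)."""
    out = []
    for _, o in links[links["entity_key"] == entity_key].iterrows():
        if not o["owner_is_company"]:
            out.append(o)                                # a natural person: keep
        elif depth < 4:
            out.extend(ultimate_owners(o["owner_key"], depth + 1))
        # owners still appearing as companies after four layers are dropped
    return out

records = []
for _, t in ocod.iterrows():
    ubos = [o for o in ultimate_owners(t["entity_key"])
            if o["owner_country"] not in TAX_HAVENS]     # step 6
    for o in ubos:                                       # step 7: equal split
        records.append({"country": o["owner_country"],
                        "share": 1 / len(ubos),
                        "value": t["price"] / len(ubos)})

by_country = pd.DataFrame(records).groupby("country")[["share", "value"]].sum()
```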
Descriptive facts about offshore real estate ownership in the UK
Where do the buyers come from? Figure 7 shows where the offshore real estate investors come from, ranked by total amount invested over the 2000-2020 period. As mentioned above, we take into account the fact that the ownership of some companies is split between several owners (often with the same family name).
When this is the case, we divide the value of the property bought by the number of individuals owning the company making the purchase. For instance, if a £1,000,000 property is bought by a company owned jointly by a French resident and a Canadian resident, we will attribute 1/2 transaction to France and 1/2 to Canada, and the amount invested from each country will be £500,000.

Notes (Figure 7): This figure shows the value and the volume of real estate investments in England and Wales made through an offshore company, by country of residence of the ultimate beneficial owner. The data comes from the OCOD sample matched with the Offshore Leaks, the Bahamas Leaks, the Paradise Papers, the Panama Papers, the Pandora Papers, the CNBIOM and the OpenLux data. Countries written in green take part in the May 2014 Joint Announcement while countries written in black do not.
Interestingly, residents of the United Kingdom constitute the first group of buyers through offshore companies in our identified sample. Between 2000 and 2020, they bought more than 1,000 properties for a total of almost £1.5 billion, an average price per purchase of about £1.4 million. While this is suggestive of a "home bias" in individuals' investment decisions (Coeurdacier and Rey, 2013), one would not expect to find so many UK residents buying UK real estate via an offshore entity in the absence of tax planning or secrecy motives for such investment schemes. A UK resident may own a UK property through an offshore entity both for "legal" tax avoidance and for illegal motives. For "non-domiciled" residents, who are not taxed on their non-remitted foreign income, enveloping UK properties in an offshore entity may be particularly attractive, as it is a way to exclude rental income or capital gains from their personal income tax base (see appendix section A1 for more details on the legislation). Investors from the Middle East, first and foremost the United Arab Emirates, represent another important share of the identified ultimate investors. Residents of these countries face very low, sometimes zero, effective tax rates on their income and wealth; this suggests that these investors are using a shell company to channel their purchases for secrecy motives rather than to lower their tax liabilities. Indeed, for high net worth individuals living in a politically unstable country, avoidance of political reprisals is sometimes cited as a major consideration in the use of tax havens (Harrington, 2016).
Where do they buy? Appendix figure 22 (section A6) displays maps of London showing the most favored London neighborhoods by investors' country of residence, for the top 5 buyers and for Russia. First, we see in these figures that two boroughs, Westminster and Kensington and Chelsea, are the top location of real estate investments for all main groups of buyers, except Israeli residents. More generally, the North-West of the city seems to be favored by buyers of every nationality. However, there is still some heterogeneity in location choices according to the country of residence. When comparing country maps to the location of all transactions in our London sample (figure 2), we see for example that citizens of the United States buy relatively more properties in the South-West and in the North-East, while United Kingdom residents are more likely to purchase properties located in the South-East borough of Bromley.
What are the intermediaries involved? Appendix figure 23 replicates our analysis of tax haven use by region of the world, restricting our sample to companies we observe buying real estate in the UK. Comparing this figure to Figure 4, the first striking element is the relative importance of companies incorporated in the British Virgin Islands when we focus on entities buying real estate in the UK. This is partly explained by the fact that, among the havens popular for holding real estate in the UK, the British Virgin Islands is relatively well represented in the tax-related leaks data while Jersey, Guernsey, Luxembourg and the Isle of Man are not. Appendix figure 23 also displays the proportion of owners identified in our sample by world region.
Comparing these proportions to the ones displayed in Figure 4 allows us to identify regions which are relatively more represented in the identified sample. There are two main reasons why some regions could be over-represented in the identified sample. First, residents from these regions could have a strong propensity to choose the British Virgin Islands to incorporate companies. This is because we identify a relatively high proportion of companies from the BVI. Second, residents from these regions could have specific preferences for offshore assets: if a group of countries accounts for a higher proportion of the identified owners in the Land Registry data than in the leaks, this suggests that they hold relatively more UK real estate in their offshore portfolio than the rest of the individuals in the leaks. This is the case for residents of the United Kingdom, who represent 12% of the UBOs in the full leaks and OpenLux data but 24% of the identified UBOs in our sample. This makes sense, as we expect UK residents to be more likely to hold UK assets in their offshore portfolio. However, this is the case for other groups of countries as well: the Arabian Peninsula (3% of UBOs in leaks data, but 22% in the identified sample), North America (12% of UBOs in leaks data, but 17% in the identified sample) and Africa (3% of UBOs in leaks data, but 10% in the identified sample).
Property purchases through shares of companies. If an investor, individual or company, buys a residential property in the UK, the Stamp Duty Land Tax (SDLT) is charged on the whole amount of the purchase. The rate is progressive and reaches 15% for purchases above £500K made by corporate bodies. A way to avoid the SDLT is to purchase the property through a corporate structure and to buy the shares of the company instead of the property itself. This investment scheme is well known but the size of the phenomenon remains uncertain. Our leaked corporate records allow us to identify some transactions of this kind. For almost 60% of the companies buying real estate for which we can identify the beneficial owner, we have information on the exact dates of beneficial ownership. Thus, if person 1 appears as UBO of company A until March 2015 and person 2 starts to be UBO of the same company from March 2015 onward, we consider that person 1 sold company A to person 2 in March 2015. Out of the 1,838 companies buying real estate we are able to match, 107 seem to have been sold to other owners after the property was bought. This represents about 10% of the 60% of companies for which we have information on the dates of beneficial ownership, which indicates that this phenomenon is substantial. In total, 222 properties in our matched sample appear to be purchased through the shares of the holding company rather than directly. Appendix table 16 (section A7) displays, for the top 10 countries using this tax loophole, the amounts invested as well as the number of transactions.
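A rough flag for such share deals can be built from the beneficial-ownership spells, as sketched below. The input file and columns (ubo_spells.csv, ubo_start, ubo_end, purchase_date, company_id) are hypothetical, and the flag is only a simplified version of the rule described in the text.

```python
import pandas as pd

# Hypothetical layout: one row per (company, UBO) ownership spell, with the start
# and end dates of the spell and the date at which the company bought the property.
spells = pd.read_csv("ubo_spells.csv",
                     parse_dates=["ubo_start", "ubo_end", "purchase_date"])

def sold_via_shares(g: pd.DataFrame) -> bool:
    """Rough flag: after the purchase, a new UBO spell begins while an earlier one ends."""
    bought = g["purchase_date"].iloc[0]
    new_owner_arrives = (g["ubo_start"] > bought).any()
    old_owner_exits = (g["ubo_end"].notna() & (g["ubo_end"] > bought)).any()
    return bool(new_owner_arrives and old_owner_exits)

flags = spells.groupby("company_id").apply(sold_via_shares)
print(f"{int(flags.sum())} companies appear to change hands after the property purchase")
```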
Combining direct and indirect ownership of offshore real estate.
We combine our figures on country-by-country ownership of real estate through foreign shell companies with new data on direct ownership of UK properties by overseas individuals (recently published by the Centre for Public Data, CFPD), in an attempt to give for the first time a comprehensive picture of who owns offshore real estate in England and Wales. The estimation is based on January 2018 data. First, we estimate how much UK real estate is held through shell companies in January 2018 in total. For that purpose, we match the stock of properties listed in the OCOD register in January 2018 to our tax-related data leaks following the same method as the one presented in section 4.1. In the matching process, we recover information on the country of residence of the owners of 3.31% of the properties held through shell companies at that date, which represents 4.28% of the total in value. Then, we multiply the total value owned by each country in the identified sample by 1/0.0428 in order to allocate all properties appearing in the OCOD in January 2018 to one country. This gives us an estimate of how much UK real estate held through foreign shell companies is ultimately owned by residents of each country. The assumption we make here is that the distribution of ownership across countries in the matched sample is similar to the distribution in the full stock of properties owned by foreign companies in January 2018.
Second, we estimate the amount of UK real estate that is directly held from abroad, meaning that the buyers purchase the properties in their own name rather than through an anonymous shell company. The data on direct ownership we use here have been obtained through FoI requests to HM Land Registry and gathered by researchers from the CFPD. They give information on the number of property titles owned by individuals with an overseas correspondence address, every two years between January 2010 and August 2021. Since there is no information on the value of the owned properties, we estimate the price of each property based on its location (district) and the average price of residential properties bought in that district in January 2018, using Office for National Statistics data.
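The combination of the two components (the grossed-up indirect stock and the district-price valuation of direct titles) can be sketched as follows; all file and column names are hypothetical placeholders for the sources cited above.

```python
import pandas as pd

# Indirect ownership: gross up the matched January 2018 stock so that the 4.28%
# of value we identify represents the full stock held through shell companies.
MATCHED_VALUE_SHARE = 0.0428
indirect = pd.read_csv("identified_stock_2018.csv")        # country, value
indirect_total = indirect.set_index("country")["value"] / MATCHED_VALUE_SHARE

# Direct ownership: titles per (country, district) from the CFPD data, valued at
# the average January 2018 residential price of the district (ONS data).
titles = pd.read_csv("cfpd_titles_2018.csv")               # country, district, n_titles
prices = pd.read_csv("ons_avg_price_2018.csv")             # district, avg_price
direct_total = (titles.merge(prices, on="district")
                      .assign(value=lambda d: d["n_titles"] * d["avg_price"])
                      .groupby("country")["value"].sum())

total_by_country = indirect_total.add(direct_total, fill_value=0).sort_values(ascending=False)
```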
Appendix table 17 shows the value of real estate wealth held directly and indirectly in January 2018, for all buyers in the world with nonzero ownership, excluding tax havens. We give estimates for England and Wales as a whole and for London in particular. In total, in January 2018, we estimate that offshore real estate in England and Wales amounts to £219 billion, and to £142 billion for London alone. The biggest owner is the United Arab Emirates with £26.11 billion, followed by the United States with £11.28 billion. Overall, the Arabian Peninsula is very well represented, with Saudi Arabia, Qatar, Kuwait, Oman and Bahrain all belonging to the top buyers.
CRS and real estate investments
Observing the country of residence of roughly 3,000 individuals buying real estate in England and Wales through an offshore company, we can check whether the increase in real estate investments in the UK following the adoption of Automatic Exchange of Information is due to investors effectively affected by the increase in tax transparency, i.e. residents of CRS-adopting countries. We focus again on the G20 support for global AEOI (September 2013) and on the commitment announcements of March and May 2014. The most important buyers in each group of countries are displayed in Figure 7, where early adopters appear in green and the others in black.
In our analysis, we split our sample of early adopters into one sample of G20 countries (excluding the United States, which followed a distinct path toward AEOI adoption with the implementation of its own policy, FATCA, and Russia, which did not commit to AEOI in 2014) and a second sample of countries committing to the CRS in 2014 that do not belong to the G20. We then compare real estate investments from each CRS-adopting group (treated group) to investments from countries that do not commit to information exchange in 2013 or 2014 (control group). We use equation 1 and estimate two distinct difference-in-differences equations, one for each treatment group. Since our matched sample is small, we consider a specification where property prices are winsorized at the 0.5% level in order to prevent some outlier transactions from having too much influence on our results. We show later that our results are robust to other winsorization levels or to no winsorization at all. Again, our identification hypothesis is that both groups' investment trends would have evolved in the same way without the launch of the CRS. Our observation unit is the quarter-country level, investment is expressed in 3-quarter moving averages and our panel is balanced.
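A minimal sketch of this specification is given below; it reproduces the winsorization, the country-quarter aggregation, the 3-quarter moving average and treated-by-quarter interactions, but not the exact controls of equation 1. The input files and the list of treated countries are hypothetical placeholders.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical inputs: matched transactions and the list of treated countries.
tx = pd.read_csv("matched_transactions.csv")               # country, quarter, price
treated_countries = set(pd.read_csv("g20_early_adopters.csv")["country"])

# Winsorize prices at the 0.5% / 99.5% levels before aggregating
lo, hi = tx["price"].quantile([0.005, 0.995])
tx["price_w"] = tx["price"].clip(lower=lo, upper=hi)

panel = (tx.groupby(["country", "quarter"], as_index=False)["price_w"].sum()
           .rename(columns={"price_w": "investment"}))
panel = panel.sort_values(["country", "quarter"])
panel["investment_ma"] = (panel.groupby("country")["investment"]
                               .transform(lambda s: s.rolling(3, min_periods=1).mean()))
panel["treated"] = panel["country"].isin(treated_countries).astype(int)

# Event-study difference-in-differences: treated x quarter interactions plus
# country and quarter fixed effects, with standard errors clustered by country.
model = smf.ols("investment_ma ~ C(quarter):treated + C(quarter) + C(country)",
                data=panel).fit(cov_type="cluster", cov_kwds={"groups": panel["country"]})
print(model.params.filter(like=":treated"))
```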
Results are presented in Figure 8. First, we confirm that the increase in property purchases from the highly-exposed havens documented in the previous section is indeed due to individuals affected by the CRS. Indeed, while there is no statistically significant difference in real estate investments between treated and control groups between 2005 and 2013, the difference becomes significant immediately after the commitment to the CRS of G20 countries for the G20 sample (Panel A) and just after the second quarter of 2014 for the other early adopters (Panel B). This is what we would expect, as individuals from non-G20 countries should not react to the commitment to AEOI of G20 countries. For this latter group, the event of September 2013 can thus be considered a "placebo event". Second, Figure 8 suggests that most of the increase in investments in the UK offshore real estate market comes from individuals from G20 countries. We see that the increase in investments from individuals from G20 countries is significant and long-lasting, while it is more modest and less persistent for individuals from non-G20 countries. However, these differences are to be interpreted with caution due to the small size of our identified sample.
Results from a static difference-in-differences estimation, where we estimate a variation of equation 1 using a single Post-CRS dummy to capture the effect of the policy, are presented in Table 6. The coefficients associated with "Post x Treated" capture the effect of the CRS on quarterly real estate investments, evaluated over the 2013q3-2016q4 period, for the treated group relative to the control group. The reference period is the first semester of 2013. Importantly, while the size of the coefficients decreases as we increase the level of winsorization, they remain highly significant, which indicates that the positive effect of the CRS we measure is not due to extreme values in property prices.
Notes (Figure 8): This figure shows the difference-in-differences coefficients comparing investments from countries adopting the CRS to non-adopters, for each quarter. In Panel A, we restrict the sample of countries adopting the CRS to G20 countries. In Panel B, we restrict this sample to non-G20 countries. The flows of investments are normalized at their value of 2013q2. Property prices are winsorized at the 0.5% and 99.5% levels.

One concern in our analysis could be that investors decide to buy real estate in the UK from 2013-2014 onward for reasons other than tax purposes. The residents of some of the treated countries (e.g. Saudi Arabia) face low effective tax rates on income and wealth, and a real estate response to the CRS in that case would suggest that tax evaders are not the only ones affected by the transparency policy. To test this assumption, we build two groups of countries according to their top marginal income tax rate in 2014. We classify in the "low-tax group" the countries with a top marginal income tax rate below the median and in the "high-tax group" the countries above the median. Column (7) of Table 6 shows the results of the estimation of equation 1 with a single Post-period dummy for the subsample of countries for which we have information on top marginal income tax rates. Column (8) displays the coefficients associated with the Post-CRS period for low-tax and high-tax countries. Only the coefficient associated with high-tax countries remains significant, which suggests that the tax-evasion motive behind the responses to the CRS is likely to be the most important one. Note that the coefficients associated with "Post" are positive and significant in columns 3, 4 and 5. This indicates that when we reduce the influence of extreme values through winsorization, as we do in our preferred specification, we find that countries not affected by the CRS also invest more in real estate after 2013 relative to the pre-CRS period. This can be rationalized by investors anticipating high inflation in the UK housing market and therefore investing in real estate in order to make capital gains.
[Table 6: static difference-in-differences estimates]
Notes: This table shows the coefficients estimated from a difference-in-differences equation with a single "Post-event" dummy. Coefficients associated with "Post" capture average quarterly real estate investments in the post-CRS period (2013q3-2016q4) relative to the first semester of 2013. Coefficients associated with "Post x Treated" capture the difference in the increase in real estate purchases between the treated and control groups in the post-CRS period. Columns (3) and (4) restrict the sample to countries for which information on top marginal income tax rates is available. Column (4) estimates the CRS effect for low-tax and high-tax treated countries separately.
The increase in real estate investments after 2013 is particularly strong for some countries that do not commit to the CRS, like Qatar or Oman.
Estimating the global shifting effect
To give a sense of the amount of financial wealth that was shifted to real estate globally as a result of the CRS, we start by quantifying the effect of the CRS on real estate investments in the UK. Then, based on the relative size of the UK cross-border real estate market globally, we scale up our UK estimates to obtain a global effect.
Finally, we compare this number to the effect the CRS had on financial assets, as estimated in the literature.
Effect of the CRS in the UK. To estimate the total effect of AEOI on investments in the UK real estate market, we compare the aggregated value of purchases made by highly-exposed havens and by the other havens. As trends in real estate investments follow each other closely in these two groups of countries before the commitment to the CRS, we aggregate investments within each group of havens and normalize each series to 1 during a reference pre-CRS period. Appendix Figure 24 (section A8) shows the aggregated series for both groups, normalized in 2013q2. Then, for each quarter q, we compute the effect of the CRS, $\delta_q$, as:
$$\delta_q = \frac{Y^{\text{highly}}_q}{Y^{\text{highly}}_{q_0}} - \frac{Y^{\text{other}}_q}{Y^{\text{other}}_{q_0}}$$
with $Y^{\text{highly}}_q$ the investments coming from companies incorporated in highly-exposed havens in quarter q, $Y^{\text{other}}_q$ the investments from the other havens in quarter q, and $Y^{j}_{q_0}$ the investments of each group j during the reference period. To get the effect of the CRS in pounds, $\Delta_q$, we multiply our estimates by the amount invested by the highly-exposed group in the reference period:
$$\Delta_q = \delta_q \cdot Y^{\text{highly}}_{q_0}$$
We estimate the total effect of AEOI on UK real estate investments over the 2013-2016 period by summing $\Delta_q$ over the post-CRS period (2013q3-2016q4). We provide a range of estimates by using three different reference periods, in order to prevent our results from being dependent on a quarter-specific investment value from either group. Thus, we take $q_0$ = 2012q4, $q_0$ = 2013q1 or $q_0$ = 2013q2.

Notes: This table shows the estimates obtained when computing the effect of the CRS as a simple difference between the aggregated investments of highly-exposed havens and of the other havens, using three different reference periods. These estimates are for the period 2013q3-2016q4.
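A minimal implementation of this computation, applied to the two aggregated series plotted in appendix figure 24, could look as follows; the input file and column names are hypothetical.

```python
import pandas as pd

# Hypothetical input: aggregated quarterly purchase values by haven group,
# indexed by quarter strings of the form "2013q2".
agg = pd.read_csv("aggregate_by_group.csv", index_col="quarter")   # columns: highly, other

def crs_effect_in_pounds(agg: pd.DataFrame, q0: str) -> pd.Series:
    # delta_q = Y_highly_q / Y_highly_q0 - Y_other_q / Y_other_q0
    delta = agg["highly"] / agg.loc[q0, "highly"] - agg["other"] / agg.loc[q0, "other"]
    # Delta_q: rescale by the treated group's investment in the reference quarter
    return delta * agg.loc[q0, "highly"]

post = [q for q in agg.index if "2013q3" <= q <= "2016q4"]
for q0 in ["2012q4", "2013q1", "2013q2"]:
    total = crs_effect_in_pounds(agg, q0).loc[post].sum()
    print(f"reference {q0}: total effect over 2013q3-2016q4 = £{total:,.0f}")
```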
Effect of the CRS on offshore real estate globally. How can we estimate the effect the CRS had on the real estate market at the global level? According to figures from an international real estate broker, the UK represented about 20% of the value of global cross-border real estate transactions in 2016 ("Investment flows around the globe: cross-border property transactions in 2016", https://tranio.com/articles/investment-flows-around-the-globe-cross-border-property-transactions-in-2016_5321/, retrieved on 27/08/2021). A simple back-of-the-envelope calculation then implies that the effect of the implementation of the CRS on the global real estate market lies between £82 and £94 billion. This represents about 1.5% of the total stock of offshore wealth held by households in all tax havens in 2015 (Alstadsaeter et al., 2018). Note that our sample of transactions only covers England and Wales, not the whole UK. Based on the total number of transactions in the UK reported in the UK Property Statistics (compiled by HM Revenue & Customs), we find that for the year 2016, 90% of real estate transactions in the UK took place either in England or in Wales. This would imply in turn that England and Wales account for about 18% of global cross-border real estate transactions. However, we make the hypothesis that England, and more particularly London, is the destination of choice for a large part of foreign real estate investments in the UK. As such, we use the 20% figure rather than the 18% one, which leads to a more conservative estimate of the global real estate effect of the CRS.
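Schematically, and under the assumption that the roughly 20% UK share of the global cross-border market is stable over the post-CRS window, the scaling amounts to:

$$\Delta^{\text{global}} \;\approx\; \frac{1}{s_{\text{UK}}} \sum_{q=2013q3}^{2016q4} \Delta_q, \qquad s_{\text{UK}} \approx 0.20,$$

so that the UK effect of Table 7 (roughly £16-19 billion) scales up to the £82-94 billion range reported here.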
Effect of the CRS on offshore financial wealth
To estimate the extent of asset shifting resulting from the CRS, we need to compare additional real estate investments caused by the CRS to the decrease in financial assets it induced. We evaluate the financial effect of the CRS in two steps: i) we estimate the amount of offshore financial wealth that was owned by the residents of the early-adopting countries, ii) based on estimates from the literature, we compute how much of this offshore wealth fled participating tax havens following the CRS.
First, we compute the wealth early adopters held in tax havens in 2013. To do this, we draw on country-by-country estimates of offshore wealth obtained by Alstadsaeter et al. (2018). They update the offshore wealth measure of Zucman (2013) and allocate this amount across the world's countries. The amounts of offshore wealth held by residents of each country are very heterogeneous, amounting to the equivalent of 60% of GDP for countries like Russia, and to only a few percentage points for countries like Japan or Denmark. The shares of global offshore wealth owned by each country estimated in Alstadsaeter et al. (2018) are computed for 2007. In order to have estimates for 2013, we allocate their 2013 estimate of total offshore wealth ($7.7 trillion) to each country, according to the country-by-country shares of 2007. Thus, we make the assumption that the geographical distribution of offshore wealth has not changed between 2007 and 2013.
We compute the total stock of offshore financial wealth owned by the early adopters in 2013 by simply adding the figures for all non-havens committing to the CRS in 2013 or 2014. We find that these countries were holding more than £3 trillion in tax havens that year.
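A small sketch of this allocation step is given below; the input files and the list of early adopters are hypothetical placeholders for the sources described above.

```python
import pandas as pd

# Hypothetical inputs: 2007 country shares of global offshore wealth from
# Alstadsaeter et al. (2018) and the list of non-haven countries committing to
# the CRS in 2013 or 2014.
shares = pd.read_csv("offshore_wealth_shares_2007.csv")      # country, share_2007
early_adopters = set(pd.read_csv("early_adopters.csv")["country"])

TOTAL_OFFSHORE_WEALTH_2013 = 7.7e12   # total household offshore wealth in 2013 (USD)

# Allocate the 2013 total according to the 2007 geographical distribution
shares["wealth_2013"] = shares["share_2007"] * TOTAL_OFFSHORE_WEALTH_2013
early_adopters_wealth = shares.loc[
    shares["country"].isin(early_adopters), "wealth_2013"].sum()
print(f"offshore wealth of early adopters in 2013: {early_adopters_wealth:,.0f}")
```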
Second, we build on O'Reilly et al. (2021), who estimate the reduction in bank deposits held in tax havens caused by the Joint Announcement of March 2014, using Bank for International Settlements (BIS) cross-border deposits data. While other papers have studied the effect of the CRS on offshore bank deposits (Menkhoff and Miethe, 2019; Casi et al., 2020; Beer et al., 2019), the paper by O'Reilly et al. is, to the best of our knowledge, the only one studying specifically the effect of the Joint Announcement, which is the event we exploit in our paper (together with the previous commitment to the CRS of G20 countries).
The authors estimate a two-period difference-in-differences model with time and country-pair fixed effects, where the treated group is made of non-haven/haven pairs both participating in the Joint Announcement of March 2014, and the control group of non-haven/haven pairs both not participating. Their results suggest that one year after the event, offshore deposits owned by treated jurisdictions had decreased significantly more than for the non-treated countries, leading to an estimated effect of the Joint Announcement on offshore bank deposits of -11%. By applying this estimate to the amount of offshore wealth held by early-adopting countries, we find that the Joint Announcement would have led to a reduction in offshore financial wealth of about £330 billion.
Asset shifting responses to the CRS. Finally, we compare the reduction in financial assets caused by the Joint Announcement to the global effect the policy had on real estate investments. Table 8 sums up our results. We find that the global surge in cross-border real estate flows caused by the CRS would represent between 24% and 27% of the reduction in offshore wealth the policy induced. This suggests that between 24% and 27% of the financial wealth held in tax havens before the implementation of AEOI was ultimately shifted to real estate in order to dodge the new policy. This is a sizeable response, as it means that about a quarter of the assets targeted by the CRS ended up being shielded from any reporting requirements, by simply switching the final destination of the investments made through offshore portfolios.
How plausible are these figures? First, the real estate effect we estimate is based not only on the Joint Announcement shock but also on the G20 support for global AEOI of September 2013, and the impact of each event cannot be estimated separately. Ideally, we would take into account the effect the G20 event had on offshore deposits, but this has not been estimated in the literature. Indeed, O'Reilly et al. (2021) provide estimates of the effect of the Joint Announcement only, by studying the evolution of offshore deposits after the first quarter of 2014. However, we show that G20 countries already started to respond to AEOI from September 2013. Thus, our measure of the wealth decrease in Table 8 may be somewhat underestimated.
Besides, the 11% decrease in offshore deposits from O'Reilly et al. (2021) is estimated over four post-announcement quarters (i.e. until March 2015), while the post-CRS period we consider when estimating the real estate effect goes from the second quarter of 2013 to the last quarter of 2016. We choose to consider a longer post-event period because real estate is much less liquid than deposits and property transactions may therefore take more time to be completed. Moreover, even though new jurisdictions commit to the CRS during the post-period, the countries participating in the Joint Announcement effectively enter the CRS earlier than the others, and thus we assume that the division between highly-exposed havens and the other havens remains relevant at least until the end of 2016. Nonetheless, the 11% estimate from O'Reilly et al. (2021) is computed on a period before the transparency policy effectively enters into force in many countries. Therefore, the estimated reduction in offshore bank deposits is likely to be a lower bound of the total effect of the CRS. Note that the real estate responses to AEOI we estimate may also be a lower bound of the total response to the CRS, which might have accelerated in 2017 and 2018 when information exchanges effectively started.
Drawing on different analysis periods and different samples of countries, other papers have found that the CRS caused a reduction ranging from 11.5% to 31.8% of bank deposits held in tax havens. We show in appendix table 18 how much our real estate effect represents compared to these alternative estimates of the financial effect of the CRS; the percentage ranges from 8% to 26% depending on the paper. These comparisons should however be interpreted with care, as they are based on papers studying the implementation or the signature of the CRS rather than its announcement. As such, they capture responses to the CRS from more countries, and during a period when the coverage of the agreement is more extensive. Therefore, the effect they estimate might be of a different nature.
Second, our estimates are based on the UK real estate market, where transaction taxes (i.e. stamp duty taxes) are very high. In appendix section A9, we perform a bunching analysis following Best and Kleven (2018) and show that corporate buyers do seem to respond to this tax. Therefore, the cost of investing in the UK property market is likely to be higher than in other globalized housing markets. If real estate responses to the CRS are stronger in other markets, like New York or Hong Kong, we would underestimate the true shifting responses to the CRS. Third, our results rely strongly on the estimate of the UK share in the global cross-border real estate market, which we take to be 20%. On the one hand, the true UK market share is indeed likely to be very important. The literature has shown that London real estate in particular is a safe-haven asset or a "safe-deposit box" for which demand increases in times of economic downturn (Badarinza and Ramadorai, 2018; Fernandez et al., 2016) and that UK real estate accounts for a sizeable share of the real estate investments of households at the top of the global income and wealth distribution (Knight Frank, 2016).
On the other hand, there is no comprehensive data on cross-border real estate and more work is needed to precisely quantify the value of properties owned by individuals outside of their country of residence (Alstadsaeter et al., 2022b).
Fourth, O'Reilly et al. (2021), as well as the other papers studying the CRS that we exploit in table 18, estimate the reduction in bank deposits following the CRS, not the reduction in total offshore wealth. As a result, the figures we compute are correct only under the assumption that the effect of the CRS is homogeneous across all financial assets. Even though deposits are more liquid and therefore may react more rapidly to changes in the tax enforcement environment, the new trade-off imposed by the CRS is likely to be similar for other unreported offshore financial assets, like equities or mutual fund shares: the sanctions in case the evader gets caught and the probability of detection are relatively similar. Therefore, we believe this assumption is relatively plausible.
In brief, even though there is still uncertainty about the exact amounts involved, our results suggest that investors affected by automatic exchange of information shifted a significant share of their offshore financial wealth, around 25%, to real estate as a result of the transparency policy.
[Table 8: effect of the CRS]
Notes: This table compares the effect of the CRS on real estate investments estimated in our paper to the amount of offshore financial wealth that left tax havens due to the transparency shock, as estimated in O'Reilly et al. (2021). Column "Estimates" gives estimates of the CRS effect in terms of reduction in offshore wealth. Column "Wealth decrease" refers to the stock of offshore financial wealth owned by early adopters in 2013 that fled tax havens because of the CRS (it depends on the column "Estimates" and on the total stock of offshore wealth held by early adopters). Column "Shifting - lower bound" computes the ratio of our real estate effect over the offshore wealth decrease, taking the lower bound of the real estate effect (£82 billion). Column "Shifting - upper bound" does the same with the upper bound of the real estate effect (£94 billion).
Conclusion
The Common Reporting Standard closes some of the loopholes evaders could still exploit in previous enforcement policies to avoid their international reporting requirements. In particular, thanks to its multilateral feature, it makes the relocation of financial assets to non-cooperative tax havens difficult -and almost impossible over the long-term as more and more havens join the information exchange agreement. However, the policy leaves the door open for new evasion strategies to develop, as it only targets financial assets. In its current form, it creates incentives for non-compliant taxpayers to restructure their offshore portfolios away from financial assets and toward properties.
In this paper, we show that this new international transparency initiative played an important part in the growth of the offshore real estate market in the UK over the last decade. We show that it led to an inflow of investments of between £16 and £19 billion over the 2013-2016 period, which suggests that real estate investments to avoid the CRS reporting requirements were large at the global scale.
Our findings highlight the need for a more ambitious automatic exchange of information agreement.
To effectively curb tax evasion, we need a truly global information exchange treaty covering all assets, including non-financial ones. The first step toward such an agreement is to systematically gather information about asset ownership at the national level. In the case of real estate, this means in particular that tax authorities must collect data on the ultimate beneficial owners of the shell companies used to buy properties.
the company owning it. From 2019, however, indirect disposals of interests in "property rich entities" (any company deriving 75% or more of its gross asset value from UK property, whether residential or commercial) became subject to the non-resident CGT as long as the non-resident investor holds, or has held, a 25% or greater interest in the company.
Income Tax. Regardless of who owns the property, any rental income remains taxable in the UK. If the property is owned by an offshore company, only the basic rate of UK income tax (20%) applies regardless of the level of income (this holds for the period we consider in the paper, but no longer from 2020). This can result in substantial savings compared with personal ownership, under which the banded UK income tax rates (up to 50%) apply.
52 "Property rich entities" include any company that derives 75% or more of its gross asset value from UK property whether residential or commercial 53 While this is true for the period we consider in the paper, it no longer holds from 2020
A2. Prediction of missing prices
One limitation of our dataset is that the purchase price is specified for only 36% of the transactions. We therefore predict the missing prices using the sample of transactions where the price is available. Let $Z_i$ denote a set of property characteristics that we observe in our sample. We can express the properties' (log) prices as
$$p_i = \beta Z_i + \epsilon_i \qquad (3)$$
where $\epsilon_i$ is the price component not captured by the set of predictors, which we assume to be orthogonal to $Z_i$.
We estimate $\beta$ from equation (3) using the subsample where the price is indicated, and the missing prices are then predicted using the resulting estimate $\hat{\beta}$ as
$$\hat{p}_i = \hat{\beta} Z_i \qquad (4)$$
The prediction model is estimated by OLS, using 5-fold cross-validation. The set of predictors includes the property tenure (leasehold, freehold), a postcode fixed effect and a quarter fixed effect. Using the exhaustive dataset of all residential transactions in the UK by quarter (UK Price Paid Data), we also include as predictors, for each property, the number of sales that occurred in the same postcode area and the average price. When the postcode-by-quarter cell of a given transaction in the OCOD data does not match any transaction in the Price Paid data, we use the average price by ward, which is a higher level of aggregation; for most postcodes in the City of London, where we do not have information on the average price by ward either, we approximate the price of the transaction based on the average price over all transactions with non-missing prices in the City of London that year (this is done for about 1,000 transactions). The $\beta$ are estimated on a training sample composed of 80% of the transactions, and the quality of the predictions is evaluated on a test sample built from the remaining 20% of observations. Table 9 displays information on our out-of-sample fit (computed with our test sample). The adjusted R² is 0.59, the root mean square error (RMSE) is 1.128 and the mean absolute error (MAE) is 0.683.
Table 9: Price prediction accuracy (full OCOD sample)
RMSE: 1.128    R-squared: 0.589    MAE: 0.683
Notes: This table describes the quality of our price inference and gives the value of the root mean squared error (RMSE), the R² and the mean absolute error (MAE) obtained when we regress $p_i$ on $Z_i$ (equation 3) using the full OCOD sample.
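A minimal sketch of this prediction step is given below; it uses a prepared dataset and column names (ocod_with_predictors.csv, log_price, tenure, postcode_area, etc.) that are hypothetical placeholders, and reproduces the 80/20 split and 5-fold cross-validation rather than the paper's exact code.

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder

tx = pd.read_csv("ocod_with_predictors.csv")          # hypothetical prepared dataset
known = tx[tx["log_price"].notna()]

X_cols = ["tenure", "postcode_area", "quarter", "n_sales_postcode", "avg_price_postcode"]
encode = ColumnTransformer(
    [("fe", OneHotEncoder(handle_unknown="ignore"), ["tenure", "postcode_area", "quarter"])],
    remainder="passthrough")
model = make_pipeline(encode, LinearRegression())

# 80/20 split for out-of-sample evaluation, plus 5-fold cross-validated R2
X_train, X_test, y_train, y_test = train_test_split(
    known[X_cols], known["log_price"], test_size=0.2, random_state=0)
model.fit(X_train, y_train)
pred = model.predict(X_test)
print("RMSE :", np.sqrt(mean_squared_error(y_test, pred)))
print("R2   :", r2_score(y_test, pred))
print("MAE  :", mean_absolute_error(y_test, pred))
print("CV R2:", cross_val_score(model, known[X_cols], known["log_price"], cv=5).mean())

# Impute the missing (log) prices with the fitted model
missing = tx["log_price"].isna()
tx.loc[missing, "log_price_hat"] = model.predict(tx.loc[missing, X_cols])
```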
An important characteristic missing from our dataset for accurately inferring prices is property size. To improve the quality of our estimates, we follow Chi et al. (2019) and exploit the Domestic Energy Performance Certificates (EPCs) dataset in order to retrieve housing size information. Energy Performance Certificates are mandatory in the UK before selling or renting a property, and the Department for Communities and Local Government compiles a register of these assessments. The Domestic Energy Performance Certificates contain data on the exact address of the property, its energy performance and, most importantly for us, its size (i.e. total floor area). We also use additional property characteristics provided in the dataset: the property type (e.g. maisonette, flat, house) and the building type of the property (e.g. detached, semi-detached). We use the algorithm detailed in Chi et al. (2019) to match the Domestic EPCs to our dataset. We are able to match 35% of our transactions, improving significantly the quality of predictions for this subsample of matched properties.
Table 10 provides details on the out-of-sample fit in our matched sample. The R² is significantly higher and the RMSE much lower than when no information on the property size is available.
Table 10: Price prediction accuracy (EPC-matched subsample)
RMSE: 0.641    R-squared: 0.785    MAE: 0.400
Notes: This table describes the quality of our price inference and gives the value of the root mean squared error (RMSE), the R² and the mean absolute error (MAE) obtained when we regress $p_i$ on $Z_i$ (equation 3) using the subsample of properties that we are able to match to the EPCs data, hence when information on the property size is available.
We show in tables 11 and 12 how the RMSE and the R², respectively, evolve when we winsorize $p_i$ at different levels in equation (3). Ultimately, we pick the winsorization levels minimizing the RMSE of the prediction, i.e. a winsorization of the top tail of $p_i$ at 3% (and no winsorization at the bottom). For the subsample matched to the Domestic EPCs, we winsorize the top tail at 1%.
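Continuing the sketch above (and reusing its hypothetical model, X_train, y_train, X_test and y_test objects), the selection of the winsorization level can be written as a small grid search over the test RMSE:

```python
import numpy as np
from sklearn.metrics import mean_squared_error

# Refit the model for several top-tail winsorization levels of the training
# (log) prices and keep the level minimizing the test RMSE.
results = {}
for level in [0.00, 0.01, 0.02, 0.03, 0.05]:
    cap = y_train.quantile(1 - level) if level > 0 else np.inf
    model.fit(X_train, y_train.clip(upper=cap))
    results[level] = np.sqrt(mean_squared_error(y_test, model.predict(X_test)))

best = min(results, key=results.get)
print(f"best top-tail winsorization: {best:.0%} (test RMSE = {results[best]:.3f})")
```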
In figure 10, we plot the predicted against the observed price in log for each transaction from the test sample. The smaller the distance between a point and the 45-degree line, the better the prediction. The figure provides two important insights. First, when we observe the area of the property, the prediction is much more precise. Second, while some points are well above or below the 45-degree line, there is no evidence of systematic over- or underestimation of the true prices.
[Figure 10: predicted vs. observed (log) prices in the test sample]
q q q q q q qq q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q qq q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q qq q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q qq q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q qq q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q qq q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q qq q q q q q q q q q q q q qq q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q qq q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q qq q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q qq q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q qq q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q qq q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q qq q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q qq q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q 
q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q qq q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q qq q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q qq q q qq q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q 0 5 10 15 20 0 5 10 15 20
Predicted prices (log)
Observed prices (log)
Property size q q q q No Yes F : D
Notes: This scatterplot displays the predicted against observed prices (in log) in the test sample, depending on whether we have information on the property size or not. The test sample is selected by randomly picking 20% of the transactions for which we have price information. It is not used to estimate (3) but only to test the quality of our price prediction. If a point lies on the 45-degree line, this indicates that the predicted and observed prices correspond exactly.
[Figure, panel (a), whole sample: out-of-sample RMSE as a function of the share of the bottom and top tails windsorized.]
Notes: This graph shows the out-of-sample root mean squared error (RMSE) as a function of windsorizing various shares of the bottom and top tails of the price in the prediction (equation 3). The out-of-sample RMSE is computed as the average RMSE obtained from 5-fold cross-validation.
[Figure, panel (a), whole sample: out-of-sample R-squared (roughly 0.575-0.590) against the share windsorized (0-5%), bottom vs. top tail.]
Notes: This figure shows the difference-in-differences coefficients comparing quarterly real estate investments from companies incorporated in highly-exposed havens to investments from companies incorporated in other havens. The flows are normalized at their value in 2013q2 and the estimation is based on the full data provided in the Land Registry OCOD. Each figure shows the difference-in-differences results for a different definition of the treatment group, varying the cut-off threshold. We show results when the treatment group is defined as tax havens for which more than 15% of the company beneficial owners come from early-adopting countries. We repeat the analysis for the 30%, 45%, 60% and 90% thresholds.
Notes: This figure shows the difference-in-differences coefficients comparing quarterly real estate investments from companies incorporated in highly-exposed havens to investments from companies incorporated in the other havens. The flows are normalized at their value in 2013q2 and the estimation is based on the full data provided in the Land Registry OCOD. We compute the weights used to define highly exposed and less exposed havens using data from the Bank for International Settlements.
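As a rough illustration of this validation exercise (not the authors' code; the file name, covariates and the 1% windsorizing share are placeholder assumptions), the 5-fold out-of-sample RMSE on windsorized log prices could be computed along these lines:

import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold
from sklearn.metrics import mean_squared_error

df = pd.read_csv("ocod_with_prices.csv")            # hypothetical input file
share = 0.01                                         # share windsorized in each tail (illustrative)
lo, hi = df["price"].quantile([share, 1 - share])
df["price_w"] = df["price"].clip(lo, hi)             # windsorize both tails
y = np.log(df["price_w"])
X = pd.get_dummies(df[["floor_area", "property_type", "local_authority"]])  # illustrative covariates

rmses = []
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    fit = LinearRegression().fit(X.iloc[train], y.iloc[train])
    pred = fit.predict(X.iloc[test])
    rmses.append(np.sqrt(mean_squared_error(y.iloc[test], pred)))
print("out-of-sample RMSE:", np.mean(rmses))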
T : R
Notes: This table presents the results of a variation of equation 1, using a single Post-CRS dummy to capture the effect of the policy. We take the first semester of 2013 as the reference period. Column (1) presents the results for our main sample. Column (2) shows the results of the estimation without Jersey and Guernsey, column (3) with only the tax havens participating in the CRS in 2013 or 2014, and column (4) for purchases made in the region of Greater London only. Finally, column (5) shows the results of our estimation using the consensus list of tax havens compiled by Menkhoff and Miethe (2019) and column (6) using the havens list of Hines and Rice (1994).
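A minimal sketch of this single-dummy specification, with country fixed effects, an exchange-rate control and standard errors clustered by haven, might look as follows (the data layout and variable names are assumptions, not the paper's actual code):

import pandas as pd
import statsmodels.formula.api as smf

# df: one row per haven x quarter, with quarterly investment flows ("flow"),
# a dummy for highly-exposed havens ("high"), a post-CRS dummy ("post"),
# and the bilateral exchange rate ("er"). All names are illustrative.
df = pd.read_csv("haven_quarterly_flows.csv")
df["post_x_high"] = df["post"] * df["high"]

# Country fixed effects absorb the main effect of "high".
model = smf.ols("flow ~ post + post_x_high + er + C(country)", data=df)
res = model.fit(cov_type="cluster", cov_kwds={"groups": df["country"]})
print(res.summary())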
Notes: This figure shows the proportion of properties that are sold after one year, two years, etc., in the treatment group (highly-exposed havens) and in the control group (other havens). This figure is computed using our dataset from year 2015 to year 2020, as we have access to all the purchases and sales made by overseas companies during this period. (1959-2020), from the boroughs where individuals make the fewest purchases through foreign companies (quintile 1) to the boroughs where they make the most (quintile 5).
F : H .
A6. Companies buying UK properties
A7. Analysis of the matched sample
T : P L OCOD
Notes: This table shows the number of OCOD transactions located in London that we manage to link with their ultimate beneficial owners using the Bahamas Leaks, the Offshore Leaks, the Paradise Papers, the Panama Papers, the Pandora Papers, OpenLux and CNBIOM data. Columns 2 and 3 show the raw number of transactions and their value in the full and matched samples, while columns 4 and 5 show the corresponding percentages the matched transactions represent.
T : P
Notes: This table displays information on properties purchased indirectly, via the shares of the company owning the property. We give the number of properties purchased and the amount invested for the 10 most frequent countries involved in such investment schemes.
For simplicity, we do not take into account split ownership here. So, if a property is bought by two investors from two different countries, one transaction will be recorded for each country, and the total price of the property will be attributed to each country. If a property is bought through the shares of the holding company in 2014 and again in 2016, this will appear as two transactions.
A8. Additional elements
We only show direct ownership for countries that are not tax havens, but the row "All" aggregates the total value of foreign-owned properties, including those owned from haven countries. Values in columns "Amount -Indirect" are inferred using the following method: we match the stock of OCOD properties in January 2018 to our tax-related data leaks in order to have information on the country of residence for some owners. We identify the owner of 3.31% of the properties owned at that date, which represents 4.28% of the total in value. Then we simply multiply the total value owned from each country in the identified sample by 1/0.0428 in order to allocate the ownership of all properties appearing in the OCOD to one country. So, the assumption we make here is that the distribution of ownership in the matched sample is similar to the distribution in the full stock of properties owned by foreign companies in January 2018. Column "Amount -Total" is simply the sum of "Amount -Direct" and "Amount -Indirect". For buyers from the United Kingdom, we put a zero in columns "Amount -Direct" as direct ownership of a UK property would not be considered as offshore real estate in that case. Columns 2-5 show these computations for the whole of England and Wales, while columns 6-8 show them for London only.
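A stylized version of this grossing-up step, using the 4.28% value coverage reported above but invented country values, would be:

# Value of properties owned from each country in the matched (identified) sample, in billion GBP
matched_value = {"France": 0.8, "China": 1.2}       # illustrative figures only
coverage = 0.0428                                    # share of the 2018 stock identified, by value

# Assume the matched sample is representative of the full stock: scale up by 1/coverage
estimated_total = {country: value / coverage for country, value in matched_value.items()}
print(estimated_total)   # e.g. France -> 0.8 / 0.0428, roughly 18.7 billion GBP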
T : E CRS
Notes: This table compares the effect of the CRS on real estate investments we estimate in our paper to the amount of offshore financial wealth that left tax havens due to the transparency shock, as estimated in several papers from the literature. Column "Estimates" gives estimates of the CRS effect in terms of reduction in offshore wealth. The estimate for Casi et al. (2020) comes from column 1, table 4 of their paper; for Menkhoff and Miethe (2019), from column 2, table 5; and for Beer et al. (2019), from column "Model 4", table 3. Column "Wealth decrease" refers to the stock of offshore financial wealth owned by early adopters in 2013 that fled tax havens because of the CRS (it depends on the column "Estimates" and on the total stock of offshore wealth). Column "Shifting -lower bound" computes the ratio of our real estate effect over the offshore wealth decrease, taking the lower bound of the real estate effect (£82 billion). Column "Shifting -upper bound" does the same with the upper bound of the real estate effect (£94 billion).
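To illustrate how the last two columns are obtained: with O'Reilly et al.'s estimate, roughly £334 billion of offshore wealth left havens, so the £82 billion lower bound of the real estate effect corresponds to about 82/334, that is roughly 24-25% of the outflow, while the £94 billion upper bound corresponds to about 27-28%.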
Country
T : L .
Notes: This table shows the lists of tax havens we use in our analysis. Our main list is the one of Menkhoff and Miethe (2019), shown in column (1), which they use in their analysis of the effect of the CRS on financial assets held in and by tax havens. They obtain it by combining the lists of different sources, and it counts 58 countries. Column (2) shows the list used by Hines and Rice (1994), which excludes 18 countries categorized as tax havens by Menkhoff and Miethe (2019). Column (3) shows a "consensus" list of tax havens. This list is compiled by Menkhoff and Miethe (2019) by choosing the 29 countries that appear in most recent studies on tax evasion (see Menkhoff and Miethe (2019), appendix A.2).
[Bunching figures: excess-mass estimates b = 1.419(0.…), b = 0.083(0.…), b = 0.429(0.…), b = 0.126(0.166); standard errors in parentheses.]
This figure shows the yearly count of purchases made by foreign firms in England and Wales (Panel A) and their aggregated value (Panel B), over the period 2000-2019. It is based on the Overseas Companies Ownership Dataset.
This figure shows the location of the purchases made by foreign firms in London, as recorded by the OCOD. The region of Greater London is composed of 32 boroughs and the City of London local government.
This figure displays the price distribution of properties bought in our sample and the price distribution of all residential properties bought in the English and Welsh real estate market. The distribution of prices among the properties bought by a shell company exhibits a much thicker right tail, indicating that very expensive properties are much more common than in the overall residential property market. This figure presents the distributions of prices of properties bought through foreign companies (from the OCOD) and of all residential properties bought in the UK (from the Price Paid Dataset) over the period 2000-2020. For better visibility, the prices are capped at 99.8% of the price distribution of the Price Paid Data.
(a) Most frequent tax havens used by world region of UBO; (b) most frequent tax havens used by world region of UBO, over time
Figure 6
Figure 6 displays the coefficients β_q estimated from equation 1. It confirms that the trends in real estate investments from highly-exposed havens and the other havens are not significantly different between 2000 and the second quarter of 2013, supporting our identification hypothesis. They diverge immediately afterwards.
This figure displays the distribution of observed and predicted prices in the test sample. This sample is selected by randomly picking 20% of the transactions for which we have price information. It is not used to estimate equation (3) but only to test the quality of our price prediction. Because the price distribution is heavily right-skewed, we cut the distribution at the 90th percentile in order to see what is happening lower down in the price distribution, where most transactions happen.
This graph shows the out-of-sample R² as a function of windsorizing various shares of the bottom and top tails of the price in the prediction (equation 3). The out-of-sample R² is computed as the average R² obtained from 5-fold cross-validation. This graph is constructed using two additional datasets maintained by the Land Registry, data on property transactions completed in the UK and the Price Paid Data. The Land Registry produces summary statistics on property transactions completed in the UK with a value of £40,000 or above. More precisely, the dataset provides monthly estimates of the number of residential and non-residential property transactions in the UK. The Price Paid Data provides information on all residential property sales in England and Wales (date of the transaction, address of the property bought, price paid). Panel A presents the ratio of OCOD transactions over all UK transactions as reported in the UK Property Statistics, by year. Because the UK Property Statistics does not provide information on property prices, we have to use the Price Paid Data to construct Panel B. As the Price Paid Data only covers residential transactions, we start by calculating how many transactions are missed in this dataset, by comparing the yearly number of sales with figures from the UK Property Statistics. We correct the yearly amounts invested in the UK as recorded in the Price Paid Data with this factor. Panel B presents the ratio of the value of the OCOD transactions over this total, by year. This figure shows the difference-in-differences coefficients comparing the quarterly number of purchases in England and Wales made through companies incorporated in highly-exposed havens to purchases from companies incorporated in the other havens, for different values of the purchases. The flows are normalized at their value of 2013q2. The estimation is based on the full data provided in the Land Registry OCOD.
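A schematic version of the Panel B computation, with all file and column names purely illustrative, could be:

import pandas as pd

ppd = pd.read_csv("price_paid_yearly.csv")           # residential sales only: year, n_sales, value
stats = pd.read_csv("uk_property_stats_yearly.csv")  # all sales (residential + non-residential): year, n_sales
ocod = pd.read_csv("ocod_yearly.csv")                # purchases through foreign companies: year, value

merged = ppd.merge(stats, on="year", suffixes=("_ppd", "_all"))
merged["factor"] = merged["n_sales_all"] / merged["n_sales_ppd"]   # share of sales missed by Price Paid Data
merged["value_total"] = merged["value"] * merged["factor"]         # corrected total invested in the UK

ratio = ocod.set_index("year")["value"] / merged.set_index("year")["value_total"]
print(ratio)   # OCOD share of total UK real estate investment, by year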
This figure shows the detailed location of the purchases made by individuals through a shell company in London, as recorded in the OCOD. The map represents the region of Greater London, which is composed of 32 boroughs and the City of London local government. The boroughs are ranked in five quintiles according to the total number of purchases made by foreign firms during the entire period covered by the OCOD
The red dashed lines denote the upper and lower bounds of the excluded region around the threshold. The blue dashed line in the graphs estimating responses to a notch marks the upper bound of the dominated region. Absent optimization frictions, optimization theory predicts an empty hole between the cutoff and the blue line. b is our estimate of the excess mass just below the threshold scaled by average counterfactual frequency in the excluded range (standard errors in parentheses). Graphs and estimates built using the bunching package in R.
This table shows the five most frequent countries of incorporation of companies buying English and Welsh real estate, over the period 2000-2020. Columns 1-4 show the ranking for companies incorporated in tax havens, in terms of value and volume of purchases. Columns 5-8 show the same but for companies incorporated in non-haven countries. The table is based on data from the Land Registry OCOD.
Havens Non-havens
Country Value of Country Number Country Value of Country Number
purchases of purchases purchases of purchases
(in billion £) (in billion £)
Jersey 54.2 Jersey 31,192 United States 3.9 Netherlands 2,377
BVI 39 BVI 30,369 Netherlands 3.9 United States 1,674
Guernsey 21.5 Guernsey 24,654 Germany 3.3 Germany 1,321
Luxembourg 18.9 Isle of Man 14,316 France 0.9 Australia 1,014
Isle of Man 14.5 Luxembourg 4,284 Sweden 0.9 United Arab Emirates 573
T : T E W , -
Notes:
This table details the characteristics of the seven corporate ownership datasets we exploit in our analysis. For each of the datasets, it presents the period covered, the number of companies and the number of unique beneficial owners we get information on.
Bahamas Leaks: period covered 1919-2016; 175,888 companies; 6 unique beneficial owners.
CNBIOM: period covered 2007-2019; 1,406 companies; 927 unique beneficial owners.
OpenLux: period covered 1907-2020; 261,249 companies; 70,795 unique beneficial owners.
Offshore Leaks: period covered 1918-2010; 105,516 companies; 75,948 unique beneficial owners.
Panama Papers: period covered 1936-2015; 213,634 companies; 238,055 unique beneficial owners.
Pandora Papers: period covered 1980-2018; 17,693 companies; 26,083 unique beneficial owners.
Paradise Papers: period covered 1865-2017; 290,086 companies; 133,555 unique beneficial owners.
Total: 1,065,472 companies; 545,369 unique beneficial owners.
[Figure legend: most frequent tax havens used, by world region of the beneficial owner. Regions: United Kingdom (12% of UBOs), North America & English-speaking world (12%), Asia-Pacific (23%), Western Europe (21%), Southern Europe (8%), Eastern Europe (7%), South & Central America (7%), Africa (3%), Middle East - Rest (3%), Arabian Peninsula (3%), South Asia (1%). Havens shown: Bermuda, BVI, Cayman Is., Isle of Man, Jersey, Luxembourg, Malta, Panama, Samoa, Seychelles, Others.]
Notes:
The individuals creating the most shell companies are from the Asian-
Pacific region (23%), Western Europe (21%), North America and the English-speaking world (12%), and the
United Kingdom which represents 12% of the owners alone. Panel A presents the pooled-years distribution
of countries of incorporation. It highlights the heterogeneity of tax haven use, by world region. While owners
from the Arabian Peninsula, the Asia-Pacific area, South Asia and South and Central America incorporate
their shell companies mostly in the British Virgin Islands, European nationals and United Kingdom residents
seem to favor Luxembourg 19 and Malta, European havens. North America has a slightly more diversified
distribution of haven use, incorporating a roughly similar number of companies in the British Virgin Islands,
Luxembourg, Bermuda, Malta and other havens.
Table 5
Table 5 displays, by data source, the number of companies we match, as well as the number of ultimate owners we are able to identify and the number of transactions they are involved in. In total, we identify roughly 3,000 investors owning UK properties through an offshore vehicle. As shown in the table, the most important data source for the matching is the Panama Papers, which allow us to identify 1,846 ultimate owners. In Appendix table 14 (section A7), we compare the number of transactions and amount invested in the identified sample to the full OCOD dataset. We are able to identify the ultimate investors in 2.8% of the property transactions in our sample. These transactions are more expensive on average than the rest of the sample, as they represent 3.8% of the total amount invested in our data. When restricting our sample to London only, we recover the identity of the ultimate owner for more than 4.4% of all purchases made in London through shell companies (Appendix table 15). This figure is relatively high, as it indicates that at least 4.4% of foreign companies buying in London are listed in one of the main offshore leaks of this last decade. 34 The fact that we find more than twice as many companies buying in London than in the rest of the UK in the leaks data provides anecdotal evidence that real estate in the capital is a destination of choice for illegal flows of money (Tax Transparency International UK, 2015).
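The matching step relies on harmonizing company names before joining the leaks to the OCOD (see the footnotes on the procedure); a minimal sketch of such a normalization, with an illustrative abbreviation map rather than the authors' full list, could be:

import re

ABBREVIATIONS = {r"\bLTD\.?\b": "LIMITED", r"\bCORP\.?\b": "CORPORATION", r"\bCO\.?\b": "COMPANY"}  # illustrative subset

def normalize_name(name: str) -> str:
    """Upper-case, expand common abbreviations and strip punctuation and extra spaces."""
    s = name.upper()
    for pattern, repl in ABBREVIATIONS.items():
        s = re.sub(pattern, repl, s)
    s = re.sub(r"[^A-Z0-9 ]", " ", s)        # drop punctuation
    return re.sub(r"\s+", " ", s).strip()

# Example: both spellings map to the same key before the join on company names
assert normalize_name("Acme Holdings Ltd.") == normalize_name("ACME HOLDINGS LIMITED")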
Source Number of companies Number of transactions Number of UBO
CNBIOM 48 203 62
OpenLux 185 545 220
Offshore Leaks 58 107 83
Panama Papers 1,115 2,128 1,846
Pandora Papers 292 589 386
Paradise Papers 140 488 427
Total 1,838 4,060 3,024
T : N ,
O L
Table 7 presents the estimated effect, according to the reference period we choose. Overall, our results indicate that an additional £16 to £19 billion were invested in the English and Welsh real estate market as a result of the CRS. This effect is substantial: over the period, it amounts to between 25% and 30% of all purchases made by companies (incorporated in both havens and non-havens) and to almost 1.5% of all real estate investments made in the UK.
Reference period Estimated effect of CRS
(in billion Pounds)
2012 q4 16.3
2013 q1 18.8
2013 q2 18.1
T : E CRS UK , -
Main spec. No Jersey/Guernsey CRS havens only London only Consensus listHines and Rice (1994) list
(1) (2) (3) (4) (5) (6)
Post x High 165.622 * * * 59.719 * * * 195.892 * * * 72.915 * * * 199.105 * * * 175.197 * * *
(32.662) (19.932) (59.281) (18.763) (52.227) (39.767)
Post 20.249 19.640 * * 48.668 16.623 * 51.468 27.442
(17.058) (9.726) (38.809) (9.799) (32.001) (22.622)
Observations 3632 3464 1760 3632 1961 2795
Country FE YES YES YES YES YES YES
Control for ER YES YES YES YES YES YES
This table shows the number of OCOD transactions we manage to link with their ultimate beneficial owners using the Bahamas Leaks, the Offshore Leaks, the Paradise Papers, the Panama Papers, the Pandora Papers, OpenLux and CNBIOM data. Columns 2 and 3 show the raw number of transactions and their value in the full and matched samples, while columns 4 and 5 show the corresponding percentages the matched transactions represent.
Source Number of transactions Amount invested (in billion £) Fraction of total transactions Fraction of total amount invested
Full dataset 143,634 180 100% 100%
Matched 4,060 6.8 2.8% 3.8%
T : P OCOD
Source Number of transactions Amount invested (in billion £) Fraction of total transactions Fraction of total amount invested
Full dataset 62,392 114.8 100% 100%
Matched 2,739 5.3 4.4% 4.6%
Notes:
This table shows how ownership of real estate in England and Wales and in London is distributed across countries in January 2018. Values are in 2015 billion Pounds. Column "Amount -Direct" refers to direct ownership of properties in England and Wales. The information comes from data recently published by the Centre for Public Data (CFPD). a Prices are imputed based on mean prices paid for residential properties in England and Wales, by local authority and for the first quarter of 2018 (computed by the Office for National Statistics).
Country Amount -Direct Amount -Indirect Amount -Total Amount -Direct London Amount -Indirect London Amount -Total London
Liberia .04 0 .04 .04 0 .04
Sudan .04 0 .04 .03 0 .03
Latvia .02 .02 .04 .01 0 .01
Croatia .04 0 .04 .03 0 .03
Gabon 0 .03 .03 0 0 0
Chile .03 0 .03 .01 0 .01
Uganda .03 0 .03 .02 0 .02
Sierra Leone .01 .02 .03 .01 .02 .03
Romania .02 .01 .03 .01 .01 .02
Namibia .01 .02 .03 0 .02 .02
Iceland .03 0 .03 .01 0 .01
Jamaica .02 0 .02 .02 0 .02
Venezuela .01 .01 .02 .01 .01 .02
Tunisia .02 0 .02 .02 0 .02
Estonia .01 .01 .02 .01 .01 .02
Algeria .02 0 .02 .01 0 .01
Georgia .01 0 .01 .01 0 .01
Guatemala 0 .01 .01 0 .01 .01
Cameroon 0 .01 .01 0 .01 .01
Costa Rica .01 0 .01 .01 0 .01
Kyrgyzstan 0 .01 .01 0 .01 .01
North Korea .01 0 .01 .01 0 .01
Nepal 0 .01 .01 0 0 0
Somalia 0 .01 .01 0 .01 .01
Haiti 0 .01 .01 0 .01 .01
Iraq .01 0 .01 0 0 0
Uruguay .01 0 .01 .01 0 .01
Colombia .01 0 .01 .01 0 .01
Mozambique .01 0 .01 .01 0 .01
Cambodia .01 0 .01 0 0 0
Slovenia .01 0 .01 0 0 0
Peru .01 0 .01 0 0 0
Angola 0 0 .01 0 0 0
Ethiopia .01 0 .01 0 0 0
All 110.09 109.32 219.42 69.39 72.93 142.32
Table: Offshore Real Estate in England and Wales, by Country of Owner
Notes:
The data are available at https://www.centreforpublicdata.org/property-data-overseas-individuals.
Paper Estimates Wealth decrease (billion Pounds) Shifting -lower bound Shifting -upper bound
O'Reilly et al. (2021) 11% 334 24% 27%
Casi et al. (2020) 11.5% 352 23% 26%
Menkhoff and Miethe (2019) 31.8% 966 8% 9%
Beer et al. (2019) 29.6% 899 9% 10%
a
See O'Reilly et al. (2021), "Exchange of Information and Bank Deposits in International Financial Centres", for a detailed timeline of the expansion of tax transparency and the introduction of the CRS.
See Knobel and Meinzer (2014) for a detailed analysis of the CRS loopholes.
Note that the degree of substitution between financial and real estate evasion also depends on the relative attractiveness of the different alternative evasion strategies, which is directly impacted by the CRS.
In the UK, several voluntary disclosure schemes were available to evaders around 2014, when the UK committed to the CRS. The most "popular" was the Liechtenstein disclosure facility (LDF) introduced in 2009 and closed in December 2015. The number of new amnesty participants increased more in Norway than in the UK around 2014 (in relative terms), which can partly be explained by the fact that the Norwegian amnesty program is very generous. Indeed, Norwegian "disclosers" pay no penalties and suffer no criminal sanctions, while in the UK some penalties have to be paid. According to the British tax authorities, a total of 6,000 disclosures were made under the LDF for an average settlement of £180,000. See https://www.gov.uk/government/ publications/offshore-disclosure-facilites-liechtenstein/yield-stats. Note that other voluntary disclosure schemes were available around the CRS for UK taxpayers, namely the UK-Swiss Tax Cooperation Agreement, Jersey, Guernsey and Isle of Man Disclosures Facilities. In 2016, the World Wide Disclosure Facility was opened.
Where did the rest of the money go? Part of this money was repatriated in the context of voluntary disclosure schemes that were put in place in many countries. But evidence suggests that alternative evasion strategies other than investing in real estate assets have also been adopted by tax evaders after the CRS, which could explain part of the decrease in offshore deposits owned by countries participating in the agreements. Relocation of unreported assets to the US (Casi et al., "Cross-border Tax Evasion After the Common Reporting Standard: Game Over?") or the use of citizenship-by-investment programs (Langenmayr, "Escaping the Exchange of Information: Tax Evasion via Citizenship-by-Investment") seem to have been used by reluctant taxpayers. Using a trust is also a way to avoid any reporting under the CRS (Knobel, "'The End of Bank Secrecy'? Bridging the Gap to Effective Automatic Information Exchange"). See appendix table 20 for a list of strategies allowing to circumvent the CRS.
In other papers, like Alstadsaeter et al., "Who Owns Offshore Real Estate? Evidence from Dubai", transactions where a foreign individual buys a property in their own name are also included in the definition of the offshore real estate market. We leave aside such transactions in our main analysis.
In the UK, a "non-domiciled" status can be given to foreign individuals living (i.e. resident for tax purposes) in the UK but domiciled (i.e. with their permanent home) in another country. This can lead to significant tax advantages for people whose foreign income is taxed only if repatriated to the UK, under the principle of "remittance basis". For a detailed analysis of the non-domiciled status and the reforms which affected it over the years, see Advani et al., "The UK's Global Economic Elite: A Sociological Analysis Using Tax Data".
8 The OCOD does not include transactions made by companies incorporated in the UK or by private individuals, whether British or foreign.
9 Freehold estates are held for an infinite duration, while leasehold estates have a fixed or maximum lease duration.
We do not include year 2020 as we do not have complete data for the year yet.
Prices are corrected using the UK House Price Index (HPI) computed by the Land Registry. We apply the UK HPI to all transactions in our dataset, regardless of their location.
We provide more information on the EPCs dataset in Appendix section A2.
There is no consensus on which countries should be considered as tax havens. We use the list of tax havens of Menkhoff and Miethe (2019), "Tax Evasion in New Disguise? Examining Tax Havens' International Bank Deposits", which is obtained by combining the lists of Gravelle ("Tax Havens: International Tax Avoidance and Evasion") and Johannesen and Zucman ("The End of Bank Secrecy? An Evaluation of the G20 Tax Haven Crackdown"). They classify 58 countries as tax havens, which are listed in Appendix section A8 (Table 19). We present robustness checks of our results using alternative lists of tax havens.
Note that in the list of Menkhoff and Miethe (2019) we follow, the Netherlands is not considered a tax haven.
The complete CNBIOM data have been extensively analyzed in Collin, "What Lies Beneath: Evidence from Leaked Account Data on How Elites Use Offshore Banking".
Note that we do not have information about the beneficial owners of all the companies listed in the leaks and in the OpenLux data; sometimes we only have access to the identity of the administrators, the directors, or no information at all about its ownership/management.
We draw on the groups defined in Badarinza and Ramadorai, "Home Away from Home? Foreign Demand and London House Prices", who study foreign real estate investments in London, and we create finer sub-groups in order to reflect the importance of the countries we identify as buying UK properties in section 4.1. We therefore create 11 groups: the United Kingdom alone, the Arabian Peninsula, the rest of the Middle East, North America and the English-speaking world (including South Africa), Africa (excluding South Africa), Asia-Pacific, Western Europe, South Asia, Southern Europe, Eastern Europe, and South and Central America.
As we have access to administrative data on companies incorporated in Luxembourg, but only to a sub-sample of companies incorporated in other havens with the leaks, it is likely that the share of Luxembourg for each region of the world is overestimated. However, the differential use of havens by individuals from different regions of the world is not affected by this bias.
20 Note that the two explanations can be combined: resident from country A might prefer to use corporate service provider B because B is specialized in the incorporation of companies in tax haven C.
To compute these figures, we only use companies incorporated before or during the second quarter of 2013; this is to take into account the fact that the CRS might lead to a change in tax haven preferences for the incorporation of shell companies. Reassuringly for our identification strategy, our results are robust to using weights computed using the full leaks data, suggesting that this was not the case.
Note that we use two quarters as our reference period instead of one as before, in order for our estimates not to be too dependent on the value of investments in a single quarter. The results are similar when using 2013q2 only as the reference period.
We windsorize at the 95% level in order to avoid extreme values due to very low pre-CRS investments.
Our results are robust to other pre-periods.
Note that this is not the case for the Bahamas Leaks, as they are a sample of files taken from the company register of the Bahamas.
There is a potential selection bias in the OpenLux data as well, for a different reason. The Luxembourg registry of beneficial ownership was made public in 2019, at which date a large number of entities were struck off the companies registry, indicating that some individuals closed down their company to avoid the reporting requirements. If residents from some countries were more likely to adopt this strategy than others, then the resulting preference distribution we get from the OpenLux data will be biased.
On the contrary, if the treatment group sells properties relatively more frequently than the control group, the effect of the CRS would be underestimated.
Appendix figure 20 in section A5 provides a detailed breakdown of the sales made from one year to another over the period.
For example, we replace all the occurrences of "Ltd" by "Limited", "Corp" by "Corporation" etc.
We have access to the company's status over the years.
In some cases, the owners of a given company do not remain the same throughout the whole period. For cases when a person owns a company through different layers of companies, we impose the restriction that the owner owns each company at least over the period 2013-2015, which is the key period of interest for the CRS.
We exclude UBOs with the following words in their name: "COMPAGNIE", "CORPORATION", "COMPANY", "INCORPO-RATED", "TRUST", "LTD", "BUSINESS", "LIMITED", "LLC", "FUND", "INTERNATIONAL", "EUROPE", "FONDATION", "FOUN-DATION", "INVESTMENT", "CAPITAL", "BANK", "INC", "LP", "ACTION", "ACTIVITY", "HOLDING", "GMBH", "LLP", "PLC".
33 We make the hypothesis that in these cases, the UBO is probably an intermediary or a second shell company and not an individual. Since our main tax havens list includes countries such as Austria, Ireland or Lebanon, which are likely to be the true residence country of company owners, we only discard UBOs linked to tax havens that have less than 2 million inhabitants, except Hong Kong, Panama, Singapore and Switzerland, which are all in the top fifteen of the Financial Secrecy Index of the Tax Justice Network (https://fsi. taxjustice.net/en/). Thus, we keep UBOs linked to Austria, Bahrain, Belgium, Chile, Costa Rica, Ireland, Jordan, Liberia, Malaysia, Lebanon and Uruguay.
The figure is actually higher than 4.4%, if we include companies identified in the leaks but with listed UBOs who are either companies or linked to tax havens.
We do have access to administrative companies data for Luxembourg. However, a very high number of companies were closed when it was announced that the beneficial ownership registry would be made public in 2019 -probably to avoid public reporting requirements. This means that we only have information on beneficial ownership for a reduced sub-sample of Luxembourg companies.
The 15% rate was introduced in 2012 and applied initially to purchases made by corporate bodies when the price exceeded £2M. It was extended to transactions above £500K in 2014.
The data are available at https://www.centreforpublicdata.org/property-data-overseas-individuals.
If we consider UK properties owned by UK residents through offshore companies, then the United Kingdom comes second in the ranking.
The windsorization is done on both tails of the price distribution. This means that all properties prices are capped at 0.5% and 99.5% of the price distribution.
For visibility, our analysis period starts in 2005 but starting in 2000 does not change the results at all.
The database we use for top marginal income tax rate across countries is the individual income tax rate table built by KPMG and available at https://home.kpmg/sa/en/home/services/tax/tax-tools-and-resources/tax-rates-online/ individual-income-tax-rates-table.html.
The median top marginal tax rate in the KPMG database is 30%. Results are qualitatively unchanged if we compute the median top marginal tax rate based only on the countries that effectively appear in our matched transactions data.
The complete results of this allocation are available in appendix table A.3 of their paper.
Out of the 67 countries from the G20 (excluding the US and Russia) or participating either in the Joint Announcement or in the OECD Declaration on Tax Matters, we only keep the 42 non-haven countries (using the list from Menkhoff and Miethe (2019)). We also do not keep Greenland and the Faroe Islands as we do not have information on the amount of offshore financial wealth they own in Alstadsaeter, Johannesen and Zucman, "Who Owns the Wealth in Tax Havens? Macro Evidence and Implications for Global Inequality". Thus, we are left with 40 countries.
The algorithm is described in their Appendix tables B1 and C1. In order to be able to follow their matching process, we start by creating several variables from the Address string: PAON (Primary Addressable Object Name), SAON (Secondary Addressable Object Name), Street and Location.
Notes: This table shows the average price paid for a property in each London borough, as computed by the Land Registry in the House Price Index.
Notes: This figure shows a heatmap of the difference in percentage points of the percentage of sales made after one year, two years, etc, between the treatment and the control group. It is calculated over the 2015-2020 period. The x-axis shows the year of purchase of the property, while the y-axis shows the year it was sold. So for example, we see that the percentage of properties bought in 2017 and sold in 2018 was 5pp higher in the treatment group than in the control group.
Notes: This figure presents the aggregated amounts invested in England and Wales by companies incorporated in tax havens vs. companies incorporated in non-havens, normalized by their value in 2013q2. It is based on the Overseas Companies Ownership Dataset.
Matt Collin very kindly shared data he collected. This research was funded by a public grant overseen by the French National Research Agency as part of the Investissements d'Avenir program LIEPP (ANR-11-LABX-0091, ANR-11-IDEX-0005-02) and the Université de Paris IdEx (ANR-18-IDEX-0001), and benefited from the support of the EUR grant ANR-17-EURE-0001. 1
Appendix
A1. Tax advantages of buying a UK property through an offshore vehicle
Stamp Duty and Land Tax. If an individual buys a residential property in the UK, Stamp Duty and Land Tax (SDLT) is charged. The rate is progressive and has increased over time, with a top marginal rate of 12% in 2021. In 2012, a 15% rate is applied to purchases made by corporate bodies when the price exceeds £2M (£500K from 2014). The tax however does not apply in a number of cases, including when the property is used for property rental business. Moreover, a way to avoid the SDLT is to buy the property through a corporate structure and to buy the shares of the company instead of the property itself. In order to counterbalance this tax privilege, the Annual Tax on Enveloped Dwellings (ATED) is introduced in the Finance Act of 2013. It is an annual tax payable by companies owning UK residential property valued at more than £2M in 2013 (the threshold is now set at £500,000) and occupied rather than let out to an unconnected person. 49 The amount charged is progressive, ranging from £3,700 (property value below £1M) to £237,400 (values above £20M) in 2021-2022.
Inheritance Tax. For non-UK residents and non-dom individuals, a common way to avoid inheritance tax on a UK property has been to hold it through an offshore company. Indeed, while the personal representatives or the beneficiaries of a non-dom individual owning UK property directly are liable to the inheritance tax in case of death (40% on the value of the property), no inheritance tax is applied to the shares of a foreign company (even though its sole asset is a UK property) and the inheritance tax can therefore be avoided. 50 This tax privilege was however drastically reduced in 2017, when companies whose value is wholly attributable to a UK residential property interest (UK RPI) started to fall within the scope of Inheritance Tax. 51
Capital Gains Tax. The rules related to the taxation of capital gains arising from the indirect ownership of UK properties have evolved during the period we consider. In 2013, the ATED-related Capital Gains Tax (CGT) is introduced. It applies to properties also covered by ATED at a rate of 28%. In 2015, a new tax, called the Non-Resident CGT, starts to apply, under which all non-UK resident persons and companies will pay a 20% capital gains tax on any profit realised on their property after 6 April 2015. However, it seems to have been possible for non-residents to avoid these taxes until 2019 by selling a property through the shares of the company owning it.
49 Properties let to unconnected parties qualify for relief and are therefore exempt from the ATED charge.
50 This scheme does not work for UK residents.
51 If the company has other assets (e.g. located in France), Inheritance tax will only apply on the fraction of assets subject to the English Inheritance tax. Moreover, if the deceased person's stake in the company is too small (that is, less than 5% when combined with the stakes of persons connected to her) then the new rule doesn't apply.
Notes: CRS exposure or treatment intensity in a given tax haven is computed based on the residence country of all individuals owning companies in that tax haven and on whether the residence country is an early adopter or not. A treatment intensity of e.g. 0.5 indicates that 50% of all company owners in the tax haven reside in countries participating in the Joint Announcement. Countries are labelled according to their importance in terms of flows of investments in the English and Welsh real estate market (e.g. in the less exposed group, British Virgin Islands companies invest more than Hong Kong companies, who invest more than Panama companies etc). There is no drop for the less exposed countries after the CRS, which suggests that no country is "ejected" from the UK real estate market due to higher property prices after 2014.
A4. Geography of tax haven use
Australia
F : T UK '
Notes: This figure shows the most frequent tax havens used to invest in the UK real estate market, by region of the beneficial owner(s).
We construct it by matching the Panama Papers and other leaks data to the OCOD. We display the percentages of owners from each world region we identify in our sample on top of the figure. To compute the percentages, we remove beneficial owners who are linked to a tax haven, and beneficial owners who are companies. The total might not add up to 100% because of rounding.
Column headers: Assets and ownership type / Solution to avoid reporting under CRS / Loophole that prevents the information from being reported / Literature.
- Direct ownership of financial assets: moving deposits to a non-participating country (the U.S.); loophole: some countries are not part of the CRS (the U.S.).
- CRS: the threshold to define a person as a "controlling person" of a company is typically 25% (even though this threshold might vary). No
- Direct ownership of financial assets: holding assets via a discretionary trust with no distribution of income during the reporting period; loophole: a beneficiary of a discretionary trust will be treated as a beneficiary of the trust only if such person receives a distribution in the appropriate reporting period. No
- Direct or indirect ownership of financial assets: acquiring a residence certificate from a secrecy jurisdiction; loophole: some tax havens refuse that any data is ever collected about their tax residents, so becoming a (fake) resident of such a tax haven would prevent any reporting. No
Table 20: Strategies to circumvent the CRS
A9. Do property buyers bunch at different stamp duty tax thresholds?
The UK Stamp Duty and Land Tax (SDLT) is imposed on the purchase value of land and any construction on the land. The rate is progressive and has increased over time as new thresholds have been introduced.
Until December 2014, the rate of stamp duty is applied on the whole amount of the purchase, meaning that the stamp duty schedule exhibits notches -discrete jumps in tax liabilities -at thresholds of property prices. Best and Kleven (2018) show that buyers in the UK react strongly to these notches by "bunching" at different thresholds of the stamp duty schedule. After December 2014, the general stamp duty tax schedule evolves and the tax applies on increasing portions of the property price, i.e. a rate of 0% applies on the portion of the price up to £125,000, then a rate of 2% applies on the portion from £125,001 to £250,000 etc. The rate faced by investors purchasing UK real estate through offshore companies starts to differ from the standard rates from 2012 onward. Finance Act 2012 introduces a 15% rate of SDLT on the acquisition by certain non-natural persons -including foreign companies -of dwellings costing more than £2 million. In 2014, the threshold is reduced to £500,000.
We estimate bunching responses to the stamp duty tax based on transactions for which the price is available. We also exclude transactions when a bundle of properties are purchased at the same time. 56 We first look at bunching behaviors around the £250,000 threshold (figure 25). Until 2014, the proportional tax rate for residential properties jumps from 1% to 3% at this cutoff. From December 2014, the notch is replaced by a kink, so the incentives to bunch at the threshold decrease. Second, we estimate bunching around the £500,000 threshold (figure 26). Until 2014, the proportional tax rate jumps from 3% to 4% at this cutoff. From
March 2014 onward, the proportional tax rate reaches 15% beyond this threshold -as opposed to a marginal tax rate of 2% below. We do not study bunching responses around other thresholds of the stamp duty schedule for sample size reasons. The lower notch takes several values ranging from £60,000 to £175,000 during the period, we thus would have too few observations to estimate bunching responses at this cutoff. Moreover, we cannot estimate bunching responses at the £2 million threshold because the 15% rate only applies from 2012 to 2014 and thus we observe too few transactions around that price during this short period.
We follow a similar method to the one used by Best and Kleven (2018) in their paper. The counterfactual distribution used to compute the excess mass at the notch (or kink) points is estimated by fitting a flexible polynomial of order 7 to the empirical distribution of purchased prices, excluding data in a range around the thresholds. We allow for round-number fixed effects for prices that are multiples of 25,000 and 50,000 in order to capture rounding in the price data. We group transactions into price bins of £5,000. The regression used to estimate the counterfactual distribution around the threshold v is the same as equation (11) in Best and Kleven (2018) and is the following:
56 In this case, the price is only available for the total amount invested and not for each individual property bought.
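In the notation defined just below, a plausible statement of this counterfactual regression (following Best and Kleven, 2018) is:

c_i = Σ_{j=0}^{q} β_j (z_i)^j + Σ_{r∈R} ρ_r I{z_i/r ∈ ℕ} + Σ_{k=h_v^-}^{h_v^+} γ_k I{z_i = k} + ε_i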
where c i is the number of transactions in price bin i, z i the distance between price bin i and the cutoff v and q is the order of the polynomial (q = 7 in our estimation). The second term of the equation accounts for round-numbers bunching, with R = {25, 000; 50, 000}, N the set of natural numbers and I{.} is an indicator function. The third term excludes a region {h - v , h + v } around the threshold that is distorted by responses to the tax. The estimate of the counterfactual distribution is defined as the predicted bin counts ĉi omitting the contribution of the dummies in the excluded range. The excess bunching is estimated as the difference between the observed and counterfactual bin counts in the part of the excluded range below the threshold.
Standard errors are obtained by bootstrapping the procedure 200 times.
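The authors rely on the bunching package in R; purely for illustration, a bare-bones version of the counterfactual fit and excess-mass calculation (bin size, cutoff, bounds and file name are assumptions) could be written as:

import numpy as np
import pandas as pd

bins = pd.read_csv("price_bin_counts.csv")      # columns: price_bin (bin midpoint), count
cutoff, h_minus, h_plus, q = 250_000, 25_000, 25_000, 7
z = bins["price_bin"] - cutoff                  # distance to the threshold

X = np.column_stack([z ** j for j in range(q + 1)])                        # polynomial terms
for r in (25_000, 50_000):                                                  # round-number dummies
    X = np.column_stack([X, (bins["price_bin"] % r == 0).astype(float)])
excluded = (z >= -h_minus) & (z <= h_plus)
X = np.column_stack([X, np.diag(excluded.astype(float))[:, excluded.values]])  # one dummy per excluded bin

beta, *_ = np.linalg.lstsq(X, bins["count"], rcond=None)
counterfactual = X[:, : q + 3] @ beta[: q + 3]             # omit the excluded-range dummies
excess = (bins["count"] - counterfactual)[excluded & (z < 0)].sum()
# the b reported in the figures additionally scales this by the average counterfactual count in the excluded range
print("excess mass below the cutoff:", excess)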
Overall, our results suggest that property buyers do respond to the stamp duty tax. Looking at figures 25 and 26, panel (a), there are clear and statistically significant responses to the notch in the 2000-2014 period, which is roughly the same sample period as in Best and Kleven (2018). Note that this top rate only applies to residential properties and that we do not know whether a purchase is residential or commercial in our sample, in most of the cases. |
03112784 | en | [
"shs.langue"
] | 2024/03/04 16:41:20 | 2020 | https://shs.hal.science/halshs-03112784/file/HAL_Machines_Modicom.pdf | What is a linguistic machine? Epistemology of technology and instrumented linguistics. Pierre-Yves Modicom. The "empirical turn" in linguistics as a mechanical turn
Introduction
"The corpus: both the notion and the object are at risk today, in France, of falling victim to their own success. Not a discipline, not a scientific committee, not a researcher that does not refer to it; above all, not a linguist who does not handle it, caress it or mistreat it." It is with these words that Damon Mayaffre, one of the pioneers of the quantitative treatment of discourse (political discourse, in this case) in linguistics, opens a general reflection on the status of the corpus in current linguistics 1 . At the same moment, François Rastier echoes him by declaring that the corpus is the whole within which "the text", conceived as the unit of reference of an "advanced linguistics", alone can "take on its meaning" 2 .
As Mayaffre notes in his article, the reversal of the Saussurean dichotomy between langue and parole (a dichotomy to which one could add the equivalent oppositions more common in the English-speaking world, for instance between system and usage or between competence and performance) leads to a radical primacy of authentic data and ultimately results in the claim that "there is no linguistics that is not corpus linguistics" 3 , which amounts to taking the exact opposite view of Chomsky's statement that "corpus linguistics doesn't mean anything" 4 .
The feeling of a radical change is now very widespread in the field of the language sciences, which translates into a flowering of methodological literature attempting to delineate what many describe as a turn, or even a paradigm shift 5 . In these debates, the notion of "tooled-up" or "instrumented" linguistics is often invoked to illustrate the thesis of a particular, and potentially superior, form of scientificity of "corpus" linguistics as a linguistics strongly tied to the use of machines, and thus supposedly less dependent on "theory" or speculation. The present article aims precisely to analyze and question this relationship between corpus linguistics and its machines.
The place of large corpora in linguists' claims to scientificity
In what we would for our part describe as the spontaneous epistemology of scientists rather than as methodology, the corpus is frequently presented as liable to take on two more or less antithetical statuses:
Whereas corpus-based linguistics ("linguistics on corpora") relies on illustrative corpora, which make it possible to put a theory to the test, and could therefore claim the hypothetico-deductive method (in a movement frequently called descending, or top-down), corpus-driven linguistics ("linguistics of corpora") claims the inductive method (in a movement often called ascending, or bottom-up). It is within this latter framework that most of the reflection on the supposedly unavoidable character of the recourse to corpora and statistics has developed. A good example can be found in the works of D. Glynn, a major figure in "corpus" semantics and the author of numerous studies on problems of method in "empirical" linguistics: "Is quantitative empirical research a feasible avenue for research in semantics? More specifically, can we use corpus data to produce testable and falsifiable results of semantic description?" 6 In its most radical forms, corpus linguistics suggests a parallelism or an equivalence between the technical and mathematical tools available to the linguist and the cognitive mechanisms (and therefore, in the last instance, at least if one adopts the specifically neuronal perspective that often prevails in cognitive science, cerebral mechanisms) that would characterize language as a faculty, or a singular language as a "code".
"Thus, we can say that frequency of co-occurrence, which is fundamental to corpus research, is a quantitative operationalisation of the basic theories of Cognitive Linguistics: entrenchment and categorisation. These theories, entrenchment and categorisation, explain grammar and meaning 7." Even without going as far as this new form of psychophysical parallelism, one can identify, in the discourse on the "instrumented turn" of linguistics, a network of arguments of which the two most significant seem to us to be the appeal to the notion of falsification and the reinvestment of the category of the "empirical": first, by invoking, even tacitly, the figure of Karl Popper, the mere concept of falsifiability already serves as a proclamation of scientificity in favour of the quantitative method, even if at the price of questionable presuppositions (Popperian falsificationism cannot be treated as a self-evident epistemological theory, and moreover the relationship between reproducibility and falsifiability, in the case of corpus-driven linguistics, would call for a careful re-examination of both notions, which forbids taking them for granted as a matter of principle). The notion of an "empirical turn" or of "empirical research" acquires a polemical value here and again represents, implicitly, an argument for the new linguistics, or rather against the old one, presented as theoreticist.
In some authors one also finds the idea that recourse to quantitative analysis, understood as mechanical analysis, would reduce the share of speculation on the part of the linguist, and in particular of the semanticist, that so-called qualitative analysis imposes, which indeed implies setting oneself appreciably different objectives in the description of linguistic facts. Quantitative linguistics shares this idea with ethnomethodological approaches to interaction: faithfulness to the raw data is supposed to protect the linguist from the danger of interpretive speculation. At the heart of the discourse of the new tool-based linguistics there is thus, in our view, a claim of absolute faithfulness to the data. Yet this claim seems tied to a postulate: that of the transparency of the instrument, or of its neutrality. Methodological discussions in corpus linguistics readily focus on the statistical operations available for analysing and interpreting the "data", but the ontology of the data, and that of the instruments used, seems to have given rise to a far smaller body of studies. Among the few methodological works venturing into this area, one can mention those of Dalud-Vincent and of Kalampalikis & Moscovici on the Alceste software 8 9, and the more general article by Busse & Teubert on the empirical method in the historical semantics of discourse 10.
For a critique of machine neutralism
It is this ontology of the objects of corpus-driven/corpus-based linguistics, as tool-based linguistics, that will be at the centre of my argument. More precisely, since the two constitutive operations of corpus linguistics are annotation and exploration, the central question is that of the theoretical status of the technical instruments used to build and explore a corpus.
For an ontology of the technical object "corpus" in linguistics, it seems desirable to turn to the categories proposed by the epistemologist of technology Gilbert Simondon in his book Du mode d'existence des objets techniques 11. At the heart of Simondon's thought lies the idea of the process of concretization, that is, of the evolution of a technical object over the course of history, leading to a complete adequacy to its function(s), which can go as far as overdetermination (whereby the tool, having become hyperspecialized, needs the support of other tools to carry out functions that its "ancestors" fulfilled on their own). In this perspective, the tool "cannot be regarded as a mere utensil 12".
The central thesis defended here, applying Simondon's theories on the ontology of machines, is that corpora are mixed technical milieus, made up of the texts of the corpus and of the technical objects, themselves complex, that serve to explore them. The "technical ensembles" formed by a corpus and its exploration tool undergo a technical individuation process of their own, which leads to a plurality of possible configurations, thereby calling into question the underlying postulate of the neutrality of the technical instrument conceived as a mere revealer.
Simondon's epistemology of the machine
The process of concretization
The major characteristic of individuation as a process of concretization of the technical object is the growth of its internal unity: the object's increasing adequacy to its function(s) goes hand in hand with the mutual adaptation of its components, while side effects and energy losses are gradually eliminated. Concretization is a process of reducing contingency and tightening the links between components: "In a present-day engine, each important part is so bound to the others by reciprocal exchanges of energy that it cannot be other than it is" 13. The example of the engine shows how the concretization of the technical object follows a historical or genealogical dynamic: "In the old engine, each element intervenes at a certain moment in the cycle and is then supposed to stop acting on the other elements; the parts of the engine are like people who would each work in turn but would not know one another 14." In this genealogical dynamic, technology is thoroughly invested by science 15, which seizes on the functional characteristics of tools and components in order to extract from them everything that can be extracted and to exclude everything that is not wanted. Thus mechanics as a branch of physics (the study of motions and resistances between bodies) informs mechanics as a technique (for producing and operating machines). Here we find a topos already underlined by Kant in Theory and Practice: technology is a dependency of science, and a technology less full of science is a technology less effective by its own criteria of technicity. In this respect, one can note from the outset that it makes no sense to oppose a "tool-based" linguistics and a "speculative" linguistics if by "speculative" one means theory-driven: a tool is materialized theory, and in this sense there is nothing more theory-driven than a study carried out by relying on a tool.
The technical object and its milieu
But concretization is not only a process of internal transformation. It also has an impact on the relation of the technical object to its outside, which is twofold: the exterior of the technical object is both the space in which it is located, with whatever fills that space and presents certain characteristics (air pressure, light, acoustic environment, and so on), which Simondon calls the "geographical milieu", and the other technical objects with which the object may interact or be combined, the "technical milieu 16". For Simondon, the defining feature of the technical object is precisely to stand "at the meeting point 17" of these two milieus, and to establish, in, through, and for its functioning, a mixed milieu associating the two facets of its outside. When this mixed milieu is stabilized and fully integrated into the functioning of the technical object, Simondon calls it the "associated milieu". "One could say that concretizing invention realizes a techno-geographical milieu [...] which is a condition of possibility of the functioning of the technical object. The technical object is thus the condition of itself as a condition of this mixed milieu. [...] The evolution of technical objects can become progress only insofar as these technical objects are free in their evolution and not driven by necessity in the direction of a fatal hypertely. For this to be possible, the evolution of technical objects must be constructive, that is, it must lead to the creation of this third, techno-geographical milieu, each modification of which is self-conditioned 18."

Individuation

On this basis, Simondon can distinguish three stages of individuation of the technical object: 1) the technical individual properly speaking, which has an associated milieu without which it cannot function. The technical object then establishes a certain regime of existence for all or part of its environment, a regime that depends on the functioning of the object and on which this functioning in turn depends. 2) The technical ensemble (or "ensemble of technical forms"): it has no unified associated milieu. Correspondingly, it is made up of technical objects that are relatively autonomous from one another and interact only weakly. It is therefore supra-individual and weakly concretized. 3) The technical component, finally, is infra-individual. It has no associated milieu either, but it can be an integral part of an ensemble as well as of a technical individual.

Tools, in the restricted sense the term has in Simondon (a hammer, for example), and their cousins, instruments (a microscope), are technical elements. "The tool extends the organ and is carried by the gesture. The 18th century was the great moment of the development of tools and instruments, if by tool we mean the technical object that makes it possible to extend and arm the body in order to carry out a gesture, and by instrument the technical object that makes it possible to extend and adapt the body in order to obtain a better perception; the instrument is a tool of perception 19." In this respect, corpus linguistics is therefore not "tool-based" but "instrumented", since the technical objects that we are now going to examine are indeed supposed to make it possible to see, or perceive, the data better. Simondon is moreover quite explicit that, for him, the practice of science rests on the use of instruments 20.
In what follows, we examine four instruments, in two steps. We begin with two textometric instruments, that is, software packages that allow systematic inventories of the occurrences of a form in a corpus and bring out its co-occurrence patterns (that is, they list the forms most likely to appear in the vicinity of the term under study). We then turn to two online corpora, in which the textual "data" are directly integrated into the exploration instrument.
Implications for the textometric instruments of corpus linguistics
AntConc and TXM: presentation
AntConc, a concordancer
AntConc is a pioneering piece of software for the quantitative exploration of texts. Beyond its original functionalities, it can be put to additional uses through extra work in combination with other software. We leave these uses aside: without going so far as to call them tinkering, they are external to the problem of the adaptation of the technical object to the functions for which it was created.
AntConc requires loading a corpus made up of one or more .txt files. Exploration proceeds by entering a query (one types a form into the search engine). One then obtains:
(i) all the occurrences of this form (together with frequency and distribution information) in the .txt corpus previously compiled and loaded by the analyst 21; (ii) the list of all the words present in the corpus, ranked by number of occurrences 22; (iii) the most frequent collocations of the form initially searched for 23.
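By way of illustration, and only to fix ideas, here is a minimal sketch in Python of the three kinds of output just listed (occurrence count, frequency list, collocates within a fixed window), computed from a raw text file. It is a schematic stand-in, not a description of AntConc's actual implementation: the tokenization rule, the window size, and the file name are assumptions made for the example.

import re
from collections import Counter

def explore(text, query, window=5):
    # Crude tokenization on letter sequences, a stand-in for AntConc's
    # segmentation of the raw .txt input on spaces and punctuation.
    tokens = re.findall(r"[^\W\d_]+", text.lower())
    freq_list = Counter(tokens)                              # (ii) word list by frequency
    positions = [i for i, t in enumerate(tokens) if t == query.lower()]
    collocates = Counter()
    for i in positions:                                      # (iii) neighbours of the query
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                collocates[tokens[j]] += 1
    return len(positions), freq_list, collocates             # (i) number of occurrences

with open("corpus.txt", encoding="utf-8") as f:
    hits, freqs, colls = explore(f.read(), "corpus")
print(hits, freqs.most_common(10), colls.most_common(10))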
The software thus has three characteristics that will be relevant in the comparison with TXM:
(i) it is up to the analyst to load the .txt files directly; it is also up to the analyst, for instance, to check the character encoding, failing which the software will run into processing problems with diacritics (for French: cedilla and accents; for German: essentially the Umlaut; note also the software's reluctance to handle ß); (ii) AntConc is in fact blind to the language; the software isolates words on the basis of the spaces and punctuation marks in the file; (iii) the corpus contains no annotation: it is reduced to raw text.
AntConc's work thus amounts to putting raw data in order and quantifying occurrences. All interpretive work takes place upstream or downstream of the use of the instrument.
TXM
TXM can in many respects be presented as a descendant of AntConc: it allows all the searches described for AntConc to be run, and more. If one compares TXM with AntConc while keeping in mind the characteristics noted above, the most important difference concerns annotation: TXM necessarily works on an annotated corpus. If the corpus loaded is not annotated, for instance if it is the same .txt file that AntConc can explore, TXM will begin by carrying out the lexical (morphosyntactic) annotation itself before the first query can be launched and the exploration of the corpus can begin. Concretely, TXM will in particular gather the different forms of a single lexeme under a common heading and tag the forms by part of speech: noun, adjective, verb, usually distinguishing between full verbs, auxiliaries, modals, and so on, conjugated or not. We write "usually" because, although the technical component that carries out the morphosyntactic annotation is at first sight always the same (the TreeTagger software, used as a module), this annotation cannot of course be "blind to the language", as we wrote of AntConc: there is at least one version of TreeTagger per language that TXM can handle, and the software cannot be used to examine texts in a language for which no version of TreeTagger exists. And from one language to another, not only is the "dictionary" used to group several forms of a single lexeme obviously not the same, but the morphosyntactic tags, notably those distinguishing parts of speech, are not the same either, and the exploration possibilities may vary.
From the outset, then, it appears that with TXM the corpus is "digested" by the exploration and analysis software. Searches must go through a "query language" that requires one to indicate which annotation level(s) one is working at: one can request the occurrences of a form, of a lemma 24, of a part of speech, or of a complex structure combining information from different levels, for example all the passages of the corpus where an adverb is immediately followed by a conjugated verb, or all the adjectives appearing immediately before the lemma Entscheidung in the corpus, and so on. For all these queries, in addition to the inventory of occurrences, one can obtain collocation tables, or frequency and distribution information across the corpus 25. The image below thus corresponds to the collocation table for the structure combining a conjugated modal verb followed within five words by an adverb, on my DRKORP corpus 26. In reality, even if one loads a text file, the software works on an .xml file. In a "naive" use of TXM, one can work with the software without ever accessing or modifying these underlying files. But it is also possible to locate where on the computer TXM stores the XML files corresponding to a base of .txt files, in order to intervene directly on these XML files.
[Figure I]
By way of illustration, I borrow from Hardie 27 the annotation, in his minimal standard Modest XML, of the utterance The cat sat on the mat.
<w pos="ART" lemma="the">The</w>
<w pos="NOUN" lemma="cat">cat</w>
<w pos="VERB" lemma="sit">sat</w>
<w pos="PREP" lemma="on">on</w>
<w pos="ART" lemma="the">the</w>
<w pos="NOUN" lemma="mat">mat</w>
<w pos="PUNC" lemma=".">.</w>
Annotation at additional levels (semantic, for instance) is still mostly done by hand on the XML files, even though there are fledgling systems for automated semantic annotation in the manner of TreeTagger for syntax (see, e.g., the SALSA corpus in Saarbrücken).
There are also annotation tools such as ANALEC or ELAN/CLAN that limit direct confrontation with the XML code.
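To give a concrete idea of what an annotation-aware query amounts to once the corpus has been turned into such word elements, here is a minimal Python sketch that reads Modest-XML-style <w> elements and looks for a conjugated modal verb followed within five tokens by an adverb, that is, the kind of two-level request discussed above. It is only a schematic analogue of what TXM's query language does, not its actual syntax; the tag values ("VMFIN", "ADV") and the file name are assumptions modelled on common German tagsets.

import re

# Parse Modest-XML-style word elements into (form, pos, lemma) triples.
WORD = re.compile(r'<w pos="([^"]+)" lemma="([^"]+)"\s*>([^<]*)</w>')

def load_tokens(xml_text):
    return [(form, pos, lemma) for pos, lemma, form in WORD.findall(xml_text)]

def modal_then_adverb(tokens, max_gap=5):
    # Schematic two-level query: a finite modal verb followed, within
    # max_gap tokens, by an adverb (the tag values are assumptions).
    hits = []
    for i, (form, pos, _) in enumerate(tokens):
        if pos == "VMFIN":
            for form2, pos2, _ in tokens[i + 1 : i + 1 + max_gap]:
                if pos2 == "ADV":
                    hits.append((form, form2))
                    break
    return hits

with open("drkorp.xml", encoding="utf-8") as f:
    tokens = load_tokens(f.read())
print(modal_then_adverb(tokens))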
From the ontological point of view
If one asks about the "associated milieu" of these instruments, TXM shows a markedly higher degree of assimilation of its milieu than AntConc. In the use of TXM, there is no longer any privilege of the "primary" line of text, which is queried in the same way as any other annotation line. The system implies a transformation of the textual input into a mini-database coded specifically for corpus research; for the basic functionalities, this transformation is carried out by the tool itself. Finally, the syntax of the searches implies submission to the notional categories of the tool.
A radical way of putting this would be to say that with TXM there are no longer any corpus "data" distinct from the technical object, since the corpus is no longer reducible to the source text: TXM's true corpus is the set of XML data, which make no distinction between the source text and the annotations, and the latter result from choices made either by the programmers of the modules used for automatic annotation or by the individual annotator. The corpus can be described as the associated milieu of the machine, since it is modified by the machine in its very nature.
TXM's degree of individuation is also fairly high from the point of view of internal structure: the software is characterized by a strong co-adaptation of its different components (annotation, exploration, and export modules, etc., which are integrated into the user interface even though they may have been designed separately). That said, the system is closed only in the first instance, the corpus itself remaining manually modifiable ad libitum.
Online linguistic databases ("corpora"): DDD and DeReKo
Alongside the instruments for analysing a corpus that the analyst builds for themselves, one of the major contemporary developments is the appearance of large online textual databases ("large corpora") such as the Deutsches Referenzkorpus, the British National Corpus, or the Corpus of Contemporary American English. In what follows, we look at two quite different databases, the DeutschDiachronDigital corpus and the Deutsches Referenzkorpus. As we will see, the label "corpus" is in a sense reductive for these databases.
General remarks
DeutschDiachronDigital (DDD) is the name of a freely accessible database on Old High German, hosted by the corpus portal of the Humboldt-Universität in Berlin and backed by the exploration and annotation tool Annis, like all the HU corpora.
The Deutsches Referenzkorpus (DeReKo), for its part, is an exploration portal that gives access to a freely accessible textual database once the free registration step has been completed. DeReKo is developed and hosted by the Institut für Deutsche Sprache (Mannheim). It is backed by the exploration tool Cosmas II. Quite characteristically, just as many linguists call the HU corpus portal "Annis", "Cosmas" is often treated as the name of the corpus rather than of the exploration interface. In each case this exploration interface pre-exists the corpus. The site is designed around it. The annotated and/or lemmatized textual data can only be exported after a query, and from the interface.
DeutschDiachronDigital
The DDD thus relies largely on the tool, or rather the instrument, Annis (for "Annotation of Information Structure"). This corpus exploration tool, implemented in a web browser, was developed by the former Berlin Sonderforschungsbereich in theoretical linguistics, whose site also hosts the online corpus bank in which the DDD database appears.
It should nevertheless be noted that Annis can be downloaded on its own, and can therefore be used independently of the site. It can perfectly well be used for other corpora.
Annis allows syntactic annotation (in the form of dependency trees) and hence searches such as "give me all the NPs containing three constituents, one of which is a PP" (for a study of nominal valency, for example). But any search requires mastery of Annis's own query language. There is no assistant, just a list of model queries to imitate, and a manual.
One of the questions raised in the previous section was that of a possible privilege (or anteriority) of the original line of text (labelled edition). Here, the only specificity of this line of text is that one can ask for a form on it directly without using the query language. This is in fact a programming trick: since this line is the only one that can be explored without stating on which annotation line the information sought must be found, Annis interprets the absence of an instruction on this point as an instruction to search the "edition" line. All annotation lines can equally well be the object of a query.
In many respects, the DDD looks like an add-on for Annis: the (dependency-based) syntactic search functionalities are not yet available in all the databases, and in particular not for the DDD. The search "give me all the NPs containing three constituents, one of which is a PP" is therefore not yet possible there. It would require a team of diachronic linguists and computer scientists to develop a system for automatically processing syntactic relations in Old High German texts. From the point of view of the mode of coexistence of instrument and corpus, the fact remains that one observes a strong incorporation of the text into the instrument; the corpus no longer has any existence outside the analysis and exploration system. There is thus a form of asymmetry: the instrument pre-exists the text, and the text must adapt to the instrument.
The technical object shows a high degree of closure, but one may ask whether the corpus is really an associated milieu of the technical individual, or whether we are not dealing with a single technical ensemble organized around the exploration instrument, of which the corpus would be one element.
DeReKo -Cosmas II
DeReKo is not strictly speaking "a" corpus, but a base of corpora. By default, when one speaks of DeReKo, one means the W-Archiv, the general corpus of written German, which serves as the basis for the other corpora.
Here one observes a certain co-adaptation of corpus and tool: Cosmas I and then II were developed at the IDS for the DeReKo project; Cosmas is not open source and cannot be downloaded; Cosmas I/II can therefore be used only for DeReKo. Symmetrically, DeReKo can only be queried and exploited through Cosmas II. As with the DDD, the privilege of the original text shows in the possibility of asking Cosmas for all the occurrences of a given form simply by typing it. But this is a false self-evidence, since even in this case the user is asked to choose among all the typographical variants of the form in question. Another programming trick: a simple space between two forms in a query is interpreted as a way of asking that the two forms be immediately consecutive. Here too, the absence of a metalanguage is tied to the fact that doing without a meta-instruction concerning the succession of forms is possible in only one configuration (immediate consecution), which made it possible to program Cosmas so that it assigns to this absence of an instruction a value equivalent to an instruction.
Apart from these elementary queries, all queries involving several terms or several forms of a term require a minimum of coding. Here too, one can only rely on a small repertoire of model queries to imitate, and a manual.
Cosmas thus makes it possible to obtain lists of examples displayed in KWIC format 28, exactly as with AntConc or TXM 29, but also co-occurrence profiles.
Within DeReKo, the "C and T archives" should be singled out: two corpora that take up part of the texts of the W archive but with syntactic annotation. This annotation is carried out by two different instruments: either Connexor (C) or TreeTagger (T), which we have already met.
Figure III: Table of co-occurrences for the query "conjugated modal verb followed by an adverb, with a gap of between 0 and 5 words", for TreeTagger on the T archive of DeReKo, 7 June 2017.
Here there is an assistant for formulating queries, which means one does not have to master the query language as with TXM, so that the text of the apparent query will not be the same for the German TreeTagger in TXM and for the TreeTagger of the DeReKo T archive. In the same spirit, it should be noted that Connexor and TreeTagger do not allow the same queries: Connexor makes it possible to search by part of speech while also distinguishing certain grammatical categories (all adjectives vs. adjectives in the positive degree vs. comparatives vs. superlatives; all verb forms vs. infinitives vs. participles vs. inflected forms; all inflected verb forms vs. inflected in the indicative vs. in subjunctive 1 vs. in subjunctive 2; inflected in any tense of the indicative vs. in the present vs. in the preterite, etc.). TreeTagger for DeReKo, for its part, does not distinguish the degrees of the adjective but opposes attributive adjectives to the others; and it does not recognize the tenses or moods of the verb, but does isolate verbs in the imperative. The choice of the instrumented archive (C archive or T archive) will therefore be determined by the nature of the intended query.
Advanced functionalities
From the point of view of technical use, the textual database and the tool no longer have distinct existences and form a single tool. This tool does not necessarily qualify as a technical individual for all that: it is hard to see what would play the role of associated milieu here. Rather, it is a single, weakly individuated technical ensemble comprising a plurality of components.
In the first instance, the COSMAS/DEREKO ensemble maintains a strong privilege of the source text, including in its typographical materiality. The "data" are still not really given, since they still proceed from the conscious or unconscious annotation choices tied to the use of this or that instrument, or to the formulation of the query, but the share of autonomy retained by the textual base within the ensemble, and the theoretical bias in favour of co-occurrence analysis, keep the degree of technical assimilation of the texts at a relatively low level compared with the DDD, for example.
Conclusion: Linguistics, from the workshop to the factory?
What lesson can be drawn from this brief survey? It seems to us to emerge from these first soundings that instrumented linguistics is neither more nor less "empirical" than linguistics based on the so-called qualitative analysis of arbitrarily selected examples. This does not mean that it brings nothing: it does allow an increase in the types of research that are possible, and it makes it possible to make the corpus say other things (and to build corpora of a new type in order to make them say those other things). It induces a new relationship to the objects of research, which remain more than ever constructs, and not "givens" or data. There is no catastrophe in this: since the theoretical overdetermination of the scientist's approach is in any case probably inevitable, whether or not machines are used, the essential thing is that it be conscious and acknowledged.
But the fact remains that the linguistics of machines does induce a new position for the analyst, who becomes integrated into their own research apparatus and abandons (even without realizing it) the demiurgic omni-intervention that sometimes characterized the exercise before the recourse to machines. To name this change and conclude this itinerary, let us give the last word to Simondon: "It is not essentially by its size that the factory is distinguished from the artisan's workshop, but by the change in the relation between the technical object and the human being: [...] the factory uses true technical individuals whereas, in the workshop, it is the human being who lends their individuality to the accomplishment of technical actions. [...] The engineer, the man of the machine, in fact becomes the organizer of the ensemble comprising workers and machines. Progress is grasped as a movement perceptible through its results, and not in itself through the set of operations that constitute it 30." 30 SIMONDON Gilbert, Du mode d'existence des objets techniques, op. cit., p. 163-164.
Figure I: List of collocates for the query "conjugated modal verb followed within 0 to 5 words by an adverb" in TXM, DRKORP corpus.
Figure II: View of a working window in the DDD, with the annotations of an utterance from the Tatian.
1 MAYAFFRE Damon, « Rôle et place des corpus en linguistique : réflexions introductives », Texto!, n°10, vol. 4, p.5. 2 RASTIER François, « Enjeux épistémologiques de la linguistique de corpus », Texto! Inédits, en ligne : http://www.revue-texto.net/Inedits/Rastier/Rastier_Enjeux.html (dernière consultation le 1 er juin 2018, 10:22) ; 1 re publ. in G. WILLIAMS (dir.), La linguistique de corpus, Rennes, Presses universitaires de Rennes, 2005, p. 31-46. 3 CHARAUDEAU Patrick, « Dis-moi quel est ton corpus, je te dirai quelle est ta problématique », Corpus n°8, 2009, p. 60 (Référence signalée par Sara Benoist, c. p.). 4 « Corpus linguistics doesn't mean anything. It's like saying suppose a physicist decides, suppose physics and chemistry decide that instead of relying on experiments, what they're going to do is take videotapes of things
happening in the world and they'll collect huge videotapes of everything that's happening and from that maybe they'll come up with some generalizations or insights. Well, you know, sciences don't do this. » (ANDOR József, « The master and his performance : An interview with Noam Chomsky », Intercultural Pragmatics n° 1, vol. 1, 2004, p. 97). 5 Ainsi Glynn parle-t-il de « major paradigm shift in linguistics, from theory-driven to empirical research. » (GLYNN Dylan, « Corpus-driven cognitive semantics: an introduction to the field », in Dylan GLYNN & Kerstin FISCHER (dir.), Quantitative methods in cognitive semantics: Corpus-driven approaches, Berlin, De Gruyter, 2010, p. 1-40)
"Is quantitative empirical research possible for the study of semantics? More specifically, can we use corpus data to produce testable and falsifiable results for the description of meaning?" (op.cit., p.1).
"Thus, we can say that frequency of co-occurrence, which is fundamental to corpus research, is a quantitative operationalisation of the basic theories of Cognitive Linguistics -entrenchment and categorisation. These theories, entrenchment and categorisation, explain grammar and meaning". GLYNN Dylan, « Corpus-driven cognitive semantics: an introduction to the field », art. cit., p.
8 DALUD-VINCENT, Monique, « Alceste comme outil de traitement d'entretiens semi-directifs : essai et critiques pour un usage en sociologie », Langage et Société, 135, 2011, p. 9-28. disp. sous https://www.cairn.info/revue-langageet-societe-2011-1-page-9.html (dernière consultation le 1 er juin 2018, 10h25). L'auteur remercie un relecteur anonyme pour cette référence et la suivante. 9 KALAMPALIKIS, Nikos & MOSCOVICI, Serge, « Une approche pragmatique de l'analyse Alceste », Cahiers internationaux de psychologie sociale 66, 2005, p. 15-24 ; disp. sous https://www.cairn.info/revue-les-cahiersinternationaux-de-psychologie-sociale-2005-2-page-15.htm (dernière consultation le 1 er juin 2018, 10h30). 10 BUSSE, Dietrich et TEUBERT, Wolfgang, « Ist Diskurs ein sprachwissenschaftliches Objekt ? Zur Methodenfrage der historischen Semantik », in Dietrich BUSSE, Fritz HERMANNS & Wolfgang TEUBERT (dir.), Begriffsgeschichte und
Ibid., p.64.
SIMONDON Gilbert, Loc. cit.
Ibid., p.69.
Ibid., p. 159.
Ibid., p. 160.
See the online appendix, figure 1. The online appendix is available at https://zenodo.org/record/1137788 (DOI 10.5281/zenodo.1137787; last accessed 1 June 2018, 11:01).
See the online appendix, figure 2.
See figure 4 of the online appendix.
See figure 5 in the online appendix.
Voir MODICOM Pierre-Yves, L'énoncé et son double : recherches sur le marquage de l'altérité énonciative en allemand, Paris, Université Paris-Sorbonne, 2016.
HARDIE Andrew, « Modest XML for Corpora: Not a standard, but a suggestion », ICAME n°38, 2014, p. 73-103.
KeyWord In Context: a concordance display format in which the term searched for appears as the central pivot of the display window, preceded and followed by its immediate context, defined for instance as the utterance or as the X graphic words preceding or following the pivot. The KWIC format is in particular a valuable aid in the indexing and manual annotation phases for the occurrences of a term in a corpus.
See figure 6 in the online appendix.
04103666 | en | [
"shs.eco"
] | 2024/03/04 16:41:22 | 2022 | https://shs.hal.science/halshs-04103666/file/WP7.pdf | Ludvig Wier
email: [email protected]
Gabriel Zucman
email: [email protected]
Keywords: multinationals, profit shifting, factor shares, taxation
JEL classification: H26, E25, F23
This paper constructs time series of global profit shifting covering the 2015-19 period, during which major international efforts were implemented to curb profit shifting. We find that (i) multinational profits grew faster than global profits, (ii) the share of multinational profits booked in tax havens remained constant at around 37 per cent, and (iii) the fraction of global corporate tax revenue lost due to profit shifting rose from 9 to 10 per cent. We extend our time series back to 1975 and document a remarkable increase of multinational profits and global profit shifting from 1975 to 2019.
Introduction
A body of evidence suggests that multinational companies shift profits to tax havens (e.g., Bolwijn et al.; Clausing; Crivelli et al.; Tørsløv et al. 2022a). This phenomenon has attracted considerable attention from economists and policy-makers. In 2015, the OECD launched the Base Erosion and Profit Shifting process to curb tax avoidance possibilities stemming from mismatches between different countries' tax systems. In 2017, the United States cut its corporate tax rate from 35 to 21 per cent and introduced measures to reduce profit shifting by US multinational companies. In 2021, more than 130 countries and territories agreed to a minimum tax of 15 per cent on the profits of multinational firms, with implementation scheduled to begin in 2024 in some countries.
Yet despite these developments, we do not currently have a good sense of the dynamic of global profit shifting. Have corporations reduced the amounts they book in tax havens since 2015, or have they found ways to eschew the new regulations? A number of studies provide estimates of global profit shifting, but they typically do so for just one reference year. Moreover, because these studies rely on different raw sources and methodologies, their estimates are not directly comparable, making it hard to construct consistent time series by piecing different data points together. This limits our ability to study the dynamics of profit shifting and to learn about the effects of the various policies implemented to curb it.
This paper attempts to overcome this limitation by creating global profit shifting time series constructed following a common methodology. Our series allow us to characterize changes in the size of global corporate profits, the fraction of these profits booked in relatively low-tax places, and the cost of this shifting for governments of each country.
Our starting point is the estimates of Tørsløv, Wier, and Zucman (2022a), which are for 2015. Building on the same sources and applying the same methodology, we first extend the Tørsløv et al. (2022a) estimates to cover the years 2015 to 2019, a period that includes the Base Erosion and Profit Shifting (BEPS) process and the US tax reform of 2017. We then construct pre-2015 series back to 1975, which allows us to capture the financial and trade liberalization decades that saw a dramatic rise of multinational profits. Due to the lack of some of the input data required to implement the full Tørsløv et al. (2022a) methodology, these pre-2015 series are based on additional assumptions and have some margin of error. However the main quantitative patterns that emerge from these series are likely to be reliable.
Our main findings can be summarized as follows. First, global corporate profits have grown much faster than global income between 1975 and 2019. The share of profits in global income has increased by a third over this period, from about 15 per cent to close to 20 per cent. This increase is due both to the rise of the share of global output originating from corporations (as opposed to, e.g., non-corporate businesses) and the rise of the capital share of corporate output. The fast growth of corporate profits means that if the effective global corporate income tax rate had stayed constant, global corporate tax revenues (as a fraction of global income) should have increased by about one third since 1975. In reality, corporate tax collection has stagnated relative to global income-that is, the global effective corporate income tax rate has declined by about a third.
Second, there has been a large rise in multinational profits, defined as profits booked by corporations in a country other than their headquarters. The share of multinational profits in global profits has more than quadrupled since 1975, from about 4 per cent to about 18 per cent. This evolution reflects the rise of multinational firms, a well known development but for which a global quantification was lacking so far. The rise has been particularly pronounced since the beginning of the 21 st century. This evolution may explain why the issue of how to tax multinational firms has become more salient in the first two decades of the 21 st century. When foreign profits accounted for only about 5 per cent of global profits (as was the case from the 1970s through to the late 1990s), the tax revenues implications of properly taxing these profits were relatively small. With the rise of multinational profits, the revenue implications are significantly larger.
Third, there has been an upsurge in the fraction of multinational profits shifted to tax havens. By our estimates, this fraction has increased from less than 2 per cent in the 1970s to 37 per cent in 2019. Because multinational profits themselves have been rising much faster than global profits, the fraction of global profits (multinational and non-multinational) shifted to tax havens has risen from 0.1 to about 7 per cent. Consistent with these findings, we estimate that the corporate tax lost from global profit shifting has increased from less than 0.1 per cent of corporate tax revenues in the 1970s to 10 per cent in 2019. 1 Fourth, in 2019, four years into the implementation of the BEPS process and two years after the Tax Cuts and Jobs Act, there was no discernible decline in global profit shifting or in profit shifting by US multinationals (which according to our estimates account for about half of global profit shifting) relative to 2015. Of course, it is possible that absent BEPS and the Tax Cuts and Jobs Act profit shifting would have kept increasing; we do not argue these initiatives had no effect. However, their effect seems, so far, to have been insufficient to lead to a reduction in the global amount of profit shifted offshore. This finding suggests that there remains scope for additional policy initiatives to significantly reduce global profit shifting.
The rest of this paper is organized as follows. Section 2 presents our methodology. Section 3 analyses the dynamic of global profit shifting over the 2015-19 period, while Section 4 presents our series back to 1975.
2 Definitions and methodology
Definition: multinational profits
We follow the conceptual framework laid out in Tørsløv et al. (2022a). We define as multinational profits the profits booked by corporations outside of their headquarter country.
For example, profits booked by Microsoft in Germany and profits booked by Siemens outside of Germany are multinational profits, while profits booked by Microsoft in the United States are domestic profits. To be clear, multinational profits are not the same thing as multinationals' profits, which include both the profits booked by multinational companies in their headquarter country and outside of it. One of our main statistics of interest in this paper is the fraction of global corporate profits which are multinational profits, an important measure of financial globalization.
Definitions: shifted profits and tax havens
We define profit shifting as a tax-motivated and artificial transfer of paper profits within a multinational firm from high-tax countries to low-tax locales. Based on this definition we measure profit shifting to tax havens as the amount of multinational profits booked by companies in these havens above and beyond what can be explained by real economic activity (such as capital, labour, country characteristics, industry composition, and research and development [R&D] spending).
There are three forms of profit shifting (see Beer et al., Brandt, or Heckemeyer and Overesch 2017 for surveys). First, multinational groups can manipulate intra-group export and import prices: subsidiaries in high-tax countries can try to export goods and services at low prices to related firms in low-tax countries, and import from them at high prices. 3 Second, multinationals can shift profits using intra-group interest payments (see, e.g., Huizinga et al.): affiliates in high-tax countries can borrow money (potentially at relatively high interest rates) from affiliates in low-tax countries. Last, multinationals can move intangibles, such as trademarks, patents, logos, algorithms, or financial portfolios, produced or managed in high-tax countries to affiliates in low-tax countries, which then earn royalties, interest, or payments from final customers. 4 In theory, all of these channels of profit shifting could be curbed by rigorous enforcement of the so-called 'arm's length principle'. This principle states that all transactions within the multinational firm should be priced as they would have been in a transaction with an external third party. In practice, capacity-constrained tax agencies struggle to enforce the arm's length principle (see Tørsløv et al. 2022b), and in the case of intangible transactions the principle is often not conceptually well defined (Devereux and Vella).
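A stylized example, with made-up numbers, may help fix ideas about the first channel. Suppose an affiliate in a country with a 30 per cent tax rate produces a good for 60 and sells it to a related affiliate in a zero-tax haven, which resells it to final customers for 100. At an arm's length transfer price of 90, the high-tax affiliate books a profit of 30 and the haven affiliate a profit of 10; if the transfer price is instead set at 65, only 5 of profit remains in the high-tax country while 35 is booked in the haven, and the group's tax bill falls by 0.30 x 25 = 7.5, even though nothing real about production or sales has changed.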
The literature on profit shifting suggests that profit shifting between countries with moderate tax differentials is of second order compared to profit shifting from highly or moderately taxed countries to tax havens (see, e.g., Davies et al.). Our work therefore focuses on profit shifting to tax havens solely. Tørsløv et al. (2022a) define tax havens as countries having excessive profitability of foreign firms (elaborated below) and an effective corporate tax rate below 15 per cent. 5 Although a number of Eastern European countries have emerged with low effective corporate tax rates since their study, none of these shows excessive profitability among foreign firms (yet), and we therefore keep the tax haven list from Tørsløv et al. (2022a) constant in what follows.
3 See, e.g., Bernard et al.; Cristea and Nguyen; Liu et al.; Hebous and Johannesen; Vicard; Wier. Wier (2020) includes a survey of the literature.
4 See Faulkender et al. for evidence suggestive of profit shifting by US multinationals through the relocation of intangibles in low-tax countries. See Langenmayr and Reiter for evidence of profit shifting by German banks through the strategic relocation of financial portfolios in tax havens.
5 These tax jurisdictions are Andorra, Anguilla, Antigua and Barbuda, Aruba, The Bahamas, Bahrain, Barbados, Belgium, Belize, Bermuda, the British Virgin Islands, the Cayman Islands, Cyprus, Gibraltar, Grenada, Guernsey, Hong Kong, Ireland, the Isle of Man, Jersey, Lebanon, Liechtenstein, Luxembourg, Macau, Malta, Marshall Islands, Mauritius, Monaco, Netherlands, the Netherlands Antilles, Panama, Puerto Rico, Samoa, Seychelles, Singapore, St. Kitts and Nevis, St. Lucia, St. Vincent & Grenadines, Switzerland, Turks and Caicos, Vanuatu.
Methodology to estimate global profit shifting
To estimate the amount of profit shifted to tax havens globally, we build on the methodology of Tørsløv et al. (2022a) and update it to more recent years. Tørsløv et al. (2022a) estimate profit shifting in 2015, proceeding as follows. They compute the profits-to-wage ratio of foreign vs. local firms in tax havens, in non-haven OECD countries, and in large developing countries. The basic finding is that in tax havens, the profits-to-wage ratio is much higher for foreign firms than for local firms. In Ireland for example, the profits-to-wage ratio is around eight for foreign firms (reflecting profit shifting by foreign multinationals into Ireland) vs. about 0.5 for local firms. By contrast, foreign firms are slightly less profitable than local firms in non-haven countries. The top panel of Figure 1 reproduces this key result.
The amount of profits shifted into each haven is obtained by assuming that, absent profit shifting, the profits-to-wage ratio of foreign firms would be equal to the profits-to-wage ratio of local firms. For example, the amount of profit shifted into Ireland is obtained by assuming that the profits-to-wage ratio of foreign firms in Ireland would be around 0.5 if there was no profit shifting (instead of the recorded value of eight). Tørsløv et al. (2022a) discuss the conditions under which this methodology delivers accurate estimates and validate this approach in the case of US multinationals. In particular, they show that the excess profitability of foreign firms in tax havens relative to local firms in these havens cannot be explained by differences in capital intensity, sector composition, or R&D expenditures. They also show that this methodology does not double count profits, because the underlying data (national accounts and foreign affiliates statistics) do not. In 2015 data, they estimate that $616 billion in profits (corresponding to 36 per cent of global multinational profits) were shifted to tax havens.
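As an illustration of this step, here is a minimal sketch in Python of the excess-profitability calculation: for each haven, profits shifted in are the profits of foreign firms in excess of what the local firms' profits-to-wage ratio would predict given the foreign firms' wage bill. The haven names and figures are placeholders, not the published estimates.

# Stylized haven-level inputs (billions of USD): profits and wage bill of
# foreign-controlled firms, and the profits-to-wage ratio of local firms.
havens = {
    "Haven A": {"foreign_profits": 90.0, "foreign_wages": 12.0, "local_ratio": 0.5},
    "Haven B": {"foreign_profits": 40.0, "foreign_wages": 20.0, "local_ratio": 0.4},
}

def shifted_profits(foreign_profits, foreign_wages, local_ratio):
    # Counterfactual: foreign firms earn the same profits per unit of wage
    # as local firms; anything above that is treated as shifted in.
    counterfactual = local_ratio * foreign_wages
    return max(0.0, foreign_profits - counterfactual)

total = 0.0
for name, x in havens.items():
    s = shifted_profits(**x)
    total += s
    print(name, round(s, 1))
print("Total shifted into havens:", round(total, 1))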
In a second step of the methodology, Tørsløv et al. (2022a) use bilateral balance of payments data to allocate the shifted profits to source countries. They show that above-normal intra-group transfers from high-tax countries to tax havens in the form of royalty payments, management fees, and interest payments can fully explain the excess profitability of foreign haven firms relative to local haven firms.
Using bilateral data capturing the origin of these intra-group payments, the excess haven profits are allocated to their origin country, making it possible to estimate the tax losses caused by profit shifting for each country. Overall, high-tax countries are found to lose the equivalent of 9 per cent of their corporate tax receipts in 2015 as a consequence of profit shifting. The full methodology is described step-by-step in the Replication Guide of Tørsløv et al. (2022a).
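The allocation step can be sketched in the same spirit: each haven's excess profits are attributed to partner countries in proportion to the abnormal intra-group payments (royalties, management fees, interest) that those countries make to the haven, and the implied tax loss is the allocated amount multiplied by the partner country's effective corporate tax rate. The country names and figures below are again placeholders.

# Excess profits booked in one haven (billions of USD), the shares of abnormal
# intra-group payments (royalties, fees, interest) by origin country, and the
# origin countries' effective corporate tax rates. All values are illustrative.
excess_profits_in_haven = 100.0
payment_share = {"Country X": 0.5, "Country Y": 0.3, "Country Z": 0.2}
effective_tax_rate = {"Country X": 0.25, "Country Y": 0.30, "Country Z": 0.20}

for country, share in payment_share.items():
    shifted_out = excess_profits_in_haven * share            # profits attributed to this origin
    tax_loss = shifted_out * effective_tax_rate[country]     # revenue forgone by the origin country
    print(f"{country}: {shifted_out:.0f} shifted out, {tax_loss:.1f} of tax lost")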
We update the Tørsløv et al. (2022a) estimates as follows. First, thanks to the availability of more data, we are able to add eight countries to the database from 2016 on: Argentina, Egypt, Indonesia, Malaysia, Nigeria, Thailand, Venezuela, and Uruguay. With the addition of these countries, the database now covers 78 countries accounting for 92 per cent of the world economy and 70 per cent of the world population. Second, to ease the updating process, we moved many of the computations from the original paper from Excel to Stata. Our plan is to keep updating our estimates annually, as soon as statistical agencies publish the required input data. The updated annual estimates are posted at missingprofits.world.
3 Global profit shifting since 2015
This section analyses the updated series and estimates. We start by studying changes in the pattern of differential profitability found in 2015. As shown by the bottom panel of Figure 1, in 2019 foreign firms in tax havens are still much more profitable than local haven firms, while foreign firms in non-haven countries are still slightly less profitable than non-haven local firms, the original pattern uncovered in Tørsløv et al. (2022a) for the year 2015. Among tax havens, Puerto Rico still stands out with an exceptionally high profits-to-wage ratio of about 1,600 per cent for foreign firms. In Ireland, the profits-to-wage ratio of foreign firms dropped from about 800 per cent to less than 500 per cent over the 2015-19 period, while in Luxembourg it increased from about 450 per cent to 600 per cent. In absolute terms, as reported in Table A, Singapore narrowly surpasses Ireland as the world's largest recipient of shifted profits in 2019, with $132 billion in shifted profits compared with $130 billion for Ireland.
Using this updated database, we estimate that $969 billion in profits were shifted to tax havens globally in 2019, the equivalent of 37 per cent of global multinational profits. In Table 1 we can see that global profit shifting did not decline between 2015 and 2019. The amount of shifted profits remained nearly constant as a share of multinational profits, increasing very slightly from 36 to 37 per cent. In other words, shifted profits grew at the same pace as multinational profits. As multinational profits grew by 52 per cent in nominal terms (compared to 17 per cent for global GDP), the absolute amount of profits shifted to tax havens increased by slightly more than 52 per cent, from $616 billion in 2015 to nearly $1 trillion in 2019. The growth in multinational profits also outpaced the growth in global corporate profits, and as a result, the share of multinational profits in corporate profits rose from 15 to 18 per cent.
The stability of global profit shifting (relative to multinational profits) is surprising, since 2019 was the third year of implementation for the BEPS project (OECD 2015). Our results thus suggest that so far this initiative has not been enough to lead to a reduction in profit shifting. The same appears to be true for the US corporate tax reform enacted at the end of 2017; see Garcia-Bernardo et al. (2022) for a detailed analysis of the effect of the Tax Cuts and Jobs Act on profit shifting by US multinational companies.
According to our estimates, the tax loss resulting from profit shifting increased slightly, from the equivalent of 9 to 10 per cent of global corporate tax receipts. This increase was driven by the rising share of multinational profits in global profits. Figure 2 and Table B detail the updated loss estimates for high-tax countries. While there are some national differences in the estimated tax loss, countries generally have seen a moderate increase in this loss. Two cases are worth highlighting. First, despite the reduction in the corporate tax rate (from 35 to 21 per cent) and the introduction of specific provisions aimed at reducing shifting out of the United States (e.g. the Base Erosion and Anti-Abuse Tax), the Tax Cuts and Jobs Act was not followed by a decline in the cost of profit shifting for the United States. In fact, we estimate a small increase in the tax loss for the United States, from 14 per cent of corporate tax collections in 2015 to 16 per cent in 2019. Second, in the United Kingdom, we estimate a gradual increase in profit shifting over the period 2015-19. Understanding the reasons for this increase (e.g. the potential role of Brexit) is a fruitful avenue for future research.
4 Global profit shifting back to 1975
This section presents our historical estimates of global profit shifting prior to 2015, back to 1975. To construct these series, we proceed as follows. We first collect available historical national accounts data on corporate profits and GDP to compute the share of global profits in global income. Second, we estimate multinational profits by using the global balance of payments compiled by the International Monetary Fund (IMF), which reports global direct investment equity income (i.e. profits made by firms more than 10-per-cent-owned by a foreign owner, which is close to our definition of multinational profits). 6 We assume that global multinational profits followed the evolution of (pre-tax) global direct investment equity income. Last, to estimate the fraction of multinational profits shifted to tax havens, we assume that the global annual growth rate in shifted profits has been equal to the global growth rate of profits shifted to tax havens by US multinational companies, for which long-run time series back to the 1960s exist (Wright and Zucman). That is, we assume that the share of US multinationals in the amount of globally shifted profits has remained constant, at about 50 per cent. While this assumption introduces a margin of error, the main patterns we obtain are so marked that they are not significantly affected by relaxing it (i.e. by allowing for a rising or falling fraction of profits being shifted by US multinationals).
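A simplified sketch of this back-casting, with placeholder series, is given below: multinational profits are extrapolated backwards using the growth of global direct investment equity income, and shifted profits using the growth of profits shifted by US multinationals, both anchored to the 2015 benchmark. The growth factors shown are invented for the example.

# 2015 benchmarks (billions of USD) and invented back-cast factors: the ratio
# of year t to year t+1 in each proxy series.
mn_profits = {2015: 1700.0}     # global multinational profits
shifted = {2015: 616.0}         # profits shifted to tax havens
fdi_income_backcast = {2014: 0.97, 2013: 0.95}   # global direct investment equity income
us_shifted_backcast = {2014: 0.96, 2013: 0.92}   # profits shifted by US multinationals

for year in sorted(fdi_income_backcast, reverse=True):
    mn_profits[year] = mn_profits[year + 1] * fdi_income_backcast[year]
    shifted[year] = shifted[year + 1] * us_shifted_backcast[year]

for year in sorted(shifted):
    print(year, round(shifted[year]), "shifted, out of", round(mn_profits[year]), "in multinational profits")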
Figure 3 shows the evolution of global corporate profits (as a fraction of global income, i.e. global GDP minus capital depreciation) and of global multinational profits (as a fraction of global corporate profits) back to 1975. A number of patterns are worth noting. First, the share of corporate profits in global income has increased from 14 per cent in 1975 to 19 per cent in 2019. This reflects the fact that a growing fraction of global economic activity takes place in the corporate sector (as opposed to, e.g., non-corporate businesses) and that the capital share of corporate value added has been rising globally (see, e.g., Bachas et al.). Second, the share of multinational profits in global profits has been multiplied by a factor of more than four over that period. The increase was particularly fast in the late 1990s and early 2000s. Interestingly, the share of multinational profits has kept rising after 2015: by that metric, globalization keeps happening.
It is often noted that although statutory corporate tax rates have been cut in half between 1980 and the late 2010s globally (e.g. Tørsløv et al. 2022a), corporate tax revenues as a fraction of GDP have remained fairly stable (OECD 2021). One explanation sometimes put forward for this disconnect is that base broadening may have offset the effect of falling statutory rates, so that effective corporate tax rates might not have declined much. Our findings suggest that this explanation is quantitatively insufficient and highlight the importance of another factor: the rise in the share of corporate profits in global income. When this share rises, a constant (or even rising) ratio of corporate tax revenues to GDP can disguise a declining effective corporate tax rate. [START_REF] Bachas | Globalization and Factor Income Taxation[END_REF] estimate that the global effective corporate tax rate was 23 per cent in 1975, which compares to 17 per cent in 2019, a decline of roughly a third. This implies that any broadening of the corporate income tax base has not been large enough to offset the decline in statutory rates. Below we discuss how profit shifting has contributed to the decline in effective tax rates.
In Figure 4, we report our estimates of the amount of profit shifted to tax havens back to 1975. The patterns are striking. In the late 1970s, there was virtually no profit shifting. The seminal work by [START_REF] Hines | Fiscal Paradise: Foreign Tax Havens and American Business[END_REF] on profit shifting uses US data for the year 1982, when only 5 per cent of global multinational profits were shifted to tax havens. Then in the early 1980s, a trend of sustained and gradual increase in profit shifting begins; the share of multinational profits shifted to tax havens increases from 2 per cent in 1980 to 20 per cent in 1998. This increase can partly be explained by the rise of the tax avoidance industry in the 1980s (e.g. Saez and Zucman 2019) and US policies adopted in the mid-1990s that facilitated shifting from foreign high-tax countries to tax havens, known as check-the-box regulations (e.g. [START_REF] Wright | The Exorbitant Tax Privilege[END_REF]). Importantly, the rise of profit shifting as a share of multinational profits did not mean much for corporate tax revenues, as multinational profits were still small until the late 1990s. By our estimates, corporate tax revenue losses stayed below 2.5 per cent of corporate tax receipts throughout the 1975-99 period.
In the 2000s profit shifting as a share of multinational profits plateaus at roughly 20 per cent, but the loss of corporate tax revenue more than doubles, from 2.6 per cent in 2000 to 5.8 per cent. This leap is caused by the fast growth of multinational firms in the first decade of the 21st century documented in Figure 3. The next leap occurs in the first half of the 2010s, when profit shifting as a share of multinational profits increases from roughly 19 per cent in 2010 to 36 per cent in 2015. One possible explanation for this jump is the fast growth in the profits of giant US tech companies, which, as documented in the literature, are known to use tax havens extensively, although this issue deserves further investigation in future research. Corporate tax losses moved in tandem, as the share of multinational profits in global profits remained fairly constant, rising from 5.6 per cent in 2010 to 9.0 per cent in 2015.
Finally, from 2015 to 2019 profit shifting as a share of foreign profits stagnates at just below 40 per cent.
Seen in the light of the dramatic increase in profit shifting in the four preceding years, the lack of growth in profit shifting could be the result of the BEPS project and the US Tax Cuts and Jobs Act. That is, these initiatives may have been insufficient to reduce the fraction of multinational profit shifted to tax havens each year but might have stopped the growth of this fraction.
We estimate that absent profit shifting, corporate tax receipts would be 10 per cent higher in 2019 but nearly unchanged in 1975. [START_REF] Bachas | Globalization and Factor Income Taxation[END_REF] estimate that the global effective corporate tax rate fell 5 percentage points since 1975. The direct impact of profit shifting, according to our estimates, can explain 2 percentage points, i.e. about 40 per cent of this decline, keeping everything else constant. Of course, if there had been no profit shifting, countries may have chosen other policy paths, e.g. some might have been less likely to cut their corporate tax rate and engage in the 'race to the bottom': our 40 per cent estimate neglects the indirect impact profit shifting may have had on global tax competition. It illustrates, however, that the revenue losses caused by profit shifting are a quantitatively important aspect of the decline in effective corporate income tax rates globally since the 1970s.
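As a back-of-the-envelope illustration of this decomposition, writing $\tau$ for the global effective corporate tax rate and holding global profits fixed (so that a 10 per cent rise in receipts maps one-for-one into the rate):
\[
\tau_{2019}^{\text{no shifting}} \approx 1.10 \times 17\% \approx 18.7\%,
\qquad
\frac{18.7\% - 17\%}{5\ \text{percentage points}} \approx 0.4 .
\]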
Figures and tables
Source: top panel is taken from Tørsløv et al. (2022a), Figure 4 (data in Data Appendix online, Tables A7 and C4D); bottom panel data is from Data Appendix online, Backup Tables U1 and C4D.
Appendix
This appendix provides supplementary tables showing estimates of profits shifted and corporate tax revenue lost by country each year over the 2015-19 period. Note: the large percentage tax loss for Latvia is driven by a collapse in its recorded corporate tax revenue in 2019 (a decline of 90% compared to the year before), implying that the effective corporate tax rate in Latvia was 1% in 2019.
Source: for 2015 figures: Tørsløv et al. (2022a), Data Appendix Tables A7 and C4D; for 2019 figures: Data Appendix, Backup Tables U1 and C4D.
Figure 1: Profitability in foreign vs. local firms in 2015 and 2019. Note: the pink bar shows the ratio of pre-tax corporate profits (after net interest payments and depreciation) to compensation of employees for foreign firms (firms more than 50%-owned by a foreign investor, i.e. typically affiliates of foreign multinational companies); the black bar shows the same ratio for local firms, defined as all domestic firms that are not classified as 'foreign'.
Figure 2: Corporate tax revenue lost: 2015 vs 2019 (% of corporate tax revenue collected)
Source: Table C4D in Data Appendix online; 2019 figures from Data Appendix, Backup Table C4D.
Figure 3: Global corporate profits (% of income) and multinational profits (% of all profits), 1975-2019. Note: the blue line shows the evolution of the share of global corporate profits in global income (defined as global GDP minus global depreciation); the black line shows the share of global multinational profits (as defined in the text) in global corporate profits. Source: for 1975-2015: Tørsløv et al. (2022a), Data Appendix Table C7; for 2016-19 figures: Data Appendix, Table 1.
Figure 4: Multinational profits shifted to tax havens and corporate tax loss, 1975-2019. Note: the blue line (left axis) shows the share of multinational profits (as defined in the text) shifted to tax havens; this share increased from about 2% in the late 1970s to about 37% in 2019. The black line (right axis) shows our estimate of the amount of corporate tax revenue lost due to profit shifting globally, expressed as a fraction of global corporate tax receipts. Annotations on the figure mark the initiation of the BEPS project and the US Tax Cuts and Jobs Act (TCJA). Source: for 1975-2015: Tørsløv et al. (2022a), Data Appendix Table C7; for 2016-19 figures: Data Appendix, Table 1.
Table 1: Global profits: comparing 2015 and 2019 estimates
$Bn., current values 2015 2016 2017 2018 2019 Relative change ('19 -'15)
Global GDP 75,179 76,466 81,404 86,413 87,653 17%
Corporate profits 11,515 12,275 13,022 14,068 14,472 26%
Multinational profits 1,703 1,841 2,061 2,655 2,590 52%
Profits shifted 616 667 741 946 969 57%
Profits shifted (% of multinational profits) 36.2% 36.2% 36.0% 35.6% 37.4% 1.2 p.p.
Tax loss 188 195 212 243 247 31%
Tax loss (% of corp. tax rev.) 9.0% 8.8% 9.0% 9.9% 10.0% 1.0 p.p.
Note: this table updates Table 1 of Tørsløv et al. (2022a). It reports the global totals in our database each year from 2015 to 2019. 'Multinational profits' include all the profits made by companies more than 50% owned by a foreign country.
Source: for 2015 figures: Tørsløv et al. (2022a), Data Appendix Tables A7, C4D, and C7; for 2019 figures: Data Appendix tables.
Table A: Country estimates: profit shifting 2015-19
Shifted profits ($Bn.)
2015 2016 2017 2018 2019 Difference ('19-'15)
OECD countries
Australia 12.0 15.2 17.8 25.2 30.2 15.1
Austria 3.6 4.3 4.4 5.3 4.7 0.5
Canada 17.2 15.2 15.6 20.9 25.8 10.6
Chile 4.7 5.3 5.5 7.1 9.1 3.8
Czech Republic 1.8 2.2 2.4 3.0 2.8 0.6
Denmark 3.0 4.5 4.8 6.1 5.6 1.1
Estonia 0.2 0.3 0.3 0.4 0.4 0.1
Finland 2.7 3.2 4.0 5.2 4.8 1.5
France 32.1 36.0 40.0 46.7 42.6 6.6
Germany 54.9 65.4 65.9 83.2 75.6 10.2
Greece 1.0 1.7 1.8 2.2 1.9 0.2
Hungary 2.4 3.7 4.2 6.3 5.8 2.1
Iceland 0.4 0.5 0.5 0.6 0.7 0.1
Israel 0.6 2.4 2.5 4.1 4.9 2.5
Italy 22.7 24.0 26.5 31.7 28.9 4.9
Japan 9.0 11.8 13.8 17.3 17.6 5.8
Korea 4.4 4.7 6.2 8.4 10.0 5.4
Latvia 0.2 0.3 0.3 0.4 0.4 0.1
Mexico 12.1 11.1 11.0 16.7 20.4 9.3
New Zealand 1.4 1.9 2.0 2.7 3.2 1.2
Norway 5.0 6.2 5.9 7.2 8.1 1.9
Poland 3.7 4.2 4.3 5.9 5.4 1.1
Portugal 2.6 3.3 3.2 3.8 3.4 0.2
Slovakia 0.6 0.9 0.9 1.1 1.0 0.2
Slovenia 0.2 0.9 0.4 0.5 0.4 -0.5
Spain 14.4 14.7 17.0 23.1 21.1 6.4
Sweden 8.5 10.3 11.7 13.7 12.6 2.3
Turkey 4.6 3.9 3.7 5.0 5.7 1.9
United Kingdom 61.5 80.5 96.0 120.1 109.6 29.1
United States 142.0 152.1 161.9 186.4 165.3 13.2
Main developing countries
Argentina 2.5 3.2 4.3 4.9 2.4
Brazil 13.2 17.3 19.7 22.0 26.8 9.5
China 54.6 50.4 51.4 71.2 92.7 42.3
Colombia 1.3 1.0 1.4 1.9 2.2 1.3
Costa Rica 1.0 0.9 1.2 1.7 1.9 1.0
Egypt 3.0 3.4 4.6 5.5 2.5
Indonesia 7.0 7.5 9.9 12.0 4.9
India 8.7 9.6 11.3 15.6 19.1 9.5
Malaysia 3.8 4.2 5.8 7.0 3.2
Nigeria 3.1 2.4 3.5 4.3 1.2
Russia 11.3 12.7 13.8 17.3 20.3 7.6
India 3.8 4.0 11.3 15.6 19.1 15.0
South Africa 3.8 4.0 5.0 6.8 7.9 3.9
Thailand 4.7 6.4 8.9 11.0 6.3
Uruguay 1.0 0.9 1.5 1.7 0.7
Venezuela 0.9 0.9 0.9 1.1 0.2
Table B: Country estimates: profit shifting 2015-19
Corp. tax revenue loss/gain (% of collected)
2015 2016 2017 2018 2019 Difference ('19-'15)
OECD countries
Australia 7% 8% 7% 10% 14% 7%
Austria 11% 11% 11% 11% 10% -1%
Canada 9% 8% 7% 9% 9% 0%
Chile 11% 12% 12% 13% 18% 7%
Czech Republic 5% 6% 6% 7% 6% 1%
Denmark 8% 12% 10% 13% 11% 3%
Estonia 10% 16% 15% 14% 14% 4%
Finland 11% 12% 12% 15% 14% 3%
France 21% 24% 22% 26% 22% 1%
Germany 28% 28% 26% 29% 29% 1%
Greece 7% 10% 13% 15% 12% 5%
Hungary 21% 24% 13% 31% 25% 4%
Iceland 22% 20% 16% 18% 26% 4%
Israel 2% 6% 5% 8% 9% 7%
Italy 19% 19% 15% 20% 18% -2%
Japan 2% 2% 2% 3% 3% 1%
Korea 2% 2% 2% 3% 3% 1%
Latvia 7% 8% 9% 23% 144% 137%
Mexico 10% 9% 8% 12% 15% 5%
New Zealand 5% 6% 6% 7% 11% 6%
Norway 8% 10% 7% 6% 8% 0%
Poland 8% 9% 8% 9% 8% 0%
Portugal 9% 11% 9% 10% 10% 1%
Slovakia 5% 6% 6% 7% 7% 2%
Slovenia 21% 21% 8% 8% 7% -14%
Spain 14% 13% 14% 16% 18% 4%
Sweden 13% 16% 17% 18% 17% 4%
Turkey 8% 5% 5% 7% 8% 0%
United Kingdom 18% 22% 25% 28% 32% 14%
United States 14% 17% 19% 23% 16% 2%
Main developing countries
Argentina 5% 7% 9% 12%
Brazil 8% 10% 12% 14% 17% 9%
China 3% 3% 3% 3% 4% 1%
Colombia 2% 1% 2% 4% 5% 3%
Costa Rica 19% 17% 22% 28% 32% 13%
Egypt 4% 5% 7% 13%
Indonesia 8% 9% 11% 14%
India 8% 5% 5% 6% 6% -2%
Malaysia 5% 6% 6% 7%
Nigeria 24% 18% 25% 26%
Russia 5% 6% 5% 5% 6% 0%
India 6% 7% 5% 6% 6% 1%
South Africa 6% 7% 8% 10% 13% 7%
Thailand 5% 7% 8% 9%
Uruguay 16% 15% 22% 26%
Venezuela 15% 15% 16% 20%
Tax havens
Belgium 16% 16% 19% 55% 38% 22%
Ireland 58% 65% 67% 61% 59% -7%
Luxembourg 50% 54% 58% 49% 56% 2%
Malta 90% 88% 88% 36% 29% -59%
Netherlands 32% 30% 39% 28% 19% -10%
Caribbean 100% 100% 100% 100% 100% 0%
Bermuda
Singapore 41% 42% 30% 37% 29% -13%
Puerto Rico 79% 25% 35% 72% 30% 5%
Hong Kong 33% 24% 24% 56% 40% 16%
Switzerland 20% 28% 38% 39% 39% 11%
Other
The tax loss is slightly higher than the fraction of profits shifted to tax havens globally because the marginal rate on shifted profits is higher than the average rate.
The notion of control is used to classify firms as foreign in Eurostat (2012) guidelines. Control is 'the ability to determine the general policy of an enterprise by choosing appropriate directors, if necessary' (Eurostat 2012: 13). The ownership of more than 50 per cent of shares ensures control. In some cases, control can be exerted with a less than 50 per cent ownership, for instance if certain shares have more voting power than others.
Direct investment equity income is net of corporate income taxes paid, in contrast to multinational profits as defined in Section 2; therefore we gross up global direct investment equity income with an estimate of corporate taxes paid.
The views expressed in this paper are those of the authors, and do not necessarily reflect the views of the Ministry of Finance of Denmark, UNU-WIDER, the United Nations University, nor its programme/project donors. All data are available online at https://missingprofits.world. Zucman acknowledges financial support from the Stone Foundation, the Carnegie Foundation, the European Research Council, and the European Commission grant TAXUD/2020/DE/326.
Markov Decision Process (MDP) models are widely used to model decision-making problems in many research fields. MDPs can be readily designed through modeling and simulation (M&S) using the Discrete Event System Specification formalism (DEVS) due to its modular and hierarchical aspects, which improve the explainability of the models. In particular, the separation between the agent and the environment components involved in a traditional reinforcement learning (RL) algorithm, such as Q-Learning, is clearly formalized to enhance observability and envision the integration of AI components in the decision-making process. Our proposed DEVS model also improves the trust of decision makers by mitigating the risk of delegation to machines in decision-making processes. The main focus of this work is to provide the possibility of designing a Markovian system with a modeling and simulation formalism to optimize a decision-making process with greater explainability through simulation. Furthermore, the work involves an investigation based on financial process management, its specification as an MDP-based RL system, and its M&S with the DEVS formalism. The DEVSimPy Python M&S environment is used to implement the Agent-Environment RL system as event-based interactions between Agent and Environment atomic models stored in a new DEVS-RL library. The research work proposed in this thesis focuses on a concrete case of portfolio management of stock market indices. Our DEVS-RL model allows for a leverage effect three times higher than some of the most important naive market indexes in the world over a thirty-year period and may contribute to addressing modern portfolio theory with a novel approach. The results of the DEVS-RL model are compared in terms of compatibility and combined with popular optimization algorithms such as efficient frontier semivariance and neural network models like LSTM. This combination is used to address a decision-making management policy for complex systems evolving in highly volatile environments in which the state evolution depends entirely on the occurrence of discrete asynchronous events over time.
Résumé
Markov decision process (MDP) models are widely used in many research fields to model decision-making problems. MDPs can be readily designed through modeling and simulation (M&S) with the Discrete Event System Specification (DEVS) formalism, whose modular and hierarchical aspects improve, among other things, the explainability of the models. In particular, the separation between the agent and the environment components involved in a traditional reinforcement learning (RL) algorithm, such as Q-Learning, is clearly formalized, which enhances observability and paves the way for integrating AI components into the decision-making process. Our DEVS model also strengthens decision-makers' trust by mitigating the risk of delegating decision-making to machines.
To this end, the main objective of this work is to make it possible to design a Markovian system, with greater explainability, using an M&S formalism in order to optimize a decision-making process through simulation. The work also includes a case study based on financial process management, its specification as an MDP-based RL system, and its M&S with the DEVS formalism. The DEVSimPy M&S environment is used to implement the Agent-Environment RL system as a DEVS-RL library composed of DEVS models that interact through discrete events to carry out the learning. The research work proposed in this thesis addresses a concrete case of portfolio management of stock market indices. Our DEVS-RL model produces a leverage effect three times higher than some of the world's largest naive market indexes over a thirty-year period and may contribute to addressing modern portfolio theory with a novel approach.
The results of the DEVS-RL model are compared, in terms of compatibility, and combined with the most popular optimization algorithms, such as Efficient Frontier Semivariance, and with neural-network-based models such as LSTM.

List of figures

3.4 Atomic-based modeling approach with the RL Agent and RL Environment DEVS atomic models. Observer atomic models can be inserted after the RL Agent to observe the mean of the Q matrix, which can be used to monitor its convergence. N Generator models are used to take external events into account in the behavior of the Environment model.
3.5 UML sequence diagram of the User-Environment-Agent interactions in the Q-Learning algorithm framework. The user starts the simulation by invoking the initial (init) phase of the Environment model, which sends an event with the state/action map used to initialize the Agent model. The Agent then sends back an action depending on its initial state, which activates the δ ext function of the Environment model; the Environment immediately updates the state, the reward, and the done flag (which indicates whether the end state has been reached) and returns them to the Agent. The Agent then updates its Q matrix according to the event received. This cycle (episode) is repeated until the end state is reached (the done flag is true). When the Q matrix is stable, the final policy can be output by the Agent model and the Environment model can print the simulation trace before becoming inactive.
3.6 Coupled-based modeling approach with the Agent and Environment DEVS coupled models, which improves the explainability of the embedded AI. Each observer is an atomic model that turns the signal into an interpretable feature.
3.7 Multi-agent DEVS reinforcement learning model with a supervisor. Each agent explores a subset of possible states and gives its optimal decision plot based on interactions with its own environment. The supervisor then proceeds to the final decision-making logic.
3.8 RL library in the DEVSimPy software. The Properties panel of the Agent and Environment models allows one to configure these models. The algo property of the agent allows one to select the learning algorithm, either Q-Learning or SARSA. Properties γ, ε, and α are related to the Bellman equation (see Section 2.3.3). Concerning the Env model, the option goal_reward allows us to define the reward as a goal (when goal_reward = True is checked) or as a penalty (when goal_reward is unchecked) (see Section 2.3). In other words, to address the issue of risk aversion, traders trade a small portion of their portfolio at a regular pace, such as quarterly, monthly, or weekly. The supervised AI agent has a different reward system to reduce human biases and to produce a leverage effect (make more money) with a more sustainable long-term strategy.
Efficient Frontier for the IXIC, DJI, GSPC, and RUT indexes with no short sale in the period from January 1, 2019 to January 1, 2020. The optimal solution (red cross) is obtained for an expected annual return equal to 30.0%.
4.16 Leverage effect after DEVS-RL simulation from 100,000$ with N equal to 4, 8, and 14, from January 1, 2019 to January 1, 2020. The leverage effect for N = 4 is 34.72% and the discrete allocation is (3 IXIC, 3 RUT, 3 DJI, 3 GSPC), which corresponds to one of the efficient frontier expected return points. For N = 4, at the end of 2019 the current value of the portfolio is 153,188.002 for an initial capital investment of 100,000$. The initial capital represents 65.28% of the current value. The leverage effect for 2019 is 34.72%, with a return ratio of 53.18%, which represents the gain for N = 4 in 2019.
4.17 Family of paths obtained after simulations with different values of ε (noted on each arc). Depending on the selected ε, each path traces the evolution of the indexes. If ε is close to (resp. far from) 1.0, the path is obtained in a minimal (resp. maximal) time due to the exploitation (resp. exploration) policy executed by the agent (ε-greedy algorithm).
Introduction
The core work presented in this thesis deals with the resolution of Markov decision processes (MDP) using a discrete event modeling and simulation (M&S) approach to optimize a decision-making process, such as leverage effects in asset management, and to improve its understanding. The relationship between MDP and M&S plays an important role in the literature in a wide range of fields, e.g. clinical decision-making [START_REF] Bennett | Artificial intelligence framework for simulating clinical decision-making: A markov decision process approach[END_REF], sequential decision-making under uncertainty [5], aviation [START_REF] Mueller | Multi-rotor aircraft collision avoidance using partially observable markov decision processes[END_REF], job-shop scheduling [START_REF] Zhang | Real-time job shop scheduling based on simulation and markov decision processes[END_REF], cybersecurity [START_REF] Zheng | A markov decision process to determine optimal policies in moving target[END_REF], real-time energy management of photovoltaic-assisted electric vehicles [START_REF] Wu | Real-time energy management of photovoltaic-assisted electric vehicle charging station by markov decision process[END_REF], and many more.
The main focus of this work will be to provide the possibility of designing a Markovian system/algorithm with an M&S formalism with greater explainability, in order to optimize a decision-making process through simulation. Furthermore, the work involves an investigation based on financial process management, its specification as an MDP-based reinforcement learning (RL) system, and its M&S with the Discrete Event System Specification (DEVS) formalism. The results of the DEVS model are compared with and combined with the most popular optimization algorithms, such as Efficient Frontier Semivariance, and RNN models such as LSTM [START_REF] Wang | A garlic-priceprediction approach based on combined lstm and garch-family model[END_REF]. The research topics, objectives, methodological design, and outline of the document are discussed in this general introduction.
Research Topics and Objectives
Let us introduce the Research Topics section with a brief explanation of what a financial leverage effect is, as shown in Figure 1.1. Imagine that we invest 100 € of our savings in a first company, that is, we sign corporate bonds for one year. After a year we receive 120 €: 100 € is the reimbursement of our invested capital and 20 € is the dividend, or gain (gray bubble in Figure 1.1). We may say that we have a gain, or return, of 20%. Our invested capital of 100 € represents 83% of the current capital of 120 €, and the 20 € gain represents 17% of our current capital, which is what we consider our leverage effect without borrowed capital. To calculate the leverage effect of the investment without borrowed capital, we use the following formula:
\[
\text{Leverage effect} = \left(1 - \frac{\text{Initial value}}{\text{Current value}}\right) \times 100
\]
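As a minimal sketch, this formula can be turned into a small Python helper; the numbers below simply replay the 100 €/120 € example above.

```python
def leverage_effect(initial_value: float, current_value: float) -> float:
    """Leverage effect without borrowed capital, in per cent."""
    return (1 - initial_value / current_value) * 100

initial, current = 100.0, 120.0   # the first bond: 100 invested, 120 received after one year
gain = current - initial          # 20 of dividend
print(f"gain: {gain:.0f}, leverage effect: {leverage_effect(initial, current):.1f}%")
# -> gain: 20, leverage effect: 16.7%  (rounded to 17% in the text)
```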
Now, imagine that we also invest 100 € in a second company. To buy the second bond, we split the invested money into two parts: 10 € from our savings and 90 € from a bank loan. The loan has a 1% interest rate and we have to reimburse the capital at the end of the year. We receive 120 € from this second company (black bubble in Figure 1.1): 90 € as a reimbursement of the borrowed capital, 10 € as a reimbursement of our savings, and 20 € of dividends. We have to pay the bank 90 € plus 0.9 € of interest, and the rest is our gain, that is 19.1 €. If we compare the 10 € invested from our savings to the 19.1 € of gain, we may say that we have a leverage effect with borrowed capital of 52% from this second investment. To calculate the leverage effect of an investment with borrowed capital, an analogous formula is applied to the trader's own capital.

Our asset management MDP decision-making approach would play a role in reducing the risk of budget degradation by offering the opportunity to invest, with our tool, stakeholder private-party financing [START_REF] Węgrzyn | Stakeholder analysis and their attitude towards ppp success[END_REF] during, for example, the pre-trial phase of an EU funds application process that can last 8 to 12 months. The risk of budget degradation is particularly high in Corsica, which is one of the poorest regions of Europe [START_REF] Dall'erba | Distribution of regional income and regional funds in europe 1989-1999: an exploratory spatial data analysis[END_REF]. In terms of regional interest, our research effort therefore focuses on the world of finance, specifically on how borrowed capital (backed by a company's internal finance) can produce a leverage effect on stock markets and offer an alternative solution to reduce the economic effect of budget degradation. Portfolio optimization using AI was a juvenile field in 2017, the year this research began. Since 2017, the scientific literature on AI has offered more and more work on portfolio optimization, in particular using efficient frontier (EF) models [START_REF] Clark | A machine learning efficient frontier[END_REF], long short-term memory (LSTM) [START_REF] Yu | A review of recurrent neural networks: Lstm cells and network architectures[END_REF], and generalized autoregressive conditional heteroskedasticity (GARCH) [START_REF] Kakade | Forecasting commodity market returns volatility: A hybrid ensemble learning garch-lstm based approach[END_REF] models. In particular, EF and LSTM, two of the most explored methods in the literature, will be used in this research to verify the possibility of compatibility and combination with the results of the DEVS model. Our final goal is to offer a novel approach that combines discrete event simulation with artificial intelligence. To achieve this goal, we propose a Markovian approach to create value by taking advantage of the volatility of some of the world's stock market indices. The proposal aims to model and simulate an optimized portfolio policy [START_REF] Lee | A multiagent approach to q-learning for daily stock trading[END_REF] in terms of complete allocation and index trends by combining the MDP and discrete-event M&S domains.
A Markov chain [START_REF] Kemeny | Finite Markov Chains[END_REF] is a stochastic model used to describe a sequence of potential events under the condition that the probability of each event depends only on the state attained in the previous event. In probability theory, a Markov process (named after the Russian mathematician A. Markov) is a stochastic process that satisfies the Markov property, i.e. one can make predictions about the future of the process based only on its current state. Markov chains are commonly defined as discrete- or continuous-time Markov processes with a countable state space (thus, independent of the nature of time), but it is also common to define a Markov chain as having discrete time in a countable or continuous state space. For example, Markov chains have many uses as statistical models of real processes such as speed control systems in automobiles, queues of customers arriving at airports, currency exchange rates, storage systems such as dams, and the population growth of certain animal species. The algorithm known as PageRank [START_REF] Brin | The anatomy of a large-scale hypertextual web search engine[END_REF], originally proposed for the Google search engine, is based on a Markov process. Markov processes are the basis of general stochastic simulation methods, which are used to simulate sampling from complex probability distributions and have found wide application in Bayesian statistics, as introduced in Section 2.2.2.
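To make the Markov property concrete, the following small Python sketch simulates a two-state chain; the "up"/"down" states and the transition probabilities are arbitrary illustrative values, not estimates.

```python
import random

# Two-state chain: rows give P(next state | current state).
transition = {
    "up":   {"up": 0.7, "down": 0.3},
    "down": {"up": 0.4, "down": 0.6},
}

def step(state: str) -> str:
    """Draw the next state using only the current state (Markov property)."""
    r, cumulative = random.random(), 0.0
    for nxt, p in transition[state].items():
        cumulative += p
        if r < cumulative:
            return nxt
    return nxt  # fallback for rounding at the boundary

random.seed(0)
path = ["up"]
for _ in range(10):
    path.append(step(path[-1]))
print(path)
```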
Most leverage-effect financial instruments are future-oriented. Today's leverage effect is the result of past decision-making policies, but new decisions are taken in response to market inputs on a quarterly or even daily basis, not on a seven- or ten-year basis. Stock price volatility is a highly complex non-linear dynamic system. The volume of trade affects the self-correlation and inertial effect of the stock, and the adjustment of the stock does not advance as a homogeneous time process; it has its own independent time driving the process [START_REF] Liu | Stock transaction prediction modeling and analysis based on lstm[END_REF]. If the present state ("today") is Markovian, the future depends only on the present state. Therefore, an optimization process for financial assets satisfies the Markov chain property, which considers that the next state of a system depends only on the present state. In other words, the history of past transitions does not influence the determination of the future state.
Markov decision processes (MDPs) are generally defined as controlled stochastic processes that satisfy the Markov property and reward state transitions; they are defined as control processes [START_REF] Bertsekas | Dynamic Programming: Deterministic and Stochastic Models[END_REF][START_REF] Puterman | Markov Decision Processes: Discrete Stochastic Dynamic Programming[END_REF]. MDPs make it possible to model the dynamics of the state of a system subject to the control of an agent within a stochastic environment. The agent follows a procedure to automatically choose, at a given moment, the action to be performed, and executes that action. A Markov decision-making problem consists of searching, among a family of policies, for those that optimize a given performance criterion for the Markov decision process under consideration. This criterion aims to characterize the policies that will generate the largest possible reward sequences. In formal terms, this always amounts to evaluating a policy on the basis of a measure of the expected accumulation of instantaneous rewards along a trajectory. This choice of expected accumulation is, of course, important because it makes it possible to establish that the sub-policies of the optimal policy are optimal sub-policies [START_REF] Yu | On generalized bellman equations and temporal-difference learning[END_REF] (Bellman principle of optimality). This principle is at the base of many dynamic programming algorithms that allow us to solve MDPs efficiently.
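With the usual notation (states $s$, actions $a$, transition probabilities $P$, instantaneous reward $r$, and discount factor $\gamma$), this principle is captured by the Bellman optimality equation:
\[
V^{*}(s) = \max_{a}\Big[\, r(s,a) + \gamma \sum_{s'} P(s' \mid s,a)\, V^{*}(s') \,\Big],
\]
which value iteration and related dynamic programming algorithms solve by successive approximation.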
The approach of modeling a decision-making process applied to asset management in the form of an MDP appears to be the most appropriate for the very volatile nature of financial markets. Because of the large number of states and actions to be considered in the Markov-based management of financial processes to achieve a possible leverage effect, their resolution often involves simulation, which allows one to resolve large state-space systems in an iterative way. Furthermore, the simulation-based Markovian approach [START_REF] Fu | Handbook of Simulation Optimization[END_REF] corresponds to our simulation context, in which decision-making must be conducted with a high degree of uncertainty, without the possibility of defining a state transition frame a priori.
The definition and the simulation of an MDP model require a formal framework to represent the interaction (actions/rewards) between the environment and the agent of an MDP. The DEVS formalism (Discrete Event System Specification) [START_REF] Zeigler | System entity structure basics[END_REF] is the appropriate mathematical framework for formally specifying an MDP. In addition, DEVS offers the automatic simulation of these models and thus the simulation of artificial intelligence (AI) techniques such as RL algorithms (e.g. Q-Learning) [START_REF] Lake | Building machines that learn and think like people[END_REF] applied to MDPs, in order to obtain the optimal solution in the framework of a decision-making problem with a high degree of uncertainty. Therefore, we point out that DEVS makes an AI system more explainable through its modular and hierarchical modeling approach and its building-block and architectural-pattern capabilities [START_REF] Zeigler | Devs-based building blocks and architectural patterns for intelligent hybrid cyberphysical system design[END_REF]. AI applications are booming in today's world, and the weight of decisions is increasingly delegated to AI. Considering this enormous volume and the scrutiny that decisions must undergo, having clarity on why a certain decision was made has become paramount. Explainability is the next data-science superpower, in particular for AI-based solutions, and DEVS is a good candidate to improve the explainability of AI systems.
To mitigate the risk of delegating decision-making to machines, DEVS makes it possible to keep under observation the interactions, and the separation, between the agent and the environment involved in a traditional RL algorithm such as Q-Learning. In RL, an agent seeks an optimal control policy for a sequential decision problem. Unlike in supervised learning, the agent never sees examples of correct or incorrect behavior. Instead, it receives only positive and negative rewards for the actions it tries. Since many practical real-world problems (such as robot control, game play, and system optimization) fall into this category, developing effective RL algorithms is important to the progress of AI. The Q-Learning algorithm needs to be formally separated into a DEVS Agent component that interacts with a dynamic DEVS Environment component. This modular capability of DEVS permits total control of the Q-Learning loop in order to drive the convergence of the algorithm. Basically, an agent and an environment communicate so that the agent converges towards the best possible policy. Due to the modular and hierarchical aspects of DEVS, the separation between the agent and the environment within RL algorithms is improved. Our contribution is to propose a formalized way to drive RL algorithms within a discrete event system. A methodological approach is implemented to design the DEVS-RL model. The process is shown in Figure 1.2.
The objective of this work is to propose a decision-making management policy for complex systems evolving in highly dynamic environments using a combination of the DEVS formalism and the RL technique. This work raises a number of questions relating to fundamental research issues, such as the following.
• What are the advantages of coupling DEVS simulation and AI techniques, particularly reinforcement learning, in the decision-making process of a complex system?
• How can the DEVS formalism improve understanding of AI algorithms? • How does the association of RL algorithms with DEVS simulation improve the management of leverage effects in financial management processes?
The research work proposed in this thesis contributes to answering these questions by relying on a concrete case of optimizing the management of a portfolio of stocks using discrete event simulation. This thesis presents the benefits of the DEVS formalism to assist in the realization of ML models, with a special focus on RL. The DEVS formalism is an ideal solution for implementing an RL algorithm such as Q-Learning because it makes it possible to represent and carry out the learning of the system by simulation in a formal, modular, and hierarchical framework. Basically, in the RL model, an agent and an environment communicate so that the agent converges towards the best possible policy. Due to the modular and hierarchical aspects of DEVS, the interaction between the agent and the environment within RL algorithms in experimental frames can be improved in terms of explainability and observability. A new generic DEVS modeling of the Q-Learning algorithm based on two DEVS models has been proposed: the Agent and Environment DEVS models. An implementation in the Python language with an object-oriented approach and a comparison are presented to show the benefits of the proposed approach. The DEVSimPy [START_REF] Capocchi | DEVSimPy software[END_REF] M&S environment is used to implement the Q-Learning algorithm as event-based interactions between Agent and Environment DEVS atomic models and to facilitate the M&S of the proposed case study.
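The listing below is not the DEVSimPy implementation itself; it is a plain-Python sketch of the event-based exchange just described, in which the Environment answers each action event with a (state, reward, done) event and the Agent updates its Q matrix with the Q-Learning rule (α, γ, ε as above). All class names, the toy state space, and the parameter values are illustrative placeholders.

```python
import random
from collections import defaultdict

class Environment:
    """Toy environment: walk right from state 0 to state 4 to receive the reward."""
    def __init__(self, n_states=5):
        self.n_states, self.state = n_states, 0
    def reset(self):
        self.state = 0
        return self.state
    def external_transition(self, action):      # plays the role of the delta_ext function
        self.state = max(0, min(self.n_states - 1, self.state + (1 if action == "right" else -1)))
        done = self.state == self.n_states - 1
        reward = 1.0 if done else 0.0
        return self.state, reward, done          # output event sent back to the agent

class Agent:
    def __init__(self, actions, alpha=0.5, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)
        self.actions, self.alpha, self.gamma, self.epsilon = actions, alpha, gamma, epsilon
    def choose(self, state):                     # epsilon-greedy policy
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])
    def external_transition(self, state, action, reward, next_state):
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])

random.seed(1)
env, agent = Environment(), Agent(actions=["left", "right"])
for episode in range(200):                       # each episode is a chain of events
    state, done = env.reset(), False
    while not done:
        action = agent.choose(state)                                  # agent -> environment event
        next_state, reward, done = env.external_transition(action)   # environment -> agent event
        agent.external_transition(state, action, reward, next_state)
        state = next_state

agent.epsilon = 0.0                              # greedy read-out of the learned policy
policy = {s: agent.choose(s) for s in range(env.n_states - 1)}
print(policy)                                    # converges to "right" in every state
```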
A validation has been proposed in a dynamic and uncertain environment, which corresponds to one of the best application domains of the Markov chain property. First, we present our approach using historical data from stock exchange markets, because stock market asset management fits perfectly a very volatile environment where the next state of a system depends only on the present state. The Markov property is clearly identified by the fact that past financial gains are no guarantee of future gains. Second, we compare the results of the DEVS model with those of an algorithm commonly used to optimize portfolios, the EF model, to validate their similarity. Third, due to the modularity of DEVS, we combine our model with RNN models such as LSTM and econometric models such as GARCH, which are widely used for portfolio prediction, to propose a novel approach combining AI and DEVS simulation.
Chapter 3 presents the benefits of the DEVS formalism to assist in the realization of RL models. More specifically, the DEVS formalism makes this possible by specifying the Q-Learning algorithm using a set of interconnected atomic or coupled models reacting through their external transition functions. We present the atomic- and coupled-based approaches in detail, with an additional approach that considers a multi-agent version of an RL system. Observability and explainability aspects are considered to show how, when applied using the DEVS formalism, they improve the understanding of RL algorithms and their outputs. Finally, a DEVS modeling of the Agent-Environment RL model is proposed with its implementation in the DEVSimPy environment, and a complete case study is presented.
Chapter 4: Case Study: Leverage Effects in Financial Asset Optimization Processes. This chapter presents the case study of the M&S with DEVS-RL of leverage effects in financial asset optimization processes. This case study highlights the interest of our approach based on the combination of DEVS and RL. The consideration of a very volatile environment, composed of rapidly changing stock market indices, makes it possible to show the effectiveness of our M&S approach in improving observability, explainability, and decision-making in this complex dynamic system. A section dedicated to compatibility with the EF technique is presented, completed by a study of the combination of our approach with the LSTM approach.
Chapter 5: Conclusion and Perspectives. In this chapter, a summary of different contributions is proposed, and a discussion of the result obtained is presented. Finally, some interesting perspectives for future work are given.
Chapter 2
State of the Art
The combination of the DEVS formalism and the Markov decision process environment is, in general, a relatively young topic in the scientific literature [14,[START_REF] David | Devs model construction as a reinforcement learning problem[END_REF][START_REF] Kessler | Hierarchical markov decision process based on devs formalism[END_REF]. Some authors have exploited the DEVS formalism in decision-making, for example in agriculture [4], in smart grids or, more recently, in building a model to understand the spread of Covid-19 [START_REF] Cárdenas | Cell-devs models for the spread of covid-19[END_REF]. However, the contribution of the scientific community to this combination is still limited. We are probably among the first to explore the potential of the combination of the DEVS formalism and the Markov decision process environment applied to financial decision-making based on market volatility. Therefore, our research does not aim to improve the results of work done by other authors, or to prove by comparison that our research effort achieves better computational or financial results than other methods. We aim to show that our research effort is valid and valuable and may become a benchmark for future studies that combine the DEVS formalism and Markov decision processes to build solutions for decision-makers in stock market asset management. In this state of the art, we detail, through a literature analysis, the main concepts we used or explored to arrive at our decision-making process based on market volatility in a Markovian environment with a DEVS formalism. In this chapter, we review the three domains covered in this document.
Decision-Making Relevant to Finance
Introduction
First of all, we briefly introduce some key topics of our case study, such as volatility in decision-making and the stock market, and we detail market volatility a little more deeply, since it plays a central role in the case-study process. To make these financial topics as accessible as possible to readers who are not particularly familiar with stock market issues, we detail market volatility by giving examples and literature on how and when traders take it into account. Second, we introduce two topics closely related to volatility and decision-making, i.e. risk and returns [10], which will allow the reader to better understand why earning as much money (returns) as possible will be the reward of our agent. We also discuss a parameter that, in the end, we do not take into consideration in our case study, namely transaction costs, i.e. the sum we pay every time we trade on a trading platform. Taking transaction costs into account may be a future improvement of our model to make it more generic, and we give an example of authors who deal with that parameter.
Third, we detail some of the models and algorithms most commonly used by stock market decision-makers to predict or optimize an asset or a portfolio. We first introduce the GARCH model, even if we do not use it in the case study or in the comparison. The literature gives evidence that GARCH is one of the most sophisticated models for characterizing the volatility of markets; we do not use it for lack of expertise, but the GARCH-DEVS combination would be one of the possible future developments of our research, in particular if we want to take time more explicitly into consideration in our decision-making path.
Next, we detail two groups of algorithms that use volatility to predict or optimize asset positions or portfolios, namely LSTM and EF with semivariance. Predictions and optimization results from LSTM and EF will be detailed in the comparison chapter to give evidence of the value of decision-making driven by an agent in a Markovian environment with a DEVS formalism.
Volatility in a Stock Market
Volatility is a close representation of the concept of risk and returns and plays an important role in decision-making policies in different fields. The role of volatility is illustrated in the literature by the relationship between income volatility and health care decision making [3]. Self-adaptive systems, i.e. systems with the ability to adjust their behavior in response to their perception of the environment (a central part of our research effort) are also related in the literature to the concept of volatility [START_REF] Palmerino | Improving the decisionmaking process of self-adaptive systems by accounting for tactic volatility[END_REF]. In general, volatility plays a role in growth analysis [START_REF] Hnatkovska | Volatility and growth[END_REF], as well as in development policies [START_REF] Koren | Volatility and development[END_REF]. Volatility is also an important analysis factor in correlation forecasting [8]. Volatility may also be used to better understand Entrepreneur's decision-making ability [START_REF] Pan | Learning about ceo ability and stock return volatility[END_REF]. Since the 1990s, it has also been a fundamental parameter in studying the market [START_REF] Shiller | Market volatility[END_REF].
The term Stock Market refers to various exchanges where shares of listed companies are bought and sold. These financial activities are conducted through formal trading and over-the-counter (OTC) markets that operate within a defined set of regulations. The stock market allows buyers and sellers of securities to meet, interact, and negotiate. Markets provide information on company stock prices and act as a barometer of the overall economy. Buyers and sellers are assured of a fair price, a high degree of liquidity, and transparency as market participants compete on the open market [START_REF] Bollerslev | Good volatility, bad volatility, and the cross section of stock returns[END_REF]. The first stock exchanges issued and traded physical paper stock certificates. Stock markets today operate electronically.
Market volatility is the frequency and magnitude of price movements, up or down [START_REF] Tsang | Profiling high-frequency equity price movements in directional changes[END_REF]. The larger and more frequent the price fluctuations, the more volatile the market is said to be [START_REF] Gillemot | There's more to volatility than volume[END_REF]. Market volatility is measured by determining the deviation of the price change over a period of time [START_REF] Ederington | Measuring historical volatility[END_REF]. The statistical concept of standard deviation makes it possible to see how much something differs from an average. Traders calculate standard deviations of market values [START_REF] Preis | Quantifying trading behavior in financial markets using google trends[END_REF] based on end-of-day trading values, changes in values during trading sessions, intraday volatility, or potential future changes in values. Casual market observers are probably more familiar with the latter method, used by the Chicago Board Options Exchange Volatility Index, commonly referred to as VIX [150]. VIX, also called the 'fear index', is the most well-known indicator of volatility in the stock market [START_REF] Dhaene | Fix: the fear index-measuring market fear[END_REF]. It measures investors' expectations [START_REF] Lahmiri | Renyi entropy and mutual information measurement of market expectations and investor fear during the covid-19 pandemic[END_REF] of where stock prices will move in the next 30 days, based on S&P 500 options trading. VIX indicates how much traders expect S&P 500 prices to change, up or down, over the next month [START_REF] Cheuathonghua | Extreme spillovers of vix fear index to international equity markets[END_REF]. Typically, the higher the VIX, the more expensive the options [START_REF] Fassas | Vix futures as a market timing indicator[END_REF]. An increase in the value of put options becomes a warning sign of an expected decline in the market and, hence, of volatility, as the stock market then generally experiences more dramatic upward or downward value changes. Historically, normal VIX levels have been around 20, which corresponds to an expected annualized swing of roughly 20% around the S&P 500 average [START_REF] Dunn | Assessing risk through environmental, social and governance exposures[END_REF]. Most days, the stock market is fairly calm, with short periods of above-average volatility interrupted by short periods of calm. These episodes tend to make the average volatility higher than it would be on most days [START_REF] Doran | Is there information in the volatility skew?[END_REF]. In general, bull markets (bullish trend) tend to have low volatility, but bear markets (bearish trend) are typically accompanied by unpredictable price movements that tend to be to the downside [START_REF] Gonzalez | Two centuries of bull and bear market cycles[END_REF].
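As an illustration of the standard-deviation measure discussed above, realized (historical) volatility can be computed from daily closing prices; the price series below is invented for the example.

```python
import math

def annualized_volatility(closes, trading_days=252):
    """Standard deviation of daily log returns, annualized."""
    returns = [math.log(b / a) for a, b in zip(closes, closes[1:])]
    mean = sum(returns) / len(returns)
    var = sum((r - mean) ** 2 for r in returns) / (len(returns) - 1)
    return math.sqrt(var) * math.sqrt(trading_days)

closes = [100, 101.2, 99.8, 100.5, 102.0, 101.1, 103.2, 102.6]   # illustrative prices
print(f"annualized volatility: {annualized_volatility(closes):.1%}")
```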
The main question for an investor or a portfolio manager is how to manage market volatility. There are numerous ways to react to fluctuations in a portfolio of stocks. For example, under the Basel capital agreements, capital requirements (CR) for banks' exposure to market risk are based on value-at-risk (VaR) [START_REF] Szegö | Measures of risk[END_REF]. VaR is defined as the loss associated with a low percentile of the return distribution. The Basel II Capital Accords codify VaR as the de facto industry standard for banking and insurance companies alike [START_REF] Framework | Revisions to the basel ii market risk framework[END_REF]. Risk exposure due to volatility is one of the main reasons why an investor tends to optimize their portfolio. One of the traditional ways to optimize is to balance the weight of each asset allocated in the portfolio, and one of the common ways to take the exposure (i.e. the VaR) into account is mean-variance optimization (MVO), which determines a set of portfolios characterized by the optimal trade-off between expected return and VaR.
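A minimal sketch of the low-percentile definition of VaR given above, using invented daily returns:

```python
def historical_var(returns, confidence=0.95):
    """Loss not exceeded with the given confidence, from the empirical distribution."""
    ordered = sorted(returns)
    index = int((1 - confidence) * len(ordered))
    return -ordered[index]          # reported as a positive loss

daily_returns = [0.012, -0.004, 0.007, -0.021, 0.003, -0.015, 0.009, -0.030, 0.005, 0.011]
print(f"1-day 95% VaR: {historical_var(daily_returns):.1%}")   # -> 3.0%
```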
More generally, experts caution against panic selling after a sharp drop in the market. According to analysts at the Schwab Center for Financial Research [START_REF] Davidow | A modern approach to asset allocation and portfolio construction[END_REF], since 1970, when stocks have fallen 20% or more, they have made their greatest gains in the first 12 months of recovery. Hence, if, out of panic, we had sold our portfolio at once during a crash and waited to re-enter the market, our investment would have missed significant gains and might never have recovered the value it had lost. Instead, when market volatility makes investors nervous, they can try one of these approaches:
1. Don't forget the long-term plan. Investing is a long-term game, and a well-balanced, diversified portfolio is precisely constructed with times like these in mind [START_REF] Schmidt | Private equity-, stock-and mixed asset-portfolios: A bootstrap approach to determine performance characteristics, diversification benefits and optimal portfolio allocations[END_REF]. If we need our funds in the near future, they should not be in the market, where volatility can affect our ability to get them out quickly. But for long-term goals, volatility is part of the price of meaningful growth [12]. It can help to manage market volatility mentally by thinking about how many shares we can buy while the market is in a bearish state [166] [165]. During the 2020 bear market [119], for example, we could have bought shares in an S&P 500 index fund at a price roughly a third lower than a month earlier, after more than a decade of steady growth. At the end of the year, our investment would have increased by approximately 65% from the minimum and 14% since the beginning of the year.
2. Maintain a healthy emergency fund. Market volatility is not an issue unless we need to liquidate an investment [START_REF] Haugen | The efficient market inefficiency of capitalization-weighted stock portfolios[END_REF], since we may be forced to sell assets in a bear market. For this reason, it is particularly important for investors to have an emergency fund equal to three to six months of out-of-pocket expenses. For those close to retirement, an even bigger safety net is recommended, up to two years of expenses held in assets not correlated with the market [START_REF] Campbell | Trapped in America's Safety Net: One Family's Struggle[END_REF]. This includes bonds, cash, life-insurance cash values, home equity lines of credit, and home equity conversion mortgages.
3. Rebalance the portfolio if necessary. Because market volatility can cause large swings in the value of investments, the asset allocation can deviate from the desired fractions after periods of intense swings back and forth [START_REF] Kashyap | The circle of investment: Connecting the dots of the portfolio management cycle[END_REF]. During these periods, we must rebalance our portfolio to meet our investment goals and the level of risk we want [13]. When rebalancing, we sell part of the asset class that has come to occupy a larger part of our portfolio than we want and use the proceeds to buy more of the asset class that has become too small. It is a good idea to rebalance when the allocation deviates by 5% or more from the original target mix [START_REF] Michaud | Efficient asset management: a practical guide to stock portfolio optimization and asset allocation[END_REF]; a sketch of this drift check follows this list. We may also want to rebalance if we see a gap greater than 20% within an asset class. For example, if we want emerging market equities to make up 10% of our portfolio and, after a severe market downturn, we find that emerging markets are more like 8% or 12% of our portfolio, we may want to rebalance our holdings.
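The 5% drift rule of point (3) can be sketched as follows; the target weights and portfolio values are invented for illustration.

```python
def rebalancing_trades(values, targets, threshold=0.05):
    """Return the cash amount to buy (+) or sell (-) per asset when any weight
    drifts from its target by more than the threshold."""
    total = sum(values.values())
    weights = {k: v / total for k, v in values.items()}
    if all(abs(weights[k] - targets[k]) <= threshold for k in targets):
        return {}                                   # no rebalancing needed
    return {k: targets[k] * total - values[k] for k in targets}

current_values = {"stocks": 72_000, "bonds": 22_000, "cash": 6_000}   # illustrative
target_weights = {"stocks": 0.60, "bonds": 0.30, "cash": 0.10}
print(rebalancing_trades(current_values, target_weights))
# -> {'stocks': -12000.0, 'bonds': 8000.0, 'cash': 4000.0}
```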
Approaches (1), (2), and (3) have been used to influence the behavior of our agent in the case study.
Risk versus returns in portfolio management is related to MDPs, which will be detailed in the next section. Risk aversion in MDPs is a topic that has been addressed by many authors [START_REF] Moldovan | Risk aversion in markov decision processes via near optimal chernoff bounds[END_REF], and leverage aversion [START_REF] Jacobs | Leverage aversion and portfolio optimality[END_REF] can have a large effect on an investor's choice of portfolio [START_REF] Jacobs | Leverage aversion, efficient frontiers, and the efficient region[END_REF].
As described by the authors in [START_REF] Gębka | The dynamic relation between returns, trading volume, and volatility: Lessons from spillovers between asia and the united states[END_REF], trading is a decision-making activity that has a dynamic relationship with three fundamental parameters: risk, returns, and volatility. Risk is the main determinant of investment return, which is why an equilibrium approach to risk and returns is generally applied to decision-making policies [START_REF] Whitelaw | Stock market risk and return: An equilibrium approach[END_REF]. To illustrate an equilibrium approach, imagine a simple case of a portfolio composed of two assets (securities). The portfolio return can be calculated as follows:
E(r_p) = w_A E(r_a) + w_B E(r_b),

where E(r_p) is the expected return of portfolio P, E(r_a) the expected return of security A, E(r_b) the expected return of security B, w_A the proportion of the portfolio invested in security A, and w_B the proportion invested in security B.
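To make the two-asset formula concrete, here is a minimal Python sketch; the weights and expected returns are hypothetical numbers chosen only for illustration.

```python
# Minimal sketch: expected return of a two-asset (or n-asset) portfolio.
def portfolio_expected_return(weights, expected_returns):
    """E(r_p) = sum_i w_i * E(r_i)."""
    return sum(w * r for w, r in zip(weights, expected_returns))

# Hypothetical example: 60% in asset A (8% expected), 40% in asset B (3% expected).
e_rp = portfolio_expected_return([0.60, 0.40], [0.08, 0.03])
print(f"Expected portfolio return: {e_rp:.2%}")  # 6.00%
```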
More generally, the relationship between risk and return plays an important role in the decision-making literature [START_REF] Mcnamara | Risk and return in organizational decision making [END_REF]. The balance between the risk and the return of a policy is taken into account in many industrial fields, e.g. the effective deployment of photovoltaics in Mediterranean countries [START_REF] Lüthi | Effective deployment of photovoltaics in the mediterranean countries: Balancing policy risk and return [END_REF]. The analysis of risk and return is also crucial in public policy investment [START_REF] Bertelli | Public policy investment: Risk and return in british politics [END_REF] and in sustainability analysis [START_REF] Sudha | Risk-return and volatility analysis of sustainability index in india [END_REF].
The expected return [START_REF] Martin | What is the expected return on a stock? [END_REF] E(r_p) is often the objective function of choice in planning problems, where the outcome depends not only on the actor's decisions but also on random events. The expectation is a natural choice because the law of large numbers [START_REF] Peng | Law of large numbers and central limit theorem under nonlinear expectations [END_REF] ensures that the average return over many repetitions converges to its expectation. In addition, the linearity of expectations can often be exploited to obtain efficient algorithms. Some experiences, however, can only take place once, either because they take a long time (investing for retirement), because we cannot try again if we lose (parachuting, crossing the street), or because repeated trials are simply not available, as in the stock market. In this context, we can no longer rely on the law of large numbers to ensure that the realized return is close to its expectation with high probability [START_REF] Ross | The arbitrage theory of capital asset pricing [END_REF], so the expected return may not be the best objective to optimize. If we were pessimistic, we could assume that anything that can go wrong will go wrong and try to minimize losses under this assumption. A general approach would include minimax optimization and expectation optimization [START_REF] Goll | Minimax and minimal distance martingale measures and their relationship to portfolio optimization [END_REF], corresponding to absolute risk aversion and risk ignorance, respectively [START_REF] Kinney | Risk aversion and elite-group ignorance [END_REF], but would also allow for a range of policies between these extremes.
Standard theory suggests that investors must be compensated for the risk that they take, so we can attribute to each asset an expected compensation (i.e., a prior estimate of returns). This is quantified by the market-implied risk premium, which is the excess return of the market R - R_f divided by its variance σ²:

δ = (R - R_f) / σ².

To calculate the market-implied returns, we then use the following formula:

Π = δ Σ ω_mkt,

where Σ is the covariance matrix of asset returns and ω_mkt denotes the market-cap weights. This formula computes the total amount of risk contributed by each asset, Σ ω_mkt, and multiplies it by the market price of risk δ, resulting in the market-implied return vector Π.
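As an illustration of this computation, the following sketch derives δ and Π with NumPy; the covariance matrix, market-cap weights, market return and risk-free rate are all hypothetical placeholders.

```python
import numpy as np

# Sketch of the market-implied (prior) returns Π = δ Σ ω_mkt for three assets.
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])      # annualised covariance matrix Σ (hypothetical)
w_mkt = np.array([0.5, 0.3, 0.2])          # market-cap weights (hypothetical)

market_return = 0.08                       # hypothetical market return R
risk_free = 0.02                           # hypothetical risk-free rate R_f
market_var = w_mkt @ cov @ w_mkt           # variance of the market portfolio

delta = (market_return - risk_free) / market_var   # market-implied risk premium δ
prior_returns = delta * cov @ w_mkt                 # market-implied return vector Π
print(prior_returns)
```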
Most traders can quickly define the required win rate, expected average return, risk levels, and position sizes that are necessary for their success [START_REF] Gârleanu | Dynamic trading with predictable returns and transaction costs[END_REF]. These metrics often vary from trader to trader, but should be as accurate as possible over the long term to ensure success. Slight deviations in these numbers can mean the difference between success and failure. Performance truly depends on a fine balance. If a trader executes his orders incorrectly, he can find the source of his shortcomings and remedy them by changing his approach, strategy, frequency, etc.
On the other hand, one inevitable problem will always remain: embedded transaction costs [6], such as fees and the cost of the spread. Of these two frequently encountered problems, one is a little more difficult to solve: fees. An example of such an unavoidable situation is when a person trades CFDs, or contracts for difference. The exchange may not advertise fees, but that is only because they are already built into the spread. A trader might pay upwards of 1.5% to move a position back and forth. This means that, before even considering whether a trade develops favorably or unfavorably, there is already a loss to enter and exit. This may seem like a small value, but consider the cumulative effect it can have, especially on an active trader. Most people are already sensitive to trading costs in one way or another, but the largest traders are hit the hardest [START_REF] Berkowitz | The total cost of transactions on the nyse [END_REF]. Consider a trader who scalps momentum on short time frames [START_REF] Men | Design and backtesting of a trading algortihm with scalping day trading strategy for xau/usd fx market for individual traders [END_REF]. For this example, let us say that this trader benefits from a trading approach with positive expectancy. Without even talking about the returns or losses of his strategy, let us imagine that he trades with an account of 10,000. As a trader on short timeframes, he is likely to move in and out of positions a lot. To simplify to the extreme, our trader risks 1% of his account in each transaction and trades with the total value of his account. Scalpers can execute between 50 and 300 trades per week [START_REF] Thompson | The execution cost of trading in commodity futures markets [END_REF]. For the purposes of this experiment, let us take a low number and say that our scalper executes 75 trades per week [START_REF] Kondrin | What is your trading style? [END_REF], or about 10.71 trades per day. This represents approximately 37 round trips, that is, opening a position and then closing it, and vice versa. In this case, our trader is operating on an extremely popular exchange that charges a maker fee of 0.10%, or 10 basis points. In this example, the trader loses about 722 in fees alone over the week, and once again we assume that all his trades were closed at breakeven. Now imagine how this would play out in a system that additionally suffers losses, and how it could exacerbate any losing streak that already includes transaction costs by default. You are right to assume that, especially for traders who operate at higher frequencies, fees can completely eat away any return.
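The cumulative effect described above can be reproduced with a few lines of Python; the figures (a 10,000 account, a 0.10% fee per execution, 75 executions per week, all trades closed at breakeven) are the same hypothetical ones used in the example.

```python
# Rough illustration of how per-trade fees accumulate for an active trader,
# assuming every trade closes at breakeven so only fees affect the account.
account = 10_000.0
fee_rate = 0.001          # 0.10% maker fee per execution
executions_per_week = 75

for _ in range(executions_per_week):
    account -= account * fee_rate   # fee charged on the full position size

print(f"Account after one week of fees only: {account:.2f}")
print(f"Total fees paid: {10_000 - account:.2f}")   # roughly 722
```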
In general, a transaction cost is known in advance, and some authors take it into account as a simple deterministic and controllable state transition [START_REF] Neuneier | Enhancing q-learning for optimal asset allocation [END_REF]. We chose not to take transaction costs into consideration in our case study, for the simple reason that our beta tester, BCC Felsinea Bank (Italy), suggested that we not apply any transaction cost. Admittedly, to widen the range of possible final users, it would have been preferable to take transaction costs into account.
Portfolio Management Decision-Making Methods
The scientific literature on AI offers more and more works on portfolio optimization. This section introduces three of the approaches most widely exploited in that literature: generalized autoregressive conditional heteroskedasticity (GARCH) models [START_REF] Kakade | Forecasting commodity market returns volatility: A hybrid ensemble learning garch-lstm based approach [END_REF], the efficient frontier (EF) [START_REF] Clark | A machine learning efficient frontier [END_REF], and long short-term memory (LSTM) networks [START_REF] Yu | A review of recurrent neural networks: Lstm cells and network architectures [END_REF].
Generalized Autoregressive Conditional Heteroscedasticity Process
Generalized autoregressive conditional heteroscedasticity process (GARCH) is a sophisticated econometric model that has been developed to characterize the volatility of financial markets. Financial institutions use the model to estimate the volatility of returns from investment vehicles [START_REF] Hamid | Volatility modeling and asset pricing: Extension of garch model with macro economic variables, value-at-risk and semi-variance for kse[END_REF]. These models assume that the conditional volatility of future returns depends on shocks in the current volatility or other state variables, whereas the unconditional volatility is constant over time. In other words, we can say that GARCH shares with Markov decision processes the property that the future depends only on the present. More in detail, heteroskedasticity describes the irregular pattern of variation of an error term or variable in a statistical model. Basically, when there is heteroskedasticity, the observations do not conform to a linear pattern. Instead, they tend to cluster together. The result is that the conclusions and predictive value drawn from the model will not be reliable. GARCH is a statistical model that can be used to analyze different types of financial data, such as macroeconomic data. As we have already highlighted, financial institutions typically use this model to estimate the volatility of stock, bond, and stock index returns. They use the resulting information to determine prices and judge which assets will potentially offer higher returns, and predict current investment returns to aid in asset allocation, hedging, risk management, and optimization decisions.
The general process of fitting a GARCH model has three steps. The first is to estimate a suitable autoregressive model. The second is to compute the autocorrelations of the error term. The third is to test for statistical significance. Two other widely used approaches to estimate and predict financial volatility are the classic historical volatility method (VolSD) and the exponentially weighted moving average volatility method (VolEWMA).
The ARCH model was introduced by Engle in 1982 and generalized to GARCH by Bollerslev [START_REF] Andersen | ARCH and GARCH models [END_REF]. Particularly popular is the simplest version, the GARCH(1,1) model:

r_t = μ + ε_t,

where r_t is the return of an asset at time t, μ is the average return, and ε_t are the residual returns, defined as ε_t = σ_t z_t with z_t a white-noise innovation. In the GARCH(1,1) model the conditional variance follows

σ_t² = ω + α₁ ε²_{t-1} + β₁ σ²_{t-1}.

GARCH processes differ from homoskedastic models, which assume constant volatility and are used in basic ordinary least squares (OLS) analysis. The objective of OLS is to minimize the deviations between data points and a regression line so as to fit those points. With asset returns, volatility appears to vary during certain periods and to depend on past variance, making a homoskedastic model suboptimal [START_REF] Bollerslev | Generalized autoregressive conditional heteroskedasticity [END_REF].
GARCH processes, because they are autoregressive, depend on past squared observations and past variances to model the current variance. GARCH processes are widely used in finance due to their effectiveness in modeling asset returns and inflation. GARCH aims to minimize forecast errors by taking into account previous forecast errors and improving the accuracy of current forecasts [START_REF] Bollerslev | Generalized autoregressive conditional heteroskedasticity[END_REF]. To forecast the volatility of the stock price index, GARCH-type models are usually integrated with a recurrent neural network such as LSTM or models such as EF or semiVariance [START_REF] Kim | Forecasting the volatility of stock price index: A hybrid model integrating lstm with multiple garch-type models[END_REF].
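As a rough illustration, the following sketch fits a GARCH(1,1) model and produces a one-day-ahead variance forecast; it assumes the Python `arch` package is available and uses simulated returns as a placeholder for real data.

```python
import numpy as np
from arch import arch_model   # assumes the `arch` package is installed

# Sketch: fit a GARCH(1,1) model to a (hypothetical) series of daily returns
# and forecast the next-day conditional variance.
np.random.seed(0)
returns = np.random.normal(0, 1, 1000)   # placeholder for real percentage returns

model = arch_model(returns, mean="Constant", vol="GARCH", p=1, q=1)
result = model.fit(disp="off")
print(result.params)                      # mu, omega, alpha[1], beta[1]

forecast = result.forecast(horizon=1)
print(forecast.variance.iloc[-1])         # conditional variance for the next day
```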
Long Short-Term Memory
Long short-term memory (LSTM) networks are one of the most widely used deep learning methods, especially for time-series analysis [START_REF] Sutton | Learning to predict by the methods of temporal differences [END_REF]. Since the pioneering work that introduced them, LSTMs have been modified and popularized by many researchers [START_REF] Gers | Learning to forget: Continual prediction with lstm [END_REF].
Due to the limited capacity of a single LSTM cell in handling engineering problems, LSTM cells have to be organized into a specific network architecture when processing practical data. LSTM networks can store state information in memory cells and specially designed gates. Information available at the current time slot is received at the input gates. Using the results of aggregation at the forget gates and the input gates, the networks predict the next value of the target variable; the predicted value is available at the output gates. As an improved variant of recurrent neural networks (RNN) [START_REF] Bishop | Neural Networks for Pattern Recognition [END_REF], LSTM has been used in many fields. In particular, for stock price prediction, LSTM has been proposed and applied by notable researchers such as Murtaza et al. [START_REF] Roondiwala | Predicting stock prices using lstm [END_REF] and Nelson et al. [START_REF] Nelson | Stock market's price movement prediction with lstm neural networks [END_REF]. In general, the LSTM architecture is designed to forecast future stock prices accurately. After the elapsed period of time since portfolio construction, the actual returns yielded by the portfolios and those predicted by the LSTM models are computed. The actual and predicted returns are compared to evaluate the accuracy of the LSTM model [START_REF] Sen | Stock portfolio optimization using a deep learning lstm model [END_REF].
The root mean squared error (RMSE) was used to measure the difference between the predicted and observed data. RMSE is calculated as follows:

RMSE = √( Σ_{i=1}^{n} (y_i - y'_i)² / n ),

where Y = (y_1, y_2, . . . , y_n) is the vector of actual observations, Y' = (y'_1, y'_2, . . . , y'_n) is the vector of predicted values, and n is the number of observations.
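For completeness, a minimal sketch of the RMSE computation in NumPy, with purely illustrative numbers:

```python
import numpy as np

# Minimal sketch of the RMSE between actual and predicted values.
def rmse(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

print(rmse([10.0, 11.0, 9.5], [10.2, 10.8, 9.9]))
```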
LSTM is an advanced soft computing method [START_REF] Hochreiter | Long short-term memory[END_REF] introduced for the first time to address the limitation found in the conventional RNN method, especially to solve problems with the long-term dependency issue [START_REF] Hansun | Predicting lq45 financial sector indices using rnn-lstm[END_REF]. The output of an LSTM at a particular point in time depends on three things:
• The current long-term memory of the network (the Cell state noted C t-1 )
• The output at the previous point in time (the previous Hidden state noted H t-1 )
• The input data at the current time step.
LSTM uses a series of gates that control how information in a sequence of data enters the network, how it is stored, and how it leaves the network. As shown in Figure 2.1, there are three gates in an LSTM: the forget gate, the input gate, and the output gate. These gates can be thought of as filters, and each has its own neural network. An LSTM network consists of several self-connected LSTM cells that store the temporal state of the network using the three gates. What happens in an LSTM cell can be summarized in the following steps; readers are invited to follow Figure 2.1 from left to right.

Fig. 2.1 LSTM architecture with forget, input, and output gates. × denotes pointwise multiplication and + pointwise addition; the sigmoid-activated and tanh-activated neural networks and the pointwise tanh appear along the cell-state path.
• The first step in the process is forget gate. Based on the previous hidden state and the new input data, it is decided which bits of the cell state are important for the long-term memory of the network. That is why the new input and the previous hidden state data are fed into a neural network. The forget gate decides which pieces of long-term memory have less weight and should now be forgotten.
• Given the previous hidden state and the new input data, the second step corresponds to the input gate step that determines what new information should be added to the cell state. The sigmoid activated network is the input gate, which first checks if the new input data is even worth remembering and then acts as a filter, identifying which components of the new memory vector are worth retaining. The sigmoid function is useful to filter because it can only output values between 0 and 1, so the knowledge that is no longer needed is forgotten, as it is multiplied by 0 and, consequently, it is dropped out. Tanh activated neural network is a new memory network which has learned how to combine the previous hidden state and new input data to generate a new memory update vector. This vector essentially contains information from the new input data given the context of the previous hidden state.
• The third step is the output gate. First, apply the tanh function pointwise to the current cell state to obtain the squished cell state, which now lies in [-1, 1]. Second, pass the previous hidden state and the current input data through the sigmoid-activated neural network to obtain the filter vector. Third, apply this filter vector to the squished cell state by pointwise multiplication. Finally, output the new hidden state.
• The steps above are repeated many times. For example, if we want to predict the value of an asset based on the previous 150 values, the steps are repeated 150 times. In other words, the model iteratively produces 150 hidden states in order to predict tomorrow's value, or the value at any given horizon.
To predict future stock prices, Figure 2.2 presents a classic example of the schematic design of an LSTM model. The model uses the daily close prices of the stock over the last 150 days as input. The input data for 150 days with a single feature (that is, close values) have shape (150, 1). The input layer receives the data and passes them to the first LSTM layer of 256 nodes. This LSTM layer yields an output of shape (150, 256), which means that the layer extracts 256 features from every record in the input data. A dropout layer is used after the first LSTM layer; it randomly switches off the output of 30% of the 256 LSTM nodes to avoid overfitting. A second LSTM layer with the same architecture as the first receives this output and again applies a dropout rate of 30%. A dense layer with 256 nodes receives the output of the second LSTM layer.
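A sketch of such an architecture in Keras is given below; it assumes TensorFlow/Keras and follows the layer sizes described above (150 time steps, two LSTM layers of 256 nodes, 30% dropout, a dense layer of 256 nodes), while the output head and training settings are choices made only for illustration.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense, Input

# Sketch of the architecture described above: 150 daily close prices as input,
# two stacked LSTM layers of 256 nodes with 30% dropout, then dense layers.
model = Sequential([
    Input(shape=(150, 1)),                 # 150 time steps, 1 feature (close price)
    LSTM(256, return_sequences=True),      # first LSTM layer, output shape (150, 256)
    Dropout(0.3),                          # switch off 30% of outputs to limit overfitting
    LSTM(256),                             # second LSTM layer, returns its last hidden state
    Dropout(0.3),
    Dense(256, activation="relu"),         # dense layer with 256 nodes
    Dense(1),                              # predicted next close price
])
model.compile(optimizer="adam", loss="mse")
model.summary()
```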
Efficient Frontiers Methods
In modern portfolio theory (MPT), the efficient frontier (EF), or portfolio frontier [START_REF] Markowitz | The early history of portfolio theory: 1600-1960 [END_REF], is the set of investment portfolios that occupy the "efficient" part of the risk-return spectrum. Formally, it is the set of portfolios satisfying the condition that no other portfolio exists with a higher expected return for the same standard deviation of return (i.e., the same risk). For instance, a portfolio manager has a set of assets to which he would like to allocate a certain sum of money; to take his decision, he estimates the expected return of each asset and the covariance relationships amongst the assets. As shown in Figure 2.4, EF methods depend on two key intuitions [START_REF] Krokhmal | Portfolio optimization with conditional value-at-risk objective and constraints [END_REF]. The first is that investors seek returns but are risk averse, risk being measured by the volatility of the portfolio of assets. The second is that a rational investor offered two portfolios with the same expected return will choose the less volatile one. However, we are subject to the following constraints:
• Capital must be allocated in full. This is an equality constraint.

• A minimum portion of the capital must be allocated to each asset. This is an inequality constraint.

• The allocation to any single asset must not exceed a given maximum. This is an inequality constraint.
This kind of inequality constraint has a strong disadvantage for portfolio managers: they cannot short the poorly performing assets and use the proceeds to leverage their positions in the high-performing assets [START_REF] Best | The efficient frontier for bounded assets [END_REF].

Fig. 2.5 Markowitz efficient frontier [START_REF] Ivanova | Application of markowitz portfolio optimization on bulgarian stock market from 2013 to 2016 [END_REF].
Figure 2.5 shows the complete set of investment opportunities, i.e. the set of all possible combinations of risk and return offered by portfolios made up of the assets in different proportions. A combination of risky assets plotted on the efficient frontier represents the lowest possible portfolio risk for the desired level of expected return. The line along the upper edge of this region is known as the efficient frontier, sometimes called the 'Markowitz bullet' [START_REF] Read | The theory of an efficient portfolio [END_REF]. Combinations along this line represent portfolios (explicitly excluding the risk-free alternative) with the lowest risk for a given level of return. Conversely, for a given amount of risk, the portfolio on the efficient frontier represents the combination offering the best possible return. Mathematically, the efficient frontier is the intersection of the set of portfolios with minimum risk and the set of portfolios with maximum return [START_REF] Haugen | Dedicated stock portfolios [END_REF].
In other words, the efficient frontier in Figure 2.4 for a given sector represents the contour of a large number of portfolios, in which return and risk are plotted along the y-axis and x-axis, respectively. Points on the efficient frontier have the property that they are the portfolios yielding the maximum return for a given risk; equivalently, they carry the minimum risk for a given return. The leftmost point on the efficient frontier depicts the minimum-risk portfolio.
To detail Figure 2.5 concretely with a simple example, suppose that a portfolio consists of five stocks, each with an estimate of its risk and its return. MPT is an investment theory developed by Harry Markowitz and published under the title "Portfolio Selection" in the Journal of Finance in 1952 [START_REF] Markowitz | Portfolio theory: as i still see it [END_REF]. A few underlying concepts can help anyone understand MPT. One well-known acronym in data science and optimization research is "TANSTAAFL", which stands for "There Ain't No Such Thing As A Free Lunch". This concept is closely related to the 'risk-return trade-off' in finance [START_REF] Grossmann | The macroeconomics of tanstaafl [END_REF].
MPT is a practical investment selection technique to maximize the overall return at an appropriate level of risk [START_REF] Iyiola | The modern portfolio theory as an investment decision tool [END_REF]. Investment portfolio theories guide the way a portfolio decision maker allocates money and other capital assets within an investment portfolio. An investment portfolio has long-term objectives independent of daily market fluctuations; because of these goals, investment portfolio theories aim to provide portfolio decision makers with tools to estimate the expected risk and return associated with investments [START_REF] Iyiola | The modern portfolio theory as an investment decision tool [END_REF]. Passive portfolio theories, on the one hand, combine a portfolio decision maker's goals and temperament with financial actions. Passive theories propose minimal input from the investor; instead, passive strategies rely on diversification, buying many stocks in the same industry or market, to match the performance of a market index. Passive theories use market data and other available information to forecast investment performance [START_REF] Fernholz | Stochastic portfolio theory and stock market equilibrium [END_REF]. Active portfolio theories, on the other hand, come in three varieties: active portfolios can be patient, aggressive, or conservative. Patient portfolios invest in established, stable companies that pay dividends and earn income despite economic conditions. Aggressive portfolios buy riskier stocks, those that are growing, in an attempt to maximize returns; because of the volatility to which this type of portfolio is exposed, it has a high turnover rate. As the name implies, conservative portfolios invest with an eye on long-term yield and stability [START_REF] Goto | Academic theories meet the practice of active portfolio management [END_REF]. Some authors propose to bridge the gap between portfolio theory and machine learning [START_REF] Clark | A machine learning efficient frontier [END_REF].
One of the EF techniques is mean-variance optimization which represents an important approach in financial decision making, especially for static (one-stage) problems [START_REF] Mannor | Mean-variance optimization in markov decision processes[END_REF]. In general, the optimization procedure is robust and provides strong mathematical guarantees with the correct inputs. On the other hand, optimization of mean variance requires knowledge of expected returns [START_REF] Yudin | Essential financial tasks done with python[END_REF]. In practice, these are rather difficult to know with any certainty. Therefore, the best we can do is to make estimates, e.g. by extrapolating historical data. This is the main flaw in mean-variance optimization and is also the reason many authors have attempted to cross mean-variance and MDPs [START_REF] Benhamou | Bridging the gap between markowitz planning and deep reinforcement learning[END_REF].
When we use mean-variance optimization, we assume that portfolio optimization problems are convex. This is not true in many cases [9]. However, the convex optimization problem is a well-understood class of problems that are useful in finance. A convex problem has the following form:
minimise_x  f(x)
subject to  g_i(x) ≤ 0,  i = 1, . . . , m
            h_i(x) = 0,  i = 1, . . . , p
This notation describes the problem of finding x ∈ R n that minimizes f (x) among all x satisfying g i (x) ≤ 0, i = 1, ..., m and h i (x) = 0, i = 1, ..., p. The function f is the objective function of the problem, and the functions g i and h i are the inequality and equality constraint functions.
There are basically two things that need to be clarified: optimization constraints and optimization objective. For example, the classic problem of portfolio optimization is to minimize risk on a return basis (i.e., the portfolio must return more than a certain amount). From the implementation point of view [START_REF] Martin | Pyportfolioopt: portfolio optimization in python[END_REF], there are few differences between the objectives and the constraints. The role of risk and return can be changed. One of the main advantages of the mean-variance is that it has a simple and clear interpretation in terms of individual portfolio choice and utility optimization, although some of its drawbacks are nowadays well known. Li and Ng [START_REF] Zhou | Continuous-time mean-variance portfolio selection: A stochastic lq framework[END_REF] introduced a technique to tackle the multi-period mean-variance problem, with market uncertainties reproduced by stochastic models, in which the key parameters, expected return and volatility, are deterministic. An EF object contains multiple optimization methods that can be called with various parameters. For example, it may be used to optimize for minimum volatility or to maximize the return for a given target risk or, on the contrary, to minimize the risk for a given target return [START_REF] Kaczmarek | Building portfolios based on machine learning predictions[END_REF]. The mean-variance optimization methods described previously can be used whenever we have a vector of expected returns and a covariance matrix. The objective and constraints will be some combination of portfolio return and portfolio volatility.
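As an illustration, the following sketch uses the PyPortfolioOpt library mentioned above; the price file and the choice of minimum volatility as the objective are assumptions made only for the example, and other objectives (e.g. a target return) could be used instead.

```python
import pandas as pd
from pypfopt import expected_returns, risk_models
from pypfopt.efficient_frontier import EfficientFrontier

# Sketch: mean-variance optimization with PyPortfolioOpt, assuming `prices.csv`
# holds historical close prices with one column per asset (hypothetical file).
prices = pd.read_csv("prices.csv", index_col=0, parse_dates=True)

mu = expected_returns.mean_historical_return(prices)   # vector of expected returns
S = risk_models.sample_cov(prices)                      # covariance matrix

ef = EfficientFrontier(mu, S)
weights = ef.min_volatility()          # or ef.efficient_return(0.10) for a target return
print(ef.clean_weights())
print(ef.portfolio_performance(verbose=True))
```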
However, we may want to construct the EF for an entirely different type of risk model, that is, one that does not depend on covariance matrices, or optimize an objective not related to portfolio return, for example, tracking error [START_REF] Boasson | Portfolio optimization in a meansemivariance framework[END_REF]. The one we have tested is efficient semivariance.
Efficient semivariance is characterized by the fact that, instead of penalizing volatility, mean semivariance optimization seeks to only penalize downside volatility, since upside volatility may be desirable. The mean semivariance optimization can be written as a convex problem [START_REF] Estrada | Mean-semivariance optimization: A heuristic approach[END_REF], which can be solved to give an exact solution. For example, to maximize the return for a target semivariance s * (long only), we would solve the following problem [START_REF] Markowitz | Avoiding the downside: A practical review of the critical line algorithm for mean-semivariance portfolio optimization[END_REF]:
maximise_w  w^T μ
subject to  n^T n ≤ s*
            Bw - p + n = 0
            w^T 1 = 1
            n ≥ 0,  p ≥ 0

where w is the vector of portfolio weights, μ is the vector of expected returns, and n and p are the decision variables of the optimization problem, representing for each of the T observation periods the downside and upside deviations of the (scaled) portfolio return from the benchmark. T is the estimation window, i.e. the number of realizations of the returns of the N stocks. B is the T × N matrix of scaled excess returns, B = (returns - benchmark)/√T. Additional linear equality constraints and convex inequality constraints can be added.
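The problem above can be written almost verbatim with a convex solver; the following cvxpy sketch uses randomly generated returns and an arbitrary semivariance target purely as placeholders.

```python
import cvxpy as cp
import numpy as np

# Sketch of the long-only mean-semivariance problem stated above.
T, N = 250, 5
rng = np.random.default_rng(0)
hist_returns = rng.normal(0.0005, 0.01, size=(T, N))   # placeholder daily returns
mu = hist_returns.mean(axis=0)                          # expected-return estimates
B = hist_returns / np.sqrt(T)                           # scaled (excess) return matrix
s_target = 0.0001                                       # target semivariance s*

w = cp.Variable(N)                 # portfolio weights
n = cp.Variable(T, nonneg=True)    # downside deviations
p = cp.Variable(T, nonneg=True)    # upside deviations

constraints = [
    B @ w - p + n == 0,            # split scaled returns into upside/downside parts
    cp.sum(w) == 1,                # fully invested
    w >= 0,                        # long only
    cp.sum_squares(n) <= s_target, # n^T n <= s*
]
prob = cp.Problem(cp.Maximize(mu @ w), constraints)
prob.solve()
print(w.value)
```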
Conclusion
In this section dedicated to decision making in finance, we briefly introduced some key topics of our case study, such as market volatility and risk and return, which play a major role in the case study's process. Portfolio return and GARCH(1,1) return, average and residual returns, are concepts that inspired the way we define the reward and the agent's behavior in our case study. We detailed some of the models and algorithms most commonly used by stock market decision-makers to predict or optimize an asset or a portfolio. We introduced the GARCH model and two other asset portfolio optimizers, LSTM and EF, which we implemented in order to compare them with the model we present in the case study.
Markov Decision Process
Introduction
Markov decision processes (MDPs) are an extension of Markov chains [START_REF] Hölzl | Markov chains and markov decision processes in isabelle/hol [END_REF]. In the literature, many authors have already addressed the question of MDPs and their role in reinforcement learning [START_REF] Sutton | Between mdps and semi-mdps: A framework for temporal abstraction in reinforcement learning [END_REF]. In this paragraph, we describe how we approached MDPs and the process we followed to understand this environment step by step, starting from the Markov chain and moving to the MDP through the Markov reward process (MRP). We want to present the reader with our learning/exploitation decision-making path. We also address the questions of the reward, the discount factor, the value of a state, the value of an action, and the use of the Bellman equation to compute the state-action function. We use some simple, hand-crafted examples to illustrate the state of the art.
Markov Chains
The applications of Markov chains span a wide range of fields in which models have been designed and implemented to simulate random processes. Many aspects of life are characterized by events that occur randomly. It seems that the world does not work as perfectly as we hope it would. In an effort to help quantify, model and forecast the randomness of our world, the theory of probability and stochastic processes has been developed and may help answer some questions about how the world works [START_REF] Feldman | Applied probability and stochastic processes[END_REF]. The focus of this paragraph is on one type of stochastic process known as Markov chains.
Markov chain theory was created in the early 20th century by the Russian mathematician Andrei Andreyevich Markov. Markov's interest in the Law of Large Numbers and its extensions eventually led to the development of what is now called the theory of Markov chains, named after Andrei Markov himself [START_REF] Ching | Markov chains. Models, algorithms and applications[END_REF][START_REF] Sinai | Andrei andreyevich markov[END_REF].
To illustrate a Markov chain simply, let us consider a stochastic process

{X_n}, n = 0, 1, 2, . . . ,

that takes values in a finite or countable set M.

Let X_n be the close price of a financial asset on the n-th day, taking values in M = {8, 9, 10, 11}.

One may have the following realization:

X(0) = 10, X(1) = 11, X(2) = 10, X(3) = 9, X(4) = 8, . . .

An element of M is called a state of the process. Suppose there is a fixed probability P_ij, independent of time, such that

P(X_{n+1} = j | X_n = i, X_{n-1} = i_{n-1}, . . . , X(0) = i_0) = P_ij,  n ≥ 0,

where i, j, i_0, i_1, . . . , i_{n-1} ∈ M.
Then this is called a Markov chain process.
In other words, a Markov chain is simply a sequence of random variables that evolve over time, i.e. a sequence of random states S_1, S_2, . . . with the Markov property. Markov chains are stochastic processes characterized by their memoryless property: the probability of the process being in the next state of the system depends only on the current state and not on any of the previous states. This property is known as the Markov property: "The future is independent of the past given the present" [START_REF] Feldman | Applied probability and stochastic processes [END_REF]. In other words, the present state captures all relevant information from the history and completely characterizes the process; once the present state is known, the history can be thrown away, which means that the present state is a sufficient statistic of the future.
A state S_t is Markov if and only if:

P[S_{t+1} | S_t] = P[S_{t+1} | S_1, . . . , S_t]
A state space is the set of all values that a random process can take [START_REF] Tierney | Introduction to general state-space markov chain theory [END_REF]; the elements of a state space are known as states and are a main component in the construction of Markov chain models. The changes between states of the system are known as transitions, and the probabilities associated with the various state changes are known as transition probabilities [START_REF] Sherlaw-Johnson | Estimating a markov transition matrix from observational data [END_REF]. A Markov chain is thus characterized by three pieces of information: a state space, a transition matrix whose entries are the transition probabilities between states, and an initial state or initial distribution across the state space. With these three pieces, along with the Markov property, a Markov chain can be created and can model how a random process will evolve over time. For a Markov state s and successor state s′, the state transition probability is defined by:
P ss ′ = P [S t+1 = s ′ |S t = s]
The state transition matrix P collects the transition probabilities from every state s to every successor state s′:

P = ( P_11 . . . P_1n ; . . . ; P_n1 . . . P_nn )

The following is a simple example of a Markov chain describing a scalper's daily routine, with three states: (i) S1, buy stocks; (ii) S2, sell stocks; and (iii) S3, wait. If we are in S3, Figure 2.7 shows that the probability of moving, for example, from S3 to S2 depends only on S3; it does not depend on how we previously got into S3, i.e. the probability of moving from S3 to S2 is the same whether S3 was entered from S1 or from S2.
Fig. 2.7 Scalper day-life routine Markov chain.
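A minimal simulation of this chain is sketched below; the transition probabilities are hypothetical, since the values shown in Figure 2.7 are not reproduced here.

```python
import numpy as np

# Sketch of the scalper routine as a Markov chain with hypothetical
# transition probabilities (each row sums to 1).
states = ["S1: buy", "S2: sell", "S3: wait"]
P = np.array([[0.1, 0.7, 0.2],    # from S1
              [0.6, 0.1, 0.3],    # from S2
              [0.4, 0.3, 0.3]])   # from S3

rng = np.random.default_rng(42)
state = 2                          # start in S3 (wait)
trajectory = [states[state]]
for _ in range(10):
    state = rng.choice(3, p=P[state])   # next state depends only on the current one
    trajectory.append(states[state])
print(" -> ".join(trajectory))
```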
There are many interesting applications of Markov chains in academic disciplines and industrial fields [START_REF] Douc | Markov chains [END_REF]. For example, Markov chains have been used in Mendelian genetics to model and predict what future generations of a gene will look like. Another example of where Markov chains have been applied is the popular children's board game Chutes and Ladders. In Chutes and Ladders, at each turn, a player resides in a state of the state space (one square on the board), and from there the player has transition probabilities of moving to any other state in the state space. In fact, the transition probabilities are fixed, since they are determined by the roll of a fair die. The probability of moving to the next state is determined only by the current state and not by how the player arrived there, so the game can be modeled as a Markov chain. Markov chains have been applied to areas as disparate as chemistry, statistics, operations research, economics, music, and, of course, finance. The Markov chain is the first necessary step toward the MDP. To build the link between them, we need to briefly introduce the Markov reward process (MRP) and the reward function.
Markov Reward Processes
Let us start with the same scalper daily-life example, but now introducing rewards. As we can see in Figure 2.8, moving to or staying in a state is associated with a reward value R. In the following paragraphs, we will introduce the value of a state (state value) and the value of an action (action value), presented at the end of this section. Let us first introduce the MRP. An MRP is a tuple (S, P, R, γ) with two more elements than the Markov chain (S, P): R is a reward function and γ is a discount factor, γ ∈ [0, 1]. The reward and the discount factor allow us to calculate the return G_t, going deeper into the topic of risk and return introduced at the end of subsection 2.1.2. We can adopt a short-horizon, or "myopic", evaluation or a long-horizon, or "far-sighted", evaluation. The return G_t is the total discounted reward from step t.
G_t = R_{t+1} + γ R_{t+2} + . . . = Σ_{k=0}^{∞} γ^k R_{t+k+1}

• The discount γ ∈ [0, 1] is the present value of future rewards.

• The value of receiving reward R after k + 1 time steps is γ^k R.
• This values immediate reward above delayed reward:
-Close to 0 leads to "myopic" evaluation.
-Close to 1 leads to "far-sighted" evaluation.

Now, let us detail why Markov reward processes are discounted [1]. First of all, human behaviour shows a preference for immediate reward. Second, the discount is useful to avoid infinite returns in cyclic Markov processes. Third, it accounts for the fact that uncertainty about the future may not be fully represented in the model. Fourth, in our case study, where the reward is financial, immediate rewards may earn more interest than delayed rewards. Fifth, it is mathematically convenient to discount rewards. Last but not least, it is still possible to use undiscounted Markov reward processes, i.e. γ = 1.
Let us introduce two sample returns for the scalper daily-routine MRP, starting from S1 = Buy stocks with γ = 1/2 and two different scalper behaviours:

• A lazy scalper:

S1 → S2 → S3 and G_1 = 5 + 2·(1/2) + 1·(1/4) = 6.25

• An active scalper:

S1 → S2 → S1 → S2 → S1 → S2 → S1 → S3 and G_1 = 5 + 5·(1/2) + 5·(1/4) + 5·(1/8) + 5·(1/16) + 5·(1/32) + 2·(1/64) + 1·(1/128) ≈ 9.88
As we can see, entering or leaving S1 or S2 does not always produce exactly the same reward over time. The issue is to understand the long-term value of each state; this is called the value function v(s):

v(s) = E[G_t | S_t = s].

In Figure 2.8, with γ = 1, the value function in state S1 is v(S1) = 5 + 0.3·(-2) + 2 + 1·0.3 = 6.7. In our example this value evolves at every state transition until the end of the day, which is the reason why we compute the state-value function using the Bellman equation (introduced below).
The value function can be decomposed into two parts:
• immediate reward R t+1 ,
• discounted value of the successor state, γ v(S_{t+1}),

and the value function can be formulated as:

v(s) = E[G_t | S_t = s] = E[R_{t+1} + γ v(S_{t+1}) | S_t = s].
Using matrices the Bellman equation can be expressed concisely as:
v = R + γPv,
where v is a column vector with one entry per state.
The Bellman equation is linear and can be solved directly, v = (I - γP)^{-1} R, but this direct solution is only practical for small MRPs. For large MRPs we may use iterative methods such as temporal-difference learning or dynamic programming [START_REF] Sutton | Introduction to Reinforcement Learning [END_REF].
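For a small MRP, the direct solution can be computed in a few lines; the following sketch reuses the hypothetical three-state transition matrix from the Markov chain example together with illustrative per-state expected rewards.

```python
import numpy as np

# Sketch: solving v = R + γPv directly for a small, hypothetical 3-state MRP.
P = np.array([[0.1, 0.7, 0.2],
              [0.6, 0.1, 0.3],
              [0.4, 0.3, 0.3]])          # transition matrix (hypothetical)
R = np.array([5.0, 2.0, 1.0])            # expected immediate reward per state (illustrative)
gamma = 0.5

v = np.linalg.solve(np.eye(3) - gamma * P, R)   # v = (I - γP)^{-1} R
print(v)
```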
Markov Decision Processes Formulation
To go straight to the point, we describe the MDP in a few points by comparing it with an MRP.

First, an MDP is a deterministic system satisfying the Markov property [START_REF] Shi | Does the markov decision process fit the data: Testing for the markov property in sequential decision making [END_REF] when, for each state and action, we specify a new state; it is a controlled stochastic process when, for each state and action, we specify a probability distribution over next states and assign rewards to state transitions [START_REF] Bertsekas | Dynamic Programming: Determinist. and Stochast. Models [END_REF][START_REF] Puterman | Markov decision processes: discrete stochastic dynamic programming [END_REF].

Second, it is an environment or framework in which all states are Markov. Third, an MDP is a tuple (S, A, P, R, γ), that is, an MRP (S, P, R, γ) with decisions (actions), where:
• S is a finite set of states or a state space in which the process evolves.
• A is a finite set of actions that controls the dynamics of the state.
• P is a state transition probability matrix.
• P^a_{ss'} = P[S_{t+1} = s' | S_t = s, A_t = a].

• R is a reward function on transitions between states, R^a_s = E[R_{t+1} | S_t = s, A_t = a].

• γ is a discount factor, γ ∈ [0, 1].
A distribution over actions given states is a policy π. The policy plays an important role because it fully defines the behaviour of the agent. Obviously, MDP policies depend on the current state not on the past (Markov property). Policies are time-independent (stationary),
π(a|s) = P [A t = a|S t = s].
So, given an MDP, M = ⟨ S, A, P, R, γ⟩ and a policy π, the state sequence S 1 , S 2 , • • • is a Markov chain ⟨S, P π ⟩ and the state reward sequence is an MRP (S,P π ,R π ,γ) where:
P^π_{ss'} = Σ_{a∈A} π(a|s) P^a_{ss'},   R^π_s = Σ_{a∈A} π(a|s) R^a_s,
As in the MRP, we want to understand the value of a state in an MDP. In an MDP, the state-value function v_π(s) is the expected return starting from state s and then following policy π:

v_π(s) = E_π[G_t | S_t = s].

The MDP additionally has an action-value function q_π(s, a), which is the expected return starting from state s, taking action a, and then following policy π:

q_π(s, a) = E_π[G_t | S_t = s, A_t = a].
As for the MRP, we can use the Bellman equation. The state-value function can again be decomposed into the immediate reward plus the discounted value of the successor state:

v_π(s) = E_π[R_{t+1} + γ v_π(S_{t+1}) | S_t = s].

Another difference between the MRP and the MDP is the action-value function. It can be decomposed similarly:

q_π(s, a) = E_π[R_{t+1} + γ q_π(S_{t+1}, A_{t+1}) | S_t = s, A_t = a].
In the case of a large-scale simulation-based MDP, the Bellman equation [START_REF] Yu | On generalized bellman equations and temporal-difference learning[END_REF] allows us to determine the best possible policy without generating a probability transition matrix, as is the case in the traditional approach.
Find the "best" solution for MDP is to find the "best path" through the Markov chain. The optimal state-value function v(s) is the maximum value function over all policies v(s) = max π vπ(s). This is in not necessarily the best policy, but it is the way to get the maximum reward from the system.
The optimal action-value function q*(s, a) is the maximum action-value function over all policies [START_REF] Sato | Model-free reinforcement learning for financial portfolios: a brief survey [END_REF], namely the maximum amount of reward obtainable starting in state s, taking action a, and behaving optimally thereafter. The optimal value function therefore specifies the best possible performance in the MDP:

q*(s, a) = max_π q_π(s, a).

If we can determine q*, we have found the optimal way to behave in the MDP; an MDP is "solved" when we know the optimal value function. Even so, a solved MDP does not by itself tell the agent how to act step by step in the system. The optimal value functions are recursively related by the Bellman optimality equations:

v*(s) = max_a q*(s, a).
The Bellman optimality equation may be solved with temporal-difference methods such as the Q-learning algorithm (introduced below).
Conclusion
This paragraph presented the Markov chain, the Markov reward process, the Markov decision process, and the connection between the three, in order to address the questions of the reward, the discount factor, the value of a state/action, and the use of the Bellman equation to compute the state-action function. After this brief introduction to Markov decision processes, we introduce RL, which is the way our agent learns in our case study. Indeed, an MDP formally describes an environment for RL in which the environment is fully observable. The MDP is a key formalism for RL, since it allows us to model the way the agent learns from an environment. In the next subsection, we therefore introduce RL, our artificial intelligence learning approach, aimed at learning policies that maximize the expected cumulative discounted reward in an MDP.
Reinforcement Learning
Introduction
As for MDPs, in this paragraph we present the way we approached reinforcement learning and all the concepts we exploit in the case study, following our learning/exploitation decision-making path. We introduce the methods, functions, and algorithm that help readers understand the core of our work. We briefly address the following methods: temporal-difference, model-based, model-free, on-policy, and off-policy. We also address some functions: the reward, the optimal value, and the optimal action-value functions. We introduce in more detail the Q-learning algorithm, the very simple idea behind the Bellman equation, and its role in dynamic programming.
Artificial Intelligence Approaches
In artificial intelligence, there are three approaches to solving a decision-making problem, depending on the availability of learning data, as presented in Figure 2.9.

Fig. 2.9 Three types of Machine Learning: supervised/unsupervised/reinforcement learning with their application domain.
The supervised approach uses labeled features to train algorithms to classify data or predict outcomes accurately; classification and regression are the two corresponding task types, each evaluated through a loss function that measures how well the algorithm models the given labeled features. The unsupervised approach relies on algorithms that analyze and group unlabeled data sets (features) for clustering, association, and dimensionality reduction, i.e. for finding patterns in the data. The third approach is reinforcement learning (RL), which resembles the way humans learn: it maximizes the expected cumulative discounted reward in order to find the best policies. In fact, many RL algorithms are inspired by biological learning systems. RL focuses on learning how good it is for an agent to be in a state over the long run, called the value of a state (as in MRP and MDP), or how good it is to take an action in a given state over the long run, called the value of an action.
In RL, there are two main approaches to estimating state-action values: model-based and model-free. Let us briefly introduce them. First, we specify what a model is: the model designates the transition function and the reward function of an MDP. The model simulates the dynamics of the environment and allows inference of how the environment will behave. When the sequential decision problem is modeled as a Markov decision process (MDP) [15], the agent's policy can be represented as a mapping from each state it can encounter to a probability distribution over the available actions. In some cases, the agent can use its experience of interacting with the environment to estimate an MDP model and then compute an optimal policy using off-line planning techniques such as dynamic programming [START_REF] Hammer | Dynamic programming [END_REF]. When learning a model is not feasible, the agent can still learn an optimal policy using temporal-difference (TD) methods, such as Q-learning [START_REF] Sutton | Learning to predict by the methods of temporal differences [END_REF] (detailed below), one of the most widely used model-free methods. Q-learning was developed by Christopher John Cornish Hellaby Watkins [START_REF] Watkins | Learning from delayed rewards [END_REF]. According to Watkins, 'it provides agents with the ability to learn to act optimally in Markovian domains by experiencing the consequences of actions, without requiring them to build domain maps.'
Model-Based and Model-Free Methods
As mentioned above, model-based methods require a model of the environment, as in dynamic programming: they learn the transition and reward models from interaction with the environment and then use the learned model to compute the optimal policy by value iteration. The model of the environment is learned from experience, and the value functions are updated by value iteration on the learned model [START_REF] Dewanto | Averagereward model-free reinforcement learning: a systematic review and literature mapping [END_REF]. By learning a model of the environment, an agent can use it to predict how the environment will respond to its actions, i.e. to predict the next state and the next reward given a state and an action. If an agent learns an accurate model, it can obtain an optimal policy based on the model without additional experience in the environment. Model-based methods are more sample-efficient than model-free methods, but extensive exploration is often necessary to learn an accurate model of the environment [START_REF] Hester | Generalized model learning for reinforcement learning in factored domains [END_REF]. Since a model mimics the behavior of the environment, it allows us to estimate how the environment will change in response to what the agent does. Moreover, learning a model allows the agent to perform targeted exploration: if some states are not visited enough, or are too uncertain for a model to be learned correctly, this lack of information drives the agent to explore those states further. Optimistic value initialization is therefore commonly used as the exploration method. If the agent takes an action a in state s, the action value Q(s, a) is updated using the Bellman equation.
Model-free methods improve the value function directly from observed experience and do not rely on transition and reward models. Value functions are learned through trial and error. These methods are simple and can have advantages when a problem is complex and learning an accurate model is difficult. However, model-free methods, such as temporal-difference learning (detailed below), learn without a model and require more samples than model-based methods to learn value functions [START_REF] Huang | Model-based or model-free, a review of approaches in reinforcement learning [END_REF]. In model-free methods in particular, where the improvement of the value function comes directly from the data the agent observes through its own interactions with the environment, the quality of the collected data is crucial, even more so than in supervised learning, where the data set is fixed. This dependence can lead to a vicious circle if the agent collects poor-quality data. For example, referring to our case study, if the agent had collected portfolios with no associated rewards, it would not improve and would continue to accumulate bad portfolios.
Temporal-Difference Methods
Just before detailing the temporal difference (TD), we need to briefly introduce the on-policy and off-policy methods that we will use in the TD presentation. In on-policy methods, the agent learns the best policy and uses it to make decisions. Off-policy methods separate this into two policies: the agent learns a policy different from the one currently generating its behavior. The policy about which we are learning is called the target policy, and the policy used to generate behavior is called the behavior policy; the name comes from the fact that learning is from experience 'off' the target policy.
TD learning is an unsupervised technique to predict a variable's expected value in a sequence of states. TD uses a mathematical trick to replace complex reasoning about the future with a simple learning procedure that can produce the same results. Instead of calculating the total future reward, TD tries to predict the combination of immediate reward and its own prediction of rewards at the next moment in time. Then, when the next moment arrives with new information, the new prediction is compared to what it was expected to be. If they are different, the algorithm calculates how different they are and uses this 'temporal difference' to adjust the old prediction to the new prediction. By always striving to bring these numbers closer together at every moment in time, matching expectations with reality, the entire chain of predictions gradually becomes more accurate. TD methods are guaranteed to converge in the limit to optimal action-value function, from which an optimal policy can be easily derived.
An example of a temporal-difference method is the off-policy Q-learning algorithm [START_REF] Watkins | Q-learning [END_REF], in which the behavior policy, used to control the agent during learning, is different from the estimation policy, whose value is being learned. The advantage of this approach is that the agent can employ an exploratory behavior policy to ensure that it gathers sufficiently diverse data while still learning how to behave once exploration is no longer necessary. However, an on-policy approach, in which the behavior and estimation policies are identical, also has important advantages. In particular, it has stronger convergence guarantees when combined with function approximation, since off-policy approaches can diverge in that case [START_REF] Baird | Residual algorithms: Reinforcement learning with function approximation [END_REF][START_REF] Boyan | Generalization in reinforcement learning: Safely approximating the value function [END_REF][START_REF] Gordon | Stable function approximation in dynamic programming [END_REF], and it has a potential advantage over off-policy methods in its online performance, since the estimation policy, which is iteratively improved, is also the policy used to control behavior. By annealing exploration over time, on-policy methods can discover, in the limit, the same policies as off-policy approaches.
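A minimal sketch of the tabular Q-learning update is given below, with an ε-greedy behavior policy and a greedy target policy; the state and action space sizes and the hyperparameters are purely illustrative.

```python
import numpy as np

# Sketch of tabular Q-learning: the behavior policy is ε-greedy,
# while the learned (target) policy is greedy with respect to Q.
n_states, n_actions = 3, 3
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.2     # learning rate, discount, exploration rate
rng = np.random.default_rng(0)

def choose_action(state):
    # Behavior policy: explore with probability ε, otherwise act greedily.
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))
    return int(np.argmax(Q[state]))

def q_update(s, a, r, s_next):
    # Off-policy target uses the greedy value of the next state.
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])
```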
Learning Agent Algorithm
In RL, an agent learns from the ongoing interaction with an environment to achieve an explicit goal. Such an interaction produces a lot of information on the consequences of the behavior, which helps to improve its performance. Whenever the learning agent takes an action, the environment responds to its action by giving a reward and presenting a new state. The objective of the agent is to maximize the total amount of reward that it receives. Through experience in its environment, it discovers which actions stochastically produce the greatest reward and uses this experience to improve its performance for subsequent trials. In other words, the agent learns how to behave to achieve the goals [START_REF] Odell | Modeling agents and their environment[END_REF]. As we already mentioned, RL focuses on learning how good it is for the agent to be in a state over the long run, called a value of state, or how good it is to take an action in a given state over the long run, called a value of action. As shown in Figure 2.10, RL has different parts: Agent, Agent's action, Environment within which an agent takes actions, State or observation and rewards that an agent obtains as a consequence of an action it takes.
A reward is given immediately by the 'critic' in the environment as a response to the action of the agent, and a learning agent uses the reward to evaluate the value of a state or action. The best action is selected according to the values of the states or actions, because the highest value brings about the greatest amount of reward in the long run. The learning agent can then maximize the cumulative reward it receives. A model represents the dynamics of the environment, and a learning agent learns value functions with (model-based) or without (model-free) a model. These value functions can be represented in tabular form, but, in large and complicated problems, tabular forms cannot efficiently store all value functions. In this case, the functions must be approximated using a parameterized function representation.

Fig. 2.10 Interaction between the Agent and the Environment in RL. The Environment sends rewards and new states in response to actions from its Agent. Adapted from Sutton [START_REF] Sutton | Reinforcement Learning: An Introduction [END_REF].
Besides the state and action value functions, the reward function is also crucial. In the literature, there is a debate about the importance of expert knowledge in defining the reward function. In fact, to achieve the desired behavior, expert knowledge is often required to design an adequate reward function, even if expert knowledge is incomplete and sometimes biased [START_REF] Wang | Meta reinforcement learning with generative adversarial reward from expert knowledge[END_REF]. At this point, we may briefly distinguish between reward engineering and feature engineering. If we take into account the distinction between RL and supervised learning, where we know from the start what we want to optimize, it may be considered true that we do not know a priori the best solution to an RL problem. Thus, defining a reward function from expert knowledge of the study case field can bias the agent toward the expert's expectations and reduce the possibility of developing a solution as general as possible, as in a black-box approach [START_REF] Kanade | Sleeping experts and bandits with stochastic action availability and adversarial rewards[END_REF]. However, the definition of a reward function should not be compared with feature engineering in supervised learning [START_REF] Khurana | Feature engineering for predictive modeling using reinforcement learning[END_REF]. Instead, a change in the reward function is more similar to a change in the objective function [START_REF] Wang | Planning with general objective functions: Going beyond total rewards[END_REF]. This does not mean that feature engineering is not used in RL [START_REF] Zhang | Automatic feature engineering by deep reinforcement learning[END_REF]: in RL, feature engineering concerns how the state and action spaces are represented.
There are at least two ways to define the notion of reward in an RL problem: the representation by goal or by penalty. In the goal-reward representation, the agent is rewarded only when it reaches a final state (goal state). This representation was used by [START_REF] Koenig | The effect of representation and knowledge on goal-directed exploration with reinforcement-learning algorithms[END_REF]: r(s, a) = 1 if succ(s, a) ∈ G, and 0 otherwise. In the penalty representation, the agent is penalized for each action it takes: r(s, a) = -1. The penalty representation has a denser reward structure than the goal-reward one, in which the agent mostly receives zero rewards unless the goals are numerous.
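To make the two representations concrete, here is a minimal Python sketch; the helper succ and the set goal_states are illustrative assumptions standing in for the environment model:

```python
def goal_reward(state, action, succ, goal_states):
    """Goal-reward representation: reward 1 only when the action leads to a goal state."""
    return 1.0 if succ(state, action) in goal_states else 0.0

def penalty_reward(state, action):
    """Penalty representation: every action costs -1, giving a denser reward signal."""
    return -1.0
```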
In the next section, we introduce the Bellman equation, which is probably one of the most widely used equations in RL and dynamic programming.
Bellman Equation
The idea behind the Bellman equation is very simple: we take the sequence of rewards from the first time step we consider and break it into two parts. Namely, the value function can be decomposed into the immediate reward R t+1 and the discounted value of the successor state, γV(S t+1). The Bellman equation (Equation 2.1) states that the expected long-term reward for a given action is equal to the immediate reward of the current action plus the expected reward of the best future action taken in the next state. This means that the value Q of a state-action pair (s, a) should represent the current reward r plus the maximum discounted future reward expected for the next state s′. The discount factor γ makes it possible to weight future values of Q fully (γ = 1) or only over the medium term (γ < 1). According to the Temporal Difference [START_REF] Sutton | Introduction to Reinforcement Learning[END_REF] learning technique, the matrix Q can be updated as follows:
Q(s, a) ← Q(s, a) + α [r + γ max a′ Q(s′, a′) − Q(s, a)]     (2.1)
with s′ ∈ S, a′ ∈ A, γ ∈ [0, 1] the discount factor, and α ∈ [0, 1] the learning rate. The learning rate determines to what extent newly acquired information overrides old information. The discount factor γ determines the importance of future rewards: a factor of 0 makes the agent myopic, considering only current rewards, while a factor close to 1 makes it also take the more distant rewards into account.
The variable Q determines how much weight future rewards carry compared to the current reward. In practice, it is a matrix with as many rows as there are states and as many columns as there are actions. The Q-Learning algorithm is used to determine, by iteration, the optimal values of Q in order to find the best possible policy. The maximization selects, among all possible actions, only the action a for which Q(s, a) has the highest value.
Q-Learning Algorithm -The Brain of the Agent
There is a multitude of algorithms for solving the Bellman equation, which can be classified according to the criteria presented above. The Sutton and Barto book [START_REF] Sutton | Reinforcement Learning: An Introduction[END_REF] is probably one of the most cited reference sources for these algorithms. SARSA and Q-learning are two of the most widely used temporal-difference algorithms.
Why did we decide to use a Q-learning algorithm in our agent instead of a state-action-reward-state-action (SARSA) algorithm? The difference may be considered subtle, but first we need to briefly stress the difference between the two main approaches to learning action values: on-policy and off-policy.
This question relates to what is still an important challenge of RL, the exploration/exploitation dilemma. In on-policy methods, the agent has to learn the optimal policy while not acting optimally, that is, while exploring all actions. Off-policy methods, on the other hand, separate this into two policies: the agent learns a policy different from the one that is currently generating its behavior. The policy about which we are learning is called the target policy, and the policy used to generate behavior is called the behavior policy [START_REF] Kumar | Conservative q-learning for offline reinforcement learning[END_REF]; learning is thus from experience 'off' the target policy.
The difference is very subtle. Q-learning, the off-policy algorithm, when propagating the reward from the next state back to the current state-action pair (s, a), takes the maximum possible value over the actions of the next state s′ and ignores whatever policy we are using. SARSA, which is on-policy, follows the current policy (ε-greedy), selects the next action a′, and propagates back the value corresponding to that exact a′. To reiterate, Q-learning considers the best possible case in the next state, while SARSA considers the value obtained if we follow the current policy in the next state. Therefore, if our policy were greedy, SARSA and Q-learning would be the same. But we are using ε-greedy here, so there is a slight difference. The greedy policy is characterized by an agent always choosing the action with the maximum expected return. The ε-greedy policy is characterized by an agent that takes the greedy action with probability 1 − ε and a random action with probability ε. This approach ensures that all of the action space is explored.
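The distinction can be summarized in a few lines of Python. In this hedged sketch, Q is a NumPy array indexed by (state, action); how the next action next_action is chosen (for instance by an ε-greedy policy) is left to the caller:

```python
import numpy as np

def q_learning_target(Q, reward, next_state, gamma):
    # Off-policy backup: use the best action in the next state,
    # whatever the behavior (e.g. epsilon-greedy) policy would actually do.
    return reward + gamma * np.max(Q[next_state])

def sarsa_target(Q, reward, next_state, next_action, gamma):
    # On-policy backup: use the action actually selected by the current policy
    # in the next state (next_action), e.g. an epsilon-greedy choice.
    return reward + gamma * Q[next_state, next_action]
```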
The Q-learning update rule [15] is just a special case of the expected SARSA update rule [START_REF] Sutton | Learning to predict by the methods of temporal differences[END_REF] for the case where the estimation policy is greedy.
Another practical difference is that SARSA looks up the value of the next action selected by the policy, while Q-learning looks up the maximum value over the next actions. From an algorithmic standpoint, however, the backup target can be a mean, a max, or the best action, depending on how we choose to implement it.
Both methods have advantages. We decided to use Q-Learning, which is off-policy, because we do not need the advantages of the on-policy method, but we do need the algorithm to converge to the optimal Q-values as long as exploration occurs. Furthermore, in [START_REF] Van Seijen | A theoretical and empirical analysis of expected sarsa[END_REF] the authors show that Expected SARSA and Q-learning can outperform SARSA.
At this point, we briefly introduce how Q-learning is implemented [START_REF] Dann | Guarantees for epsilon-greedy reinforcement learning with function approximation[END_REF]:
1. Initialize the Q-Table. The Q-Table is initialized as an m×n matrix with all its values set to zero, where m is the size of the state space and n is the size of the action space.
2. Define the ε-Greedy Policy. Depending on the selected epsilon parameter, the ε-greedy policy selects either the action with the highest Q-value for the given state or a random action. An epsilon of 0.50 means that 50% of the time an action is chosen randomly, while an epsilon of 1 means that the action is always chosen randomly (100% of the time). In theory, the value of ε can vary during training between exploration and exploitation phases. However, for our study case, training is carried out with a constant epsilon value.
3. Define the Execution of an Episode. For each episode, the agent completes as many time steps as necessary to reach a final state. After executing each action, the agent observes the new state reached and the reward obtained, information that is used to update the Q-values of its Q-table. This process is repeated for each time step until the optimal Q-values are obtained. The Q-value is updated by applying the Bellman equation.
4. Train the Agent. At this point, only the algorithm hyperparameters must be defined: the learning rate α, the discount factor γ, and ε. Additionally, the number of episodes that the agent must perform for training to be considered complete is specified. Training execution consists of running an execute_episode() function for each training episode. Each episode updates the Q-Table until optimal values are reached.
5. Evaluate the Agent. To assess the agent, the rewards obtained in each training episode, visual representations of the trained agent performing an episode, and metrics from several training runs are used. The reward obtained during training shows the convergence of the rewards towards optimal values. In our study case, since the reward of each time step is 0 except for the goal state, convergence can be observed directly from the episode rewards.
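To make these five steps concrete, the following self-contained Python sketch runs tabular Q-learning on a toy goal-reward chain. The environment, the hyperparameter values, and the helper names (step, epsilon_greedy, execute_episode) are illustrative assumptions, not the exact implementation used later in this work:

```python
import numpy as np

# Hypothetical toy setting: a chain of 6 states; action 1 moves right, action 0 moves left.
# Reaching the last state (the goal) gives reward 1; every other transition gives 0.
n_states, n_actions = 6, 2
alpha, gamma, epsilon = 0.1, 0.9, 0.5

Q = np.zeros((n_states, n_actions))          # step 1: initialize the Q-Table

def step(state, action):
    """Toy goal-reward environment (an assumption for illustration only)."""
    next_state = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward, next_state == n_states - 1

def epsilon_greedy(state):
    """Step 2: choose a random action with probability epsilon, the greedy one otherwise."""
    if np.random.rand() < epsilon:
        return np.random.randint(n_actions)
    return int(np.argmax(Q[state]))

def execute_episode():
    """Step 3: run one episode and update Q with the Bellman equation (Equation 2.1)."""
    state, done, episode_reward = 0, False, 0.0
    while not done:
        action = epsilon_greedy(state)
        next_state, reward, done = step(state, action)
        Q[state, action] += alpha * (reward + gamma * np.max(Q[next_state]) - Q[state, action])
        state, episode_reward = next_state, episode_reward + reward
    return episode_reward

# Step 4: training loop; step 5: the list of episode rewards can be inspected for convergence.
rewards = [execute_episode() for _ in range(500)]
```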
As briefly introduced above, the mind of the agent in Q-learning is a table whose rows are the states (or observations) of the agent from the environment and whose columns are the actions to take. Each cell of the table is filled with a value called the Q-value, which is the value an action brings considering the state the agent is in. This table is called the Q-table, and it is effectively the brain of the agent. What is the Q-value then? Basically, the Q-value is the reward obtained from the current state plus the maximum Q-value of the next state. The agent retrieves the reward and the next state from its experience in memory, adds the reward to the highest Q-value found in the row of the next state in the Q-table, and stores the result in the row of the current state and the column of the action, both obtained from the experience in memory. In our study case, all states except one bring the agent a reward equal to 0. Figure 2.11 shows the Q-Learning algorithm of [START_REF] Sutton | Introduction to Reinforcement Learning[END_REF]. In line 3, the start state is defined before the loop (line 4) in charge of trying to reach the goal state (line 9). For each step, the Q matrix is updated according to Equation 2.1 by considering a new action a, a reward r, and a state s′. When the goal state is reached, a new episode starts. Regarding the convergence of the two algorithms:
• SARSA will learn the optimal ε-greedy policy; that is, the Q-value function will converge to an optimal Q-value function, but only within the space of ε-greedy policies (as long as each state-action pair is visited infinitely often). In the limit of ε decreasing to 0, we expect SARSA to converge to the globally optimal policy. As Sutton and Barto note in [START_REF] Sutton | Reinforcement Learning: An Introduction[END_REF], the convergence properties of the SARSA algorithm depend on the nature of the dependence of the policy on Q; for example, one could use ε-greedy or ε-soft policies. According to Satinder Singh (personal communication), SARSA converges with probability 1 to an optimal policy and action-value function as long as all state-action pairs are visited an infinite number of times and the policy converges in the limit to the greedy policy (which can be arranged, for example, with ε-greedy policies by setting ε = 1/t), but this result has not yet been published in the literature.
• Basically, and as pointed out in [START_REF] Even-Dar | Learning rates for q-learning[END_REF], the Q-learning algorithm converges in polynomial time depending on the value of the learning rates. However, if the discount factor is close to or equal to 1, the value of Q may diverge [START_REF] Russell | Artificial Intelligence: A Modern Approach[END_REF].
So, for the reasons described above, we use Q-learning, one of the algorithms tailored for this domain, which has been used to solve many MDP problems. To support our decision, let us briefly present a simple example of a Q-learning problem in an MDP environment.
Our study case falls into the Discrete Actions - Single Process Automaton category. In such cases, Q-learning, its neural-network version deep Q-learning (DQN) with its extensions, or actor-critic with experience replay (ACER) are the recommended algorithms [START_REF] Sutton | Reinforcement Learning: An Introduction[END_REF].
DQN is usually slower to train (with respect to wall-clock time) but is among the most sample-efficient approaches (because of its replay buffer). In addition, our environment is a goal-reward environment, and the choice of algorithm depends on our action space. The difference between Q-learning and DQN lies in the brain of the agent: in Q-learning the agent's brain is the Q-table, whereas in DQN it is a deep neural network. We implement our research with the Q-learning algorithm, but our approach is not restrictive and can also be implemented with DQN.
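To illustrate this difference in the agent's 'brain', the following sketch replaces the Q-table lookup with a small neural network mapping a state vector to one Q-value per action. PyTorch and the layer sizes are used here only as illustrative assumptions; they are not the libraries or dimensions used elsewhere in this work:

```python
import torch
import torch.nn as nn

state_dim, n_actions = 4, 2          # illustrative sizes

# In tabular Q-learning the agent's brain is a matrix Q[state, action];
# in DQN it is a function approximator mapping a state vector to Q-values.
q_network = nn.Sequential(
    nn.Linear(state_dim, 64),
    nn.ReLU(),
    nn.Linear(64, n_actions),
)

state = torch.zeros(1, state_dim)        # a dummy state vector
q_values = q_network(state)              # one estimated Q-value per action
greedy_action = int(q_values.argmax())   # greedy choice, exactly as with the Q-table
```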
Two other techniques are also essential for training a DQN:
• Experience Replay: Since the training samples in a typical RL setup are highly correlated and less data-efficient, convergence of the network is more difficult. One way to solve this sample distribution problem is to adopt experience replay: the sampled transitions are stored and then randomly drawn from the 'transition pool' to update the knowledge.
• Separate Target Network: The target Q network has the same structure as the value estimator. Every C steps, the target network is synchronized with the online network. The fluctuations therefore become less severe, resulting in more stable training.
Although DQN has achieved enormous success in higher-dimensional problems (such as Atari games), its action space remains discrete and low-dimensional. However, in many tasks of interest, especially physical control tasks, the action space is continuous. If the action space is discretized too finely, it becomes too large. For example, suppose a system has 10 degrees of freedom and each degree is divided into 4 parts: we end up with 4^10 = 1048576 actions, and it is extremely difficult to converge over such an action space. The deep deterministic policy gradient (DDPG) [START_REF] Silver | Deterministic policy gradient algorithms[END_REF] borrows the ideas of experience replay and the separate target network from DQN. A problem with DDPG is that it rarely performs enough exploration of the actions. A solution is to add noise on the parameter space or on the action space.
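As a simple illustration of this last point, exploration in a continuous action space can be obtained by perturbing the deterministic action, for instance with Gaussian noise (a hedged sketch; DDPG implementations often use other noise processes such as Ornstein-Uhlenbeck):

```python
import numpy as np

def noisy_action(policy_action, sigma=0.1, low=-1.0, high=1.0):
    """Add Gaussian noise on the action space and clip to the valid action range."""
    noise = np.random.normal(0.0, sigma, size=np.shape(policy_action))
    return np.clip(policy_action + noise, low, high)
```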
Explainable Artificial Intelligence in Reinforcement Learning
AI applications are blossoming in today's world, and the weight of the decisions delegated to AI is at an all-time high. Given this large number of decisions and the scrutiny they must withstand, it is extremely important to be clear about why a particular decision was made. Explainable artificial intelligence (XAI), that is, the development of more transparent and interpretable AI models, has therefore gained greater traction over the past few years.
Why is explainability so crucial? First, explainability is critical to building confidence in disruptive technologies for decision makers, academics, and the public, and it has been identified as a key component in increasing user participation. There is thus a psychology-related reason: "if users do not trust a model or a prediction, they will not use it" [START_REF] Ribeiro | why should i trust you?" explaining the predictions of any classifier[END_REF]. Explainability is key for decision makers interacting with AI to understand its conclusions and recommendations, and trust is an essential prerequisite for the use of a model or system [START_REF] Israelsen | i can assure you... that it's going to be all right..." a definition, case for, and survey of algorithmic assurances in human-autonomy trust relationships[END_REF]. There are situations where decision makers may not have full access to the decision-making process that an AI follows. Consider, for example, a financial investment algorithm in which the AI offers recommendations for portfolio management. These recommendations need to be transparent, and the decisions need to be justifiable: decision makers should be able to explain adequately why these recommendations are made, what data are used to make them, and the reasoning behind them. Transparency also justifies the decisions of the system and helps make them fair and ethical; even trust does not guarantee that a decision is based on reasonably learned data. Therefore, decision-making processes must be reviewable, especially if the AI is working with highly uncertain data. Regarding reviewability, there is also a legal element to be taken into account: the EU general data protection regulation (GDPR) [START_REF]European commission, parliament: Regulation (eu)[END_REF], which came into effect in May 2018, aims to ensure a 'right to explanation' [START_REF] Goodman | European union regulations on algorithmic decision-making and a "right to explanation[END_REF] regarding automated decision making and profiling. It states that "[...] such processing should be subject to suitable safeguards, which should include [...] the right to obtain human intervention [...] and an explanation of the decision reached after such assessment" [START_REF]European commission, parliament: Regulation (eu)[END_REF]. Furthermore, the European Commission established an AI strategy with transparency and accountability as important principles to be respected [START_REF] Commission | The european commission: Communication from the commission to the european parliament, the european council, the council, the european economic and social committee and the committee of the regions[END_REF], and its Guidelines on trustworthy AI [START_REF] Commission | The european commission: Independent high-level expert group on artificial intelligence set up by the european commission[END_REF] state seven key requirements, transparency and accountability being two of them.
The second block of reasons is that AI technologies have become an important part of almost all cyber-physical systems (CPSs) domains, including systems in which all behaviors originate from machine learning, including RL. In this second block of reasons, we can include the aim for increased efficiency and the need to accommodate the volatile parts of today's critical businesses, such as portfolio management or Smart Grids with a high share of volatile energy sources. Over time, AI technologies expanded from being an added input to an otherwise thoroughly defined control system to increasing the state of awareness of CPSs; AlphaGo is probably the most widely known representative of the latter category [START_REF] Silver | Mastering the game of go without human knowledge[END_REF]. Moreover, despite the increasing efficiency and versatility of AI, its incomprehensibility reduces its usefulness, since "incomprehensible decision making can still be effective, but its effectiveness does not mean that it cannot be faulty" [START_REF] Lee | Complementary reinforcement learning towards explainable agents[END_REF]. We also have to bear in mind that, since AI nowadays can act increasingly autonomously, explaining and justifying its decisions is more crucial than ever, especially in the domain of RL, where an agent learns by itself, without human interaction. XAI methods can be categorized along two factors: first, based on when the information is extracted, a method can be intrinsic or post hoc; second, its scope can be global or local. Global and local interpretability refer to the scope of an explanation: global models explain the entire behavior of the model, while local models explain specific decisions. Global models try to explain the entire logic of a model by inspecting its structure [START_REF] Neto | Explainable matrix-visualization for global and local interpretability of random forest classification ensembles[END_REF]. Local explanations attempt to answer the question: 'Why did the model make a certain prediction/decision for an instance/for a group of instances?' [2]. They also try to identify the contribution of each input feature towards a specific output [START_REF] Du | Techniques for interpretable machine learning[END_REF]. Furthermore, global interpretability techniques lead users to trust a model, while local techniques lead them to trust a prediction [START_REF] Du | Techniques for interpretable machine learning[END_REF]. Post hoc versus intrinsic interpretability depends on the time when the explanation is extracted or generated: an intrinsic model is an ML model that is designed to be inherently interpretable or self-explanatory at training time by constraining the model complexity, for example, decision trees [START_REF] Du | Techniques for interpretable machine learning[END_REF].
Fig. 2.12 XAI taxonomy in RL (adapted from [2]).
In contrast, post hoc interpretability is achieved by analyzing the model after training by creating a second, simpler model, to provide explanations for the original model [START_REF] Du | Techniques for interpretable machine learning[END_REF], such as surrogate models or saliency maps [2].
The two most representative post hoc models in the literature are LIME [START_REF] Ribeiro | why should i trust you?" explaining the predictions of any classifier[END_REF] and SHAP [START_REF] Lundberg | A unified approach to interpreting model predictions[END_REF], and they are based on two completely different mechanisms. LIME perturbs the input features to build a local, interpretable surrogate model around the prediction (typically a sparse linear model), whereas SHAP is based on game theory: predictions are explained by assuming that each feature value of the instance is a 'player' in a game where the prediction is the payout. SHAP uses Shapley values, a method for fairly distributing the 'payout' among the different features [START_REF] Molnar | Interpretable machine learning[END_REF].
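As an illustration of how these post hoc tools are typically invoked, the following hedged sketch builds a small synthetic regression model and explains it with SHAP and LIME; the data, model, and parameter choices are assumptions, and the exact APIs may differ between library versions:

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor
from lime.lime_tabular import LimeTabularExplainer

# Tiny synthetic tabular problem (purely illustrative data and model).
X_train = np.random.rand(200, 4)
y_train = 2.0 * X_train[:, 0] + X_train[:, 1]
model = RandomForestRegressor(n_estimators=20).fit(X_train, y_train)

# SHAP: game-theoretic attribution of predictions to each feature.
shap_values = shap.Explainer(model, X_train)(X_train[:10])

# LIME: perturb one instance and fit a local, interpretable surrogate model around it.
lime_explainer = LimeTabularExplainer(X_train, mode="regression")
explanation = lime_explainer.explain_instance(X_train[0], model.predict, num_features=4)
```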
Just like the models themselves, these interpretability models also suffer from a transparency-accuracy trade-off; intrinsic models usually offer accurate explanations, but, due to their simplicity, their prediction performance suffers. In contrast, post hoc interpretability models usually keep the accuracy of the original model intact, but are harder to derive satisfying and simple explanations from [START_REF] Du | Techniques for interpretable machine learning[END_REF].
Another distinction, which usually coincides with the classification into intrinsic and post hoc interpretability, is the model-specific or model-agnostic classification. Techniques are model-specific if they are limited to a specific model or class of models [START_REF] Molnar | Interpretable machine learning[END_REF], and are model-agnostic if they can be used on any model [START_REF] Molnar | Interpretable machine learning[END_REF]. As you can also see in Figure 2.12, intrinsic models are model-specific, while post hoc interpretability models are usually model-agnostic.
Conclusion
This section allowed us to address methods such as temporal difference, model-based, model-free, on-policy, and off-policy, which will be necessary for our study case and for future work. We also addressed some crucial notions such as the reward, the reward function, the optimal value, and the optimal action value, which we will exploit in our study case. We also introduced Q-Learning in detail, the reasons why we preferred Q-Learning to the SARSA algorithm, and the very simple idea behind the Bellman equation and its role in dynamic programming. Finally, we detailed explainable artificial intelligence in RL, which will be a useful concept for comparing DEVS decision-making models with the efficient semivariance model and the combination of DEVS and LSTM.
In the next section, let us introduce our decision-making formalism, which is the discrete event system specification (DEVS).
Discrete-Event System Specification Formalism
When one wants to represent a complex system, it is customary to define a model. The model makes it possible to formalize the behavior of a system by specifying a set of rules, most often using mathematics. Modeling a system makes it manipulable by humans, who then have a model with which they can experiment ad infinitum. The advantage of owning a model is that it can be simulated. These simulations make it possible to put the system under experimental conditions in order to observe its behavior. A valid model is a model which, when simulated, faithfully reproduces the behavior of the system it represents.
Modeling a system requires the use of a description formalism. The choice of formalism is conditioned by the representation of space, time, and the states of a system. During the modeling process, it is important to know whether the time and states representing the system are considered discrete or continuous. In the same way, it is important to know if the concept of space is to be taken into consideration in the evolution of the system and if so if it is modeled in a discrete or continuous way. When a system is described by considering its states and time in a continuous way, it can be modeled using a formalism such as classical differential equations. When the notion of space must also be taken into account, it is preferable to use partial differential equations. When the change of state of a system is discrete and time is considered continuous, the formalism used can be that of discrete events regardless of how the space is considered (discrete, continuous, or absent).
The choice of formalism (and, therefore, the way in which states, time and space are considered) is often conditioned by the scientific culture of the modeller. Traditionally, a complex system is modeled by considering its evolution continuously in time (or in space), and its behavior is described from differential equations which are solved analytically or numerically depending on the complexity of the model. In this type of approach, the evolution of the system (of these states) is modeled independently of the time scale, and it can be said that the resolution of the model is done with a continuous simulation approach. This modeling method applies very well to systems whose states evolve continuously in time (or space) and whose states must be observable at any instant. However, some systems evolve in time only at specific instants and it is not necessary to know precisely their behavior between these instants. They evolve in a discrete way, and only their observations matter at these precise moments when they change state. In this case, discrete event modeling and simulation can be used to study these systems. It is important to note that the choice of formalism also depends on the discrete or continuous nature of the system that one wishes to model. Continuous simulation will be preferred in the case of the study of a model of population dynamics in an ecosystem due to the presence of continuous state variables such as the speed of movement of individuals. On the other hand, discrete event simulation will be preferred for modeling an industrial production chain system, in which it is important to know the evolution of a product by stages.
The theory of modeling and simulation (TM&S) and the DEVS formalism were introduced by Zeigler in the 1970s [START_REF] Zeigler | Theory of Modeling and Simulation[END_REF] to model discrete event systems in a hierarchical and modular way. Combining a formal approach and the general theory of systems, it had a wide echo in the 1990s and 2000s in the French community. Today, around 10 French-speaking laboratories develop tools and applications based on the TM&S formalization, namely the DEVS formalism and its extensions. This formalism is based on the general theory of systems [START_REF] Von Bertalanffy | General system theory[END_REF]. It is common for the DEVS formalism, in its original form [START_REF] Zeigler | Theory of Modeling and Simulation[END_REF], to be adapted and extended to fit more specific contexts of an application domain. This is, for example, the case when it comes to modeling differential equations [START_REF] Kofman | Discrete event based simulation and control of continuous systems[END_REF]. Professor Zeigler has proposed a conceptual architecture for the modeling and simulation of systems, particularly suited to the DEVS formalism. As shown in Figure 2.13, this architecture has three entities:
• The system: it is the phenomenon observed in a given environment. The environment provides the specifications of the conditions under which the system is being operated and allows for its experimentation and validation.
• The model: it is the representation of the system generally based on the set of definitions of instructions, rules, equations, and constraints that allow the generation of a behavior after simulation. The model defines the behavior and structure of a system that evolves in a given environment.
• The simulator: it is an entity that is responsible for interpreting the model (executing these instructions) to generate its behavior.
These entities are linked by two relationships:
• The modeling relationship: it is made up of construction rules and model validation.
• The simulation relationship: it is made up of the model execution rules that ensure that the simulator generates the expected behavior of the system from the model.
The explicit separation between the entities allows one to benefit from several advantages, such as simulating a model with different types of simulators or different types of environment. The enthusiasm for object-oriented programming in the early 1980s led Professor Zeigler to use an object approach to define his formalism. DEVS formalizes what a model is, what it must contain and what it does not contain (experimentation and simulation control parameters are not contained in the model). Moreover, DEVS is universal and unique for discrete-event system models: any system that accepts events as input over time and generates events as output over time is equivalent to a DEVS model. DEVS allows for automatic simulation on multiple different execution platforms, including those on desktops (for development) and those on high-performance platforms (such as multicore processors). With DEVS, a large system model can be decomposed into smaller component models with couplings between them. The DEVS formalism defines two kinds of model: (i) atomic models, which represent the basic models providing specifications for the dynamics of a subsystem using transition functions, and (ii) coupled models, which describe how to couple several component models (which can be atomic or coupled models) together to form a new model. This hierarchy inherent to the DEVS formalism can be called a description hierarchy, allowing the definition of a model using hierarchical decomposition. It should be pointed out that this kind of hierarchy does not involve any abstraction-level definition since the behaviors of all implied models are defined at the same level of abstraction. However, as a hierarchy of description, the hierarchical decomposition in DEVS may still be regarded as a kind of abstraction: in top-down design, modelers can consider couplings and interfaces between models without considering details of internal components.
DEVS Atomic Model
A DEVS atomic model AM is defined by the structure AM =< X, Y, S, δ int , δ ext , λ , t a >, where X is the set of input events, Y is the set of output events, S is the set of sequential states, and:
• δ int : S → S is the internal transition function that will move the system to the next state after the time returned by the time advance function.
• δ ext : Q × X → S is the external transition function that will schedule changes in the states in reaction to an external input event.
• λ : S → Y is the output function that will generate external events just before the internal transition takes place.
• t a : S → R + ∞ is the time advance function that will give the life time of the current state.
An atomic DEVS model can be considered as an automaton with a set of states and transition functions that allow the state to change when an event occurs or not. When no event occurs, the state of the atomic model can be changed by an internal transition function, noted δ int . When an external event occurs, the atomic model can intercept it and change its state by applying an external transition function, noted δ ext . The lifetime of a state is determined by a time advance function called t a . Each state change can produce an output message via an output function called λ .
The dynamic interpretation is the following.
• Q = {(s, e)|s ∈ S, 0 ≤ e ≤ t a (s)} is the total state set.
• e is the elapsed time since the last transition, and s the partial state held for the duration t a (s) if no external event occurs.
• δ int : the model being in a state s at t i , it will go into s ′ = δ int (s) if no external events occur before t i + t a (s).
• δ ext : when an external event occurs, the model being in state s since the elapsed time e enters s ′ . The next state depends on the elapsed time in the present state. At every state change e is reset to 0.
• λ : the output function is executed just before an internal transition; it emits an output event while the model is still in its current (transient) state.
• A state with infinite lifetime is a passive state (steady state), otherwise it is an active state (transient state). If the state s is passive, the model can evolve only with the occurrence of an input event.
We now give the DEVS specification of an atomic model named EXEC, which represents a system that becomes 'active' (s1) when it receives an input xi on an input port p1, processes these data for a time 'proc', and then generates outputs yi on an output port p2. If an external event occurs while the system is in the active state, its lifetime is reduced by the time elapsed since the last change of state (e). If no event occurs, the system remains in the passive state (s2). The system is initially in the 'passive' state.
• X = {(p1, xi) | i ∈ R+}
• Y = {(p2, yi) | i ∈ R+}
• S = {s1, s2}
In practice, the explicit handling of the lifetime t a (s) is replaced by the manipulation of an attribute of the atomic model called sigma (σ). The description is then written in a more condensed way for δ int , δ ext , and t a :
• δ int (S): if s is s1 then s ← s2 with σ ← ∞, else pass
• δ ext (Q × X): if s is s2 then s ← s1 with σ ← proc, else t a (s1) ← t a (s1) − e
• t a (S): return σ
It is quite common to represent the behavior description using an automaton (Figure 2.15). The automaton starts in state s2 with an infinite lifetime. When an event arrives at the input port, the state changes to s1 for a time proc. If an external event occurs during this time, the state does not change, but the lifetime is updated by taking into account the time that has passed since the last change of state (e). The state trajectory allows us to trace the changes of state according to the input events (Figure 2.16). When an event occurs at time t1, the system goes from the passive state s2 to the active state s1 for a time proc. When time t1 + proc elapses, the model generates an output event. The input event that occurs at time t3 does not change the state of the system (s1), but the lifetime of the latter is updated according to e.
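The EXEC model can also be sketched in plain Python, as an illustration of the condensed specification above. This is a library-independent sketch and does not use the exact PyDEVS/DEVSimPy API:

```python
INFINITY = float("inf")

class Exec:
    """Plain-Python sketch of the EXEC atomic model (passive state s2, active state s1)."""

    def __init__(self, proc=5.0):
        self.proc = proc
        self.state = "s2"        # initial passive state
        self.sigma = INFINITY    # remaining lifetime of the current state

    def internal_transition(self):
        # delta_int: after 'proc' time units in s1, return to the passive state s2 forever.
        if self.state == "s1":
            self.state, self.sigma = "s2", INFINITY

    def external_transition(self, elapsed):
        # delta_ext: an input received in s2 activates the model for 'proc';
        # an input received in s1 only shortens the remaining lifetime by the elapsed time.
        if self.state == "s2":
            self.state, self.sigma = "s1", self.proc
        else:
            self.sigma -= elapsed

    def output(self):
        # lambda: emit an output event on port p2 just before the internal transition.
        return ("p2", "y")

    def time_advance(self):
        # ta: lifetime of the current state.
        return self.sigma
```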
DEVS Coupled Model
The DEVS coupled model CM is a structure:
CM =< X,Y, D, {M d ∈ D}, EIC, EOC, IC >
where:
• X is the set of input ports for the reception of external events.
• Y is the set of output ports for the emission of external events.
• D is the set of components (coupled or basic models).
• M d is the DEVS model for each d ∈ D.
• EIC is the set of input links that connects the input of the coupled model to one or more of the inputs of the components that it contains.
• EOC is the set of output links that connect the output of one or more of the contained components to the output of the coupled model.
• IC is the set of internal links that connect the output ports of the components to the input ports of the components in coupled models.
In a coupled model, an output port from a model M d ∈ D can be connected to the input of another M d ∈ D, but cannot be connected directly to itself.
Consider a coupled model CM1 composed of three atomic models AM1, AM2, and AM3 and another coupled model CM2 composed of two atomic models AM4 and AM5. To simplify the example, we assign a single input port pe1 and a single output port ps1 to each of the models. Figure 2.17 shows the coupled model CM1.
Fig. 2.17 An example of a DEVS coupled model.
The DEVS specification of the coupled model of figure 2.17 is as follows:
• X = {(pe 1 , x i )|i ∈ R + }
• Y = {(ps 1 , y i )|i ∈ R + }
• D = {AM 1 , AM 2 , AM 3 , AM 4 , AM 5 ,CM 2 }
• EOC = {((AM 2 , ps 1 ), (CM 1 , ps 1 )), ((AM 3 , ps 1 ), (CM 1 , ps 1 ))}
• IC = {((AM 1 , ps 1 ), (AM 3 , pe 1 )), ((CM 2 , ps 1 ), (AM 2 , pe 1 ))}
• EIC = {((CM 1 , pe 1 ), (CM 2 , pe 1 )), ((CM 1 , pe 1 ), (AM 1 , pe 1 ))}
• select = (CM 2 , AM 1 , AM 2 , AM 3 )
In the example above, during a possible execution conflict, the coupled model CM 2 has priority over AM 1 which has priority over AM 2 which has priority over AM 3 . This order of precedence is implemented in the select function.
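The coupled model CM1 can also be written down as plain data, which makes the EIC, EOC, and IC coupling sets explicit (an illustrative, library-independent sketch):

```python
# Each coupling is a pair ((source model, source port), (target model, target port)).
CM1 = {
    "components": ["AM1", "AM2", "AM3", "CM2"],            # CM2 itself contains AM4 and AM5
    "EIC": [(("CM1", "pe1"), ("CM2", "pe1")),
            (("CM1", "pe1"), ("AM1", "pe1"))],
    "IC":  [(("AM1", "ps1"), ("AM3", "pe1")),
            (("CM2", "ps1"), ("AM2", "pe1"))],
    "EOC": [(("AM2", "ps1"), ("CM1", "ps1")),
            (("AM3", "ps1"), ("CM1", "ps1"))],
    "select": ["CM2", "AM1", "AM2", "AM3"],                 # tie-breaking priority order
}
```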
The DEVS formalism ensures that any coupled model is equivalent to a single atomic model. In fact, an atomic model can be decomposed into several atomic submodels organized into several coupled models. The modeler organizes these models hierarchically as desired, into coupled models with the appropriate level of description.
DEVS Simulator
A simulator is associated with the DEVS formalism to execute the instructions of coupled models and actually generate their behavior. The architecture of a DEVS simulation system is derived from the abstract simulator concepts associated with the hierarchical and modular DEVS formalism. Parallel DEVS (PDEVS) essentially extends classic DEVS by allowing bags of inputs to the external transition function. Bags collect inputs that occur at the same simulation time and process their effects on the outputs, which in turn result in new bags. This formalism offers a solution for managing simultaneous events, which could not be easily handled with classic DEVS. In PDEVS, the notion of event bag has been added: several events that occur at the same time can be grouped together in a bag, denoted X b . Similarly, an atomic model can output multiple events at the same time; the output set is then denoted Y b . The function δ conv (S × X b ) is introduced to resolve the runtime conflict between the internal and external transition functions of an atomic model, with the particular case δ conv (S) = δ int (S). The association of these two notions (bag and δ conv ) makes it possible to manage collisions between the internal and external transition functions and, at the same time, to process several events arriving simultaneously at an atomic model. In the classical DEVS formalism, the external transition function is invoked each time an event arrives; therefore, there were as many invocations as there were simultaneous messages on the ports of an atomic model. With this extension, all simultaneous events are available within a single invocation of the transition function.
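A minimal sketch of how a PDEVS-style simulator could route simultaneous events is given below; the model attributes and method names are assumptions used only to illustrate the role of the confluent function:

```python
def deliver(model, current_time, input_bag):
    """Route simultaneous events to the appropriate PDEVS transition function."""
    internal_due = (current_time == model.next_internal_time)
    if internal_due and input_bag:
        model.confluent_transition(input_bag)   # collision: the confluent function decides
    elif internal_due:
        model.internal_transition()
    elif input_bag:
        elapsed = current_time - model.last_event_time
        model.external_transition(elapsed, input_bag)
```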
DEVSimPy Environment
DEVSimPy [START_REF] Capocchi | DEVSimPy: A collaborative python software for modeling and simulation of DEVS systems[END_REF] (DEVS Simulator in Python language) is an open source project (under GPL V.3 license) supported by the SPE team of the University of Corsica Pasquale Paoli. Its objective is to provide a GUI for the modeling and simulation of PyDEVS [START_REF] Li | A testing framework for devs formalism implementations[END_REF] models. PyDEVS is an application programming interface (API) that allows the implementation of the DEVS formalism in the Python language. Python is known as an interpreted, very high-level, object-oriented programming language widely used to quickly implement algorithms without focusing on code debugging [START_REF] Perez | Python: An ecosystem for scientific computing[END_REF].
The DEVSimPy environment has been developed in Python with the wxPython [START_REF] Rappin | WxPython in action[END_REF] graphical library, without strong dependence on scientific Python libraries other than Scipy [START_REF] Jones | Scipy: Open source scientific tools for python[END_REF] and Numpy [START_REF] Oliphant | Python for scientific computing[END_REF]. The basic idea behind DEVSimPy is to wrap the PyDEVS API with a GUI allowing significant simplification of the handling of PyDEVS models (like the coupling between models or their storage). The user can instantiate the models using drag-and-drop functionality. The right part of Figure 2.18 shows the modeling part based on a canvas with the interconnection of instantiated models. This canvas is a diagram of atomic or coupled DEVS models waiting to be simulated. DEVSimPy capitalizes on the intrinsic qualities of the DEVS formalism to automatically simulate models. The simulation is carried out by pressing a simple button, which invokes an error checker before the building of the simulation tree. The simulation algorithm can be selected among hierarchical simulators (default with the DEVS formalism) or direct coupling simulators (most efficient when the model is composed of DEVS coupled models).
Simpy ?? is an open source process-based DES package available in Python and widely used for M&S. Another example of an open source Python-based library is OR-Gym ??, which provides a Python-based RL environment consisting of classic operations research and optimization problems. Commercial software packages have embraced modeling features with RL in recent years, but there is a lack of M&S software for RL systems. Our work also contributes to the literature by demonstrating how to connect DEVSimPy to an RL model. This work opens a line of research to model and simulate an RL agent-environment system in a discrete-event manner with DEVS.
Conclusion
We assert that, in terms of explainability and modularity, the DEVS formalism offers a real advantage. It allows us to describe a system with unambiguous semantics for both modeling (model) and simulation (simulator). In contrast, modeling a heterogeneous complex system may otherwise require the use of several differential or algebraic equations, an automaton or a Petri net, and continuous and discrete evolution over time.
Thanks to its properties of building models by composition, its hierarchy of description and abstraction, and its explicit separation between the model and the simulator, which is generated automatically, DEVS allows the M&S of systems with an effective, formal, and generic approach proven for more than 40 years in several fields of application.
DEVS allows an approach to building its models that can be based on design patterns (set of DEVS atomic or coupled models defined to model/simulate systems) and building blocks (well-known notion of directly reusable coupled models and containing a set of interconnected submodels). These two notions (in addition to the management of state duration specific to DEVS) are highlighted in [START_REF] Zeigler | Devs-based building blocks and architectural patterns for intelligent hybrid cyberphysical system design[END_REF] and make DEVS an excellent language for Internet of Things (IoT) systems, for example.
Conclusion
This chapter underlined how our research field, which combines the DEVS formalism and Markov decision processes, is still young in the scientific literature. It also pointed out that the DEVS formalism has been underexploited in the decision-making literature, and it outlined the hypothesis that, in the relevant finance decision-making literature, our research may represent a new benchmark. Furthermore, it sought to review the relevance of volatility in asset management and detailed the concepts, methods, and algorithms related to financial decision making when volatility plays a relevant role. Finally, it stressed some topics from the literature that would be useful to explore in depth in future research.
Chapter 3 DEVS-Based Reinforcement Learning System Modeling and Simulation
Introduction
M&S and AI are two domains that can complement each other. For example, AI can help the "simulationist" in modeling complex systems that are impossible to represent mathematically [START_REF] Nielsen | Application of Artificial Intelligence Techniques to Simulation[END_REF]. Conversely, M&S can help AI handle complex systems for which simple heuristics are lacking or unworkable.
Systems that already use AI, such as digital supply chains, "smart factories", and other industrial processes in Industry 4.0, will inevitably need to include AI in their simulation models [START_REF] Foo | Systems theory: Melding the AI and simulation perspectives[END_REF]. For example, with simulation analysis systems, AI components can be directly integrated into the simulation model to enable testing and forecasting. In [START_REF] Meraji | A machine learning approach for optimizing parallel logic simulation[END_REF] the authors use a reinforcement learning algorithm (Q-Learning) to combine a dynamic load balancing algorithm and a bounded-window algorithm for discrete-event simulation of VLSI circuits (Very Large Scale Integration) at the gate level.
Machine learning is a type of AI that uses three families of algorithms (supervised learning, unsupervised learning, reinforcement learning) in order to build models that map an input set to a predicted output set using statistical analysis. Although simulation and ML both help to model problems and provide optimal solutions, they rarely work together. The main reason is that simulation is process-centric, while ML is data-centric. To build a simulation model, knowledge of the process (system behavior) is essential, and it is also necessary to work closely with the people in charge of the process, because only they know how it works. It is also necessary to acquire the data of the system and observe its evolution in order to model it. To build an ML model, only obtaining the data is essential. Often, the data are analyzed to find correlations before applying ML algorithms, and very little (if any) knowledge of the process is required from the modeler.
Therefore, building simulation models and building ML models require different skills from scientists. Why combine ML and simulation? Together, these two successful techniques can probably solve problems that have been impossible to solve separately so far. Several ideas can be proposed:
• A factory does not have documented rules and policies to run its operations, and everyone you ask tells you something different. It is impossible to model such heuristics with the traditional explicit simulation approach. A decision tree model can be used to infer the rules from historical data.
• People or process documents describe how things are supposed to behave, but they do not tell you the actual behavior. Again, an ML model can be used to determine historical heuristics.
• Path finding algorithms are not good enough to navigate in a simulation environment.
Reinforcement learning helps to find optimal paths. Reinforcement learning is an ML training method based on rewarding desired behaviors and/or punishing undesired ones.
• It is difficult to match the output of the simulation model with real performance indicators. An ML model can "steer" the simulation to match reality.
• The real system implements advanced ML algorithms to make decisions. To simulate such a system, you must be able to reproduce the ML algorithms in the simulation itself.
There are probably many more possibilities, and as with most innovations, some will only reveal themselves when we start experimenting with them.
Basically, three ways to integrate ML techniques inside simulation models can be observed:
• ML before the simulation: ML algorithms can be used as input data for the simulation model. The most obvious approach would be to develop data-driven decision heuristics that agents can apply.
• ML inside the simulation: Make the simulation learn certain aspects and apply this learning directly. This coupled approach makes sense when training ML algorithms on specifics of the simulation model itself, such as pedestrians trying to find a path in the environment of the simulation model. There could be three ways to further split:
(1) train the ML algorithm as part of the simulation, (2) reuse previously trained ML models, or (3) train the ML model as the simulation progresses (i.e., for Reinforcement Learning).
• ML after the simulation: Taking the output of the simulation and feeding it into an ML algorithm. There is little use of this on small scales currently. However, this can be used in very advanced AI algorithm training, such as autonomous driving. The models are not only driven with real cars, but the AI can "drive" through the simulation cities. This takes the concept of simulation models as testbeds of algorithms.
In the other direction, it is possible to imagine the benefit of simulation for ML and, more specifically, for RL. In the case of RL, the discrete event approach has its place in an architecture defined to determine an optimal policy through an event-based communication scheme between an agent and an environment. The DEVS formalism is an ideal solution for implementing an RL algorithm such as Q-learning because it makes it possible to represent and carry out the learning of the system by simulation in a formal, modular, and hierarchical framework. This thesis presents the benefits of DEVS formalism aspects to assist in the realization of ML models with a special focus on RL. For the simulation part, Monte Carlo simulation is often used to solve ML problems using a "trial-and-error" approach. Monte Carlo simulation can also be used to generate random outcomes from a model estimated by some ML technique. ML for optimization is another opportunity for integration into simulation modeling. Agent-based systems often have many hyperparameters and require significant execution times to explore all their permutations to find the best configuration of models. ML can speed up this configuration phase and provide more efficient optimization. In return, simulation can also speed up the learning and configuration process of AI algorithms. Simulation can also improve experience replay generation in RL problems, where the agent's experiences are stored during the learning phase. RL components can be developed to replace rule-based models. This is possible when considering human behavior and decision-making. For example, in [START_REF] Floyd | Creation of devs models using imitation learning[END_REF] the authors consider that by observing a desired behavior, in the form of outputs produced in response to inputs, an equivalent behavioral model can be constructed (Output Analysis in Figure 3.1). These learning components can be used in simulation models to reflect the actual system or to train ML components. By generating the data sets needed to train neural networks, simulation models can be a powerful tool for deploying learning algorithms.
Fig. 3.1 M&S for Machine Learning. The modeling part highlights important aspects of discrete-event M&S that bring benefits, such as the temporal aspect for the notion of delayed reward in RL. The simulation part points out that simulation can be used to improve output analysis or to facilitate the Monte Carlo process.
Discrete Event Modeling and Simulation for Machine Learning
Concerning the modeling part, many modeling aspects can be highlighted to help ML implementation:
• Temporal: Basically, the temporal aspect is implicit in the RL models. For instance, MDPs consider the notion of discrete and continuous time in order to consider the life time of a state. In RL models, the introduction of time in the awarding of rewards allows the modeling of a non-immediate response of a system. The use of the notion of time in the RL models also makes it possible to perform asynchronous simulations from real data.
• Hierarchical: The abstraction hierarchy allows the modeling of RL systems with different levels of detail. This makes it possible to determine optimal policies by levels of abstraction, and therefore to have more or less precise policies depending on the level chosen by the user.
• Multi-Agent: Multi-agent modeling can be used as part of RL where the optimal policy is based on a communication between an environment and one or more agents that are instantiated in a static or dynamic way.
• Specification: The assembly of all the ML pieces needed to solve problems can be a daunting task. There are many ML algorithms to choose from, and deciding where to start can be discouraging. Using specifications based on model libraries and system input analysis, the choice of the appropriate algorithm becomes simpler.
This thesis presents the benefits of hierarchical and specification aspects of the DEVS formalism to assist in the realization of RL models. Let us now see how the DEVS formalism makes it possible to represent and simulate in a generic manner an RL system.
Discrete Event System Specification and Reinforcement Learning
AI learning techniques have already been used in a DEVS simulation context. In fact, in [START_REF] Saadawi | Devs execution acceleration with machine learning[END_REF] the authors propose the integration of some predictive machine learning algorithms into the DEVS simulator to considerably reduce simulation execution times for many applications without compromising accuracy. In [START_REF] Toma | Detection and identication methodology for multiple faults in complex systems using discrete-events and neural networks: applied to the wind turbines diagnosis[END_REF], comparative and concurrent DEVS simulation is used to test all possible configurations of the hyperparameters (momentum, learning rate, etc.) of a neural network. In [START_REF] Seo | Devs markov modeling and simulation: Formal definition and implementation[END_REF], the authors present the formal concepts underlying the DEVS Markov models and how they are implemented in MS4Me software [START_REF] Zeigler | System entity structure basics[END_REF]. Markov concepts of states and state transitions are fully compatible with the DEVS characterization of discrete event systems. In [START_REF] Rachelson | A simulation-based approach for solving generalized semi-markov decision processes[END_REF], temporal aspects have been considered in generalized semi-MDPs with observable time and a new simulation-based RL method has been proposed.
More generally, the DEVS formalism can be used to facilitate the development of the three traditional phases involved in a reinforcement learning process for a given system (Figure 3.2):
• The Data Analysis phase consists of an exploratory analysis of the data which will make it possible to determine the type of learning algorithm (supervised, unsupervised, reinforcement) to deal with a given decision problem. In addition, this phase also allows us to determine the state variables of the future learning model. This phase is one of the most important phases in the modeling process. As noted in Figure 3.2, the system entity structure (SES) [START_REF] Zeigler | System entity structure basics[END_REF] can be used to define a family of models of the RL algorithm (DQN, DDQN, A3C, Q-Learning, SARSA, etc.) [START_REF] Sutton | Introduction to Reinforcement Learning[END_REF] based on the results of data analysis that use both statistical tools and the nature of the RL model (model free / model-based and on-policy / off-policy; see Chapter 2.3).
• The Simulation-based learning of an agent consists of simulating input sets of a learning model to calibrate it while avoiding overfitting (learning phase). The DEVS formalism makes this possible by simulating the environment as an atomic model. However, the environment that interacts with the agent as part of a traditional RL scheme can also be considered as a coupled model composed of several interconnected atomic models. It is in this context that the environment is considered as a dynamic multi-agent DEVS model in which the number of agents can vary over time.
• The real-time simulation phase consists of submitting the model to the actual input data (test phase). The DEVS formalism and its experimental frame are excellent candidates for simulating in real time the decision policies from real simulation data.
The DEVS formalism allows one to formally model a complex system with a discreteevent approach. The diagram of a RL system suggests that the Agent and Environment components can be represented by DEVS models (atomic or coupled depending on the modeling approach). The coupling of RL and DEVS can be done in two ways:
• DEVS can integrate the RL algorithms into its modeling specifications (transition functions, SES, modeling part) or within its simulation algorithms (sequential, parallel, or distributed, simulation part) in order to benefit from an AI, for example, improving the pruning phase in the specification process using SES or improving simulation performance by introducing a neural network-based architecture into the simulation engine.
• RL algorithms can benefit from the DEVS formalism to improve their explainability, observability, convergence, or the search for optimal hyper-parameters, for instance. Moreover, multi-agent RL algorithms can be modeled by DEVS thanks to its modular and hierarchical modeling capabilities. By separating Generators, Observers, and Transducers, the DEVS experimental frame provides a good framework to implement RL algorithms based on Agent-Environment interactions.
The thesis focuses on this last point and relies on the formal framework proposed by the DEVS formalism in order to model and simulate the Q-Learning RL algorithm. The DEVS formalism makes this possible by specifying the Q-Learning algorithm as a set of interconnected components that interact through their external transition functions.
DEVS-Based RL Architectural Pattern
Basically, in the RL model, an agent and an environment communicate so that the agent converges towards the best possible policy (Figure 3.3). Due to the modular and hierarchical aspects of DEVS, the separation/interaction between the agent and the environment within the RL algorithms is improved. A new generic DEVS modeling of the Q-Learning algorithm based on two DEVS models has been proposed: the DEVS models Agent and Environment. There are several ways to implement DEVS modeling of RL algorithms, and several DEVS modeling schemes are possible.
Atomic-based Modeling Approach:
The agent and the environment are two atomic DEVS models, with the update of the matrix Q performed either in the environment or in the agent. The first option (updating Q in the environment) is quick to set up, but does not respect the consistency of communication between the two components: the environment does not need to know the optimal policy determined by the agent, and therefore should not have to compute the Q matrix of the Q-Learning algorithm. The second option (updating Q in the agent) seems more correct from a behavioral point of view.
Coupled-based Modeling Approach:
The agent and the environment are two coupled DEVS models, with the update of the matrix Q performed in the agent (the option favored above). This approach allows us to refine the decomposition of the parts of the RL algorithm belonging to the agent and to the environment separately. Moreover, it makes it possible to consider a multi-agent approach in a coupled Agent model, as detailed in Section 3.3.0.3. Finally, a hierarchy of abstraction and a parallel simulation are also possible with the use of the coupled modeling approach.
In the following subsections, we present the two approaches in detail with an additional approach that considers a multi-agent version of an RL system.
Atomic-based RL Modeling Approach
In Figure 3.4, the agent and the environment are two interconnected atomic models that communicate and perform their transition functions in a repetitive cycle until the agent determines the best policy with respect to the rewards it has received from its environment in response to its actions.

Fig. 3.4 Atomic-based modeling approach with the RL Agent and RL Environment DEVS atomic models.

The atomic model Observers can be inserted after the RL Agent to observe the mean of the Q matrix, which can be used to monitor its convergence. N Generator models are used to take external events into account in the behavior of the Environment model. Figure 3.5 shows the UML sequence diagram of the User-Environment-Agent interactions in the Q-Learning algorithm framework. After an initialization phase, the Agent and Environment models begin a series of episodes in which the environment sends a state/reward couple in response to an action (chosen according to the ε-greedy policy; see 2.11 in Section 2.3.3) by the agent. At the end of each episode, the Q matrix is updated, and it is only when it no longer evolves that the learning is finished and that the agent's policy is determined.
The atomic model Agent is intended to respond to events that occur from the Environment model. When it receives a new tuple (s, r, d), it will return an action (a ∈ A) following a policy depending on its current state s ∈ S and the implemented algorithm (ε-greedy for example). The model has a state variable Q which is an SxA dimension matrix for implementing the learning algorithm (Q-Learning or SARSA, for example). When convergence is reached (depending on the values of Q), the model becomes passive and no longer responds to the environment. The update of the Q attribute is done in δ ext after receiving the tuple (s, r, d).
The internal transition function makes the model passive. The output function is activated when the external function is executed. The DEVS specification of the RL Agent Atomic model is presented in Appendix 7.
The atomic model Environment responds to the requests of the Agent atomic model by assigning it a new state s and a reward r according to the action it has received. In addition, the model indicates whether the final state has been reached through a Boolean variable d. This communication takes place through the external transition function. The λ DEVS function is activated immediately after the δ ext function to send the tuple (s, r, d) to the Agent model. The δ int function updates the state variables of the model without generating an output. Finally, the initialization determines the list of possible states S and actions A and the reward matrix R. The model is responsible for the generation of episodes (a new episode starts when a final state is reached). The DEVS specification of the RL Environment atomic model is presented in Appendix 8.
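To make this behavior concrete, here is a minimal Python sketch of the Agent atomic model written in a DEVSimPy-like style. The class layout, method names, and the integer encoding of the states are illustrative assumptions rather than the exact code of the thesis library (the reference specification is given in Appendix 7).

```python
import random
import numpy as np

class RLAgent:
    """Sketch of the RL Agent atomic model: it reacts to (s, r, d) tuples sent by the Environment."""

    def __init__(self, n_states, n_actions, alpha=0.8, gamma=0.95, epsilon=1.0):
        self.Q = np.zeros((n_states, n_actions))   # state variable Q (S x A matrix)
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.state = None           # current state s (assumed to be integer-encoded)
        self.action = None          # last action sent to the Environment
        self.output_action = None   # action to emit in the output function
        self.passive = False

    def ext_transition(self, msg):
        """delta_ext: update Q when a (s, r, d) tuple is received, then choose the next action."""
        s_next, reward, done = msg
        if self.state is not None and self.action is not None:
            # Q-Learning update (Bellman equation)
            best_next = np.max(self.Q[s_next])
            self.Q[self.state, self.action] += self.alpha * (
                reward + self.gamma * best_next - self.Q[self.state, self.action])
        self.state = s_next
        # epsilon-greedy choice (thesis convention: epsilon close to 1 means exploitation)
        if random.random() < self.epsilon:
            self.action = int(np.argmax(self.Q[s_next]))
        else:
            self.action = random.randrange(self.Q.shape[1])
        self.output_action = None if done else self.action

    def output(self):
        """lambda: emit the chosen action towards the Environment model."""
        return self.output_action

    def int_transition(self):
        """delta_int: the model becomes passive after its output has been emitted."""
        self.passive = True
```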
Coupled-based RL Modeling Approach
This approach makes it possible to decompose the two previous atomic models Agent and Environment into an interconnection of specific atomic models. For example, the Agent model can be decomposed into four atomic models as shown in Figure 3.6.
The atomic model Define State-Action-Map of the Agents coupled model in Figure 3.6 could be dynamically instantiated only when a message arrives at its input port 0. The chain Get State-Reward-Done/Get Action/Update Q Matrix is activated as soon as a message arrives at input port 0. With this splitting, it is also possible to implement a reward deferred over time. The Update Q Matrix model implements the RL algorithm (Q-Learning or SARSA in our case). The Get Action model makes it possible to centralize the exploration/exploitation choice of the action, for example, with an ε-greedy method. In the same way, we can imagine a split for the atomic model Environment in which the management of the acquisition of the input data (Generators in Figure 3.6) is responsible for the construction of the state/action map.

Fig. 3.5 UML sequence diagram of the User-Environment-Agent interactions in the Q-Learning algorithm framework. The user starts the simulation by invoking the initial (init) phase of the Environment model, which sends an event with the state/action map used to initialize the Agent model. The Agent then sends in return an action depending on its initial state, which activates the δ ext function of the Environment model; the latter immediately updates the state, the reward, and the done flag (which indicates whether the end state has been reached) and returns them to the Agent. The Agent then updates its Q matrix according to the state and reward received. This cycle (episode) is repeated until the end state is reached (the done flag is true). When the Q matrix is stable, the final policy can be output by the Agent model and the Environment model can print the simulation trace before becoming inactive.

Among the advantages offered by DEVS, composability makes it possible to introduce observers and thus increase the explainability of the model. Indeed, Observers 1 and 2 make it possible to trace, during the simulation, the state-reward pairs in order to follow the episodes and the actions chosen by the agent for its learning. In this sense, we can say that DEVS makes it easier to control the progress of the learning algorithm. In addition, it is possible to place, in parallel with these observers, models capable of modifying the actions or the rewards during the learning loop without modifying the original algorithm.
Multi-Agent-based RL Modeling Approach
In human society, learning is an essential component of intelligent behavior. However, an isolated agent would need to learn everything from scratch by its own discovery. Instead, students exchange information and knowledge with each other and learn from their peers or teachers. When a state space is too large for a single agent to handle, multiple agents may cooperate to take into account a larger amount of information and optimize a decision-policy process. Traditionally, RL is used to study intelligent agents. Each RL agent can incrementally learn an efficient decision policy over a state space by trial and error, where the inputs from the environment are the next states and a delayed scalar reward. It is important to remember that in RL there are no predefined data and that the whole RL process is itself both a training and a testing phase.
Most of the work on RL has focused exclusively on a single agent that interacts with a component of the environment [START_REF] Sutton | Reinforcement Learning: An Introduction[END_REF]. However, the single-agent approach is limited by the size of the state space. To overcome this kind of limit, neural network approaches are used to approximate the state matrix [START_REF] François-Lavet | An Introduction to Deep Reinforcement Learning[END_REF][START_REF] Pan | Multisource transfer double dqn based on actor learning[END_REF]. Furthermore, combined with neural network techniques, the multi-agent reinforcement learning (MARL) [START_REF] Camus | Combining devs with multi-agent concepts to design and simulate multi-models of complex systems (wip)[END_REF][START_REF] Fudenberg | An economist's perspective on multi-agent learning[END_REF] approach can be applied [START_REF] Daavarani Asl | A new approach on multi-agent multi-objective reinforcement learning based on agents' preferences[END_REF][START_REF] Jang | Q-learning algorithms: A comprehensive classification and applications[END_REF][START_REF] Pan | A novel method for improving the training efficiency of deep multi-agent reinforcement learning[END_REF] in order to divide a large state space by distributing it from a single agent to multiple agents that interact and learn independently. This approach is close to modular RL in the sense that the state space is broken down into subsets of states, but the collaborative aspect, even though learning is independent, characterizes the proposed approach as MARL. DEVS is used to straightforwardly extend RL to multiple independent agents. Together, they can outperform any single agent because they have twice as many indices to invest in and, therefore, a better chance of receiving rewards. The goal, however, is to study the added value of implementing MARL in the DEVS formalism, to compare the performance of an independent agent with that of a multi-agent system, and to identify their trade-offs.

Fig. 3.7 Multi-agent DEVS reinforcement learning model with a supervisor. Each agent explores a subset of possible states and gives its optimal decision policy based on interactions with its own environment. The supervisor then applies the final decision-making logic.
The proposed approach aims to address the problem of dealing with a large number of states. Instead of letting one single agent iterate in a single large environment, the task is divided among multiple agents that each explore a part of the state space. As depicted in Figure 3.7, each agent gives an optimal policy resulting from the execution of the Q-Learning algorithm, which depends on the interaction with its own environment. Once the supervisor has received all agent policies at each simulation time step, it determines the best combined policy and communicates to the single agent the action to be taken. The construction of the best combined policy is the result of a process of trial and error driven by the reward returns. If the environment changes, the process must be restarted: a policy is applicable to a given environment.
DEVS is used for its modular and hierarchical power, which makes it possible to describe a system by interconnections of atomic models (agent and environment) in order to build a multi-agent model. In the case of multi-agent models, DEVS makes it possible to solve the event-synchronization problems generally present in this type of system thanks to its time advance function specification [START_REF] Camus | Combining devs with multi-agent concepts to design and simulate multi-models of complex systems (wip)[END_REF]. The DEVS formalism makes it possible to simulate the environment as an atomic model. However, the environment that interacts with the agent in a traditional reinforcement learning scheme may also be considered as a coupled model composed of several interconnected atomic models. In that context, we may consider the system as a dynamic structure DEVS model, built on a coupled DEVS model, in which the number of agents/environments can vary over time and which can be executed in parallel. The supervisor model builds its policy by integrating all the policies and experiences already learned by each agent. This is the age-old divide-and-conquer strategy.
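As an illustration of the supervisor logic of Figure 3.7, the fragment below sketches one possible way of combining the partial policies returned by the sub-agents. The combination rule (selecting the action with the highest Q-value among the agents that explored the current state) is an assumption chosen for illustration, not the only possible decision logic.

```python
import numpy as np

def combine_policies(agent_reports, state):
    """Supervisor decision logic (illustrative): among the sub-agents that have explored
    the current state, pick the action with the highest learned Q-value.

    agent_reports: one dict per sub-agent, mapping a state to a numpy array of action
    values (each sub-agent only covers its own subset of states)."""
    best_agent, best_action, best_value = None, None, -np.inf
    for idx, q_values in enumerate(agent_reports):
        if state not in q_values:
            continue  # this sub-agent never explored the current state
        action = int(np.argmax(q_values[state]))
        if q_values[state][action] > best_value:
            best_agent, best_action, best_value = idx, action, q_values[state][action]
    return best_agent, best_action
```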
What About Observability and Explainability?
According to Section 2.3.5, two advantages of the coupled-model-oriented architecture that improve observability and explainability are the increase in understanding of the model (explainability) and the possibility of making the internal signals of the learning algorithm observable (observability). Learning models are called black boxes because it is often difficult to understand the mechanisms that lead to results that are correct but hard to explain. The use of coupled models allows for a greater hierarchy of description and, therefore, a functional breakdown of the learning algorithm. The workflow State-Reward-Done/Get Action/Update Q makes it possible to isolate the basic functions of the learning algorithm and to have access to observable signals such as the actions, states, and rewards involved in learning. The increase in observability allows a better understanding of the mechanism that leads the agent to converge towards the best decision-making policy. The observability and explainability provided by the coupled-model approach are also achievable with the atomic-model approach, but the latter is less modular. With the coupled-model approach, the developer does not need to insert debugging instructions inside the code to observe a property of the system, nor to implement a new function (or method, in the case of an object-oriented approach) to introduce or change the behavior of the system.
In recent years, many XAI tools (InterpretML, LIME, SHAP, Seldon Alibi, etc.) have been developed and integrated into ML workflows. These tools focus on making ML models understandable, at least visually, by showing how much particular features influenced a prediction. The development of explainable AI (XAI) tools is nevertheless still in its infancy, and several questions remain open:
1. How do we incorporate explainability into our experiences without detracting from the user experience or distracting from the task at hand?
2. Do certain processes or specific information need to be hidden from users, and how do we explain them to users?
3. Which segments of our AI decision-making process can be easily digestible and explainable to users?
In general, answering those questions gives us valuable insights in terms of explainability for carrying out projects in the main application areas in Computer Science: debugging, informing feature engineering, directing future data collection, informing human decision-making, building trust, regulatory compliance, and high-risk applications.
In terms of building trust, let us first stress the main difference between the three models, according to the taxonomy described in the paragraph dedicated to XAI of the State of the Art.
Time \ Scope    | Local     | Global
Intrinsic       | LSTM, ESV | DEVS
Model-Specific  | LSTM, ESV | DEVS
Post-Hoc        |           | DEVS
Model-agnostic  |           | DEVS

Table 3.1 Comparison using the XAI taxonomy.
As synthesized in Table 3.1, DEVS is global and post hoc. It is global because it leads the user to trust the model, not just the prediction, and it is post hoc in the sense that it will turn a model-free portfolio optimization based on modeling and simulation into a simpler model-based optimization model. A determined weighted portfolio with specific stock volatility leads to a determined portfolio reallocation. LSTM is intrinsic or model-specific and local. Our LSTM model leads the user to trust the prediction for a specific stock because we can compare the prediction results with the real values of the stock. Our LSTM is also local because it offers an explanation of the specific prediction and answers the following question: How do we evaluate the precision of the prediction made by the model?
Explicit Time in Q-Learning
In MDPs, the time spent in any transition is the same. However, this assumption does not hold for many real-world problems.
In [START_REF] Bradtke | Reinforcement learning methods for continuous-time markov decision problems[END_REF], the authors extend the classical RL algorithms developed for MDP and semi-Markov decision processes. Semi-MDPs (SMDPs) extend MDPs by allowing transitions to have different durations (t). In [START_REF] Pardo | Time limits in reinforcement learning[END_REF], the authors considered the problem of learning optimal policies in the time-limited and time-unlimited domains using time-limited interactions (limited number of steps k) between agents and environment models. The notion of time is explicitly considered, but only in terms of the limited time T considered to maximize the total reward assigned to the agent who tries to maximize the discounted sum of future rewards:
$$G_{t:T} = \sum_{k=1}^{T-t} \gamma^{k-1} R_{t+k}$$
In [START_REF] Mahadevan | Self-improving factory simulation using continuous-time average-reward reinforcement learning[END_REF], the authors introduce a new model-free RL algorithm to solve SMDP problems under the average-reward model. In addition, in [START_REF] Sutton | Between mdps and semi-mdps: A framework for temporal abstraction in reinforcement learning[END_REF], the authors introduce the theory of options to bridge the gap between MDPs and SMDPs. In SMDPs, temporally extended actions or state transitions are considered as indivisible units; therefore, there is no way to examine or improve the structures inside extended actions or state transitions. The option theory introduces temporally extended actions or state transitions, called options, as temporal abstractions of an underlying MDP. It allows to represent components at multiple levels of temporal abstractions and the possibility of modifying options and changing the course of temporally extended actions or state transitions. In [START_REF] Rachelson | A simulation-based approach for solving generalized semi-markov decision processes[END_REF], the authors explore several approaches to model a continuous dependence on time in the MDP framework, leading to the definition of Temporal Markov Decision Problems. They then propose a formalism called Generalized Semi-MDP (GSMDP) in order to deal with an explicit event modeling approach. They establish a link between the Discrete Event Systems Specification (DEVS) theory and the GSMDP formalism, thus allowing the definition of coherent simulators.
In the case of explicit time, the time of a transition has to be taken into account. The time between actions is an explicit variable (which may be stochastic) and depends on the state and the action. This transition time is known as the notion of sojourn [START_REF] Rubino | Sojourn times in finite markov processes[END_REF]. DEVS Markov models [START_REF] Seo | Devs markov modeling and simulation: Formal definition and implementation[END_REF] are capable of explicitly separating the probabilities specified on transitions from those defined on times/rates. Furthermore, the dynamic properties involved in the DEVS formalism allow one to dynamically modify these specifications during model simulation. Transition probabilities are classically associated with a Markov chain, while transition time/rate probabilities are associated with the sojourn notion [START_REF] Cammarota | Entrance and sojourn times for markov chains. application to (l, r)-random walks[END_REF]. This modeling feature associated with Markov chains offers the possibility of explicitly and independently defining transition probabilities and transition times.
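The following sketch illustrates this separation in Python: the next state is drawn from a transition probability table while the time advance is drawn from an independent sojourn-time distribution. The two-state example, the exponential law, and all names are assumptions used for illustration only.

```python
import random

# Illustrative separation of transition probabilities and sojourn times.
TRANSITIONS = {                       # P(next_state | state): Markov-chain part
    "idle": {"idle": 0.7, "busy": 0.3},
    "busy": {"idle": 0.5, "busy": 0.5},
}
SOJOURN_RATE = {"idle": 0.2, "busy": 1.0}   # rates of the (assumed) exponential sojourn times

def next_state(state):
    """Draw the next state from the transition probabilities (delta_int-like behavior)."""
    states, probs = zip(*TRANSITIONS[state].items())
    return random.choices(states, weights=probs)[0]

def time_advance(state):
    """Draw the sojourn time in the current state (ta function), independently of the
    transition probabilities, as allowed by DEVS Markov models."""
    return random.expovariate(SOJOURN_RATE[state])
```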
The following section presents the implementation of the Agent-Environment model in the DEVSimPy environment.
DEVSimPy Modeling and Simulation of RL
DEVSimPy is a graphical environment that allows the modeling and simulation of DEVS models in the Python language [START_REF] Capocchi | DEVSimPy software[END_REF]. We have implemented the Agent-Environment model as atomic models in a library called "RL" (left part of Figure 3.8). The Agent.amd and Env.amd files implement the agent and environment models according to the Q-learning or SARSA algorithm.
Implementing an RL model in DEVSimPy consists of dragging and dropping one or more Agent and Environment models in the right panel, interconnecting them, and then configuring them. The configuration goes through the definition of the behavioral properties mentioned above, but it is also necessary to define the methods specific to the field studied. It is the environment model code that needs to be adapted. As mentioned in Appendix 8, the methods GetInitState(inputs1N), GetEndState(inputs1N), and GetStateActionMap(inputs1N) must be implemented based on the inputs of models 1 through N (inputs1N). If the environment is a coupled model, it will also be necessary to define other DEVS models which will participate in the definition of the state/action map.
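The fragment below sketches how a domain-specific environment could implement these three methods; only the method names come from the DEVS-RL library description, while the placeholder base class and the bodies are illustrative assumptions.

```python
from itertools import product

class Env:
    """Placeholder for the generic Environment class of the DEVS-RL library (assumption)."""
    pass

class MyEnv(Env):
    """Domain-specific environment: only the three hooks below need to be adapted."""

    def GetInitState(self, inputs1N):
        # Build the initial state from the values received on input ports 1..N.
        return tuple(int(v) for v in inputs1N)

    def GetEndState(self, inputs1N):
        # Goal state of the decision problem (placeholder rule: maximum level on every input).
        return tuple(9 for _ in inputs1N)

    def GetStateActionMap(self, inputs1N):
        # Enumerate the reachable states and the actions allowed in each of them
        # (here: every combination of levels 0..9, with actions -1/0/+1 per input).
        states = product(range(10), repeat=len(inputs1N))
        return {s: [-1, 0, 1] for s in states}
```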
The Pursuit-Evasion Case Study
In order to illustrate the use of DEVSimPy to implement an RL framework (involving multiple agents), we propose to use a case study known as Pursuit-Evasion [START_REF] Hespanha | Multiple-agent probabilistic pursuitevasion games[END_REF]. Thanks to this example, we show that DEVS makes it possible to build RL models by combining Agents with different learning logics. Indeed, in this example, one agent uses a "Best Escape" algorithm to choose its action instead of following the ε-greedy algorithm, for example.
One or more pursuing agents, here called the "cat", must catch one or more fugitive agents, called the "mouse". This type of problem is distinguished from Search and Rescue problems in the sense that the fugitive agent tries at all costs to avoid its pursuers. The cat-and-mouse game is played turn by turn: each agent performs an action and must wait for all other agents to perform an action before executing the next one. In this example, we consider only two agents: one mouse and one cat. We also consider a discrete time: each action corresponds to a unit of time t ∈ τ with τ = {1, 2, . . . , T}. Reflection and observations are considered to be performed in zero time.

Fig. 3.8 RL library in the DEVSimPy software. The Properties panel of the Agent and Environment models allows one to configure these models. The algo property of the agent allows one to select the learning algorithm between Q-Learning and SARSA. Properties γ, ε, and α are related to the Bellman equation (see Section 2.3.3). Concerning the Env model, the option goal_reward allows us to define the reward as a goal (when goal_reward = True is checked) or as a penalty (when goal_reward is unchecked) (see Section 2.3.2.3).
Like time, the space of the problem is discrete. The space in which the game unfolds is therefore a finite and discrete set of cells x = {1, 2, . . . , X} arranged in two dimensions. All movements are discrete and correspond to the displacement of an integer number of cells (one cell by default). We formally define the distance between Agents 1 and 2 as the number of actions necessary for Agent 1 to occupy the place of Agent 2, assuming that the latter does not move. This amounts to calculating the Manhattan distance if agents are not allowed to use diagonals.
Two essential notions of the environment in RL are the distance and the identification of a state. The distance is used as a reward function since it measures how far the cat is from the mouse. The identification of a state corresponds to the position of an agent (cat or mouse) in the matrix. For example, in a 9x9 matrix, there are 81 states, each corresponding to a given position.
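These two notions can be written down directly; a minimal sketch, assuming positions are (row, column) pairs in a square grid indexed row by row:

```python
def manhattan_distance(pos_a, pos_b):
    """Number of unit moves needed for agent A to reach agent B (no diagonals)."""
    return abs(pos_a[0] - pos_b[0]) + abs(pos_a[1] - pos_b[1])

def state_id(pos, width=9):
    """Identify a state from an agent position in a width x width grid (81 states for 9x9)."""
    return pos[0] * width + pos[1]
```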
A cat is an agent whose goal is to catch a mouse agent. Concretely, this means performing a series of actions that lead to a final state where the cat-mouse distance is zero. At the level of perception, a cat has perfect acuity: at any time, it knows the position of all agents in the space. Therefore, information on the state of the environment is complete and correct. A cat can perform 5 actions, which are listed below:
• Action 0: Stay still.
• Action 1: Up, move one box north.
• Action 2: Right, move one space to the East.
• Action 3: Down, move one box to the South.
• Action 4: Left, move one box to the West.
Note that our cat cannot move diagonally. The mouse agent enjoys exactly the same properties as the cat: a global vision of the space and the same five possible actions.
Our mouse aims to escape the cats. We have decided to develop a custom algorithm called Best Escape in order to determine the way the mouse moves. The principle of the Best Escape algorithm is very simple. It consists first of all of refusing any movement which would throw the mouse into the claws of a cat or any movement which would lead the mouse into the immediate neighborhood of a cat (i.e., an adjacent box); these two actions would, in fact, be synonymous with certain death. For each of the remaining actions, we compute the escape value, i.e., the sum of the squares of the distances between the arrival point of the action and each of the cats present in the game. The goal is obviously to maximize this value, so we choose the action that gives the highest possible escape value. If several actions have the same escape value, we take the first one from the list. The choice is therefore completely deterministic and can be predicted for each given situation. We introduced a fear factor as a variable in the mouse agent. This very simple parameter models the fact that the mouse may not perform any action at a given moment, regardless of the rest of the environment; it can also be seen as the mouse being frozen in fear.
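A minimal sketch of this choice rule is given below. The helper names and the way the fear factor is tested (as a simple probability) are assumptions made for illustration, and apply_action is expected to return the cell reached by a given action.

```python
import random

def squared_escape_value(pos, cat_positions):
    """Sum of squared Manhattan distances between a candidate cell and every cat."""
    return sum((abs(pos[0] - c[0]) + abs(pos[1] - c[1])) ** 2 for c in cat_positions)

def best_escape(mouse_pos, cat_positions, actions, apply_action, fear_factor=0.0):
    """Choose the mouse action: refuse deadly moves, then maximize the escape value."""
    if random.random() < fear_factor:
        return 0  # frozen in fear: the mouse stays still (action 0)
    best_action, best_value = 0, -1
    for a in actions:
        new_pos = apply_action(mouse_pos, a)
        # Refuse any move landing on a cat or in its immediate neighborhood.
        if any(abs(new_pos[0] - c[0]) + abs(new_pos[1] - c[1]) <= 1 for c in cat_positions):
            continue
        value = squared_escape_value(new_pos, cat_positions)
        if value > best_value:  # ties keep the first action in the list
            best_action, best_value = a, value
    return best_action
```

If no candidate move survives the safety check, the function falls back to action 0, which matches the rule that the mouse remains on the spot when no escape is possible.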
The main difference between DEVS multi-agent learning and the single-agent case is that the learning agent is connected to a DEVS coupled model instead of an atomic model (Figure 3.9). The learning agent (CatAgent) in Figure 3.9 is connected to the coupled DEVS model (called Environment) embedding other agents (such as MouseAgent), the global environment vision (EnvironmentMat), and the specific environment (CatEnv) associated with the learning agent.
The global environment (EnvironmentMat) defines the space in which the agents will evolve. In our implementation, this space is a simple matrix of integers: 0 means an empty box, and each other integer denotes an agent (1 for the cat and 2 for the mouse). It is quite possible to imagine extending this environment to, for example, a search in a space with 3 dimensions or more. An action is denoted by an integer and corresponds to the movement of the agent in question. While the number of possible actions is not problematic in itself, it is also necessary to assign to each action the corresponding movements of the concerned agent. The chosen implementation consists of a simple dictionary that assigns, for each action, the corresponding movements.
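A possible sketch of this representation is shown below; the grid size, the starting cells, and the clipping at the borders are illustrative assumptions, not the exact values of the thesis implementation.

```python
import numpy as np

# Global environment: a simple matrix of integers (0 = empty, 1 = cat, 2 = mouse).
GRID_SIZE = 9
environment_mat = np.zeros((GRID_SIZE, GRID_SIZE), dtype=int)
environment_mat[0, 0] = 1   # cat starting cell (assumed)
environment_mat[8, 8] = 2   # mouse starting cell (assumed)

# Dictionary assigning, for each action, the corresponding movement (d_row, d_col).
MOVES = {
    0: (0, 0),    # stay still
    1: (-1, 0),   # up (North)
    2: (0, 1),    # right (East)
    3: (1, 0),    # down (South)
    4: (0, -1),   # left (West)
}

def apply_action(pos, action):
    """Return the new position, clipped so the agent stays inside the grid."""
    d_row, d_col = MOVES[action]
    row = min(max(pos[0] + d_row, 0), GRID_SIZE - 1)
    col = min(max(pos[1] + d_col, 0), GRID_SIZE - 1)
    return (row, col)
```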
The heart of the Q-Learning algorithm [START_REF] Sutton | Introduction to Reinforcement Learning[END_REF] is implemented in the agent (CatAgent) and is associated with the atomic model of the specific environment (CatEnv). From the observation of the new environment state, the reward to associate with the action is computed. Both the new state and the computed reward are sent to the input of the CatAgent atomic model. The Q-Learning algorithm assumes that CatAgent receives a reward and a new state after performing an action.
The mouse agent does not use an RL technique. To choose its action, the mouse uses the Best Escape algorithm: the mouse tries to minimize the sum of the squares of the distances between itself and the cats, and, if no solution is possible, the mouse remains on the spot.
The simulation has been performed using the fear factor parameter of the mouse agent (value 0 with the fear factor, value 1 without). Figures 3.10 and 3.11 show the evolution of the mean Q value during the learning phase with and without the fear factor between a mouse agent and a cat agent. Notice that since the mouse cannot move, Figure 3.10 shows a regular curve. Figure 3.11, obtained when the mouse tries to escape, is not regular: the irregularities of the curve reflect the fact that the mouse is escaping.
In this example, only one cat agent and one mouse agent are involved. However, when considering multiple cats, one of the questions is to decide whether the agents will perform the learning phase collectively as a team or individually. Furthermore, when considering multiple mice, we have to decide whether the goal is finally to catch a mouse or all the mice. These problems are general problems inherent to multi-agent learning, since research in this domain focuses on studying software agents that learn and adapt to the behavior of other software agents [START_REF] Shoham | Multi-agent reinforcement learning: a critical survey[END_REF]. In addition, the presence of other learning agents complicates learning, since the environment may become non-stationary (a situation of learning a moving target similar with the fear factor of the mouse agent). This will be a problem we will return to in the case study of this thesis.
Of course, the reward function will also be harder to define since it depends on both the learning agents and the multiple and moving targets. An RL agent learns by interacting with its dynamic environment. At each time step, the agent perceives the state of the environment and takes an action, which causes the environment to transit into a new state. A scalar reward signal evaluates the quality of each transition, and the agent must maximize the cumulative reward throughout the course of interaction. Well-understood, provably convergent algorithms are available for solving the single-agent RL task. Together with the simplicity and generality of the setting, this makes RL attractive also for multi-agent learning in an environment.
Conclusion
The discrete event-oriented modeling of the Agent-Environment system by the DEVS formalism allows us to explore the Q-Learning and SARSA algorithm in a more behavioral way. In general, the Q-Learning and SARSA algorithms are based on the nesting of two loops: a repetitive loop on a number of episodes, which includes a conditional loop on actions. The DEVS discrete event approach makes it possible to dissociate these two loops through the communication of two models (atomic or coupled) Agent and Env. This makes it easier to interact with the two learning algorithms in order to implement specific stopping conditions, for example. In addition, the modularity provided by the DEVS formalism makes it possible to divide the algorithms into an interconnection of atomic models, thus improving the experimentation of new calculation methods to update the variable Q or the method of allocating new actions by the agent.
The coupled models approach has the advantage of facilitating the implementation of a multi-agent model for RL. Indeed, the Agents coupled model can be composed of an interconnection of Agent atomic models that communicate both with each other and with the environment. The DEVS formalism is a good candidate for setting up a multi-agent model. In addition, the possibility of simulating coupled models in parallel thanks to PDEVS is an interesting avenue when one wants to set up multi-agent models which can lead to significant simulation times. We do not present this aspect in this thesis.
DEVS is based on a formal representation of time in finite-state automata. This property is not highlighted in this thesis. However, when we talk about delayed reward, we can think of a different simulation-time exploitation than the one implemented in the atomic models presented in this thesis. It would then be interesting to see what the consequences would be for the final policy.
Chapter 4
Case Study: Leverage Effects in Asset Management Optimization Processes
Introduction
The case study deals with a decision-making process carried out by traders during the process of optimizing asset management, which may lead to a leverage effect. The first thing to consider is how a human trader would perceive their market environment. What observations would they make before deciding to make a trade? A trader would most likely look at some charts of the price action of a stock, possibly overlaid with a few technical and macroeconomic indicators. From there, they would combine this visual information with their prior knowledge of similar price action to make an informed decision on the likely direction of the stock. Figure 4.1 shows the human-driven trading process. Typically, agents, whether humans or AI, perceive the market environment by observing characteristics of stocks such as the open price, high, low, close price, daily volume, etc. for a certain number of days, as well as other data points such as their account balance, current stock positions, and current profit. Then, a human trader considers the price action leading up to the current price, as well as the status of their own portfolio, in order to make an informed decision about their next action. Once a trader has perceived their market environment, they need to take action. The range of actions available to human traders consists of three possibilities: buy a stock, sell a stock, or do nothing (wait).
Here, we come to the most interesting part of the topic of human decision-making. In most AI agents developed to date (the AI-driven approach in Figure 4.1), the AI agent behaves like a traditional human trader. In other words, to solve a given problem, it is necessary to understand how current actions result in future rewards. However, the only proposed approach to understanding the market is to buy or sell an AAA-rated stock in small amounts every month or period of time, in order to maximize the probability of buying at a lower price and selling at a higher price. This simple routine is basically their portfolio optimization decision process: it allows them to earn money (their reward) from the transaction fees charged to their customers. AAA is the highest rating assigned to any debt issuer; we interviewed dozens of bankers, and all applied the same strategy. To predict the total future reward that will result from an action, it is often necessary to take many steps into the future by trying strategies that may depend only on the present and not on a consolidated past.

Fig. 4.2 To address the issue of risk aversion, traders trade a small portion of their portfolio at a regular pace, such as quarterly, monthly, or weekly. The supervised AI agent has a different reward system to reduce human biases and to produce a leverage effect (make more money) with a more sustainable long-term strategy.
The question of constructing the reward becomes central. The following rule is usually applied: banks classically want to incentivize profits that are sustained over long periods of time. At each step, they set the reward as the balance of the account multiplied by some fraction of the number of time steps elapsed so far. When traders use AI agents, the purpose of this is to avoid rewarding the agent too quickly in the early stages and to allow it to explore sufficiently before optimizing a single strategy too deeply. It also rewards agents who maintain a higher balance for longer, rather than those who rapidly gain money using unsustainable strategies. But why? The underlying assumption is that rapidly gaining money means, by default, using unsustainable strategies.
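As a rough sketch of this rule (the normalization by max_steps and the variable names are assumptions, not the banks' exact formula):

```python
def bank_style_reward(balance, current_step, max_steps):
    """Reward sustained profits: the account balance weighted by the fraction of the
    trading horizon already elapsed, so early gains are rewarded less than late ones."""
    return balance * (current_step / max_steps)

# Example: the same balance yields a larger reward late in the horizon than early on.
print(bank_style_reward(10_000, 10, 1_000))   # early step -> small reward
print(bank_style_reward(10_000, 900, 1_000))  # late step  -> large reward
```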
As introduced in Figure 4.2, our research effort aims to provide samples and results that prove the contrary: this reward policy is probably the best one for large human trading institutions, such as banks, which make money by taking a percentage of the transaction cost rather than from the pure financial performance of the portfolio, but not for scalpers, as in our case study. In this section, we demonstrate how the DEVS (Discrete Event System Specification) formalism can be used to create a formal framework for an RL (reinforcement learning) system, resulting in improved modularity and clearer explanations of the AI models. A DEVS-RL (DEVS with reinforcement learning) simulation model will be developed to study the effects of leverage on financial assets and to compare the effectiveness of the proposed approach with other commonly used AI tools. The modularity of the approach will also be highlighted, as it can be combined with other AI algorithms to create decision and prediction models through simulation. The validity of the approach will be tested through the implementation and analysis of a simulation model in a first experiment. In addition, compatibility and combination studies will be conducted with other AI algorithms to further assess the effectiveness of the approach and examine its compatibility with other intelligent learning models.
First Experiment
In our case study, let us consider the possibility of raising funds by borrowing capital. The Agent's gains or losses are driven by a machine learning algorithm. The leverage effect will be estimated by calculating the difference between the evolution of the value of the initial portfolio (the basket of indexes bought with the borrowed capital) without any buying or selling, and the total value of the portfolio driven by the machine learning algorithm embedded in the Agent.
We propose four scenarios. In simulation number 1, named initial cash=0, the internal financing (IF) is 705$ and the leverage effect borrows capital equal to 10 times the IF, that is, 7053$, which is invested in an investment portfolio (equivalent to a state) of 1 to 3 stock indexes (index_a, index_b, index_c) among the main world indexes (CAC40, DJI (Dow Jones Industrial Average), and IXIC (Nasdaq Composite)). In the three other simulations, the Agent holds the same amount in stocks as in simulation number 1, plus the possibility to invest 3 different extra cash values. The goal is to hold a portfolio containing between 0 and N-1 units of each of these indexes so as to obtain at any moment the maximum possible value by adding together the values of all held indexes (see equation 4.1).
$$\max \sum_{i \in \{a,b,c\}} n_i \cdot \mathrm{index}_i \quad \text{with } n_i \in \{0, \ldots, N-1\} \qquad (4.1)$$
Investing IF in stock market indexes rather than in company-specific shares makes it possible to take into account the evolution of the environment of financial markets. Indeed, the volatility of the indexes that represent the trend of the best (or the average of all) shares of the market reflects the behavior of the major agents who influence market trends and its environment (correction, bullish, bearish, etc.). It is important to note how much the environment (volatility of the indexes or new cash inflow to buy additional indexes) can have an impact on our simulation. In fact, the change in the value of an index or the availability of new cash will influence, for example, the number of states or their length of life. Our simulation example will deal with the policy to be followed in the case of a change in environment related to the increase of the Agent cash availability and indexes volatility.
In an RL system, the agent learns from the rules. In our case, the rules are as follows:
• The whole set of agent states is finite and each state value is calculated from equation 4.1,
• The Agent can take one action at a time, among 3 actions (buy, sell, wait) that are chosen on the basis of the indexes volatility,
• The Agent uses a goal-reward representation [START_REF] Sutton | Introduction to Reinforcement Learning[END_REF] and gets a reward different from 0 only when it reaches the goal state corresponding to the maximum value of the investment portfolio.
• The Agent uses the single-goal approach that considers only one goal state of policy research.
Taking into account our case study, an MDP can be formalized as follows.
• The finite set of states is $S = \{(s_0, \ldots, s_k)\,|\,k \in \mathbb{N}, s_i \in [0, N-1]\}$ and the size of the state space is $|S| = N^k$, with N the multiplicity of the indexes. If N = 8, 512 possible states can be considered. Each state can be reached from every other state.
• The nonempty set of goal states G ⊆ S.
• The finite set of actions is $A = \{(a_0, \ldots, a_k)\,|\,k \in \mathbb{N}, a_i \in \{-1, 0, 1\}\}$ (-1 for selling, 1 for buying, and 0 for waiting), and the total number of actions is $|A| = 7$. All actions are deterministic, and an ε-greedy algorithm is used to choose them with an exploitation-exploration approach. One action at a time is possible and is controlled by a parameter M = 1.
• According to the goal-reward representation [START_REF] Sutton | Introduction to Reinforcement Learning[END_REF], where the agent is compensated for entering a goal state but is not compensated or penalized otherwise, the reward function $r : S \times A \rightarrow \{0, 1\}$ is defined as
$$r(s, a) = \begin{cases} 1 & \text{if } s \text{ is a goal state} \\ 0 & \text{otherwise} \end{cases}$$
In our case and due to the goal-reward approach, the convergence of the Q-Learning algorithm can be obtained by following the Q matrix until a stable value is obtained [START_REF] Neuneier | Enhancing q-learning for optimal asset allocation[END_REF]. Line 2 of algorithm 2.11 in Section 2.3.3 could be replaced by "Repeat until Q converges".
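A minimal sketch of such a convergence test (the tolerance value is an assumption):

```python
import numpy as np

def q_has_converged(q_old, q_new, tol=1e-6):
    """Stop the episode loop when the Q matrix no longer evolves significantly."""
    return np.max(np.abs(q_new - q_old)) < tol
```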
The goal is to implement the case study of an IF invested in 3 stock indexes, CAC40, DJI and IXIC, using the DEVS-RL library and the DEVSimPy M&S environment [START_REF] Capocchi | DEVSimPy: A collaborative python software for modeling and simulation of DEVS systems[END_REF]. A DEVSimPy model is proposed using the Agent-Environment RL approach combined with a discrete event Q-learning algorithm. Simulations have been performed to obtain optimal policies that are able to manage an IF invested in stock market indexes.
DEVSimPy Modeling
• The CAC40, DJI, and IXIC DEVS models: They are generator models that send on their output the index values collected during a period (stored in a csv file) or the real index values from the real stock market (using a REST API request). When a control message is received on the input port, the new index value is obtained and emitted on the output port.
• The Env atomic DEVS model: It is an environment component according to the Qlearning algorithm. When all the inputs are received, the possible states/action tuples are computed, and the reward is defined to 1 on the goal state. The model interacts with the Agent model through its port 0 to send the new tuple state/reward (line 4 in algorithm 2.11) and on its port 1 to activate the Agent model for a new episode (line 2 in algorithm 2.11).
• The Agent atomic DEVS model: It is an agent component according to the Q-learning algorithm. When a message is received on port 0, the agent updates the Bellman equation through its external transition function and sends to the Env model an action depending on the exploration/exploitation configuration defined in the model. When all steps are calculated and the goal is reached (line 9 of algorithm 2.11), the agent gives the best possible policy and sends a message on port 1 to the index generator models.
The next section is dedicated to the Q-Learning DEVSimPy simulations of this case study model. Two cases are considered: the single-episode case, where the previous Q-Learning algorithm is executed only once, and the multiple-episode case, where the Q-Learning is updated depending on the evolution of the indexes controlled by the generators.
DEVSimPy Simulation
This section presents two types of simulation schemes. In the single-episode case, only one set of indexes is simulated, and only one best possible policy is obtained at the end of the simulation. In the multiple-episode case, the indexes from 1991-01-02 to 2018-07-05 are simulated, and all optimal policies are stored and analyzed to validate our approach. The models are configured as follows:
• For the Env model: cash = 10000$; M = 1; N = 8; init state = (1 IXIC, 1 CAC40, 2 DJI). These properties have been added to the initial Env class of the DEVS-RL library presented in Section 3.4.
• For the Agent model: α = 0.8, ε = 1, and discount factor γ = 0.95. These properties were already available in the initial Agent class presented in Section 3.4.
The end of the simulation is obtained with the convergence of matrix Q. The simulation results give the path of state/action to reach the optimal goal state (0 IXIC, 0 CAC40, 6 DJI) among 250 possible states and 7 possible actions:
(1, 1, 2) --[0,-1,0]--> (1, 0, 2) --[0,0,1]--> (1, 0, 3) --[0,0,1]--> (1, 0, 4) --[0,0,1]--> (1, 0, 5) --[-1,0,0]--> (0, 0, 5) --[0,0,1]--> (0, 0, 6) --> wait
Due to the Q-Learning algorithm, the proposed path can reach the goal state with a minimal number of transitions (ε = 1) and without cash loss. For all values of each index, the agent gives the best possible policy to obtain a leverage effect depending on the initial cash and the initial state. Scenarios with different initial cash values have been simulated and analyzed with a new algorithm that consists of determining, for all tuples' state/action, the maximum of the Q value during the evolution of the indexes.
The next scenarios are defined with an initial cash equal to 0$, 8000$, 16000$, and 24000$ added to the previous general setting. Figure 4.6 shows the size of the episodes and the number of steps during the simulations. One step is equivalent to a message between the Agent and the Environment, and an episode consists of the steps needed to find the best possible policy. As shown in Figure 4.6, the size of the episodes (and the number of steps) becomes shorter as the initial amount of cash is higher. It seems correct that the state (7,7,7) is reached earlier with a higher amount of initial cash, and the episode sizes are ordered following the cash availability. Between days 0 and 2000, concerning the number of steps, there is a clear correspondence between the number of steps and the number of states generated by the amount of cash. In the specific case of initial cash 0, the growth of the number of steps to the same level as the other scenarios shows that there is a real growth in the value of the state from day 4000 to the last day of the simulation. Figure 4.7 shows that the index multiplicity is reached more rapidly in the init cash 24000 scenario, which confirms that the Agent acts to reach the best policy in the shortest "time", and the three other scenarios respect this rule. Figure 4.6 validates that the agent behaves so as to maximize the value of the portfolio; indeed, during the simulation period, an action is taken for a specific state only at the best time. The simulations validate that our Agent behaves correctly following the volatility and values of the indexes, as shown in Figure 4.9, which also validates that the Agent respects the learning rules: even when the values of all indexes are dropping, the Agent continues to try to reach the best possible state and, at the same time, to invest the maximum amount of cash available. Figure 4.9 also validates that cash is not a limiting factor in obtaining the best investment policy.
Figure 4.10 clearly validates that the Agent correctly invests the cash to try to reach the best investment policy as soon as it can; indeed, the time needed to reach a residual cash close to 0 is shorter with a higher initial cash, and the three other scenarios respect the same order. Figure 4.10 also validates that the Agent produces a real leverage effect even with the lowest initial cash. Indeed, we can compare the evolution over the period of the value of the initial investment portfolio (1 IXIC, 1 CAC40, 2 DJI), which reaches a value of 63 thousand on 2018-07-05, with the value of the same initial portfolio driven by the Agent. In this second case, the final value of the portfolio is 252 thousand, i.e., a leverage effect 4 times higher than the same portfolio with no trades (buy and sell), and the four scenarios reach this value following the residual cash order.
Discussion
This research effort aimed to provide samples and showed that DEVS can promote trust in algorithm-driven decision-making. Generally speaking, modeling is the process of representing a model, which includes its construction and working. One of the main steps in modeling is to create a model that represents a system, including its properties. To achieve a model, it is fundamental to establish an experimental frame, i.e., the specification of the conditions under which the system is observed or experimented with. Most of the time, the system is a mathematical object, but mathematical representation may be challenged by AI when modeling some real-world applications, in particular those with a high degree of uncertainty. The specification of the conditions needs a fairness approach. This fairness approach plays an important role in AI explainability. Machine learning fairness can be considered as a subdomain of machine learning interpretability that focuses solely on the social and ethical impact of machine learning algorithms by evaluating them in terms of impartiality and discrimination. To gain insight into the behavior of the system or to support a decision-making process, simulation helps to strengthen modeling processes.
One of the reasons we decided to use DEVS is because it helps us alter the model construction process and is a way to ensure fairness in machine learning models. The DEVS formalism will allow us to put into action a fundamental strategy to guarantee fairness in explainability. The first strategy, which is called suppression, detects the features that correlate the most, according to some threshold, with any sensitive features, such as stock market influencers' opinions or claims about economic results. To reduce the impact of sensitive features on model decisions, sensitive features are removed, along with their most correlated features, prior to training. This forces the model to learn from, and therefore to base its decisions on, other attributes, thus not being biased against certain market trends.
The first experiment shows how the observability made possible by DEVS makes it possible to visualize the step and episode signals, thus improving the explainability of the RL model and also strengthening confidence in the RL model. In addition, access to the cash signal, which is specific to the application, also makes it possible to validate the model simply by intercepting an event with an observer atomic model in the DEVS modular framework.
Compatibility and Combination
We decided to assess the compatibility of our DEVS-RL simulation model with the efficient frontier (EF) approach and its combination with an LSTM model. In this study, we focus on building-trust applications.
Efficient Frontier vs. DEVS
ESV Model: Our efficient frontier semivariance (ESV) portfolio consists of N stocks, with $S^0 = (s^0_1, \ldots, s^0_N)$ the set of initial values of each stock in the portfolio. The number of shares of each stock in the portfolio is denoted $X = (x_1, \ldots, x_N)$. The initial value of the portfolio $V_0$ is calculated as follows:
$$V_0 = \sum_{i=1}^{N} x_i s_i^0.$$
The decision on the number of shares in each asset is expressed as the weights $W = (w_1, \ldots, w_N)$, with the constraint $\sum_{i=1}^{N} w_i = 1$, defined by $w_i = \frac{x_i s_i^0}{V_0}$ for $i = 1, \ldots, N$. At the end of the period $t$, the values of the stocks change to $S^t = (s^t_1, \ldots, s^t_N)$, which gives the final value of the portfolio $V_t$ as a random variable:
$$V_t = \sum_{i=1}^{N} x_i s_i^t.$$
The returns of the portfolio assets, $(r_1, \ldots, r_N)$, are the random returns of each stock in the portfolio, with the vector of expected returns $\mu = (\mu_1, \mu_2, \ldots, \mu_N)$, where $\mu_i = E(r_i)$ for $i = 1, 2, \ldots, N$. The actual return $R_p$ of a multiple-asset portfolio over a specified period of time is easily calculated as follows.
$$R_p = w_1 r_1 + w_2 r_2 + \cdots + w_N r_N.$$
The expected portfolio return is a weighted average for each asset in our portfolio. The weight assigned to the expected return of each asset is the percentage of the asset's market value to the total market value of the portfolio. Therefore, the expected return E(R p ) = µ p of the portfolio at the end of the period t is calculated as follows:
$$E(R_p) = \sum_{i=1}^{N} w_i \mu_i.$$
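For illustration, $E(R_p)$ can be computed in a few lines of Python; the weights and expected returns below are placeholders, not the values used in the experiment.

```python
import numpy as np

weights = np.array([0.25, 0.25, 0.25, 0.25])            # w_i, one per index (placeholder)
expected_returns = np.array([0.12, 0.09, 0.07, 0.10])   # mu_i, annualized (placeholder)

portfolio_expected_return = float(weights @ expected_returns)  # E(R_p) = sum_i w_i * mu_i
print(f"E(R_p) = {portfolio_expected_return:.2%}")
```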
The ESV has been implemented in the Google Colaboratory Python Environment and is detailed in Appendix 9. The four main indexes of the US market are Nasdaq Composite (IXIC), Dow Jones Industrial Average (DJI), S&P 500 (GSPC), and Russell 2000 (RUT) have been chosen. For this model, the total value of the portfolio is equal to 100,000$ with a weight distribution of 25% per asset. The DEVS agent atomic model has been configured with α = 0.95, γ = 0.2, and ε = 1.0 (ε-greedy) as shown in the Properties panel in Figure 4.14. The DEVS Env atomic model has been configured with M = 1 (action multiplicity), cash = 0 (initial cash) and with the goal-reward activated as shown in Figure 4.15. The parameter N is manually fixed depending on the initial value of our portfolio. N is chosen with two constraints. The first is to avoid the risk that the agent will invest all its initial cash just in one asset (the one with the highest volatility) by limiting the number of stocks per asset (not all the eggs in the same basket). The second constraint is to invest at least all its initial cash. Among all possible values of N that satisfy the constraint, the final value that will be chosen is the one that will result in an expected annual performance close to the ESV benchmark, which is the optimal allocation of ESV or at least one of the allocations on the efficient frontier of ESV. Note that the higher the value of N, the larger the number of states to consider and the longer the search for the optimal policy. The initial state (the first state on the path formed by [IXIC, RUT, DJI, GSPC]) is defined to have the same total portfolio value as the ESV (100,000$). Now, let us experiment with the effect of search-by-exploration on the part of the agent. It is necessary to vary the parameter epsilon (for example, with the values (0.97, 0.95, 0.9, 0.8, 0.7, 0.6, 0.5)) in each simulation and observe the policy obtained at the end of the simulations. Figure 4.17 shows the tree obtained when we merge the paths of each simulation for the date September 1, 2022. From the initial state (2,2,2,2), the final state is reached by taking different paths depending on the value of ε noted on the arc. For example, if ε = 0.97 the end state is reached by going through 9 states ((2,2,1,2), (3,2,1,2), (4,2,1,2), (4,2,0,2), (5,2,0,2), (6,2,0,2), (7,2,0,2), (7,2,0,1), (7,2,0,0)) for more than 4 minutes of simulation time and the path is (2,2,2,2) → (2,2,1,2) → (3,2,1,2) → (4,2,1,2) → (4,2,0,2) → (5,2,0,2) → (6,2,0,2) → (7,2,0,2) → (7,2,0,1) → (7,2,0,0) → (8,2,0,0) → wait. If ε = 0.5, the number of intermediate states increases (13) and the simulation time is longer than 2 hours.
Discussion: With N equal to 8 and 14, the allocation of the DEVS-RL optimization portfolio offers higher expected returns than N = 4, but does not match the efficient frontier. In contrast, with N = 4 the DEVS-RL optimized portfolio becomes one of the solutions of the efficient frontier, with an expected annual return equal to 34.72%. The compatibility with ESV definitely suggests that our novel DEVS-RL portfolio optimization model based on the Markov property is valid and offers at least one of the possible solutions of an ESV model. Moreover, the same results in terms of optimization are achieved without the need to take historical returns into account, turning the optimization process into a simpler, interpretable model thanks to the benefits of the RL algorithm easily implemented with DEVS. This result also suggests that it would be worthwhile to further investigate the relationship between volatility, return, and volatility return, and to offer a novel approach to reconsider the market risk exposure as defined by VaR, i.e., the standard for banking and insurance companies. Figure 4.16 shows that around day 100 all DEVS-RL portfolio values with N equal to 4, 8, and 14 decrease by approximately 7%. The lost value is recovered after 20 days. Around day 100, all the portfolio indexes themselves lost about 4% to 6% of their value. It therefore seems important to anticipate discrete events such as a bear market using a prediction model such as LSTM or GARCH. In the next section, we introduce a combination of the LSTM and the DEVS-RL model to improve portfolio optimization using an index trend prediction model.
LSTM Model Combined with DEVS-RL
Separately or in combination, GARCH and LSTM are commonly used to predict the volatility of commodity market returns and stock indexes [START_REF] Koo | A hybrid prediction model integrating garch models with a distribution manipulation strategy based on lstm networks for stock market volatility[END_REF][START_REF] Zolfaghari | A hybrid approach of adaptive wavelet transform, long short-term memory and arima-garch family models for the stock index prediction[END_REF]. Recent work highlights that combining the information of the GARCH types into an LSTM model leads to a superior volatility forecasting capability [START_REF] Kakade | Forecasting commodity market returns volatility: A hybrid ensemble learning garch-lstm based approach[END_REF]. Like many other authors, our combined architecture, shown in Figure 4.21, uses GARCH and LSTM separately as well as together to drive portfolio construction based on the volatility of the risk model. The novelty of our approach is to propose a prediction model using GARCH and LSTM, separately or combined, and an optimization model based on DEVS-RL, with the possibility of benchmarking the portfolio optimization results against ESV.
Fig. 4.17 Family of paths obtained after simulations with different values of ε (noted on each arc). Depending on the selected ε, each path traces the evolution of the indexes. If ε is close to (resp. far from) 1.0, the path is obtained in a minimal (resp. maximal) time due to the exploitation (resp. exploration) policy executed by the agent (ε-greedy algorithm).
Our LSTM models have been coded under the Google Colaboratory Python Environment with the Google GPU accelerator (see Appendix 10). Figure 4.18 shows the four plots of our predictions for the index data from September 2012 to November 2022. Visually, the one-year-horizon predictions appear accurate.
To evaluate the prediction error rates and performance of the LSTM model, the root mean squared error (RMSE) was used to measure the difference between the predicted and actual data. The RMSE has been calculated as follows:
$$\mathrm{RMSE} = \sqrt{\frac{\sum_{i=1}^{n}(y_i - y'_i)^2}{n}},$$
where $Y = (y_1, \dots, y_n)$ are the actual values and $Y' = (y'_1, \dots, y'_n)$ the predicted values. The RMSE, that is, the average error between the predictions and the actuals, is shown in Table 4.1. The RMSE values confirm that our LSTM models compute predictions on a one-year horizon with a high degree of precision given the respective 2019 average price (AP).
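For reference, a small sketch of the RMSE computation used to score the predictions; the arrays below are placeholders, not the thesis data.

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean squared error between actual and predicted values."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Placeholder example with three daily index values
print(rmse([7900.0, 8000.0, 8100.0], [7890.0, 8020.0, 8085.0]))
```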
We also tried the same LSTM architecture with 10 to 100 epochs, but the predictions were very close (see Table 4.2). In Figure 4.20, the expected return lines for the real and predicted data follow the same trend, which means that the lowest points and the highest points of both lines occur simultaneously. The leverage effect for the predicted data is 10.82 points lower than that of the real data, probably due to the cumulative effects of the RMSE. But this is still an important signal to exploit for portfolio optimization, because it indicates when it is time to sell (highest points) and when it is time to buy (lowest points), no matter what the price is.
Discussion: Our LSTM models would probably need to be refined to obtain RMSE values closer to zero. However, the actual predictions of the LSTM model are useful for predicting bull and bear markets. A bull market is characterized by a rise in stock prices. On the contrary, a bear market is a period characterized by the beginning of a decline in stock prices. Formally, a bear market is defined as a period with a decline of 20% in the main index. In our case study, we consider any period with more than a week of price decline as a time signal of a bear market. The LSTM prediction will send a signal to DEVS-RL to seek or not to seek an optimized portfolio. During a bull market, predicted by the LSTM, DEVS-RL will launch the simulation to trade the portfolio in order to reach the optimal one. In contrast, with an LSTM-predicted bear market signal, DEVS-RL will not launch the simulation, to avoid trading in a period of declining stock prices with the certainty of losing portfolio value. The bull market and the bear market are discrete events, and the DEVS formalism perfectly fits this constraint.
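A possible reading of the "more than a week of decline" heuristic described above, assuming a pandas Series of predicted closing prices; the function name, the 5-trading-day window, and the data are our assumptions, not the DEVSimPy implementation.

```python
import pandas as pd

def bear_signal(predicted_close: pd.Series, window: int = 5) -> pd.Series:
    """True when prices have declined for more than `window` consecutive days,
    which the case study treats as a bear-market signal (do not trade)."""
    declining = predicted_close.diff() < 0
    # Length of the consecutive declining streak ending at each date
    streak = declining.groupby((~declining).cumsum()).cumsum()
    return streak > window

# DEVS-RL would launch a simulation only on days where the signal is False (bull market).
```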
ESV, LSTM and DEVS-RL Combination
The portfolio presented in Figure 4.18 was constructed by predicting the Bear and Bull markets. The DEVS-RL simulation may or may not use the Bear market signal to avoid simulating until the Bull market is predicted by the LSTM. Then, the optimal allocation of stocks for the constructed portfolio was evaluated by simulation and optimization modeling, where DEVS-RL and ESV were used to evaluate the optimal allocation weights of the stocks. As shown in Figure 4.21, the combination results in a model based on the forecasting of price returns through supervised neural networks, followed by a decision policy driven by an MDP model implemented with the DEVS formalism. In this combination section, we outlined the hypothesis that, while preserving computational efficiency, it is possible to improve the financial performance of the forecast-based approach through a better optimization of the trading agent's behavior. In particular, our aim is to propose lines of research for future work that will focus on more adequate ways of designing the training patterns from those price data features that change over time; new trading rules based on different forecast horizons; and the use of adaptation rules able to cope with transaction costs.
Conclusion
The case study deals with a decision-making process carried out by traders during an asset management optimization process that leads to a possible leverage effect. This section detailed a first experiment in which simulation models were used to show that our DEVS-RL combines short-term financial leverage with a sustainable strategy over a 30-year period. The validity of the asset management strategy is highlighted by earnings that reach three times the value obtained by naively managing three of the main world stock market indexes.
Furthermore, this section shows how the DEVS formalism makes it possible to implement an RL system in a formal framework, allowing one to benefit from better modularity and explainability of the AI models. We also highlighted the effectiveness of the proposed approach in terms of complementarity and combinability with other tools such as ESV and LSTM. We also provided evidence of how the modular aspect of DEVS-RL allows one to build decision and prediction models through simulation combined with other AI algorithms.
Chapter 5
Conclusions and Future Works
The objective of this thesis is to propose a decision-making management policy for complex systems evolving in highly dynamic environments, combining the DEVS formalism and the RL technique. This work raises a number of questions mentioned in the Introduction, which can now be answered.
What are the advantages of coupling DEVS simulation and AI techniques, particularly reinforcement learning, in the decision-making process of a complex system?
In the simulation, embedding ML lets the simulation learn some aspects of its behavior and apply these lessons directly. This coupled approach is useful when teaching ML algorithms in specific parts of the simulation model itself, such as the agent trying to find paths within the simulation model environment. To mitigate the risk of delegation to machines in decision-making processes, DEVS offers the opportunity to keep under observation the interactions between, and the separation of, the agent and the environment involved in a traditional RL algorithm such as Q-Learning. In RL, an agent seeks an optimal control policy for a sequential decision problem. Unlike in supervised learning, the agent never sees examples of correct or incorrect behavior. Instead, it receives only positive and negative rewards for the actions it tries. Since many practical real-world problems (such as robot control, game play, and system optimization) fall into this category, developing effective RL algorithms is important to the progress of AI. The Q-Learning algorithm needs to be formally split into a DEVS Agent component that interacts with a dynamic DEVS Environment component. This modular capability of DEVS permits total control of the Q-Learning loop in order to drive the algorithm's convergence.
How is the DEVS formalism able to improve the understanding of AI algorithms? Simulation models are built 'process-centric', while ML models are built 'data-centric'. The discrete event-oriented modeling of the Agent-Environment RL system by the DEVS formalism allows us to explore the Q-Learning and SARSA algorithms in a more behavioral way. In general, the Q-Learning and SARSA algorithms are based on the nesting of two loops: a repetitive loop over a number of episodes, which includes a conditional loop over actions. The DEVS discrete event approach makes it possible to dissociate these two loops through the communication of two models (atomic or coupled), Agent and Environment. This makes it easier to interact with the two learning algorithms in order to implement specific stopping conditions, for example. In addition, the modularity provided by the DEVS formalism makes it possible to divide the algorithms into an interconnection of atomic models, thus facilitating experimentation with new methods for updating the variable Q or for choosing the agent's next action.
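To make these two pieces concrete, here is a minimal, framework-free sketch of the Q update and of the ε-greedy action choice that the DEVS Agent model embeds; variable names and the dictionary-based Q table are illustrative, not the DEVSimPy implementation.

```python
import random

def choose_action(Q, s, actions, epsilon):
    """Epsilon-greedy: explore with probability epsilon, otherwise exploit."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q.get((s, a), 0.0))

def update_q(Q, s, a, r, s_next, actions, alpha, gamma):
    """One Q-Learning step: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in actions)
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (r + gamma * best_next - Q.get((s, a), 0.0))
```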
The coupled models approach has the advantage of facilitating the implementation of a multi-agent model for RL. Indeed, the Agents coupled model can be composed of an interconnection of Agent atomic models that communicate both with each other and with the environment. The DEVS formalism is a good candidate for setting up a multi-agent model. In addition, the possibility of simulating coupled models in parallel thanks to PDEVS is an interesting avenue when one wants to set up multi-agent models which can lead to significant simulation times. We do not present this aspect in this thesis.
How does the association of RL algorithms with DEVS simulation improve the management of leverage effects in financial management processes? The research work proposed in this thesis contributes to answering this question using a concrete case of optimizing the management of a portfolio of stocks with discrete event simulation. The DEVS-RL model results clearly validate that the Agent correctly invests the cash in order to reach the best investment policy. This research effort provided examples and showed that DEVS can promote trust in algorithm-driven decision making. We highlighted several advantages brought by the coupling of DEVS simulation and reinforcement learning in the decision-making process of a complex system: it offers the opportunity to keep the Q-Learning loop under observation, including all the interactions, and, due to the modular and hierarchical aspect of DEVS, it improves the separation between the agent and the environment involved in a traditional RL algorithm such as Q-Learning. We also gave examples of how the DEVS formalism is capable of improving the understanding and deployment of AI algorithms by mitigating the risk of delegation to machines in decision-making processes. We also provided evidence of how the association of RL algorithms with DEVS simulation improved the management of leverage effects in asset management by offering complementarity and combinability with the most explored algorithms and supervised AI in the data finance literature. The DEVS-RL model produced leverage effects three times higher than those of some of the most important marketplaces in the world over a thirty-year period, and may contribute to addressing Active portfolio theory with a novel approach that questions the need to distinguish between patient, aggressive, and conservative behavior. Bridging portfolio theory and RL, DEVS-RL combines all three in one: it maximizes returns like an aggressive approach, has a positive long-term (thirty-year) impact on yield and stability like a conservative approach, and earns income despite economic conditions, as a patient portfolio investor would wish.
Looking ahead, the next stages of our research will be to add to Q-Learning the estimation of the Bellman equation by a neural network (Deep Q-Networks). In fact, the Q-Learning algorithm based on table management reaches its limits when the number of states increases. The use of neural networks makes it possible to have an approximator of the variable Q and to consider a much larger number of possible states. This development should allow us to avoid having to estimate the rewards upstream and to respond to the increase in the number of states generated by considering new constituent factors of the environment.
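As a hedged sketch of this Deep Q-Network direction (not an implementation from the thesis), a small Keras network can replace the Q table by approximating Q(s, ·); the state dimension, the number of actions, and the layer sizes are placeholders.

```python
import numpy as np
import tensorflow as tf

state_dim, n_actions = 4, 9          # placeholders: size of the Markov state and action set
gamma = 0.2                          # same discount factor as the tabular experiments

q_net = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(state_dim,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(n_actions),          # one Q-value per action
])
q_net.compile(optimizer="adam", loss="mse")

def train_step(s, a, r, s_next, done):
    """Fit Q(s, a) towards the Bellman target r + gamma * max_a' Q(s', a')."""
    target = q_net.predict(s[None, :], verbose=0)[0]
    best_next = np.max(q_net.predict(s_next[None, :], verbose=0)[0])
    target[a] = r if done else r + gamma * best_next
    q_net.fit(s[None, :], target[None, :], epochs=1, verbose=0)
```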
Another solution for dealing with a large number of states is to consider a multi-agent approach. Instead of letting one single agent iterate in a single large environment, the task is divided among multiple agents that each explore a part of the state space. Each agent produces an optimal policy resulting from the execution of the Q-Learning algorithm, depending on the interaction with its environment. Once the supervisor has received all agent policies at each simulation time step, it determines the best combined policy and communicates to the single agent the action to be taken. The best combined policy is determined as follows: sell the indexes with the lowest or negative volatility and buy the ones with the highest volatility, as sketched in the example below. These two actions are limited by the amount of cash available to be added to the index prices. The agents interact through the supervisor. Agent policies communicated to the supervisor will represent an opportunity or a constraint for the other agents. The supervisor synchronizes the interaction between agents. Learning takes place inside the agents. The construction of the best combined policy is the result of a process of trial-and-error by reward return. If the environment changes, the process must be restarted. A policy is applicable for a given environment. In that context, we may consider the system as a dynamic-structure DEVS model, in contrast with the static structure of a coupled DEVS model, in which the number of agents/environments can vary over time and which can be executed in parallel. The supervisor model builds its policy by integrating all the policies and experiences that each agent has already learned.
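A minimal sketch of the supervisor's combination rule described above; the function signature, the volatility values, and the cash handling are our assumptions for illustration only.

```python
def combine_policies(volatility, prices, cash):
    """Supervisor rule: sell the index with the lowest (or negative) volatility,
    buy the one with the highest volatility, limited by the available cash."""
    to_sell = min(volatility, key=volatility.get)
    to_buy = max(volatility, key=volatility.get)
    actions = []
    if volatility[to_sell] <= 0 or to_sell != to_buy:
        actions.append(("sell", to_sell))
        cash += prices[to_sell]
    if cash >= prices[to_buy]:
        actions.append(("buy", to_buy))
        cash -= prices[to_buy]
    return actions, cash

# Example with hypothetical daily volatilities and index prices
actions, cash = combine_policies(
    {"IXIC": 0.012, "DJI": -0.004, "GSPC": 0.007, "RUT": 0.009},
    {"IXIC": 8000.0, "DJI": 26000.0, "GSPC": 2900.0, "RUT": 1550.0},
    cash=30000.0,
)
```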
To broaden the scope of application, the compatibility between the DEVS-RL portfolio optimization results and the ESV model results also suggests that it would be interesting, by integrating GARCH, to further investigate the relationship between volatility, return, and volatility return, and to offer a novel approach to reconsider market risk exposure as defined by the VaR, the standard for banking and insurance companies.
• state ∈ {'IDLE', 'UPDATE_STATE_ACTION_MAP', 'SEND_ACTION', 'SEND_QMEAN'} is the state space.
• Q = S_M × A is the Q matrix, with a number of rows equal to the number of Markov states S_M and a number of columns equal to the number of actions A.
• state_action_map is the Markov states-actions dictionary sent by the Environment model.
• stop is a flag indicating whether the learning process has converged and should stop.
• ε ∈ [0, 1] is the epsilon Q-Learning parameter.
• γ ∈ [0, 1] is the gamma Q-Learning parameter.
• α ∈ [0, 1] is the alpha Q-Learning parameter.
• s is the Markov state received from the Environment model.
• r is the reward received from the Environment model.
• d is the flag received from the Environment model.
• a is the current action chosen by the Agent model.
• S : state × sigma is the set of sequential states.
• δ int (S → S) :
1. if state is 'SEND_ACTION' then
2.   stop ← (new_QMEAN is old_QMEAN) and (ε is 1.0)
3.   state ← ('IDLE', ∞)
• δ_ext (Q × X → S) :
1. (p, v) ← peek(X)
• state_action_map is the Markov states-actions dictionary.
• init_state is the initial state of the Environment.
• end_state is the end state of the Environment (if the goal reward option is enabled).
• s is the current state of the Environment.
• a is the action received from the Agent model.
• inputs1N is the list of input messages detected in the input ports 1 to N (coming from generator models).
• S : state × sigma × s is the set of sequential states.
Our ESV model has been coded in the Google Colaboratory Python Environment [START_REF] Carneiro | Performance analysis of google colaboratory as a tool for accelerating deep learning applications[END_REF]. The four-step methodology that has been followed in our ESV is introduced in the following:
Fig. 1.1 An example of a leverage effect based on financial leverage.
Fig. 1.2 In the generic framework, DEVS-RL is a general methodological approach based on the combination of DEVS M&S and the RL algorithm. The case study allows us to validate the quality of results in terms of financial gains. Confrontation and combination allow us to highlight the similarity and complementarity between the DEVS-RL, EF, and LSTM techniques.
Fig. 2.2 LSTM model for predicting stock prices [89].
Figure 2.3 shows the output cell of an LSTM model as modeled in Figure 2.2.
Fig. 2.3 Actual vs. predicted stock price by an LSTM model (period: Jan. 1, 2021, to June 1, 2021) [163].
Fig. 2.4 Efficient Frontier process from historical stock price data to a diversified portfolio.
Fig. 2.6 Efficient Frontier example.
Fig. 2.8 MRP scalper daily routine.
Fig. 2.11 Q-Learning algorithm from [START_REF] Sutton | Introduction to Reinforcement Learning[END_REF].
Fig. 2.13 DEVS experimental frame with the model and the simulator interacting through the modeling and the simulation relationships.
A classic DEVS atomic model AM (Figure 2.14), whose behavior is represented by the following structure: AM = < X, Y, S, δ_int, δ_ext, λ, t_a > where:
• X : {(p, v) | p ∈ input ports, v ∈ X_p} is the set of input ports and values.
• Y : {(p, v) | p ∈ output ports, v ∈ Y_p} is the set of output ports and values.
• S is the set of states.
Fig. 2.14 Classic DEVS Atomic Model in action.
Fig. 2.15 Finite-state machine of the EXEC DEVS atomic model.
Fig. 2.16 State trajectory of the EXEC DEVS atomic model after simulation.
Figure 2.18 shows the general interface of the DEVSimPy environment. A panel on the left (left part of Figure 2.18) shows the libraries of DEVSimPy models.
Fig. 2.18 DEVSimPy general interface. The left panel shows the library panel with all atomic or coupled DEVS models that are instantiated in the right panel to create a diagram of a simulation model. The dialogue window on the bottom-right allows one to simulate the current diagram by assigning a simulation time.
Figure 3.1 shows some possible integrations of M&S aspects into the ML framework. For the simulation part, Monte Carlo simulation is often used to solve ML problems using a "trial-and-error" approach. Monte Carlo simulation can also be used to generate random outcomes from a model estimated by some ML technique. ML for optimization is another opportunity for integration into simulation modeling. Agent-based systems often have many hyperparameters and require significant execution times to explore all their permutations to find the best configuration of models. ML can speed up this configuration phase and provide more efficient optimization. In return, simulation can also speed up the learning and configuration process of AI algorithms. Simulation can also improve the experimental replay generation in RL problems where the agent's experiences are stored during the learning phase. RL components can be developed to replace rule-based models. This is possible when considering human behavior and decision-making. For example, in [START_REF] Floyd | Creation of devs models using imitation learning[END_REF] the authors consider that by observing a desired behavior, in the form of outputs produced in response to inputs, an equivalent behavioral model can be constructed (Output Analysis in Figure 3.1). These learning components can be used in simulation models to reflect the actual system or to train
Fig. 3.2 Traditional RL workflow and the corresponding phases in DEVS.
Fig. 3.3 Learning by reinforcement with the DEVS Agent and Environment models. The Decision logic is embedded in the agent that reacts to a new state and reward couple by sending an action that will be evaluated by the Environment model. The Environment model can be a discrete event simulation model executed in a specific experimental frame.
Fig. 3.6 Coupled-Based Modeling Approach with the Agent and Environment DEVS Coupled Models, which improves the explainability of the embedded AI. Each observer is an atomic model that turns the signal into an interpretable feature.
Fig. 3.9 Mouse-cat M&S into the DEVSimPy framework.
Fig. 3.10 Mean Q-value per episode with fear factor.
Fig. 3.11 Mean Q-value per episode without fear factor.
Fig. 4.1 Approaches driven by humans and AI. This process is theoretically carried out taking into account macroeconomic indicators, technical data, stock trends, market risk indicators, and the current composition of the trader's portfolio.
In other words, to address the issue of risk aversion, traders trade a small portion of their portfolio at a regular pace, such as quarterly, monthly, or weekly. The supervised AI agent has a different reward system to reduce human biases and to produce a leverage effect (make more money) with a more sustainable long-term strategy.
Fig. 4.2 DEVS-RL driven proposed approach.
Figure 4.3 details the DEVSimPy model of the case study presented in this section. The model includes five atomic DEVS models:
Fig. 4.3 DEVSimPy model that puts into action the DEVS-RL library (Section 3.4) with the three generator atomic models CAC40, DJI, and IXIC.
4.1.1.2.1 Single Episode Case: In this section, the model of Figure 4.3 has been simulated with the following settings:
• IXIC model index/volatility values: 360.200012/-0.01906318.
• CAC40 model index/volatility values: 1508.0/-0.025210084.
• DJI model index/volatility values: 2522.77002/-0.016881741.
Fig. 4.4 Mean of the Q matrix in the Agent model for the single-episode simulation case after 35000 episodes.
Fig. 4.5 Stock market indexes from 1991-01-02 to 2018-07-05.
Fig. 4.6 Size of the episode (left part) and number of steps (right part) during the period and ordered by the initial cash scenario (from top to bottom: 0$, 8000$, 16000$, 24000$).
Fig. 4.7 Index multiplicity (IXIC on the left, CAC40 on the middle, and DJI on the right) during the simulation for the four scenarios depending on the initial cash (from top to bottom: 0$, 8000$, 16000$, 24000$).
Fig. 4.8 Total assets (stock + cash) during simulations according to the period 1991-01-02 to 2018-07-05.
Figure 4.7 shows that the index multiplicity is reached more rapidly in the init cash 24000 scenario, which confirms that the Agent is acting to reach the best policy in the shorter "time".
Figure 4.8 shows the variations of the total assets (stock + cash) during the simulations. The simulations validate that our Agent behaves correctly following the volatility and values of the indexes, as shown in Figure 4.9, which also validates that the Agent respects the learning rules: indeed, even when the values of all indexes are dropping, the Agent continues to try to reach the best possible state and, at the same time, to invest the maximum amount of cash available.
Fig. 4.9 Residual cash during simulations for the four scenarios.
Figure 4.9 also validates that cash is not a limiting factor in obtaining the best investment policy.
Fig. 4.10 Cash investment time (with initial cash from top to bottom: 0$, 8000$, 16000$, 24000$).
Fig. 4.11 Efficient Frontier for the IXIC, DJI, GSPC, and RUT indexes with no short sale in the period from January 1, 2019 to January 1, 2020. The optimal solution (red cross) obtained for an expected annual return equal to 30.0%.
Fig. 4.12 IXIC, DJI, GSPC, and RUT index values for each day of the period from January 1, 2019 to January 1, 2020.
Figure 4.12 shows the values of the IXIC, DJI, GSPC, and RUT indexes each day of the period from January 1, 2019 to January 1, 2020. Figure 4.11 shows the efficient frontier with the associated expected annual return value and the volatility (risk) of optimizing the ESV portfolio over the period from January 1, 2019 to January 1, 2020. The frontier is obtained in a few seconds and gives portfolio optimization allocations within a range of expected annual return from 25% up to 35%. The optimal one is (50 GSPC, 4 IXIC) with an expected annual return equal to 30%. The ESV's efficient frontier is then compared to the expected annual return obtained with DEVS-RL with two different N.
Fig. 4.13 DEVSimPy simulation model with DEVS-RL agent and environment atomic models and the four DEVS generators (IXIC, DJI, GSPC, and RUT) that send the index values for each day of the period from January 1, 2019 to January 1, 2020.
Fig. 4.14 DEVSimPy properties of the atomic model Agent with a configuration defined for the compatibility experiment.
Fig. 4.15 DEVSimPy properties of the Env atomic model with a configuration defined for the compatibility experiment.
Fig. 4.16 Leverage effect after DEVS-RL simulation from 100,000$ with N equal to 4, 8 and 14 from January 1, 2019 to January 1, 2020. The leverage effect for N = 4 is 34.72% and the discrete allocation is (3 IXIC, 3 RUT, 3 DJI, 3 GSPC), which corresponds to one of the efficient frontier expected return points. For N = 4, at the end of 2019 the current value of the portfolio is 153,188.002 for an initial capital investment of 100,000$. The initial capital represents 65.28% of the current value. The leverage effect for 2019 is 34.72%, with a return ratio of 53.18%, which represents the gain for N = 4 in 2019.
Fig. 4.18 LSTM index predictions from September 2012 to November 2022. The blue, green, and orange lines represent, respectively, the real values during the training period, the real values during the prediction period, and the predicted values. Visually, the orange and green lines seem to be superimposed.
Figure 4.19 shows the integration of the LSTM DEVSimPy coupled model into our previous DEVS-RL library. The LSTM model contains the four generators that implement the Python code presented in Appendix 10 to obtain the index predictions.
Figure 4.20 shows the real and LSTM predicted leverage effects after the DEVS-RL simulation from 100,000$ with N = 4 from January 1, 2019 to January 1, 2020.
Table 4.2 LSTM models prediction performance with three different epoch settings. With 1 epoch, the computational time of the model was about 72 seconds, and each additional epoch needs 71 seconds more. The results in terms of prediction do not vary significantly.
Fig. 4.19 LSTM and DEVS-RL combination. The LSTM coupled model contains the four generator models (for the four indexes) based on the predicted data extracted from the LSTM algorithm.
Fig. 4.20 The real and LSTM predicted leverage effects after the DEVS-RL simulation from 100,000$ with N = 4 from January 1, 2019 to January 1, 2020.
Fig. 4.21 Our combined architecture from input data to optimal portfolio. The prediction model uses LSTM and GARCH, separately or combined, to build a portfolio based on risk model volatility. Next, the portfolio optimization is based on DEVS-RL and the results are evaluated by Efficient Semivariance.
grams, in Proc. of Winter Simulation Conference (WinterSim'17), Dec. 3-6, 2017, Las Vegas, NV, USA, pp. 4558-4559. (Code Australia rank: B) 7. E. Barbieri, L. Capocchi, J.F. Santucci, DEVS Modeling and Simulation Based on Markov Decision Process of Financial Leverage Effect in the EU Development Programs, poster at the Journée Des Doctorants (JDD), Université de Corse, June 15, 2017.
1. Choosing the sectors: The four main indexes of the US market are chosen first. The chosen sectors are as follows: (a) Nasdaq Composite (noted IXIC), (b) Dow Jones Industrial Average (noted DJI), (c) S&P 500 (noted GSPC), (d) Russell 2000 (noted RUT).
2. Data acquisition: For each index, the historical prices of the four most critical stocks are extracted using the DataReader function of the data submodule of the pandas_datareader Python module. Stock prices are extracted from the Yahoo Finance site from September 1, 2012, and from September 1, 2014 to January 3 and September 1 of every year from 2015 up to 2022. There are five features in the stock data: open, high, low, close, volume, and adjusted_close. The current work is a univariate analysis and, hence, the variable adjusted_close is chosen as the only variable of interest.
3. Derivation of the return and volatility: The percentage changes in the adjusted_close values for successive days represent the daily return values.
Nomenclature
Acronyms / Abbreviations
ACER Actor-Critic with Experienced Replay
ACW Average Actual Weight
API Application Programming Interface
CNN Convolutional Neural Networks
CPS Cyber-Physical Systems
CR Capital Requirements
DDPG Deep Deterministic Policy Gradient
DEVS Discrete Event System Specification
DEVSimPy DEVS Simulator in Python language
DJI Dow Jones Industrial Average
DQN Deep Q-Learning
EF Efficient Frontier
ESV Efficient Frontier Semi-Variance
GARCH Generalized Autoregressive Conditional Heteroscedasticity Process
GDPR General Data Protection Regulation
GSPC S&P 500
IF Internal Financing
IXIC NASDAQ Composite
LSTM Long Short-Term Memory
MARL Multi-Agent Reinforcement Learning
MDP Markov Decision Process
MPT Modern Portfolio Theory
MRP Markov Reward Process
MVO Mean-Variance Optimization
ORS Ordinary Least Squares
OTC Over-The-Counter
PDEVS Parallel Discrete Event System Specification
RMSE Root Mean Squared Error
RNN Recurrent Neural Networks
RUT Russell 2000
SES System Entity Structure
TD Temporal Difference
TM&S Theory of Modeling and Simulation
VaR Value-at-Risk
is s1 then
2.   s ← s2 during ta(s2) ← ∞
3. else pass
• δ_ext (Q × S) :
4. if s is s2 then
5.   s ← s1 during ta(s1) ← proc
6. else
7.   ta(s1) ← ta(s1) − e
• λ (S) :
8. if s is s1 then
9.   send (p2, yi)
• ta (S) :
10. if s is s1 then
11.   return proc
12. else
13.   return ∞
It often happens that the time advance function ta(S)
Table 4.1 LSTM models prediction performance (RMSE), 2019 average price (AP), and RMSE/AP ratio.
Index  RMSE    AP        RMSE/AP [%]
IXIC   107.78  7965.39   1.35
RUT    18.11   1551.52   1.17
DJI    303.20  26232.78  1.15
GSPC   37.46   2933.76   1.27
Table 4.2 LSTM models prediction performance with three different epoch settings.
Index  RMSE/AP 1 epoch [%]  RMSE/AP 10 epochs [%]  RMSE/AP 100 epochs [%]
IXIC   1.35  1.34  1.34
RUT    1.17  1.17  1.16
DJI    1.15  1.14  1.13
GSPC   1.27  1.27  1.25
• Y : {(p, v) | p ∈ [Out_0, Out_1], v ∈ V} is the set of the two output ports Out_0 (resp. Out_1) used to send the initialization message (resp. (s, r, d)) to the agent model.
• sigma ∈ [0, +∞) is the variable introduced to manage the time advance function.
• state ∈ {'IDLE', 'UPDATE_ACTION', 'UPDATE_EPISODE', 'UPDATE_STATE_ACTION_MAP', 'UPDATE_ALL'} is the state space.
• goal_reward is a variable used to specify if the search algorithm is oriented to the goal reward (see Section 2.3.2.3).
2. if p is Out_1 then
3.   state_action_map ← v
4.   Q ← ∅
5.   stop ← False
6.   new_QMEAN ← old_QMEAN ← 0.0
7.   state ← ('UPDATE_STATE_ACTION_MAP', 0.00001)
8. else
9.   s, r, d ← v
The Step(s, a) function is used to continue the learning process. It returns the set (s′, r, d) depending on the current state s and the action a, and can be implemented as follows. The getReward function used below returns the reward associated with a state:
3.   return 0.0  // 1.0 is defined only for the end states
4. else
5.   pass  // define the reward algorithm
An example of the GetStateActionMap(inputs1N) method can be presented as follows.
1. function GetStateActionMap(inputs1N)
2.   A ← dictionary
3.   loop for state, action in state_action_map
4.     if state is end_state then
5.       end ← end_state
6.     else
7.       end ← None
8.     d ← {'end': end, 'action': action, 'reward': 1.0 if end is end_state else getReward(end)}
9.     if end is end_state then
10.      d[end_state] ← True
11.    if end is init_state then
12.      d[init_state] ← True
13.    if state is not in A then
14.      A[state] ← [d]
15.    else
16.      append d to A[state]
1. function Step(s, a)
2.   loop for t in state_action_map[s]
3.     if a is t['action'] then
4.       new_state ← t['end']
5.       reward ← t['reward']
6.       done ← reward is 1.0
• δ_int (S → S) :
1. if done is True then
2.   s ← init_state
3.   init_state ← end_state
4.   done ← False
5.   state ← ('UPDATE_EPISODE', 1.0, s)
6. else
7.   state ← ('UPDATE_ACTION', ∞, s)
• δ_ext (Q × X → S) :
1. (p, v) ← peek(X)
2. if p is Out_0 then
3.   a ← v
4.   state ← ('UPDATE_ACTION', 0.00001, s)
5. else
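A plain-Python reading of the Step function above, assuming that state_action_map maps each Markov state to a list of transition dictionaries with 'action', 'end', and 'reward' keys; this is our paraphrase of the pseudo-code, not the DEVSimPy source.

```python
def step(state_action_map, s, a):
    """Return (new_state, reward, done) for action a taken in Markov state s."""
    for t in state_action_map[s]:
        if a == t["action"]:
            new_state = t["end"]
            reward = t["reward"]
            done = (reward == 1.0)          # 1.0 is only assigned to end states
            return new_state, reward, done
    return s, 0.0, False                    # unknown action: stay in place, no reward
```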
Acknowledgements
I thank my supervisor, Assistant Professor Laurent Capocchi, and my previous supervisor, Professor Jean-François Santucci, for guiding me through this research. Thank you to all the staff and students of the Sciences for the Environment (SPE) laboratory, past and present, for their constant support and friendship. My family gave me unflagging support throughout the preparation of this thesis. Finally, I thank my children, Amedeo, Maria-Vittoria, and Leonardo, for everything. I dedicate this thesis to the people of Corsica who never stop exploring.
These doctoral studies were conducted under the supervision of Laurent CAPOCCHI Assistant-Professor with the accreditation to direct research (HDR). The work submitted in this thesis is a result of original research carried out by myself, in collaboration with others, while enrolled as a PhD student in the Department "Sciences pour l'Environment" (SPE) laboratory UMR CNRS 6134 at the University of Corsica, Pasquale Paoli. It has not been submitted for any other degree
Chapter 7 Agent Atomic DEVS Model Specification
An agent entity is modeled as a DEVS atomic model with two input ports and two output ports (Figure 7.1).
Fig. 7.1 The DEVS agent atomic model with its two input ports In_0 and In_1 used to receive the initialization message (for the Q matrix) and the set (s, r, d) from the DEVS environment model. The two output ports Out_0 (resp. Out_1) are used to send the action a (resp. the Q matrix) to be evaluated by the environment model.
Agent = < X, Y, S, δ_int, δ_ext, λ, t_a > where:
The functions UpdateQ and ChooseAction are specified as follows.
The UpdateQ function depends on the choice of the learning algorithm: Q-Learning or SARSA.
Environment = < X, Y, S, δ_int, δ_ext, λ, t_a > where:
Fig. 8.1 The DEVS environment atomic model with its two input ports In_0 and In_1 used to receive the action a from the Agent DEVS model and the message [v] from the generators. The two output ports Out_0 (resp. Out_1) are used to send the initial message (resp. (s, r, d)) to the Agent model.
The first port In_0 is used to receive the action a that should be evaluated. The remaining input ports are used to receive events from the generators, to be taken into account in the environment.
Efficient Frontier Semi Variance Implementation
For computing the daily returns, the pct_change function of Python is used. To calculate the portfolio variance and the standard deviation (volatility), the np.dot and np.sqrt Python functions are used, respectively. Assuming that there are 252 operational days in a calendar year, the portfolio annual gains and losses for the stocks are found by scaling the daily returns and the daily volatility accordingly (a short sketch is given below).
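A short sketch of these computations with pandas and NumPy; the tiny price DataFrame and the equal weights are placeholders, not the case-study data, and the 252-day annualisation follows the common convention mentioned above.

```python
import numpy as np
import pandas as pd

# `prices`: adjusted_close history (one column per index); dummy values for illustration
prices = pd.DataFrame({
    "IXIC": [7900.0, 7950.0, 7910.0, 8000.0],
    "DJI": [26000.0, 26100.0, 25950.0, 26200.0],
})

daily_returns = prices.pct_change().dropna()   # daily percentage changes
annual_returns = daily_returns.mean() * 252    # annualised mean returns
annual_cov = daily_returns.cov() * 252         # annualised covariance matrix

w = np.array([0.5, 0.5])                       # example portfolio weights
portfolio_volatility = float(np.sqrt(w @ annual_cov.values @ w))
```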
4. Construction of the Portfolio Optimization (using PyPortfolioOpt library [START_REF] Martin | Pyportfolioopt: portfolio optimization in python[END_REF]):
First, we import:
• EfficientFrontier from PyPortfolioOpt library pypfopt.efficient_frontier,
• risk_models from pypfopt,
• expected_returns, EfficientSemivariance from pypfopt.
Second, to calculate the expected returns, the annual covariance, and the portfolio historical returns, the functions expected_returns.mean_historical_return (µ) and expected_returns.returns_from_prices are used.
Then we calculate the portfolio with the best (maximum) Sharpe ratio, based on the work of William Sharpe on volatility [START_REF] Sharpe | Asset allocation. Managing investment portfolios: A dynamic process[END_REF]. Next, we compute the ESV using µ and historical_returns.
Finally, to build the best portfolio policy, we import DiscreteAllocation and get_latest_prices from pypfopt.discrete_allocation, use the get_latest_prices function to get the latest prices, define the total value of the portfolio, and finally print the DiscreteAllocation result to obtain our optimized portfolio. A sketch of these steps is given below.
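The steps above can be sketched as follows; this assumes the PyPortfolioOpt API as documented (EfficientSemivariance, DiscreteAllocation), the 30% target return from the experiment, and a hypothetical CSV of prices — the exact options may differ from the thesis code in Appendix 9.

```python
import pandas as pd
from pypfopt import expected_returns, EfficientSemivariance
from pypfopt.discrete_allocation import DiscreteAllocation, get_latest_prices

# Hypothetical input file with one adjusted_close column per index
df = pd.read_csv("prices.csv", index_col=0, parse_dates=True)

mu = expected_returns.mean_historical_return(df)
historical_returns = expected_returns.returns_from_prices(df)

es = EfficientSemivariance(mu, historical_returns)
es.efficient_return(0.30)                 # target an expected annual return of 30%
weights = es.clean_weights()

latest_prices = get_latest_prices(df)
da = DiscreteAllocation(weights, latest_prices, total_portfolio_value=100_000)
allocation, leftover = da.greedy_portfolio()
print(allocation, leftover)
```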
Chapter 10 LSTM Implementation
The LSTM model is introduced in the following two steps:
• Data Description The data used in this case study were collected from YFinance, which covers the index data from September 2012 to November 2022. Furthermore, LSTMs are sensitive to the scale of the input data, which is why the data were normalized to the range 0-to-1. The data are divided into two parts: the training set, with 85% of the observations, is used to train our model, and the remaining 15% are used to test the accuracy of our model's predictions.
• Model Design We create an LSTM model with two LSTM layers. The first layer has 1000 neurons and the second 500 neurons. The number of neurons was selected by a trial-and-error process. The output of the hidden state of each neuron in the first layer is used as input to our second LSTM layer. We have two dense layers, where the first layer contains 50 neurons, and the second dense layer, which also acts as the output layer, contains 1 neuron. The network is trained for 1 epoch, and a batch size of 1 is used. The Adam optimizer and a mean squared error loss were used. A minimal sketch is given below. |
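A hedged Keras sketch matching the described design (two LSTM layers of 1000 and 500 units, dense layers of 50 and 1 units, Adam, MSE, 1 epoch, batch size 1). The sliding-window length and the training arrays are our assumptions, since they are not stated here.

```python
import numpy as np
import tensorflow as tf

look_back = 60                      # assumed window length (not specified in the text)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(1000, return_sequences=True, input_shape=(look_back, 1)),
    tf.keras.layers.LSTM(500),
    tf.keras.layers.Dense(50),
    tf.keras.layers.Dense(1),       # output layer: next adjusted_close value
])
model.compile(optimizer="adam", loss="mean_squared_error")

# Placeholder training data scaled to [0, 1]; shapes follow the univariate window design
x_train = np.random.rand(32, look_back, 1)
y_train = np.random.rand(32, 1)
model.fit(x_train, y_train, epochs=1, batch_size=1, verbose=0)
```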
04103674 | en | [
"shs.eco"
] | 2024/03/04 16:41:22 | 2022 | https://shs.hal.science/halshs-04103674/file/WP8.pdf | Julie Brun Bjørkheim
Kristoffer Berg
Frida Kvamme
Tiril Eid Barland
Annette Alstadsaeter
email: [email protected]
Andreas Økland
email: [email protected]
Kristoffer Berg
Increasing Cross-Border Ownership of Real Estate: Evidence from Norway
Keywords: Globalisation, Real estate, Foreign Ownership, Tax Havens. JEL Codes: R39, F21, H26
This paper is the first to estimate the full extent of foreign-owned commercial and residential real estate in a country, including both direct and indirect ownership. We utilise unique Norwegian administrative data with reliable market value estimates and country-level ownership information. Overall, 2 percent of Norwegian real estate assets were foreign-owned in 2017, while this share amounts to 10 percent for assets owned by Norwegian corporations. Foreign ownership has increased over the last decade, and ownership from tax havens even more rapidly. Ownership from neighbouring countries and Luxembourg is especially large.
Introduction
Globalisation has opened real estate markets to foreign investors, but little is known of the full magnitude or of the anatomy of foreign ownership in real estate markets. NGOs and news organisations have documented purchases of expensive real estate by foreign buyers. However, it is unclear whether these eye-catching stories are representative of the total foreign ownership. Recent research is just starting to shed light on the anatomy and country distribution of offshore real estate ownership, whether through shell companies as documented in London (Bomare and Le Guern Herry 2022), for residential real estate as documented in France (Morel and Uri 2021), or in tax havens like Dubai (Alstadsaeter, Planterose, Zucman, and Økland 2022).
We provide the first comprehensive overview of the stock and development of foreign-owned real estate in a country, utilising de-identified administrative data on Norwegian real estate for the period 2011-2017, both corporate-owned and individually owned. We combine the tax administration's annual estimates of the market value of real estate, both residential and commercial, with detailed ownership information at both the individual and firm level, and with country-level information on foreign ownership. We find that overall, 2 percent of the value of Norwegian privately held real estate is owned by foreigners at year-end 2017, while this share amounts to 10 percent when we zoom in on real estate owned by corporations. Further, we find that the foreign share of corporate-owned real estate has been increasing over the last decade.[START_REF] Bomare | Automatic exchange of information and real estate investment[END_REF] Ownership from tax havens has increased even more rapidly.
Insight into the magnitude and structure of foreign ownership of real estate is of great importance to tax administrations, policymakers, and society. First, there are indications that inflow of foreign capital may increase prices in property markets (Sá and Wieladek (2015); Sá (2016)). Second, ownership transparency is obfuscated for both government authorities and local communities when the ownership chain involves a foreign corporation. This means that owners that are unknown to stakeholders, either because of distance or secrecy, might wield influence over important infrastructure and development of urban areas and rural communities without being held accountable, which is a particular worry in times of political instability. The obfuscation also makes foreign ownership of real estate a red flag for those monitoring money laundering risk (OECD (2019); FATF (2007); European Parliamentary Research Service (2019)). Real estate is a well-known tool for tax fraud and money laundering (Collin, Hollenbach, and Szakonyi 2022). Lastly, increasing international automatic exchange of financial information between tax administrations has made it harder to hide true ownership of offshore financial assets over the last years. However, this information exchange does not include real estate, making real estate investments an even more attractive tool for obfuscating true ownership of wealth (Bomare and Le Guern Herry 2022).
The existence of a centralised property register and annual estimates of the market value of real estate for tax purposes2 makes Norway an ideal laboratory to get a more complete overview of foreign ownership of real estate in Western countries. More importantly, the tax administration collects detailed information on the owners of shares in Norwegian corporations, including their residence country. The shareholder register covers all shareholders of all corporations and enables us to look through domestic group structures and the invisibility cloak that domestic shell companies usually represent. We are thus able to single out all real estate owned by either foreign corporations or individuals, and attribute most of the privately owned real estate to the first country of location after crossing the Norwegian border.3
In our analysis, we utilise the same precisely imputed market values as used by the tax administration for wealth tax assessments. This is complemented with the financial statements of listed corporate groups, which are not subject to report imputed market values for wealth tax purposes. We estimate the total value of privately owned real estate to USD 1,057 billion in 2017.
The data reveal sizeable foreign ownership. 2 percent (USD 24 billion) of the Norwegian real estate owned by corporations and individuals at year-end 2017 is traced to foreign owners. Foreign ownership is most prevalent in the real estate owned by corporations. The total value of real estate owned by foreigners through Norwegian corporations is more than three times larger than the total value of the real estate owned directly by foreigners -18.8 billion vs. 5.3 billion, even though most Norwegian real estate is owned directly by individuals and reported in the personal tax return. The foreign ownership share is thus 10.5 percent and 0.6 percent among corporations and individuals respectively. Although not all foreign ownership is problematic, our estimates illustrate the scope of the potential negative consequences of foreign ownership of real estate.
We zoom in on the sample owned by corporations and uncover two underlying trends: First, foreigners own an increasing share of the Norwegian real estate owned through corporations.
More than 10 percent of corporate owned real estate was foreign-owned in 2017, a marked increase from around 6-7 percent in the years from 2011 to 2014. Second, the share owned from tax havens is growing even more rapidly. In 2011, 31 percent of the foreign-owned real estate was owned through tax havens. This increased in the following years, to 38 percent in 2017. Luxembourg ownership accounts for 3/4 of this increase. Only Norway's neighbouring country Sweden was a larger owner than Luxembourg in 2017, when we look at all real estate ownership, both personal and through corporations.
The United Kingdom, Finland, and the United States then followed. Half of the top 20 jurisdictions are well-known tax havens. The ownership through tax havens becomes even more apparent when the real estate ownership is scaled by the GDP of the owners' home countries. The British Virgin Islands is the top jurisdiction by this measure, with ownership equal to more than 25 percent of GDP. The four jurisdictions Bermuda, Guernsey, Luxembourg, and Jersey are next, all with ownership equivalent to approximately 4 to 5 percent of GDP. Then follows Cyprus, with 2.5 percent of GDP. Remaining countries have ownership below 1 percent of GDP.
nationals. But we, along with domestic tax auditors, are not in the position to identify this.
Our estimates can serve as a useful benchmark for future studies of foreign ownership in other comparable real estate markets, especially for those that only have imperfect information about property values. Norway is an open, medium-sized economy which ranks 19 out of 38 OECD countries in terms of GDP. There are no strong factors that point to either especially high (like a high number of tourists or many bordering countries) or especially low (like foreign ownership restrictions) foreign ownership shares in the property market. The foreign-owned real estate in Norway amounts to 7 percent of GDP, which for the European Union would mean foreign ownership of EUR 1,012 billion in 2021, with for instance EUR 250 billion in Germany and EUR 174 billion in France.
The remainder of this paper is organised as follows. The next section outlines the current literature on foreign owned real estate. Section three describes the data we use and how we use it to construct our estimates. Section four presents the estimates, while section five concludes.
Current literature
The true extent of foreign ownership of real estate has not yet been quantified for any country or major city, with a few notable exceptions. Morel and Uri (2021) find that 1.5 percent of French residential real estate -worth USD 140 billion -was owned by non-residents at the end of 2019. This is higher than our estimate for Norwegian housing, which is 0.5 percent.
Morel and Uri (2021) also note that almost no French residential real estate is owned by non-residents through corporations. This type of ownership is evidently more prevalent in the UK, where the government also publishes lists of properties owned by UK and foreign companies. Bomare and Le Guern Herry (2022) calculate that the value of real estate in England and Wales (both commercial and residential) owned directly by either foreign individuals or foreign companies was at least USD 359 billion in January 2018. To reach this number, they among other things use information from the Panama Papers and related leaks to impute the country background of the ultimate owners of property-owning corporations registered in tax havens. This effort shows that 19.5 percent (USD 28 billion out of USD 144 billion) of the real estate owned by foreign companies was in the end owned by UK nationals.
The most significant progress in mapping country-by-country ownership of real estate located in tax havens is in a companion project (Alstadsaeter, Planterose, Zucman, and Økland 2022), where we estimate that USD 146 billion of foreign capital is invested in Dubai real estate. The estimates build on several leaks of detailed information on more than 800,000 Dubai properties, compiled and shared by the US-based Center for Advanced Defense Studies (C4ADS). The results show that more than 70 percent of the Dubai properties owned by Norwegian taxpayers were not duly reported to the Norwegian Tax Administration.
The literature on foreign real estate investments is larger than the literature on real estate ownership, as flows are easier to observe than stocks. Devaney, Scofield, and Zhang (2019) find foreign buyers in 4.1 percent of the transactions (representing 13.4 percent of the value) in a sample of high-value transactions in large US cities, with Canada, the UK, Germany, and China as the major buyer countries. Cvijanović and Spaenjers (2021), in a paper that investigates foreign buyers in the Paris property market, observe that 4.6 percent of buyers were resident foreigners, while 2.8 percent were non-resident foreigners. Most foreign purchases were made by nationals of Italy, Great Britain, the United States, Portugal, and China.
Data and methodology
This section first outlines how tax returns let us construct the Norwegian real estate stock, complete with market values and covering real estate owned by both individuals and corporations. It then outlines how the shareholder register lets us assign this stock to owners and countries. Lastly, we discuss how sensitive our estimates are to adjusting the definition of ownership.
Real estate wealth from tax returns
Our data let us analyse the ownership of the complete, privately owned Norwegian real estate stock. 4 The data cover the full universe of de-identified Norwegian tax returns for persons and corporations for the years 2011 to 2017, provided by Statistics Norway. Foreigners that own Norwegian real estate directly and foreign corporations (both those that are incorporated abroad and those that are fully owned from abroad) are also subject to submitting these tax returns. The tax return data consist of three separate data sources, all made available to us by Statistics Norway and the Norwegian Tax Administration:
• Individuals' tax returns. The tax returns for individuals include the aggregate market value of real estate owned directly by individuals, disaggregated by type.
• Corporations' tax returns. The tax return for corporations and other non-personal taxpayers (RF-1028) includes the aggregate market value of real estate owned directly by the submitting corporation, disaggregated by type (housing, commercial, other).
• Income statement 2. This is an attachment to the corporate tax returns, and includes the balance sheet of the submitting corporation. We use this to impute the real estate wealth of corporate groups listed on the stock exchange (see more in subsection 3.3).5
The imputed real estate wealth is aggregated at the tax-record level in our data. This means that the unit of observation for the real estate wealth is not the value of each property, but the total value of properties owned directly by each corporation or individual, observed at the corporation or individual level. For instance, if a corporation is registered in the public land registry as the sole owner of a car park worth USD 3 million and as one of two equal owners of an office building worth USD 50 million, we observe a real estate holding of USD 28 million for this corporation. The real estate wealth is in most instances reported after tax rebates. We scale up the taxable real estate wealth in the tax returns to correct for the tax rebates, which is something we can do as the real estate wealth is reported by type. 6 This data is available because the estimates of real estate wealth are, directly or indirectly, a part of the wealth tax base. The annual monitoring of wealth makes Norway one of a few international exceptions. Assets either owned by Norwegians or located in Norway are reported to the tax administration in order to assess each individual's payable wealth tax. The Norwegian wealth tax is levied on all types of wealth, including real estate, shares in listed and non-listed corporations, bank deposits, etc., although with different tax rebates and valuation rules applied to the different asset classes. (See Alstadsaeter, Bjørneby, Kopczuk, Markussen, and Røed (2022) for more.)
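To make the gross-up step concrete, the following Python sketch is our own illustration and not the authors' code; it uses the 2017 rebate rates listed in footnote 6, and the asset-type labels are assumptions.

```python
# Sketch: reverse the wealth-tax rebates to recover estimated market values.
# A rebate of r means the taxable value is (1 - r) times the market value,
# so the market value is taxable_value / (1 - r).

REBATES_2017 = {
    "primary_housing": 0.75,
    "secondary_housing": 0.10,
    "leisure_home": 0.70,
    "commercial": 0.20,
    "other": 0.10,
}

def gross_up(taxable_value: float, asset_type: str) -> float:
    """Return the estimated market value implied by a taxable value."""
    return taxable_value / (1.0 - REBATES_2017[asset_type])

# Example: a primary home with a taxable value of NOK 1,000,000 implies an
# estimated market value of NOK 4,000,000.
print(gross_up(1_000_000, "primary_housing"))
```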
The wealth tax is a personal tax, paid by individuals that are tax-resident in Norway and by non-residents that own real estate directly in Norway. The real estate wealth owned directly by individuals is reported in the individuals' tax returns. 7 In addition, all corporations not listed on the stock exchange have to report the estimated market value of their real estate holdings annually.
6 The rebates that applied to the wealth in the individuals' tax returns in 2017 (the only year we use the individual tax returns) were: Primary housing: 75 %; Secondary housing: 10 %; Leisure homes: 70 %; Commercial real estate: 20 %; Other: 10 %. The only type of real estate that is reported after rebate in the corporate tax returns is housing. The rebates for secondary housing were: 2011 and 2012: 60 %; 2013: 50 %; 2014: 40 %; 2015: 30 %; 2016: 20 %; 2017: 10 %.
7 Real estate owned by corporations are reported as an underlying value component of the shares owned by the person submitting the tax return. This means that it should not be reported as real estate in the individual's tax return.
The reason is that the taxable value of shares is set proportional to the value of the assets in the underlying corporation. 8
The real estate values in the tax returns are estimated market values. This has clear benefits. They are calculated annually by the tax administration, which applies standardised and objective valuation methods. This process means that corporations are not able to manipulate the reported values of their real estate by choosing a specific assessment agency or assessment technique or by the use of base erosion and profit shifting techniques. The methods used are outlined below:
Valuation of commercial real estate. The Tax Administration's valuation method for commercial real estate has three main components: The reported rental income from the property, a discount rate which is updated yearly, and a fixed discount of 10 percent for depreciation. 9 This gives a relatively precise and objective estimate of real estate wealth. 10 An imputed rental income based on municipality, type of property (hotel, industrial plant, storage facility, store etc.), and size of the property is calculated and used if the real estate is not rented out.
Valuation of housing. Market values of housing properties are calculated by the Tax
Administration in cooperation with Statistics Norway. They maintain a hedonic model that uses information on the type of house (detached house, rowhouse or flat), the size of the property, the geographical region, and the age of the house. It is based on up-to-date transaction data that cover most of the Norwegian property market. 8 The exception is shares in listed corporations. The taxable value of listed shares is set equal to the market value of the shares at the end of the year. Listed corporations should not report their real estate holdings for this reason, although some do.
9 To illustrate: A building with an annual rental income of USD 1 million would be valued at USD 13.6 million in 2017, when the discount rate was 6.6 percent, following from this calculation: 1,000,000 × 0.9 / 0.066 = 13,636,364.
10 See Dahl and Fougner (2019) for discussion of geographical bias in discount rates and more background on valuation methods
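As an illustration of the commercial valuation rule (and of the worked example in footnote 9), here is a minimal Python sketch of our own; the function name and default arguments are assumptions.

```python
# Sketch: Tax Administration-style valuation of commercial real estate.
# Estimated value = rental income * (1 - 10% depreciation discount) / discount rate.

def commercial_value(annual_rent: float, discount_rate: float,
                     depreciation_discount: float = 0.10) -> float:
    """Capitalise net rental income at the yearly discount rate."""
    return annual_rent * (1.0 - depreciation_discount) / discount_rate

# Footnote 9 example: USD 1 million of annual rent at the 2017 discount rate of
# 6.6 percent gives roughly USD 13.6 million.
print(round(commercial_value(1_000_000, 0.066)))  # 13636364
```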
Assigning real estate wealth to owners and countries
The Norwegian shareholder register contains information on all Norwegian and foreign shareholders of Norwegian corporations. We utilise the full universe of de-identified Norwegian shareholder statements for individuals and corporations for the years 2011 to 2017 with two purposes in mind.11 First, we use it to attribute ownership of real estate owned by corporations to the different direct and indirect owners of the corporations. The full coverage of shareholders in Norwegian corporations makes it possible to map ownership by ownership share and to see through group structures and chains of holding and shell companies. This is a common obstacle, as corporate ownership often goes through several layers of corporations. The methodology to impute ownership through group structures has previously been described and utilised by Alstadsaeter, Jacob, Kopczuk, and Telle (2021).
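The imputation can be pictured with the small recursive sketch below. It is our own illustration, not the algorithm of Alstadsaeter, Jacob, Kopczuk, and Telle (2021); it assumes no circular ownership, stops at individuals and at the first foreign entity (mirroring the register's coverage), applies the 0.001 percent cut-off discussed in the sensitivity subsection, and uses made-up identifiers.

```python
# Sketch: propagate ownership shares through chains of Norwegian corporations.
# Recursion stops at individuals and at the first foreign entity; assumes no
# circular ownership. Identifiers and shares below are made up.

CUTOFF = 1e-5  # drop stakes below 0.001 percent of a company

def ultimate_owners(company, shareholdings, norwegian_corps, share=1.0, result=None):
    """Return {owner: indirect share in `company`} for owners visible in the register."""
    if result is None:
        result = {}
    for owner, direct_share in shareholdings.get(company, {}).items():
        indirect = share * direct_share
        if indirect < CUTOFF:
            continue
        if owner in norwegian_corps:      # a Norwegian corporation: keep unwinding
            ultimate_owners(owner, shareholdings, norwegian_corps, indirect, result)
        else:                             # an individual or the first foreign entity
            result[owner] = result.get(owner, 0.0) + indirect
    return result

# Toy example: a Luxembourg holding company owns 60 percent of a Norwegian property
# company through a Norwegian intermediate holding company.
holdings = {
    "NO_PropCo": {"NO_HoldCo": 0.6, "NO_Person": 0.4},
    "NO_HoldCo": {"LUX_HoldCo": 1.0},
}
print(ultimate_owners("NO_PropCo", holdings, norwegian_corps={"NO_PropCo", "NO_HoldCo"}))
# -> {'LUX_HoldCo': 0.6, 'NO_Person': 0.4}
```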
Then, we use the shareholder register to attribute the correct country background to each owner. The attribution of the owner-country for real estate wealth reported in the personal tax return is straight-forward. The tax return submitter is the ultimate beneficial owner of the real estate. These submitters have unique identifiers in the data, which makes it possible to connect them to their reported country of residence if they appear in the shareholder register (which depends on them owning shares in a Norwegian corporation).
We complement this with information from the National Population Register if the owner does not appear in the shareholder register. This register classifies the tax return submitters as either "resident" or "emigrated" for the year in question. If the tax return submitter is registered as "resident", we assign the owner Norwegian residency. If the tax return submitter is registered as "emigrated", we assign the country that the owner emigrated to from Norway. 12 The attribution of the correct country background to each owner is more complicated for corporations. The shareholder register only includes the residence countries of the immediate owners of Norwegian corporations. This limits our information about the ultimate beneficial owner in cases where the ownership chain involves a foreign corporation. To illustrate, take the case of a Swedish investor that owns shares in a Norwegian real estate corporation through a holding corporation in Luxembourg. The shareholder in the Norwegian corporation will be registered as from Luxembourg in our data, even though the ultimate owner is Swedish.
We only observe the immediate foreign owner, who is not necessarily the ultimate owner of the cross-border chain of corporations. This means that the analysis does not show the real country distribution of the ultimate owners. 13 Instead, we show the country distribution of ownership that is visible to Norwegian authorities and to the general public. In the case where an owner of a foreign corporation is ultimately a Norwegian resident, most of the negative consequences associated with foreign ownership, like potential tax evasion, secrecy and money laundering, remain relevant.
Real estate wealth owned by listed corporations
Corporations listed on the Oslo Stock Exchange and their underlying corporations are exempted from reporting their real estate wealth in their tax returns. 14 We use a three-step procedure to impute the real estate wealth of these corporate groups.
13 The ultimate ownership of real estate by foreign residents may thus be lower than estimates of foreign ownership presented in this paper. Bomare and Le Guern Herry (2022) show that 19.5 percent of the real estate in England and Wales owned by foreign shell companies was ultimately owned by UK nationals. We can use this to illustrate the potential magnitude of this in the Norwegian setting. If 19.5 percent of the corporate-owned real estate owned from tax havens was really owned by Norwegians, that would amount to USD 1.4 billion in 2017. Accounting for this would reduce the foreign share in the total real estate stock from 2.2 to 2.1 percent, and reduce the foreign share of corporate owned real estate from 10.5 percent to 9.7 percent.
14 Although it is evident in our data that some of the listed corporations report real estate wealth in spite of this. For those, we use the wealth reported for tax purposes. This means that if one of the corporations in a corporate group reports real estate wealth in the corporate tax returns, we do not use values from the financial statement for the group.
First, we use the identifier of listed corporations in the corporate tax returns to find and select the listed corporations. We then run the algorithm to impute ownership through group structures to find all corporations that are solely owned by one of these listed corporations.
Second, we use financial statement data from the Income statement 2, an attachment to the corporate tax returns, where the corporations are obliged to report their balance sheets. For the listed corporate groups, these balance sheets are reported in accordance with the IFRS framework. We aggregate the real estate reported in the balance sheets for each corporate group. Lastly, we assign the real estate wealth in these balance sheets to the registered shareholders of the listed corporation.
The Norwegian real estate wealth owned by corporate groups listed on the Oslo Stock Exchange amounts to approximately 10 percent of the real estate wealth owned by non-listed corporations.
Sensitivity
The implementation of the ownership imputation through layers of corporations, as detailed in subsection 3.2, excludes ownership that is smaller than 0.001 percent in one of the links in the ownership chain (we assign this real estate wealth to Norway). This cut-off is only due to computational constraints and has a very small effect on the estimates. This means that the ownership is proportionally apportioned to the different owners and their background countries based on ownership share in the cases where the corporations owning the real estate have more than one owner, and that there is almost no lower bound for how small these ownership stakes may be. A study with a different objective could choose a narrower definition of foreign ownership, for instance one that only includes corporations that are fully or majority owned from abroad. This implies that the results presented in this paper are not driven by a large number of small, foreign owners, but by a smaller number of foreign investors with the purpose of buying Norwegian real estate or corporations with sizeable real estate positions in Norway. It also confirms that the foreign owners wield considerable control over the real estate they own in Norway.
4 The Norwegian real estate market in numbers
4.1 The real estate stock
Table 1 summarises the total value of the stock of privately owned Norwegian real estate in 2017. It shows that the foreign ownership of Norwegian real estate is concentrated in the corporate sector. USD 18.8 billion (10.5 percent) of the USD 179 billion reported in the tax returns of corporations is owned from outside of Norway, while only USD 5.3 billion (0.6 percent) of the USD 879 billion booked in individuals' tax returns is owned by non-residents. In total, 2.3 percent (USD 24.1 billion of the USD 1,057 billion) of real estate wealth is traced to foreign owners. This corresponds to 7 percent of GDP, as Norwegian mainland GDP was USD 337.9 billion in 2017. 15
15 It is not surprising that the foreign ownership is concentrated in corporations. Ownership of real estate through companies is more advantageous in most cases where the owner is not living permanently at the property. Corporate ownership does for instance give an exemption from stamp duty when property is sold, and incorporation is relatively cheap in Norway. Thus, it has become widespread to own large real estate projects through special purpose companies.
The table also shows how real estate in Norway, like in most countries, is primarily housing. But the foreign ownership is primarily in commercial real estate, in the same manner as it is also concentrated in the corporate sector. USD 15.9 billion out of USD 151.6 billion (10.5 percent) of commercial real estate is foreign-owned. Only USD 5.2 billion out of USD 841 billion (0.5 percent) of housing real estate is foreign-owned.16 USD 3.0 billion out of USD 64.5 billion (4.6 percent) of other real estate (leisure homes etc.) is foreign-owned.
Development over time
We proceed by analysing how the foreign ownership has developed over time. We limit the scope of this analysis to real estate owned by corporations, as this is where most of the foreign ownership is. The total value of the real estate owned by corporations has increased during the period from 2011 to 2017. The total corporate owned real estate wealth was valued at USD 68.2 billion in 2011, a number that increased steadily to USD 190 billion in 2016, before it fell to USD 179 billion in 2017. The foreign ownership of this real estate increased over this period, with a marked increase starting in 2015. Panel (a) of figure 2 shows how the foreign share varied around 6-7 percent in the years 2011 to 2014, before it increased to 9.2 percent in 2015, 9.7 percent in 2016 and 10.5 percent in 2017.
The foreign share in panel (a) is broken down into ownership from Luxembourg, other tax havens and non-havens in panel (b). In total, 37.7 percent of foreign-owned real estate was owned from tax havens in 2017. This share increased by 6.9 percentage points from 2011 to 2017, an increase which was mainly driven by ownership from Luxembourg. Luxembourg ownership was 10.3 percent in 2011, while ownership from other tax havens was 20.5 percent.
In 2017, ownership from Luxembourg had risen to 15.6 percent, while the ownership share from other tax havens had increased to 22.1 percent. Ownership from Luxembourg as share of foreign ownership peaked in 2016 at 18.2 percent.
Country-by-country ownership
The foreign ownership in 2017 is broken down by individual countries in figure 3. This includes all privately owned real estate, also real estate owned directly by individuals. Panel (a) shows that the top six countries are Sweden (USD 4.6 billion), Luxembourg (USD 3.0 billion), the United Kingdom (USD 2.6 billion), Finland (USD 1.8 billion), the United States (USD 1.4 billion) and the Netherlands (USD 1.3 billion). Sweden and Finland, and to some extent the United Kingdom, are Norway's neighbouring countries. The last close neighbour, Denmark (USD 0.8 billion), is ranked eighth among the countries. Luxembourg and the Netherlands are well-known tax havens, but they are not the only tax havens among the top ownership countries. Half of the top 20 countries are well-known tax havens. The United Kingdom and Switzerland (7th, USD 0.8 billion) are also popular destinations for rich Norwegian emigrants. 17 Luxembourg is, based on anecdotal evidence, a popular tax haven among Norwegians, especially those holding real estate. 18 It is also popular among global real estate funds and investors (for instance, Norges Bank Investment Management, the Norwegian sovereign wealth fund, established its European real estate investment arm in Luxembourg). The ownership from tax havens becomes even more visible when real estate ownership is ranked by share of GDP, as it is in panel (b) of figure 3. This places the British Virgin Islands as the top country, with ownership equal to more than 25 percent of GDP. The four countries Bermuda, Guernsey, Jersey, and Luxembourg are all in the range 4 to 5 percent of GDP. Cyprus then follows with ownership equivalent to 2.5 percent of GDP. The remaining countries have ownership below 1 percent of GDP.
Conclusion
The last decade has brought us a lot closer to grasping the size of the fortunes that are hidden in offshore bank accounts. But non-financial asset classes are a gap in our understanding of the size and distribution of offshore wealth. We contribute the first comprehensive overview of the stock and development of foreign-owned commercial and residential real estate in a country, including both direct and indirect ownership. We combine the real estate wealth reported by individuals and corporations for the purposes of the Norwegian wealth tax with the comprehensive shareholder register, which lists the shareholders of Norwegian corporations. This makes it possible to assign the real estate wealth to the ultimate Norwegian owner or the immediate foreign owner.
We find that 2 percent of Norwegian real estate is owned from abroad at the end of 2017.
The share increases to 10 percent when we only look at real estate owned by corporations.
The foreign-owned real estate amounts to 7 percent of GDP, which for the European Union would mean foreign ownership of EUR 1,012 billion in 2021, with EUR 250 billion in Germany and EUR 174 billion in France. There has been a noticeable increase in the foreign ownership share among corporations between 2011 and 2017. The tax haven ownership as a share of the foreign ownership among corporations increased from 31 to 38 percent between 2011 and 2017, driven by more ownership from Luxembourg. Both trends seem to have been induced or amplified around the same time as Norway and 43 other countries in 2014 committed to introduce the automatic exchange of financial information under the Common Reporting Standard, which made real estate increasingly attractive for those who seek to obfuscate their true wealth. The sizeable ownership from major secrecy suppliers highlights the need for more comprehensive ownership registries and the extension of automatic exchange of information agreements to cover real estate. The consequences of such deficiencies in ownership mapping became visible in the aftermath of the Russian invasion of Ukraine in February 2022. Economic sanctions were put in place for a long list of individuals across Western economies. But enforcement soon proved challenging, as complex corporate structures and the use of trusts and tax havens hid true beneficial ownership of assets, enabling assets to be hidden in plain sight.
Tables
Figure 1 illustrates how sensitive the foreign ownership and ownership share are to narrower definitions of foreign ownership. Panel (a) shows that the real estate wealth (in corporations) that is traced to foreign owners falls when the cut-off is tightened.
References
Financial Action Task Force Note.
Menkhoff, L. and J. Miethe (2019). Tax evasion in new disguise? Examining tax havens' international bank deposits. Journal of Public Economics 176, 53-78.
Morel, R. and J. Uri (2021). L'augmentation des investissements immobiliers des non-résidents est tirée par les expatriés. Technical report, Banque de France Bulletin No. 237: Article 6.
OECD (2019). Money laundering and terrorist financing awareness handbook for tax examiners and tax auditors. OECD.
Sá, F. (2016). The effect of foreign investors on local housing markets: Evidence from the UK. CEPR Discussion Paper DP11658.
Sá, F. and T. Wieladek (2015). Capital inflows and the US housing boom. Journal of Money, Credit and Banking 47(S1), 221-256.
Figures
Figure 1: Sensitivity to cut-offs of ownership share
Figure 2: Corporate owned real estate (2011-2017). (a) Foreign share of total
Figure 3: Top foreign countries, real estate wealth (2017). USD 0.8 billion is assigned to the category "Unknown, emigrated": owners that have lived in Norway but emigrated and are not registered with a residence country in the shareholder register. USD 0.6 billion is assigned to the category "Unknown, other foreign": owners that are not registered in the demographic registry and do not appear with a residence country in the shareholder register. Panel (b) shows the total values for each country, scaled to the GDP of the home country.
Table 1: The Norwegian real estate stock (2017). Billions of USD. This table shows the total value of privately owned Norwegian real estate at the end of 2017, how this is distributed among Norwegian and foreign owners, and how it is distributed between different types of ownership and types of real estate. Other real estate includes leisure homes, land, etc. The list of tax havens is reported in appendix A. The average USDNOK conversion rate for 2017 (8.2630) is used.
Individual Corporate Housing Commercial Other Total
Total real estate 878.8 178.5 841.3 151.6 64.5 1,057.3
Foreign owned 5.3 18.8 5.2 15.9 3.0 24.1
Tax haven 0.5 7.1 0.5 6.4 0.7 7.5
Foreign owned share 0.6 % 10.5 % 0.6 % 10.5 % 4.7 % 2.3 %
Tax haven share 0.1 % 4.0 % 0.1 % 4.2 % 1.0 % 0.7 %
Table 2: Top real estate owning sectors, share of foreign/domestic owned real estate (2017). This table shows the share of total real estate wealth, either foreign or domestic, held by each corporate sector. These are the top 10 corporate sectors, ranked by real estate ownership. The total value of real estate in each sector is reported in the memo column.
We restrict the analysis across time to corporate-owned real estate, as the estimates of corporate-owned and personally owned real estate are retrieved from different sources. 78 percent of foreign-owned real estate in 2017 was corporate-owned.
The Norwegian wealth tax covers a broad range of assets, including all real estate located in Norway (even real estate owned by non-residents). This requires annual value estimates of properties to calculate the tax base, which is reported in the tax returns of individuals and of corporations not listed on the stock exchange (henceforth non-listed corporations). These self-reported data from the non-listed corporations are audited by the Norwegian Tax Administration, which has vast amounts of third-party reported information at its disposal. This includes central registers that strive to record all owners of properties.
However, our insight into ownership structures stops in the first foreign country. This means that if the ultimate owner of, for instance, a Luxembourg company that holds Norwegian property is a Norwegian, we would over-estimate foreign ownership. Our analysis documents a substantial share of real estate ownership from well-known tax havens, highlighting how this type of ownership obfuscates the identity and the country of origin of the true owners of Norwegian real estate.[START_REF] Bomare | Automatic exchange of information and real estate investment[END_REF] find that 19.5 percent of real estate in England and Wales owned by foreign corporations is really owned by UK nationals. But we, along with domestic tax auditors, are not in a position to identify this.
This includes all real estate that is not fully owned by the public sector (state, regions or municipalities).
Corporations listed at the Oslo Stock Exchange (including corporations fully owned by these) are not subject to report their real estate wealth. It is evident in our data that some of the publicly listed corporations report real estate wealth despite this. For those that do not, we impute the real estate assets reported in the financial statements (see subsection 3.3).
This is the underlying material for the Norwegian Shareholder register. Every Norwegian corporation is obliged to submit the yearly Shareholder register statement (RF-1086) to the tax administration. This is the foundation for the Shareholder's tax report (RF-1088). This report presents a summary of individuals' and organisations' shares in Norwegian limited companies. The population of the shareholder register is thus nearly the same as for the tax returns. Still, there will be a non-response rate, especially when the owner is a foreign corporation that does not have a tax incentive to report ownership.
We assign the collective term "Unknown, emigrated" if the country the owner emigrated to is not registered. In the few instances where the tax return submitter is not registered in the population register, and does not appear with a residence country in the shareholder register, we assign the owner country "Unknown, other foreign".
This may be explained by the high homeownership rate and small rental property market in Norway. It may also be partly explained by measurement error, if some corporations report rental housing as commercial real estate. Given the relative high concentration of foreign ownership through corporations, this would lead to an underestimation of foreign ownership in the Norwegian housing sector, and an overestimation of foreign ownership in commercial real estate.
See for instance: https://kapital.no/reportasjer/naeringsliv/2021/10/09/7746183/eksilmilliardaerene
See for instance: https://www.dn.no/magasinet/eiendom/luxembourg/skatteparadiser/endrerosjo/1350-norske-eiendommer-eies-fra-luxembourg-skjulte-eiere-bak-milliardverdier/2-1-1150470
and the Norwegian Tax Administration for comments and input. We are grateful for financial support from the Research Council of Norway, grant number 325720. Økland gratefully acknowledges support from NMBU to finance a research stay at EU Tax Observatory during Spring 2022, project number 1211130114. Any errors are our own.
The views expressed here are those of the author(s) and not those of the EU Tax Observatory.
Appendix A Tax haven list
We base our list of tax havens on Menkhoff and Miethe (2019), with the following modification: We include the Netherlands and the United Arab Emirates. There are no observations in our data for Antigua & Barbuda, Dominica, Grenada, Saint Lucia, Saint Vincent & The Grenadines, San Marino, Maldives, Cook Islands, Samoa, Anguilla, Aruba, Curacao, Montserrat, Nauru, Niue, Tonga, Sint Maarten. Thus, we do not consider these.
The final list we use: Bahamas, Barbados, British Virgin Islands, Cayman Islands, Saint Kitts And Nevis, Turks And Caicos Islands, Belize, Costa Rica, Panama, Hong Kong, Macao, Singapore, Andorra, Guernsey, Jersey, Cyprus, Gibraltar, Isle Of Man, Liechtenstein, Luxembourg, Malta, Monaco, Switzerland, Mauritius, Seychelles, Bahrain, Bermuda, Vanuatu, Liberia, Malaysia, Chile, Trinidad & Tobago, Uruguay, Marshall Islands, Netherland Antilles, Virgin Islands (US), Lebanon, Jordan, the Netherlands, Austria, Belgium, Ireland
B Sectoral breakdown
Table 2 shows how the real estate wealth owned by corporations is distributed among the different corporate sectors, and how this sectoral breakdown looks for foreign-owned and for domestically-owned real estate separately. It shows that the real estate activities sector is by far the largest real estate owner, followed by the construction sector, the oil and gas sector, the retail trade sector and the food products sector in terms of total value. The chemicals sector and the oil and gas sector notably have a sizeable foreign ownership presence, while they have a smaller share of the real estate owned by Norwegians. The sectors real estate activities and construction of buildings hold a larger share of the domestically-owned real estate than of the foreign-owned.
04103693 | en | [
"spi.nrj",
"sde",
"spi.signal"
] | 2024/03/04 16:41:22 | 2023 | https://hal.science/hal-04103693/file/WE2023_PO_005_i4SEE.pdf | 1. We gather 10-minute SCADA data before and after the installation of the retrofit(s)
to evaluate the impact on wind turbine performance, aiming for a timeframe of 12 months or more before and 6 months or more after the installation for accuracy.
2. Using only the 10-minute dataset, we automatically filter out periods of abnormal turbine operation, using state detection algorithms to ensure strict filtering and optimal data quality without the need for event logs.
3. To enhance the dataset, we couple the filtered 10-minute dataset with external environmental data from a weather API provider. Depending on the location of the turbine, this allows us to obtain new variables such as pressure, temperature, humidity, rainfall, and snow events. These additional variables enable us to perform air density correction and further filtering.
4. We match 10-minute data points from "before" and "after" periods, based on all environmental variables to allow accurate comparison and evaluation of turbine performance, ensuring both datasets cover the same ranges of conditions and events, and eliminating environmental variation that may affect the results.
Properly matching each environmental variable in both the "before" and "after" datasets is crucial for reliable results. Finally, the statistical distribution of each environmental variable should be identical in both datasets.
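One way such a matching step could be implemented is sketched below. This is our own simplification (coarse binning of a few environmental variables and an inner join on bins present in both periods), not the exact i4SEE procedure; the column names and bin widths are assumptions.

```python
import pandas as pd

# Sketch: keep only 10-minute records whose environmental "bin" occurs in both the
# pre- and post-upgrade periods, so that both datasets cover the same conditions.
# Column names (wind_speed, wind_dir, air_density) and bin widths are illustrative.

BINS = {"wind_speed": 0.5, "wind_dir": 30.0, "air_density": 0.025}

def add_bins(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    for col, width in BINS.items():
        out[col + "_bin"] = (out[col] / width).round().astype(int)
    return out

def match_periods(before: pd.DataFrame, after: pd.DataFrame):
    """Return the subsets of `before` and `after` that share environmental bins."""
    b, a = add_bins(before), add_bins(after)
    keys = [c + "_bin" for c in BINS]
    common = pd.merge(b[keys].drop_duplicates(), a[keys].drop_duplicates(), on=keys)
    return b.merge(common, on=keys), a.merge(common, on=keys)
```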
Wind turbine operators and owners are presented a range of performance enhancing retrofit options by OEMs and third parties. Such offerings are accompanied by specific details of the expected performance improvements in terms of AEP (Annual Energy Production).
These claims can be very difficult to verify independently. The current best practice for AEP gain verification, as dictated by the IEC, would involve a PCV (power curve verification) campaign before and after the retrofit is performed on the turbine. These campaigns require the use of additional and independent wind measurement device(s).
Unfortunately, the cost of carrying out such verification campaigns is sometimes higher than the retrofit itself, and therefore can only be performed on selected candidate turbines of a wind farm. This results in relatively high levels of uncertainty surrounding the overall gain that can be achieved at a wind farm or fleet level.
In this study we investigated the use of SCADA 10-minute data as a pragmatic means of quickly and efficiently assessing potential AEP gains.
Abstract
Results
Method
Regarding uncertainties
Core assumption & limitation
When using SCADA data for portfolio-level analysis, it is almost impossible to obtain calibration reports and mounting information for each sensor of each turbine. In many cases, sensors such as nacelle anemometers and temperature sensors do not even undergo any traceable calibration. Due to these limitations, it is not possible to perform a full propagation of uncertainties (Type A+B) in a SCADA-based analysis.
Therefore, in the current implementation, we have restricted our analysis to provide only Type A uncertainties, i.e. statistical uncertainties.
Assessing wind turbine performance upgrades using only 10-minute SCADA data
Julien TISSOT, Christopher GRAY i4SEE TECH GmbH, Austria PO.005
Visit our homepage at i4see.com
This comparison analysis is highly dependent on the Ceteris Paribus assumption, which assumes that all relevant variables are held constant, except the one being studied. In our case, we assume that the sensors measuring the environmental variables remain unchanged throughout the periods, while the power output has the potential to change due to the retrofit.
However, the longer the analysis period, the greater the likelihood that this assumption may be false. For example, changes or defects in the nacelle anemometry or controller updates can occur, potentially affecting the accuracy of the analysis.
To mitigate this risk, we have developed several pre-processing steps as well as a set of best-practice guidelines that should be followed. Nevertheless, the key to ensuring data quality is a clear and strict planning of the retrofit campaign.
The graph presented above illustrates the similarity between the windrose plots generated from the processed datasets before (left), and after (right) the upgrade.
5. Finally, we perform a standard PCV analysis on both datasets, similar to the IEC process, and calculate the AEP using the wind distribution from the entire dataset.
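A minimal sketch of this final step is shown below. It is our own simplified, IEC-style binning with assumed column names, not the exact procedure used in the study.

```python
import pandas as pd

HOURS_PER_YEAR = 8760.0

def binned_power_curve(df: pd.DataFrame, bin_width: float = 0.5) -> pd.Series:
    """Mean power per wind-speed bin (columns `wind_speed` [m/s] and `power` [kW] assumed)."""
    bins = (df["wind_speed"] / bin_width).round() * bin_width
    return df.groupby(bins)["power"].mean()

def aep(power_curve: pd.Series, wind_speed_all: pd.Series, bin_width: float = 0.5) -> float:
    """AEP in kWh, weighting each bin's mean power by its frequency in the full dataset."""
    bins = (wind_speed_all / bin_width).round() * bin_width
    freq = bins.value_counts(normalize=True)
    shared = power_curve.index.intersection(freq.index)
    return float((power_curve[shared] * freq[shared]).sum() * HOURS_PER_YEAR)

# The relative gain is then (aep_after - aep_before) / aep_before for each turbine.
```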
Conclusions
• 10-minute SCADA data can be used for assessing wind turbine upgrades with a careful pre-processing of the dataset and a clear campaign planning.
• Trend analysis over a large wind farm campaign can provide valuable insights on the quality of wind turbine retrofits, often surpassing those obtained from a standard adhoc PCV campaigns.
• This method offers a pragmatic and cost-effective approach for wind farm operators and owners who wish to assess the effectiveness of retrofits.
Once the AEP is calculated for both the "before" and "after" periods, the influence of the upgrade on AEP, or the "DELTA AEP" can be computed. Through our experience, we found that aggregating these deltas at the wind farm level and studying the general trend in results provides insights into the actual gains of retrofit installations, particularly when the wind farm includes control turbines that did not receive upgrades.
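A sketch of this farm-level view, with assumed data structures of our own, could look as follows.

```python
import pandas as pd

# Sketch: compare the distribution of per-turbine AEP deltas between upgraded turbines
# and control turbines that received no retrofit.
# `deltas` maps turbine id -> relative AEP change; `upgraded` is the set of retrofitted ids.

def farm_level_summary(deltas: dict, upgraded: set) -> pd.DataFrame:
    df = pd.DataFrame({"delta_aep": pd.Series(deltas)})
    df["group"] = ["upgraded" if t in upgraded else "control" for t in df.index]
    return df.groupby("group")["delta_aep"].agg(["count", "mean", "std"])

# The gain attributable to the retrofit can be read as the difference between the
# mean delta of the upgraded group and that of the control group.
```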
Once all pre-processing steps have been applied to the dataset, the data analyst should obtain two datasets where the wind turbine was in normal operation and the environmental conditions were precisely identical. This filtered and pre-processed dataset can then be used for the assessment of the upgrade.
Ceteris Paribus, a comparison is now possible.
Example of filtered and processed datasets before (left), and after (right) the upgrade.
In grey, the raw dataset, in blue the final processed dataset, ready for PCV analysis. |
04025173 | en | [
"info.info-cr",
"stat.ml"
] | 2024/03/04 16:41:22 | 2022 | https://hal.science/hal-04025173/file/ARES2022-1.pdf | Joséphine Delas
email: [email protected]
Christopher Neal
Frédéric Cuppens
Nora Boulahia-Cuppens
Evading Deep Reinforcement Learning-based Network Intrusion Detection with Adversarial Attacks
Keywords: adversarial machine learning, adversarial examples, intrusion detection, reinforcement learning, evasion attacks
An Intrusion Detection System (IDS) aims to detect attacks conducted over computer networks by analyzing traffic data. Deep Reinforcement Learning (Deep-RL) is a promising lead in IDS research, due to its lightness and adaptability. However, the neural networks on which Deep-RL is based can be vulnerable to adversarial attacks. By applying a well-computed modification to malicious traffic, adversarial examples can evade detection. In this paper, we test the performance of a state-of-the-art Deep-RL IDS agent against the Fast Gradient Sign Method (FGSM) and Basic Iterative Method (BIM) adversarial attacks. We demonstrate that the performance of the Deep-RL detection agent is compromised in the face of adversarial examples and highlight the need for future Deep-RL IDS work to consider mechanisms for coping with adversarial examples.
CCS CONCEPTS
• Security and privacy → Intrusion/anomaly detection and malware mitigation; • Computing methodologies → Machine learning
INTRODUCTION
Concern about security attacks on modern connected systems such as internet-connected devices or critical data servers has been growing for the past two decades. Intrusion Detection Systems (IDSs) are thus widely used as an automatic way of detecting potential threats within network connections, and their performances are constantly challenged to cope with the development of increasingly sophisticated cyberattacks.
Supervised Learning (SL) has introduced a whole new set of capabilities into IDS technology, leading to spectacular progress in intrusion detection tasks [START_REF] Buczak | A Survey of Data Mining and Machine Learning Methods for Cyber Security Intrusion Detection[END_REF]. Still, a particularly difficult task for an IDS remains the detection of previously unseen anomalies (i.e. zero-day attacks). Reinforcement learning (RL) is a promising lead in IDS research, as it constitutes an adaptive and responsive environment suitable for online training, resulting in simple and fast prediction agents [START_REF] Hu | Reinforcement Learning for Adaptive Cyber Defense Against Zero-Day Attacks[END_REF][START_REF] Lopez-Martin | Application of deep reinforcement learning to intrusion detection for supervised problems[END_REF]. However, the most efficient RL-based IDS implementations use Deep Neural Networks (DNNs) at their core, which have been shown to be vulnerable to adversarial examples [START_REF] Huang | Adversarial attacks on neural network policies[END_REF][START_REF] Lin | Tactics of Adversarial Attack on Deep Reinforcement Learning Agents[END_REF]. These attacks involve slightly modifying data samples in order to mislead a classification model. Previous work has evaluated the effects of adversarial examples on DNN-based IDSs [START_REF] Amine Merzouk | A Deeper Analysis of Adversarial Examples in Intrusion Detection[END_REF], yet little is known about the vulnerability of RL-based detection methods to adversarial examples.
In this paper, we investigate the performance of a state-of-the-art Deep-RL intrusion detection agent when exposed to adversarial attacks. Caminero et al. [START_REF] Caminero | Adversarial environment reinforcement learning algorithm for intrusion detection[END_REF] present a novel approach that has been shown to outperform other RL and SL-based detection models. They trained a Deep-RL agent in an adversarial environment using the NSL-KDD dataset [START_REF] Tavallaee | A detailed analysis of the KDD CUP 99 data set[END_REF]. In this paper, we show how adversarial examples generated using two methods [START_REF] Goodfellow | Explaining and harnessing adversarial examples[END_REF][START_REF] Kurakin | Adversarial examples in the physical world[END_REF] can evade the detection of the agent. In keeping with initial studies in this domain, we consider white-box individual attacks where the intruder has access to the parameters of the model [START_REF] Lin | Tactics of Adversarial Attack on Deep Reinforcement Learning Agents[END_REF][START_REF] Papernot | The Limitations of Deep Learning in Adversarial Settings[END_REF].
The remainder of the paper is organized as follows. Section 2 provides an overview of influential works applying RL to IDSs and a review of adversarial example generation methods for Deep-RL agents. The methodology for this paper is provided in Section 3, where we describe the dataset, the RL detection agent, and the adversarial attacks used in our experiments. In Section 4, we present the results achieved by adversarial examples on the performance of the agent. A discussion of the immediate practicality of these attacks and an outline for future work is provided in Section 5. Lastly, Section 6 provides some concluding remarks.
RELATED WORK
RL techniques have an extensive range of applications in cybersecurity due to their adaptive nature and the rapidity of their predictive models. The first works concerning RL and intrusion detection were published in the early 2000s and present mostly innovative works using tabular methods. Servin et al. [START_REF] Servin | Multi-agent Reinforcement Learning for Intrusion Detection[END_REF] use a Q-learning algorithm based on a look-up table to detect network intrusions, whereas Xu et al. [START_REF] Xu | A Reinforcement Learning Approach for Host-Based Intrusion Detection Using Sequences of System Calls[END_REF] introduce Temporal Difference (TD) learning algorithms for live detection.
More recently, the development of Deep-RL algorithms has further improved the performances of IDS models [START_REF] Lopez-Martin | Application of deep reinforcement learning to intrusion detection for supervised problems[END_REF]. In particular, Caminero et al. [START_REF] Caminero | Adversarial environment reinforcement learning algorithm for intrusion detection[END_REF] present an innovative multi-agent deep reinforcement learning model that outperforms previous tabular methods, as well as several other DNN models. Their algorithm is based on the concurrency of two different agents to improve the predictions.
Despite the remarkable performance shown by RL agents in intrusion detection, there is a concern about their reliability in the presence of adversarial attacks. Since Deep-RL agents rely on DNNs, they could be vulnerable to malicious inputs, chiefly adversarial examples. Behzadan et al. [START_REF] Behzadan | Vulnerability of Deep Reinforcement Learning to Policy Induction Attacks[END_REF] first explored the effect of adversarial examples on Deep Q-Networks (DQNs). The authors use two well-known attacks, namely, Fast Gradient Sign Method (FGSM) [START_REF] Goodfellow | Explaining and harnessing adversarial examples[END_REF] and Jacobian-based Saliency Map Attack (JSMA) [START_REF] Papernot | The Limitations of Deep Learning in Adversarial Settings[END_REF], to perturb the training of a game-learning agent. They also demonstrate the transferability of adversarial examples between agents. Huang et al. [START_REF] Huang | Adversarial attacks on neural network policies[END_REF] show how an adversary could interfere with the operations of a trained RL agent. The authors use FGSM to generate adversarial examples in both white-box and black-box settings by utilizing the transferability property [START_REF] Papernot | Practical Black-Box Attacks against Machine Learning[END_REF]. In their study, Kos et al. [START_REF] Kos | Delving into adversarial attacks on deep policies[END_REF] compare the effectiveness of adversarial examples with random noise. They show how the value function can indicate opportune moments to inject perturbations and how adversarial re-training can enhance the resilience of RL agents [START_REF] Goodfellow | Explaining and harnessing adversarial examples[END_REF]. Lin et al. [START_REF] Lin | Tactics of Adversarial Attack on Deep Reinforcement Learning Agents[END_REF] introduce two novel methods to attack Deep-RL agents using adversarial examples. These are referred to as the strategically-timed attack, which aims to introduce perturbations at critical moments, and the enchanting attack, which aims to lure an agent to a certain state maliciously. Using these methods, the authors demonstrate they are able to significantly decrease the accumulated rewards collected by a DQN and an Asynchronous Advantage Actor-Critic (A3C) agent on five different Atari games.
These previous studies demonstrate that Deep-RL agents are vulnerable to well-crafted adversarial examples. They propose different methods for attacking Deep-RL agents before and after training, as well as, in white-box and black-box settings. It has even been suggested to remediate the effect of adversarial examples against Deep-RL agents using adversarial re-training [START_REF] Kos | Delving into adversarial attacks on deep policies[END_REF]. While there is a rich body of work studying how adversarial examples can degrade the performance of Deep-RL models, these previous works investigate attacks against agents used in control problems, particularly the playing of Atari video games. Such models are significantly different from the agent presented in this paper, as we will develop later in Section 3.2, since the successive states are independent of the action taken in the previous step, thus affecting the learning process. In addition, evading an intrusion detection model involves targeting a specific class (labeling malicious connections as normal behavior); while working most of the time with imbalanced datasets [START_REF] Yilmaz | Addressing Imbalanced Data Problem with Generative Adversarial Network For Intrusion Detection[END_REF]. For these reasons, we notice a gap in the literature concerning the understanding of adversarial attacks against Deep-RL-based intrusion detection agents and present this work as an initial building block toward filling this gap.
METHODOLOGY
First, we present the dataset that we use for the training and the validation of the the detection agent. Then, we describe the Deep-RL detection agent used in our experiments, as proposed by Caminero et al. [START_REF] Caminero | Adversarial environment reinforcement learning algorithm for intrusion detection[END_REF]. Finally, we outline the adversarial attacks we use against the agent.
Dataset
For comparative studies of our results, we opt for the commonly used NSL-KDD dataset [START_REF] Tavallaee | A detailed analysis of the KDD CUP 99 data set[END_REF]. This dataset is widely used in similar research papers, and particularly in Caminero et al. [START_REF] Caminero | Adversarial environment reinforcement learning algorithm for intrusion detection[END_REF] to validate the agent.
Each record is composed of 41 network features: 38 continuous (such as the duration of the connection) and 3 categorical. A record is labeled as either normal or an attack. There are 22 different attack types in the training set (therefore, 23 different label outcomes) but 38 in the testing set: an efficient detection model will have to detect anomalies it has not encountered during training. From this basis, a few preprocessing steps were applied: categorical features were one-hot encoded, and non-binary features were normalized (i.e. zero mean, standard deviation equal to one).
Finally, in this work, we aim to mislead the model into classifying an attack record as a normal one (i.e. a false negative classification). Therefore, we do not need to differentiate between the individual attack types. Instead, we group them into 4 classes of attacks. The approximately 120,000 samples are thus distributed into the following classes: Normal (53.46%), Denial-of-Service (DoS) (36.46%), Probing (PROBE) (9.25%), Remote-to-Local (R2L) (0.79%), and User-to-Root (U2R) (0.04%).
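The preprocessing described above can be sketched as follows. This is our own illustration with pandas and scikit-learn, not the authors' code; the attack-to-class mapping shown is abbreviated.

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler

# Sketch: one-hot encode the 3 categorical features, standardise the non-binary
# numeric features, and map NSL-KDD attack labels into 4 attack classes plus Normal.

CATEGORICAL = ["protocol_type", "service", "flag"]
ATTACK_CLASS = {"normal": "Normal", "neptune": "DoS", "smurf": "DoS",
                "portsweep": "PROBE", "nmap": "PROBE",
                "guess_passwd": "R2L", "rootkit": "U2R"}   # abbreviated mapping

def preprocess(df: pd.DataFrame):
    y = df["label"].map(ATTACK_CLASS)                       # class label per record
    features = df.drop(columns=["label"])
    numeric = [c for c in features.columns
               if c not in CATEGORICAL and features[c].nunique() > 2]
    features[numeric] = StandardScaler().fit_transform(features[numeric])
    return pd.get_dummies(features, columns=CATEGORICAL), y
```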
Detection Model
We work with a state-of-the-art Deep-RL intrusion detection agent that has been shown to outperform other DNN and Deep-RL methods on the NSL-KDD dataset [START_REF] Caminero | Adversarial environment reinforcement learning algorithm for intrusion detection[END_REF]. The agent is referred to as Adversarial Environment using Reinforcement Learning (AE-RL), since it enhances its learning phase by using an adversarial environment to select training samples. It is composed of two concurrent agents: the first agent is the classifier that predicts the labels for each sample, whereas the second agent is a selector that acts as a simulated environment and feeds sample records to the classifier. Therefore, the second agent is only used during training to obtain a more robust model and is not involved in the attack detection.
The classifier is a Deep Q-Network (DQN) agent [START_REF] Mnih | Human-level control through deep reinforcement learning[END_REF], described in Figure 1. With the states (record features) as input, its goal is to choose the best action according to its Q-function [START_REF] Richard | Reinforcement learning: An introduction[END_REF]. The Q-function is simulated by a fully-connected, 3-layer neural network with 100 units per layer, which is trained to approximate the optimal Q-function:
$$Q^*(S_t, A_t) = R_{t+1} + \gamma \max_a Q(S_{t+1}, a), \qquad (1)$$
where $Q^*$ is the optimal Q-function, $S$, $A$, and $R$ are the states, actions, and rewards, $\gamma$ is the discount factor, and $t$ and $t+1$ are the timesteps.
During training, the error between the target and the predicted Q-value is back-propagated through the model's parameters. It is calculated with Huber loss, which is quadratic if the absolute difference falls below 1 and linear otherwise. This loss function provides smoothness near zero while being less sensitive to outliers than the squared error loss. After predicting the state's Q-value, the agent chooses the action according to an ϵ-greedy policy [START_REF] Richard | Reinforcement learning: An introduction[END_REF]: randomly with a probability of ϵ, else the one that maximizes the Q-value. The ϵ value is high at the beginning and reduces over the course of the training process. When the training is done, ϵ is set to zero in order to optimize the prediction.
In this setup, the states are data records issued from the dataset and the actions are the different possible label outputs. The reward is set to 1 if the classifier is correct and 0 otherwise. Finally, the discount factor γ is set to a value close to zero, since the states do not influence one another and each state is independent of the precedent.
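A minimal PyTorch sketch of the classifier side is given below. It reflects our own reading of the description above rather than the authors' implementation: the feature and class counts, the learning rate, and the simplified one-step target (reasonable because the discount factor is close to zero) are assumptions.

```python
import random
import torch
import torch.nn as nn

N_FEATURES, N_CLASSES = 122, 5   # one-hot encoded NSL-KDD features / label classes (assumed)

# Q-network: three fully-connected hidden layers of 100 units, one Q-value per label.
q_net = nn.Sequential(
    nn.Linear(N_FEATURES, 100), nn.ReLU(),
    nn.Linear(100, 100), nn.ReLU(),
    nn.Linear(100, 100), nn.ReLU(),
    nn.Linear(100, N_CLASSES),
)
loss_fn = nn.SmoothL1Loss()      # Huber loss
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def act(state: torch.Tensor, epsilon: float) -> int:
    """Epsilon-greedy label choice for one record."""
    if random.random() < epsilon:
        return random.randrange(N_CLASSES)
    with torch.no_grad():
        return int(q_net(state).argmax())

def train_step(state: torch.Tensor, action: int, reward: float) -> None:
    # Records are independent, so the discount factor is close to zero and the
    # target collapses to the immediate reward (1 if the label was correct, else 0).
    q_pred = q_net(state)[action]
    loss = loss_fn(q_pred, torch.tensor(float(reward)))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```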
When transitioning from one step to the next, selecting random samples from the dataset is not the most efficient solution due to the unbalanced nature of the dataset. The selector agent instead chooses which anomaly category to pull the next state and attempts to find the most difficult records for the classifier. The selector's algorithm is DQN with Huber loss and epsilon-greedy policy, similar to the first agent, but the rewards are opposite. That is, -1 if the classifier chooses correctly and 0 otherwise. The two agents are considered concurrent because of this method for providing rewards.
Once the training is complete, the prediction phase only consists of passing a record through a small fully-connected neural network and choosing the maximum output. This simple architecture allows for efficient classification, which is critical in intrusion detection tasks.
Adversarial Attacks
When the training is complete, we use adversarial examples to mislead the agent on test data. A perturbation is computed using the following generation methods and added to the original test data. This perturbation will corrupt the prediction of the Q-values and influence the decision of the agent. These attacks were implemented using the Adversarial Robustness Toolbox (ART) library [START_REF] Nicolae | Adversarial Robustness Toolbox[END_REF].
3.3.1 Fast Gradient Sign Method.
The first attack we use is the Fast Gradient Sign Method (FGSM) introduced by Goodfellow et al. [START_REF] Goodfellow | Explaining and harnessing adversarial examples[END_REF]. This method exploits the gradient of the loss function, which usually serves to update the parameters of the model. Instead, the gradient is propagated back to the inputs and its sign guides the perturbation. An adversarial example x ′ is formed by adding the perturbation amplitude ϵ with the sign of the gradient to an original example x. Equation 2 describes this perturbation, where ∇ is the gradient function, J θ is the loss function with regards to the parameters θ , and l is the true label of the example.
$$x' = x + \epsilon \cdot \mathrm{sign}(\nabla_x J_\theta(x, l)) \qquad (2)$$
3.3.2 Targeted Fast Gradient Sign Method. FGSM is an untargeted attack by definition, as it does not aim to misclassify the adversarial example towards a specific class. However, it would not be in the interest of attackers to misclassify an attack as another type of attack, since evading detection implies classifying the attacks as normal traffic. To target a specific class using FGSM, we perform the update in Equation 3, where l' is the target class.
$$x' = x - \epsilon \cdot \mathrm{sign}(\nabla_x J_\theta(x, l')) \qquad (3)$$
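The two updates can be written directly against the trained network. The sketch below is our own generic PyTorch version (the ART library used in the experiments provides ready-made equivalents) and assumes that `model` maps a 1-D feature vector to per-class Q-values or logits.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, label, eps=0.1, targeted=False):
    """One-step FGSM: Eq. (2) when untargeted, Eq. (3) when targeted."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv).unsqueeze(0), torch.tensor([label]))
    loss.backward()
    step = eps * x_adv.grad.sign()
    # Untargeted: increase the loss of the true label.
    # Targeted: decrease the loss of the target label (e.g. the "Normal" class).
    return (x_adv - step if targeted else x_adv + step).detach()
```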
3.3.3 Basic Iterative Method. Kurakin et al. [START_REF] Kurakin | Adversarial examples in the physical world[END_REF] introduce the Basic Iterative Method (BIM) as an extension of FGSM. The idea is to apply small perturbations over several steps to create more precise adversarial examples. Additionally, a clipping method is used at each step to prevent features from exceeding valid intervals. Generally, increasing the number of iterations will produce finer perturbations and can lead to more subtle adversarial examples. However, there is a trade-off, as computing these small steps is typically slower to produce adversarial examples than non-iterative methods.
3.3.4 Targeted Basic Iterative Method.
Applying BIM involves using FGSM, as outlined in Equation 2, to generate an adversarial example for some unspecified class and may not necessarily serve the goal of the attacker. Using the BIM process with targeted FGSM, as outlined in Equation 3, produces an adversarial example for a particular class.
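A corresponding sketch for BIM and its targeted variant is given below, again our own simplification reusing the `fgsm` function above; the step size, iteration count, and clipping bounds are assumptions, since valid feature ranges depend on the preprocessing.

```python
import torch

def bim(model, x, label, eps=0.1, alpha=0.01, n_iter=10, targeted=True,
        lower=None, upper=None):
    """Iterative FGSM with clipping to the eps-ball and to valid feature ranges."""
    x_adv = x.clone().detach()
    for _ in range(n_iter):
        x_adv = fgsm(model, x_adv, label, eps=alpha, targeted=targeted)
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps)  # stay within the eps-ball
        if lower is not None and upper is not None:
            x_adv = torch.clamp(x_adv, lower, upper)            # keep features valid
    return x_adv
```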
RESULTS
In this section, we present the results of our experiments. We evaluate the performance of the trained agent using the test set, of approximately 30,000 samples, with and without adversarial perturbation. In all adversarial attacks, we set the maximum amount of perturbation to ϵ = 0.1.
Two-class Attack Detection Facing Adversarial Examples
This section involves experiments using two-class detection, where the detection agent assigns a label of Normal or Anomaly to each sample. We only consider the generic, untargeted FGSM and BIM attacks, since the attacker intends to make anomalous packets appear legitimate.
The performance of the detection agent is shown in Figure 2, the accuracy and F1-scores are shown in Table 1, and the confusion matrices of the detection agent for the label decisions are shown in Figure 3.
No Attack. In this case, the training set is balanced (i.e., 53% normal and 47% anomalies), which allows the agent to learn the two classes accurately. In Figure 3, we can see that 79% of the anomalies are detected by this model, with 84.81% accuracy and an 84.09% F1-score. These results correspond to state-of-the-art performance on NSL-KDD, even though Figure 2 shows that about 2800 anomalies are classified as normal traffic.
Fast Gradient Sign Method. We observe, in Figure 3, a significant drop in the number of true positives for both classes, but what is particularly interesting is the false positive rate of the Normal class. Indeed, it rose from 0.21 in the baseline model to 0.28 with the FGSM examples, which means that this attack increased the number of suspicious connections left undetected by the model.
Basic Iterative Method. BIM is supposed to generate more precise perturbations, finding new paths to escape the prediction. In Table 1, we notice a drop in the performance of the model compared to the baseline; the accuracy drops from 84.81% to 75.84%, and the F1-score also drops from 84.09% to 66.71% for the anomaly class. However, we notice in Figure 2 that most of the misclassified items were initially labeled as Normal, and the model was lured into labeling them as Anomaly. A targeted attack, in a multi-class context, can improve the deception by aiming toward the Normal class for all examples.
Multi-class Attack Detection Facing Adversarial Examples
This section involves multi-class detection, where the agent must choose a label of Normal or one of the attack categories of DoS, PROBE, R2L, and U2R. The performance of the detection agent is shown in Figure 4, the accuracy and F1-scores are shown in Table 2, and the confusion matrices of the detection agent are shown in Figure 5.
No Attack. Without the presence of adversarial examples, we see that the agent has an overall good performance with an accuracy of 83.70%. The F1-scores for the Normal and DoS categories are 83.33% and 91.32% respectively. However, the agent shows weak performance on R2L and U2R attacks, where the F1-score is below 40%. This is common in many classifiers because these attack types are hard to detect and are underrepresented in the dataset. Figure 5 shows a noticeable intensity on the diagonal, especially for the Normal, DoS, and Probe categories. The R2L examples are often classified as Normal while the U2R are spread across different categories.
Untargeted Fast Gradient Sign Method. Applying adversarial perturbations using FGSM shows an important drop in the performance of the agent. We see a large number of misclassifications for all classes in Figure 4. The accuracy of the agent drops to 75.58%, while the F1-Score of all classes is substantially lower. The false positive rate of the class Normal shows that most of the examples are misclassified in this category. The same results are shown in Figure 5, as the diagonal is less intense and the Normal column (corresponding to the examples classified as Normal) is more intense.
Targeted Fast Gradient Sign Method. With targeted FGSM, adversarial examples are pushed toward the Normal target class. We see a noticeable impact in Figure 4 with an even higher number of false positives in the Normal class. The accuracy and F1-score for the Normal label drop to 56.37% and 65.89% respectively. The confusion matrix in Figure 5 shows a very intense concentration in the Normal column. This indicates that targeted attacks can be more interesting for attackers who want to evade an RL-based IDS.
Untargeted Basic Iterative Method. By applying the untargeted BIM method, we find no real change to the performance of the detection agent compared to when no attack is present. This can be explained by the perturbation steps limit (set to 100). With no target class for this attack, the perturbations do not go far enough in a particular direction to modify the class of the sample from the viewpoint of the detection agent. Without any computation limitations, we would expect this method to cause a more severe impact on the detection performance.
Targeted Basic Iterative Method. With targeted BIM, we see the most drastic degradation in the performance of the detection agent. This method can perturb anomalous samples toward the Normal class more precisely. The accuracy on Normal samples drops to 28.47% and the F1-score for all labels drops below 20%. The substantial number of misclassifications of anomalous packets as Normal is demonstrated in Figures 4 and 5.
A recent study [START_REF] Amine Merzouk | Investigating the practicality of adversarial evasion attacks on network intrusion detection[END_REF] has identified several invalidation properties in adversarial examples generated on 3 intrusion detection datasets that prevent their implementation in practice. These properties include out-of-range values, corrupt binary values, membership in multiple categories, and corrupt semantic relations. For the purpose of this work, we demonstrate that current adversarial attacks can bypass Deep-RL detection agents at the data level, but we do not solve the practicality issues at the network level. An avenue for future work is to investigate the impact of other adversarial attacks on Deep-RL IDSs, including in the more realistic black-box setting. The detection agents should also be trained on recent datasets that are more representative of modern network traffic and attacks. The practicality concerns should be addressed by applying clipping functions and penalties to restrict the perturbation. Semantic relations should be extracted from the data and integrated into the generation methods to produce consistent adversarial examples.
CONCLUSION
Recent research in cybersecurity has used Deep-RL in multiple functions, especially intrusion detection. This approach is promising as it allows more adaptability and faster processing. However, using Deep-RL detection methods opens the door to the threat of adversarial attacks.
In this work, we study the vulnerability of a Deep-RL IDS detection agent when faced with adversarial examples. We train a state-of-the-art Deep-RL detection agent using the NSL-KDD dataset and evaluate its performance with several adversarial attack methods. We demonstrate a substantial deterioration in detection performance when adversarial attacks are used to perturb malicious packets towards being classified as benign. Finally, we discuss the practicality of these adversarial examples and suggest research directions to implement adversarial attacks on real networks.
Figure 1: Details of the classifier DQN agent for the training phase
Figure 2: Performance of AE-RL two-class detection model facing adversarial attacks
Figure 3: Confusion matrices of the AE-RL two-class detection model facing adversarial attacks
Figure 4: Performance of AE-RL multi-class detection model facing adversarial attacks
Figure 5: Confusion matrices of the AE-RL multi-class detection model facing adversarial attacks
Table 1: Accuracy and F1-score of AE-RL two-class detection model facing adversarial attacks

          No Attack        FGSM             BIM
Label     Acc.    F1       Acc.    F1       Acc.    F1
Normal    84.81   84.09    66.87   61.16    75.84   66.71
Anomaly   84.81   85.47    66.87   71.12    75.84   81.04
Table 2: Accuracy and F1-score of AE-RL multi-class detection model facing adversarial attacks

          No Attack        FGSM              Targ. FGSM       BIM               Targ. BIM
Label     Acc.    F1       Acc.    F1        Acc.    F1       Acc.    F1        Acc.    F1
Normal    83.70   83.33    75.58   76.40     56.37   65.89    84.90   83.71     28.47   2.52
DoS       94.44   91.32    65.73   120.04    76.36   46.23    93.76   90.71     24.81   6.38
PROBE     95.40   77.36    72.09   119.68    89.93   21.83    95.41   78.27     79.62   13.87
R2L       90.25   37.47    84.68   111.25    88.16   12.17    89.91   39.03     84.19   18.93
U2R       98.10   15.44    98.55   112.83    98.52   15.73    97.76   13.99     97.09   8.13
04103721 | en | [
"spi.nrj",
"spi.signal"
] | 2024/03/04 16:41:22 | 2023 | https://hal.science/hal-04103721/file/WE2023_PO_006_i4SEE.pdf | Julien Tissot
Christopher Gray
Automated rotor imbalance monitoring using high-frequency SCADA data
When imbalanced, the rotor vibrates at a frequency directly proportional to its speed of rotation; the new metric therefore relies on a standard spectral analysis:
1. We measure the 1p component in the rotor speed using Fast Fourier Transform (FFT), which corresponds to oscillations occurring once per full rotation of the rotor. The higher the amplitude, the higher the imbalance.
2. Order tracking is used to handle varying rotor speed, providing a robust estimation of the 1p amplitude. If the rotor speed is not available, the generator speed can be converted to an equivalent rotor speed.
3. The amplitude is derived using Signal-to-Noise Ratio (SNR) concepts.
4. The 1p amplitude is computed every 10 minutes, which ensures backward compatibility with existing historical 10-minute SCADA data and facilitates comparative analysis among turbines. A minimal sketch of this computation is shown after this list.
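The sketch below only illustrates steps 1-3 under stated assumptions (a single 10-minute block of rotor-speed samples, no full order tracking, and a crude median noise floor); it is not the authors' implementation.

```python
import numpy as np

def one_p_amplitude(rotor_speed_rpm: np.ndarray, fs_hz: float) -> float:
    """SNR-like estimate of the 1p component in one 10-minute block."""
    x = rotor_speed_rpm - rotor_speed_rpm.mean()        # remove the DC component
    spectrum = np.abs(np.fft.rfft(x)) / len(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs_hz)
    f_1p = rotor_speed_rpm.mean() / 60.0                # mean rotations per second
    band = (freqs > 0.8 * f_1p) & (freqs < 1.2 * f_1p)  # tolerate speed variation
    peak = spectrum[band].max()
    noise_floor = np.median(spectrum[freqs > 0])
    return peak / noise_floor
```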
The spectral analysis of high-frequency rotor speed signals is a well-established method for revealing information on the state of imbalance in a wind turbine [1,2]. This may be caused by mass imbalance on the rotor blade structure (e.g., an uneven mass distribution during manufacturing or due to gradual water ingress) or aerodynamic imbalance caused by pitch offsets leading to an uneven torque intake on the blades.
Wind turbines generate a large volume of operational data. In the industry, the standard practice for collecting and analyzing SCADA has been to aggregate data at 10-minute intervals. While manufacturers typically collect data at a higher frequency, this level of detail has not always been made available to the end users, such as wind farm owners and asset managers.
In the past, the 10-minute data aggregation interval provided a good balance between data volume and detail, but with the increasing focus on data-driven decision making and the availability of new technologies and tools for data analysis and storage, there is a growing demand for access to higher-frequency datasets. With access to data analytics at higher frequencies, wind turbine owners and asset managers will gain deeper insights into turbine performance, identify potential issues more quickly, and ultimately optimize operations.
In this study, we investigate the harmonic analysis of the high-frequency rotor speed signal to detect rotor imbalance, with a focus on ensuring that the final metric is both usable and compatible with current asset management practices. We also present real-life examples where the new metric provided valuable insights for asset managers on a day-to-day basis.
Abstract
Visit our homepage at i4see.com
The Nyquist-Shannon sampling theorem dictates that at least 2 points per revolution are needed to avoid aliasing, but ideally a sampling frequency should provide twice that amount, i.e., 4 points per full rotor rotation. Given the typical RPM range of a turbine, we only compute the metric when 1 Hz data are available.
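As a rough worked example using the 14.75 RPM case shown in the sampling-rate figure: at 14.75 RPM the rotor completes about 0.25 revolutions per second, so the 1p component lies near 0.25 Hz; the Nyquist minimum of 2 points per revolution then corresponds to roughly 0.5 Hz, and the preferred 4 points per revolution to roughly 1 Hz, which is why the metric is restricted to 1 Hz data.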
Spectrogram illustrating the presence of harmonics in the high-frequency rotor speed signal. Rotor speed is overlaid to visualize the direct proportionality. (1 turbine, 12 hours)
Conclusions
▪ Detection of global rotor imbalance is possible using high frequency SCADA given a minimum sampling rate of 1Hz.
▪ Aggregating the imbalance metric at a 10-minute level ensures compatibility with historical SCADA data, facilitating further comparative analysis.
▪ Monthly distribution analysis of the metric provides insight on a wind farm or portfolio level, allowing for prioritized inspections and maintenance operations.
▪ Individual timeseries analysis of the metric enables the assessment of the impact of maintenance or environmental events on wind turbines.
After computing the imbalance metric every 10 minutes, we analyze its statistical distribution on a monthly basis per turbine. This enables comparison of similar turbines, provided they have a similar high-frequency rotor speed sampling rate:
Imbalance metric monthly distribution comparison for 2 turbines (left) and for the whole wind farm (right). In both cases, we see turbine A has a higher relative imbalance than turbine L.
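A possible aggregation of the 10-minute metric into monthly per-turbine distributions is sketched below; the column names are assumptions.

```python
import pandas as pd

def monthly_distribution(df: pd.DataFrame) -> pd.DataFrame:
    # Expected columns: 'timestamp' (datetime), 'turbine', 'imbalance_1p'.
    df = df.set_index("timestamp")
    grouped = df.groupby(["turbine", pd.Grouper(freq="M")])["imbalance_1p"]
    return grouped.describe()  # count, mean, std and quartiles per turbine-month
```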
Left graph: At 14.75 RPM, 1 Hz and 0.5 Hz sampling rate are acceptable but the peak at 1P is more pronounced at 1Hz. Right graph: Summary of sampling frequency requirement for the metric.
Monitoring the trend of this analysis at a portfolio level can provide valuable insights. It can help identify wind turbines that may be experiencing deteriorating health and prioritize their inspection and maintenance operations accordingly. By monitoring the metric monthly timeseries for one or multiple turbines, it is possible to correlate the amplitude of the imbalance with on-site events:
By identifying anomalies in imbalance, asset managers can make informed decisions on the effectiveness of interventions or retrofits.
The left graph illustrates an improvement in rotor imbalance after the OEM performed the annual maintenance. The right graph depicts an episodic and global degradation in rotor imbalance across the entire wind farm, which we found to be correlated with a severe icing event that occurred on this high-altitude wind farm.
Results
References
1. Caselitz, Peter; Giebhardt, Jochen (2005). Rotor Condition Monitoring for Improved Operational Safety of Offshore Wind Energy Converters. Journal of Solar Energy Engineering, 127(2), pp. 253.
2. M. R. Shahriar, P. Borghesani and A. C. C. Tan, "Speed-based diagnostics of aerodynamic and mass imbalance in large wind turbines," 2015 IEEE International Conference on Advanced Intelligent Mechatronics (AIM), Busan, Korea (South), 2015, pp. 796.
04103728 | en | [
"shs.eco"
] | 2024/03/04 16:41:22 | 2023 | https://shs.hal.science/halshs-04103728/file/WP12.pdf | David Agrawal
Konstantinos Angelopoulos
Marius Brülhart
Felix Bühlmann
Thomas David
Reto Föllmi
Roland Hodler
Matthieu Leimgruber
Marko Köthenbürger
Dominik Sachs
David Torun
Gabriel Zucman
Enea Baselgia
email: [email protected]
Isabel Z Martínez
email: [email protected]
Marius Brülhart
Felix Bühlmann
Marko Köthenbürger
Behavioral Responses to Special Tax Regimes for the Super-Rich: Insights from Swiss Rich Lists
Keywords: super-rich, tax mobility, preferential taxation, tax competition, wealth inequality. JEL-Classification: H24, H71, H73, R23, C81, D31.
Introduction
It is well-known that Switzerland has not only served as a hiding place for large fortunes of the world [START_REF] Zucman | The Missing Wealth of Nations: Are Europe and the US net Debtors or net Creditors?[END_REF][START_REF] Alstadsaeter | Tax evasion and inequality[END_REF][START_REF] Alstadsaeter | Tax evasion and tax avoidance[END_REF], but is also home to a considerable fraction of the global wealth elite. According to Forbes magazine, in March 2021 the number of billionaires per million inhabitants was 4-more than twice that of the US. In Switzerland, the rich enjoy the discretion that comes with the tradition of (in the meantime abolished) banking secrecy and a mild tax climate.
The tax privileges Switzerland grants in particular to rich foreigners have come under fire both internationally as well as within the country itself. Yet despite the strong interest of policymakers and voters, little is known about the super-rich in Switzerland: who they are, where their wealth comes from, and how their location choices depend on the ingenious tax privileges Switzerland offers to wealthy foreigners.
In this paper, we fill this gap in the literature by examining the 300 richest individuals and families in Switzerland listed each year in the "BILANZ" magazine. We refer to this tiny wealth elite, which constitutes the top 0.01% wealth holders in Switzerland, as super-rich and measurement of top wealth. 2 Our second contribution is to present a detailed picture of the super-rich in Switzerland. We study the structure and dynamics of wealth at the very top of the distribution, including the influence of inheritances, the industry composition of the wealthy elites, the geographic distribution, the role of wealthy foreigners, and intra-generational wealth mobility. We show that the wealthy are predominantly male or entire (extended) families. The share of women is below 10% and there are no signs of an increasing number of women among the super-rich. Average age is beyond 60 and has been increasing since 1989. The number of top managers has increased, but with a share of 8% they still constitute a fairly small group at the very top of the wealth distribution.
Inheritances are still the main factor for making it to the very top of the wealth distribution in Switzerland: in 2020, 60% of those in the BILANZ rich list were heirs or had married into a wealthy family. Inherited wealth is much more widespread at the top of the wealth distribution in Switzerland than in the US, particularly today: in the US, the share of heirs in the Forbes 400 list has dropped significantly, from 56% in 1982 to 31% in 2018 [START_REF] Scheuer | Taxation and the Superrich[END_REF]. Kaplan and Rauh (2013a) conclude that in the US, access to education at a young age and applying one's skills in the most salable industries has become much more decisive than an extensive wealth background in making it to the top of the wealth distribution. In Switzerland, in contrast, the share of top wealth owned by heirs fluctuates between 60-80% over the entire period from 1989 to 2020, with no clear trend. We thus find no support for an increasing importance of meritocratic principles in accessing the top of the wealth distribution in Switzerland. The importance of inheritance is also reflected in the high persistence of the same individuals and dynasties over time in the BILANZ data. Over the past two decades, wealth mobility within the tiny group of the super-rich has even declined: 71% of those listed in 2000 were still present five years later. Fifteen years later, 83% of those who were on the list in 2015 were still listed five years later.
Our data also shows the importance of super-rich foreigners in Switzerland. Since the turn of the century, about 50% of the individuals in the data are foreign-born (compared to 30% in the total resident population), and these super-rich foreigners are, on average, somewhat richer than their Swiss-born peers.
Wealthy foreigners enjoy significant tax privileges in Switzerland, as they can opt for expenditure-based taxation (often referred to as lump-sum taxation or "tax deals"). This scheme is only available to foreign nationals with no labor income earned in Switzerland.
Rather than their actual income and wealth, a mix of living expenses reported by the taxpayer and expenses assumed by the tax law serve as the tax base, to which then the regular income and wealth tax rates are applied. Especially for the super-rich foreign-of the UK "non-dom" system. In contrast to large elasticities documented in earlier papers, they find hardly any out-migration response one year after limitations on the "non-dom" system were imposed. They estimate that less than 3% left the UK in response (implied elasticity: 0.02). The "non-dom" system, however, is tailored to individuals with investment incomes abroad. They do not need to be super-rich, and they may (and often do) earn large labor incomes in the UK (taxed on a regular basis), and are allowed to possess UK nationality. These taxpayers may therefore be more strongly attached to the UK than the super-rich foreigners we study in the Swiss context. Furthermore, due to data limitations, Advani et al. (2022a) cannot study how the e ect evolves after several years, nor how the policy change a ected in-migration. Our estimates, in contrast, refer to the overall drop in super-rich foreign-born taxpayers in the abolition cantons, including new arrivals who now settle in cantons still o ering them tax privileges.
A limitation of the Swiss setting is that we cannot relate the percentage change in super-rich foreigners in a canton to the percentage change in the e ective tax rate. The stipulated tax rates themselves did not change. What di ers between regular taxation and expenditure-based taxation is the definition of the tax base. Unfortunately, we do not know the di erence between the synthetic expenditure tax base and the true income and wealth tax base of the eligible taxpayers. We can therefore not compute wealth tax elasticities implied by our estimates, which would make it possible to compare our results directly to those in [START_REF] Moretti | Taxing Billionaires: Estate Taxes and the Geographical Location of the Ultra-Wealthy[END_REF].
The remainder of this article is organized as follows. In Section 2, we relate this work to the existing literature on wealthy elites. Section 3 describes the data, followed by descriptive analysis of the super-rich and the origins of their wealth in Section 4. Section 5 analyzes the role of preferential tax treatment for the location choices of the super-rich.
We provide some concluding remarks in Section 6.
Related Literature on Wealth Elites
Over the past two decades, a renewed interest in the concentration of income and wealth at the top of their respective distribution in the long run has emerged. Most of these studies compute top income and wealth shares based on tax data (see, e.g., [START_REF] Dell | Income and Wealth Concentration in Switzerland over the Twentieth Century[END_REF][START_REF] Föllmi | Volatile Top Income Shares in Switzerland? Reassessing the Evolution Between 1981 and 2010[END_REF]Martínez, 2017, for Switzerland;[START_REF] Kopczuk | Top Wealth Shares in the United States, 1916-2000: Evidence from Estate Tax Returns[END_REF] for the US; Piketty et al., 2006, for France;or Roine and Waldenström, 2009, for Sweden;Alvaredo et al., 2018, for the UK;[START_REF] Atkinson | Concentration among the Rich[END_REF][START_REF] Atkinson | Top Incomes in the Long Run of History[END_REF][START_REF] Roine | Long-run trends in the distribution of income and wealth[END_REF], provide extensive overviews). Unlike surveys, wealth tax returns they do not su er from sampling errors [START_REF] Vermeulen | Estimating the top tail of the wealth distribution[END_REF] and they are available over many decades if not centuries. Nevertheless, and even when wealth tax data is available, the study of wealth and its distribution is fraught with greater di culties than the study of income. 5In the absence of administrative tax data in many countries, another strand of the literature has started to estimate the distribution of wealth using surveys and rich list data. As surveys typically do not capture the upper part of the (wealth) distribution well, various authors have supplemented surveys by including individuals from rich lists (e.g., [START_REF] Vermeulen | How fat is the top tail of the wealth distribution?[END_REF], for the US, the UK, Germany, France, Italy, Spain, the Netherlands, Belgium, Austria, Finland, and Portugal;Bach et al., 2019, for France, Germany, and Spain;[START_REF] Disslbacher | On Top of the Top -Adjusting wealth distributions using national rich lists[END_REF] for 14 European countries).
We contribute to these two strands of the literature with estimates of the share of wealth going to the top 0.01% based on rich list data, which we compare to and benchmark against estimates by [START_REF] Föllmi | Volatile Top Income Shares in Switzerland? Reassessing the Evolution Between 1981 and 2010[END_REF] that are based on wealth tax statistics.
We find a top 0.01% wealth share of approximately 16% in recent years, which is about one-third larger than the estimates based on wealth tax data. This analysis along with a detailed discussion on the limitation of rich list and tax data to accurately measure top wealth, can be found in Appendix D.
Although the empirical research on wealth inequality has made considerable progress over the past two decades (see, e.g., the review by [START_REF] Zucman | Global wealth inequality[END_REF], we still know relatively little about who the people at the absolute top of the wealth distribution are, how they got there, and how long they stay at the top. A minor strand of the literature has examined a variety of factors, particularly how important inheritances are in making it to the absolute top of the wealth distribution. Kaplan and Rauh (2013a) show that Americans in the Forbes 400 are less likely to have inherited their wealth today than they did back in the 1980s. They conclude that this decline in the importance of family wealth is largely due to the major improvements in information technology that allows skilled individualssuperstars-to apply their talents to much larger amounts of capital (see also Kaplan andRauh, 2013b, and[START_REF] Scheuer | Taxing the superrich: challenges of a fair tax system[END_REF][START_REF] Scheuer | Taxation and the Superrich[END_REF]. This finding is generally supported by [START_REF] Korom | The enduring importance of family wealth: Evidence from the Forbes 400, 1982 to 2013[END_REF], who note, however, that family wealth still matters in the sense that it reduces the likelihood of falling o the Forbes 400 list. We add to this literature in Section 4 by analyzing Switzerland in similar fashion. Recent studies have further started using rich list data to study other phenomena such as tax avoidance [START_REF] Moretti | Taxing Billionaires: Estate Taxes and the Geographical Location of the Ultra-Wealthy[END_REF], political influence [START_REF] Salach | Political connections and the super-rich in Poland[END_REF], and corporate ties (Advani et al., 2022b).
The BILANZ Rich List Dataset, 1989-2020
The BILANZ is a Swiss business magazine that publishes an annual rich list in Switzerland since 1989-similar to the Forbes 400 in the US. We have collected the data from the BILANZ rich list for all years from 1989 to 2020 from hard copies. Since its first edition in 1989, two major factors have influenced the composition of the BILANZ rich list. First, the number of ranking entries fluctuated significantly in the first ten years, from 100 in 1989 to 300 in 1999, remaining stable thereafter. Second, until 1993 only Swiss citizens were included in the rankings. 6We have collected the following yearly information from the BILANZ magazine: individual respectively family name, net wealth (in intervals), industry information, the canton of residence (the subnational Swiss states are called cantons), and a series of dummy variables indicating whether the entry refers to a family (vs. an individual), whether the individual is a CEO or has a similar top managerial role, and whether the individual is female. We supplement this data with the following manually collected information: dates on birth and death, a foreign-born dummy variable, a variable that categorizes the origin of wealth (inherited, through marriage, self-made), and a dummy variable indicating whether wealth foundation occurred prior to or after WW2. In addition, we capture the reason why someone has entered or exited the sample. The manually collected data are taken from the prologues and short profiles in the BILANZ magazine as well as from various online sources (e.g., newspapers, Wikipedia, and other websites). The panel dataset is described in detail in Appendix A. Table C1 reports yearly number of observations and a set of summary statistics of our dataset.
Data Limitations.
The limitations of using rich lists for economic research have been discussed extensively in the literature (see, e.g., [START_REF] Davies | The distribution of wealth[END_REF][START_REF] Atkinson | Concentration among the Rich[END_REF][START_REF] Piketty | Wealth and inheritance in the long run[END_REF][START_REF] Bach | Looking for the missing rich: Tracing the top tail of the wealth distribution[END_REF][START_REF] Handreke | Who is How Rich, and Why? -Investigating Swiss Top Wealth 1989-2017[END_REF]. As we introduce a novel data source, we want to transparently discuss several crucial limitations which may be particular of the Swiss rich list data and should consequently be considered in any empirical analysis and interpretation of this data. 7First, the methods used by the BILANZ are mostly unknown and of journalistic nature.
Some super-rich individuals may be more news-worthy than others, and this may influence who enters or exits the panel at some point. The assumptions underlying the decision to add new entries or remove existing entries, as well as the criteria for assigning an entry to a specific industry, are not fully disclosed by the BILANZ magazine, and thus we often cannot conclusively track changes. Similarly, the method of wealth estimation is by and large unknown and may potentially di er between entries, as comparable information is not available for all individuals, ultimately leading to inaccuracies or di erences in wealth estimates.8
Second, the net wealth estimates in the Swiss rich list are considerably less granular than those in the Forbes 400 rich list. BILANZ reports net wealth in intervals that span a range of 50 million for the "poorest" entries, and a range of up to one billion Swiss Francs for the richest entries. This results in two drawbacks. First, multiple individuals or families are assigned to the same wealth interval, which does not allow us to provide a unique ranking within each interval. Second, "smaller" changes in net wealth-up to 50 million for the poorest and up to a billion Swiss Francs for the richest-are not captured, limiting wealth mobility analyses. Note that throughout all analyses, we use the average of the lower and upper bounds of the reported wealth intervals.
Third, and perhaps most concerning, the Swiss rich list does not use a uniform unit of observation. The ranking entries may be individuals or families. 9 Moreover, the observation unit sometimes does not remain constant over time either: individuals become families and later in some cases appear again as individuals. This is not only a drawback of the Swiss rich list, but is also inherent for Germany [START_REF] Bach | Looking for the missing rich: Tracing the top tail of the wealth distribution[END_REF] and Austria [START_REF] Eckerstorfer | Correcting for the missing rich: An application to wealth survey data[END_REF], for instance. For the US, on the other hand, this problem is far less prevalent, as the Forbes 400 list includes far fewer family entries. The Swiss rich list contains a relatively large number of families in the ranking, and their number has increased significantly in recent years (see Table C1). As expected, family observations are significantly richer than individuals, by an average of approximately 50% over the 2013-2020 period. 10
Despite these data limitations, the BILANZ rich list is a valuable complementary data source to survey and administrative data to study the super-rich and top wealth dynamics.
The key advantage of our unique panel data is that we can use market value estimates of net wealth along with socioeconomic characteristics and ancillary information, providing valuable additional insights into the evolution of the enormous fortunes at the top end of the wealth distribution over the past 30 years. Unlike more populous countries as the US,
where the Forbes 400 list only covers the top 0.00025% of the population (see [START_REF] Kopczuk | Top Wealth Shares in the United States, 1916-2000: Evidence from Estate Tax Returns[END_REF][START_REF] Roth | What's Trending in Di erence-in-Di erences? A Synthesis of the Recent Econometrics Literature[END_REF], the Swiss rich list captures a relatively large fraction of the wealth distribution at the top end-roughly the top 0.01%.
Summary Statistics.
Appendix Table C1 gives an overview of the observations and amounts reported in our BILANZ panel dataset. The unbalanced panel includes 8,057 ranking-year observations covering a total of 898 individuals (or families) which belong to a total of 711 di erent families. Real average wealth increases over time. After 1999, when the number of individuals is stable and foreigners are included, average real wealth was 1.71 billion (in 2020 Swiss Francs). 11 Median real wealth was significantly lower at 0.64 billion, reflecting the highly right-skewed wealth distribution among BILANZ's richest. The 300 richest in Switzerland are therefore relatively poor compared to the Forbes 400. In the Forbes 400
9 In a few rare cases, individuals who are not related are grouped as collectives, e.g., because they are joint owners of a venture and their assets cannot be distinctly associated with a single individual. 10 We take this into account when calculating top wealth shares (see Appendix Section D.1 for details).
Families and Individuals
Between 30% and 50% of all observations are recorded as families, and this percentage has steadily increased in recent years (Figure 1). Among individuals, we observe that the Swiss wealth elite is predominantly male. The share of women among the superrich individuals fluctuates around 10% over the period from 1989 to 2020. There is no indication that the share of women has risen in recent years, if anything we observe the opposite.
Figure 2 displays the average age of all individual observations in our panel dataset.
With an average age of more than 60 years, the wealth elite in Switzerland is relatively old, and has been growing older over the past two decades. The observed rise in mean age of the super-rich in Switzerland contrasts with the US, where the Forbes 400 have become younger on average in recent years [START_REF] Scheuer | Taxing the superrich: challenges of a fair tax system[END_REF]. The temporary decline in the average age in the second half of the 1990s can be explained in part by the entry of several new economy entrepreneurs into the ranking.
Foreigners
The super-rich living in Switzerland belong to an international elite. Figure 3 shows the share of non-Swiss-born super-rich as well as their share in BILANZ total top wealth.
Since the first inclusion of foreigners in 1993, we observe a steady increase of foreign-born residents among the super-rich to over 50% by 2010. Since then, the share of foreign-born super-rich has declined to about 47%, but is still well above the overall foreign-born share of the resident population of 30%. The share of top wealth held by foreign-born super-rich fluctuates around 60%. Hence, the foreign-born super-rich are on average wealthier than those born in Switzerland.
This comparison reveals that wealthy foreigners living in Switzerland are heavily overrepresented at the top of the wealth distribution. Consequently, foreigners residing in Switzerland, and in particular those subject to expenditure-based taxation, need special consideration in the analysis of top wealth dynamics and concentration. We come back to the role of expenditure-based taxation for the location choices of the super-rich in Section 5. The top of the wealth distribution has historically been made up of individuals and families who live o the income from their property rather than their labor income (see [START_REF] Piketty | Capital in the 21st Century[END_REF]. Since the mid-1990s, however, it has been observed that the salaries of the top 0.01% of income earners in Switzerland have risen significantly faster than average incomes [START_REF] Föllmi | Volatile Top Income Shares in Switzerland? Reassessing the Evolution Between 1981 and 2010[END_REF]. This has eventually led to the emergence of a new class of super-rich, the top managers. Figure 4 shows the entry and rise of the top managers in the list of the 300 richest in Switzerland. Their share was on the rise, especially between 2003 and 2013, to reach 8% of all observations. Since then, their number among the 300 richest is slightly declining. Notwithstanding the rapid rise in the first decade of the 20th century, the overall importance of managers in the Swiss wealth elite remains modest.
Another sign that old fortunes are still significantly more pertinent at the absolute top of the distirubiton is reflected by the fact that the share of top wealth held by managers (red line) is significantly lower than their frequency in the ranking. Thus, while some managers have made it to the top of the wealth distribution, they are still relatively poor compared to the traditional super-rich. indicates the relative frequency of managers in total observations. The lower red line represents the share of total BILANZ wealth held by managers. The sharp increase in managers' share of wealth in 2011 is the result of Glencore's IPO, which turned the four Swiss-resident Glencore managers Ivan Glasenberg, Daniel Mate, Aristotelis Mistakidis, and Tor Peterson into billionaires over night.
Industry Composition
Figure 5 shows the industries into which the fortunes of the super-rich are invested. In the 1990s, four industries in particular stood out, accounting for the following share of total top wealth in 1990: (i) trade, retail 21.4%; (ii) banking, insurance, finance industry 12.7%;
(iii) pharmaceuticals, chemistry, biotechnology 12.6%; and (iv) industry, manufacturing 11.0%. Over time, the importance of these industries in regard to their share of total top wealth has declined. Their combined share in top wealth fell from 57.6% in 1990 to 43.7% in 2020. Top wealth in Switzerland has become more diverse. This is also reflected in the category "shareholdings, investments (including real estate)", 14 which rose from 10.1% in 1990 to 16.7% in 2020. Note however that since many individuals and families are increasingly invested in a range of companies across multiple industries, a distinct assignment to a particular industry can be di cult.
Perhaps surprisingly, the fashion and textile industry (+5.7pp.) and the food, drink and tobacco industry (+5.3pp.) have seen the largest growth in their share of top wealth over the past three decades (apart from shareholdings). This increase is due in particular to the rapid growth in net assets of six individuals or families, two of which moved to Switzerland after 2010. The joint net worth of Jorge P. Lemann (Anheuser-Busch InBev), Charlene de Carvalho-Heineken (Heineken), the heirs of Klaus J. Jacobs (various businesses), Gerard Wertheimer (Chanel), the Perfetti family (Perfetti Van Melle; moved to Switzerland in 2011) and Alexandre Van Damme (Anheuser-Busch InBev; moved to
Switzerland in 2016) rose from about 18.9 bn in 2009 to 79.0 bn in 2020 (real terms).
While in the US six of the top 10 ranks of the Forbs 400 list are occupied by self-made billionaires from the new economy-Bill Gates (Microsoft), Mark Zuckerberg (Facebook), Larry Ellison (Oracle), Steve Ballmer (Microsoft), Larry Page (Google), and Sergey Brin (Google)-such individuals are nowhere to be found in Switzerland. 15 Although the top wealth share of the new economy in Switzerland grew from 0.5% in 2000 to 2.3% in 2020, it still remains unimportant overall. 16 Superstars from the world of sports and entertainment may be the most prominent on the list, but really only play a marginal role among the super-rich in Switzerland. In general, the industry composition of top wealth in Switzerland is markedly di erent to that of the US (see [START_REF] Korom | The enduring importance of family wealth: Evidence from the Forbes 400, 1982 to 2013[END_REF] we report their results in Appendix Table C12). Industry, 1989Industry, -2020 Note: This figure shows the share of total BILANZ wealth by industry between 1989-2020. For a more concise visualization, various industries have been grouped together. For more information on the industries, see the corresponding section in Appendix A and Table A2.
Wealth Mobility
There are essentially three ways to become rich: (i) either through one's own work and savings, (ii) through inheritance, or (iii) by marrying into a large family fortune [START_REF] Piketty | On the Long-Run Evolution of Inheritance: France 1820-2050[END_REF]. These paths to prosperity are guided by fundamentally di erent economic forces and are arguably critical to society's acceptance of the prevailing level of inequality. When people believe that there is a legitimate, albeit small, chance of becoming (super-)rich through one's own e orts and work, they are more willing to accept higher levels of inequality [START_REF] Alesina | Intergenerational mobility and preferences for redistribution[END_REF].
In this section, we shed light on intra-and intergenerational top wealth mobility in Switzerland. Two questions are thereby of main interest. First, how important are inheritances and what is the share of self-made super-rich? Second, how likely are the super-rich to remain at the top of the wealth distribution and how large is wealth mobility within the top?
Inherited or Self-made Wealth
A key di erence in the process of wealth accumulation is whether wealth is self-made or whether most of it is obtained through inheritance or marriage. We categorize the origin of wealth in our data as follows: (i) self-made, (ii) inherited, (iii) acquired through marriage. Figure 6 illustrates the importance of these di erent origins for the observations in our BILANZ data. Throughout the entire period, only approximately 30-40% of all super-rich can be categorized as self-made. Thus, the vast majority of the super-rich are still heirs today, while marriage plays only a very minor role to enter the club of the super-rich in Switzerland.
Figure 7 shows the share of non-self-made wealth (i.e., the sum of (ii) and (iii)) in the BILANZ dataset's total top wealth. 17 The overall share of inherited wealth in total top wealth has fluctuated between 60% and 80% in the period from 1989 to 2020. These fluctuations are due in particular to the wealth dynamics of the non-Swiss-born super-rich (blue line). 18 For the Swiss-born, we see a high share of inherited wealth of about 80% throughout the past 30 years. Moreover, a comparison of Figures 6 and7 reveals that, on average, heirs are significantly richer than self-made super-rich.
Even though the shares and especially the fluctuations in Figure 7 should be interpreted with care, the overall pattern contrasts sharply with the experience in the US: the share of heirs was and is much more prevalent at the top of the wealth distribution in Switzerland than in the US, particularly today. Specifically, the share of heirs in the Forbes 400 has dropped significantly, from 56% in 1982 to 31% in 2018 [START_REF] Scheuer | Taxation and the Superrich[END_REF], whereas it declined only modestly in Switzerland. From this, we conclude that changes in top wealth are much less dynamic in Switzerland than in the US. Particularly, as a native 17 To be precise, we define the share of inherited wealth shown in Figure 7 as: 1 minus the wealth share of first generation founders. This definition has been used elsewhere in the literature (see, e.g., Kaplan andRauh, 2013a and[START_REF] Scheuer | Taxing the superrich: challenges of a fair tax system[END_REF][START_REF] Scheuer | Taxation and the Superrich[END_REF]. 18 The sharp surge of close to 10 percentage points in 2013, for instance, is essentially due to the death of IKEA founder Ingvar Kamprad-Switzerland's richest self-made man at the time-who passed his fortune on to his sons.
Swiss, an inheritance seems to be the primary prerequisite for making it to the top of the wealth distribution, and this prerequisite has become noticeably more important again in recent years. Note: This figure displays the share of inherited and non-self-made wealth, respectively, in total BILANZ wealth. The origin of wealth is categorized in our data as: (i) self-made, (ii) inherited, or (iii) acquired through marriage. Following the literature (see, e.g., Kaplan andRauh, 2013a and[START_REF] Scheuer | Taxing the superrich: challenges of a fair tax system[END_REF][START_REF] Scheuer | Taxation and the Superrich[END_REF], we define the share of inherited wealth as 1 minus the wealth share of first generation founders. The share of inherited wealth is defined as non-self-generated wealth: category (ii) + category (iii) as a fraction of total BILANZ wealth. The black line shows the share of inherited wealth for all observations. The red and blue dashed lines respectively show the same share according to whether the observations were born in Switzerland or abroad.
Furthermore, many of the super-rich residing in Switzerland have been wealthy for several generations. Appendix Figure C2 shows the share of today's top wealth founded before World War II. This share will inevitably decline over time, as new industries emerge replacing old ones-and as long as there is a certain degree of social mobility into the top.
It is all more striking that this fraction has remained stable since 2010. We take this as tentative evidence that social mobility at the top of the wealth distribution has slowed down in recent years.
Persistence in the BILANZ rich list
The static view on the share of inheritances in total top wealth does not provide a comprehensive understanding of how dynamic the evolution of wealth is at the top. Therefore, we turn to the persistence of the super-rich at the top of the wealth distribution.
Figure 8 shows that of the top 300 in 2010 (blue line), 95% were still listed among the BILANZ richest in 2011, and ten years later, in 2020, that figure was still 68%. The super-rich may drop out of the top 300 for several reasons: (i) they are no longer wealthy enough, (ii) they left Switzerland, or (iii) their wealth has been dispersed, for instance, because they are deceased. 19With the data available, we cannot precisely quantify which reasons are responsible for which proportion of drop-out observations. Figure C4 shows, however, that of the 92 observations dropping out between 2010 and 2020, only 20 (22%) had assets of less than 200 million real Swiss Francs in 2010, suggesting that too little wealth is not the primary reason for leaving the BILANZ rich list.
Three key insights can be derived from Figure 8. First, persistence of top wealth is in general very high, and moderately higher in Switzerland than in the US (see Scheuer, 2020 for a comparison). Second, the probability of dropping out of the top wealth group decreases over time in all periods between 2000 and 2020, as can be inferred from the flattening of the curves. Third, and most importantly, wealth persistence of the superrich increased gradually and significantly between 2000 and 2020, most notably from 2000 to 2005.20 Note: This figure shows, for the four di erent periods indicated, the persistence rates of those included in the Swiss rich list. Looking at the black line, for example, shows that 91% of the observations listed in 2000 were still reported in 2001. After 5 years, in 2005, 71% and after 10 years, in 2010, 61% are still listed. Survival rates are based on a panel of family dynasties, rather than individuals (see Appendix A for details). For the one-to 10-year survival rates based on individual observations, see Figure C3.
Mobility among the Super-rich
So far, we have seen that many super-rich remain relatively tenaciously at the top of the wealth distribution. But how do the super-rich move within the top end of the wealth distribution? Unfortunately, because the wealth brackets of the BILANZ rich list are rather large and very unequal in size, we are not able to rank the rich with enough precision to compute mobility matrices or run rank-rank regressions, as is done, for example, in the literature on intra-and intergenerational income mobility (e.g., [START_REF] Auten | Income Mobility in the United States: New Evidence from Income Tax Data[END_REF][START_REF] Chetty | Where is the Land of Opportunity? The Geography of Intergenerational Mobility[END_REF].
To shed light on wealth mobility within the 300 richest in Switzerland, we estimate the intragenerational wealth elasticity β for those observations that are present in our dataset over a 10-year period using the following regression specification:
ln(wealth_i,t+10) = α + β ln(wealth_i,t) + ε_i (1)
where ln(wealth i ) is real log wealth at time t and t + 10, respectively. We find some mobility in the individual observations for both ten-year periods, with a larger dispersion in the first decade. Overall, however, the intra-generational wealth elasticity is high, indicating low mobility at the very top of the wealth distribution-also in comparison with overall wealth mobility in Switzerland (see [START_REF] Moser | Vermögensentwicklung und -mobilität. Eine Panelanalyse von Steuerdaten des Kantons Zürich 2006-2015[END_REF][START_REF] Martínez | In It Together? Inequality and the Joint Distribution of Income and Wealth in Switzerland[END_REF]. This elasticity has further increased over time, from 0.79 in the first decade of the new millennium to 1 in the 2010-2020 period. Essentially, this suggests that the low positive wealth mobility at the top has, on average, decelerated to zero mobility. This is further supported by the increase in the R 2 from 0.67 to 0.80, confirming that initial wealth has become a very strong predictor of future wealth. While this is certainly a simple exercise to estimate wealth mobility, the results suggest that wealth mobility at the top of the wealth distribution declined markedly and statistically significantly from the first to the second decade of the 2000s. Note: This figure shows a scatter plot for real log net wealth for the period 2000 to 2010 (red dots) and for 2010 to 2020 (blue diamonds), respectively. We report slope estimatesand the R 2 from OLS regressions in the corresponding color. Both regression coe cients are statistically significant at the 1% level. The gray shading surrounding the gradients represent the 95% confidence intervals. The analysis here is based on family observations rather than individual observations (for details on the two panel identifiers see Appendix A). This means that if, for instance, a super-rich individual dies within the observation period, but his heir is listed in the last year of the analysis, then this observation does not drop out. We only use observations in the mobility analysis that are present in both the first and last year of the analysis. The small written text under the figure displays the dropout rate. Figure C4 in the Appendix provides the same analysis for sub-periods.
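A hedged sketch of how Equation (1) could be estimated is given below; w_t and w_t10 stand for the real net wealth of the observations present in both years and are assumptions about the data layout, not the authors' code.

```python
import numpy as np
import statsmodels.api as sm

def wealth_elasticity(w_t: np.ndarray, w_t10: np.ndarray):
    X = sm.add_constant(np.log(w_t))
    res = sm.OLS(np.log(w_t10), X).fit()
    return res.params[1], res.rsquared  # slope beta (elasticity) and R^2
```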
How Rich are the Super-rich?
In 1989, the Hoffmann-Oeri-Sacher family, led by Paul Sacher, ranked first on the BILANZ rich list with a fortune of 10.3 billion (in real terms as of 2020). Some thirty years later, in 2020, the rich list in Switzerland is led by the three sons of late IKEA founder Ingvar Kamprad, with a total estimated net worth of 55.5 billion Swiss Francs.
However, not only the very richest in Switzerland, but also the broader Swiss wealth elite has become significantly richer over the past three decades. The number of billionaires (in real terms of 2020) residing in Switzerland has risen from 45 in 1993 to 128 in 2020 (see Appendix Figure C1).
Figure 10 shows the evolution of top wealth and aggregate wealth over time. Top wealth and total private wealth have grown at roughly the same rate since the turn of the 21st century. Compared to aggregate private net wealth, growth in top wealth is more volatile over the business cycle, with faster growth in boom periods but, conversely, declining more sharply in downturns. Since 2012, however, we observe a significantly steeper increase in net wealth of the first 10 entries in the rich list, indicating a marked concentration of wealth at the absolute top of the wealth distribution (see also Table C2).
According to the rich list data, the super-rich in Switzerland, who correspond to the top 0.01% of wealth holders, own around 16% of the total private wealth in the economy (compared to 12% when estimated based on tax data). However, as we discuss in Appendix D, we conclude that this is an upper bound, while top wealth shares based on wealth tax data likely tend to underestimate top wealth concentration. C1). For details on the total net private wealth series, see [START_REF] Baselgia | Wealth-Income Ratios in Free Market Capitalism: Switzerland, 1900-2020[END_REF].
Preferential Taxation and the Location Choice of Super-rich Foreigners
In this section, we analyze the e ect of expenditure-based taxation-a preferential tax scheme available to wealthy foreigners-on the location choices of the super-rich within Switzerland. For our causal identification of the e ect, we exploit that between 2010 and 2014 several cantons have abolished this practice, while it has remained in place in others. Several studies have shown that taxpayers, especially the rich, tend to sort into low-tax cantons and municipalities within Switzerland (e.g., [START_REF] Schmidheiny | Income Segregation and Local Progressive Taxation: Empirical Evidence from Switzerland[END_REF][START_REF] Martínez | Mobility Responses to the Establishment of a Residential Tax Haven: Evidence From Switzerland[END_REF][START_REF] Brülhart | Behavioral Responses to Wealth Taxes: Evidence from Switzerland[END_REF]. In particular, [START_REF] Schmidheiny | Tax-induced mobility: Evidence from a foreigners' tax scheme in Switzerland[END_REF] show that foreigners adjust their location choice in response to tax changes they face after having lived in Switzerland for five years. However, we are the first to study the e ect of the abolition of expenditure-based taxation in Swiss cantons on the location choices of the super-rich. Detailed tax data to study this question is unfortunately not accessible, and tax administrations have not released statistics on the number of former expenditure-based taxpayers who remained in the respective cantons or moved away after the abolition of the special tax treatment. To shed light on this crucial policy question, we therefore exploit our newly compiled BILANZ dataset to estimate the impact of eliminating expenditurebased taxation on the location decision of the super-rich.
A Tax Privilege for Wealthy Foreigners: Expenditure-Based Taxation
Wealthy foreigners without Swiss citizenship who take residence in Switzerland but do not earn any labor income in Switzerland can opt for a preferential tax treatment known as expenditure-based or lump sum taxation (sometimes mistakenly referred to as "tax deals"). This preferential tax scheme is explicitly aimed at attracting wealthy foreigners to Switzerland. Swiss citizens are not eligible. While expenditure-based taxpayers can have labor income earned abroad, they cannot earn any type of labor income within Switzerland. A French tennis player, for example, could not play the Basel ATP without having to give up the preferential tax treatment, as this would be considered work. And a businesswoman may live in Switzerland and manage her foreign firms, but not firms that are registered in Switzerland, without losing her right to expenditure-based taxation.
As married couples always file jointly in Switzerland, both spouses have to fulfill the requirements.
The scheme has been in place in di erent cantons since the late 19th century and was introduced at the federal level in 1934. In its origin, the goal was to levy some form of tax on wealthy foreigners who would spend several months each year in Switzerland as long-term tourists. In 1948, expenditure-based taxation was harmonized across cantons and the federal state, but di erences in tax base definitions remain to this day. Similar tax regimes exist in the UK (known as the "non-dom" system, dating back to 1799), Belgium, Austria, and Italy. Under the "non-dom" system, however, eligible taxpayers are allowed to work in the UK, but claim their permanent domicile to be outside of the UK. Investment income from abroad is only taxed when transferred into the UK. Under the Swiss system, eligible taxpayers claim Switzerland as their main domicile, but are not allowed to earn labor income within Switzerland. All incomes from abroad can be transferred freely to Switzerland.
As the name suggests, the tax base for these taxpayers is not their true income and wealth, but their total annual living expenses. These are defined broadly and include the cost of living for themselves and their dependents (whether they live in Switzerland or abroad), expenses for house personnel and maintenance, as well as other recurring expenses around the world, e.g., for private jets, yachts, holiday homes, or large estates and lands abroad. Cantonal and federal tax laws define some minimum values for the cost of living (and hence, for the tax base), which can be found in Appendix Table C3. 21 While there are written rules and guidelines regarding the estimation of all kinds of expenses, tax authorities asses the tax base case by case.
The regulations and minimum requirements of expenditure-based taxation are primarily designed to mimic the income of wealthy foreigners. The income tax base is replaced with the estimated expenses. For the wealth tax, a multiple of the expenses, typically by a factor of 20, serves as the wealth tax base in most cantons (see Appendix Table C3 for details). 22 Importantly, expenditure-based taxpayers differ from regular taxpayers only in terms of the tax base. The standard tax rates defined in the cantonal and federal tax laws are applied. Foreigners have an incentive to opt for this form of taxation if their overall living expenses as defined by the tax authorities are lower than their true income. A further advantage of the scheme is that it can significantly reduce the cost of tax filing and tax compliance across countries. Given that for the super-rich living expenses are likely to be significantly lower than their true global income from labor and capital, expenditure-based taxation will on average reduce the tax burden for such individuals substantially, although we expect the mismatch between the true and the expenditure-based tax base to vary widely. 23 By design of this special tax treatment, eligible taxpayers belong to the top of the income and wealth distributions. In 2018, 4,557 persons, slightly less than 0.1% of all taxpayers, were subject to expenditure-based taxation in Switzerland.
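To make the mechanics concrete, the sketch below combines the rules described above: the income tax base is driven by assessed living expenses subject to a statutory floor, the wealth tax base is a multiple (typically 20) of that amount, and opting in pays off whenever these assessed bases fall below true income and wealth. The function and all numbers are hypothetical and abstract from the detailed cantonal rules.

# Illustrative sketch (not the statutory computation): how the expenditure-based
# tax base is formed and when opting in pays off. All numbers are hypothetical.

def expenditure_based_bases(annual_rent, other_expenses, statutory_minimum,
                            wealth_multiple=20):
    """Return (income_tax_base, wealth_tax_base) under expenditure-based taxation.

    The income tax base is the larger of total assessed living expenses
    (roughly proxied here by 7x the annual rent plus other recurring expenses)
    and the statutory minimum; the wealth tax base is a multiple of it
    (typically 20 in most cantons).
    """
    assessed_expenses = 7 * annual_rent + other_expenses
    income_base = max(assessed_expenses, statutory_minimum)
    wealth_base = wealth_multiple * income_base
    return income_base, wealth_base

# A hypothetical super-rich foreigner: 240k rent, 1.5m other expenses,
# but 20m of true global income and 400m of net wealth.
income_base, wealth_base = expenditure_based_bases(240_000, 1_500_000, 400_000)
true_income, true_wealth = 20_000_000, 400_000_000

# Opting in is attractive whenever the assessed bases fall short of the true ones,
# since the ordinary cantonal and federal tax rates apply in both cases.
print(income_base < true_income, wealth_base < true_wealth)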
Abolition of Expenditure-Based Taxation Across Cantons
Expenditure-based taxation has become the subject of heavy criticism over the past decade, both from outside and within the country. In light of these discussions, several cantons proposed to abolish this practice, usually taking the question to the ballots.
Trends in Location Choice
Over the past two decades, the number of super-rich living in German-speaking Switzerland has increased at the expense of French-speaking areas. The major losers in the intercantonal competition for super-rich households (and potential taxpayers) are the cantons of Zurich, Geneva, and Vaud. In contrast, rural and alpine cantons in particular appear to have gained in attractiveness for the super-rich and record the largest increases. This is in line with findings on the importance of nature and local amenities (e.g., the availability of land, proximity to lakes, mountain views) for the location choice of rich households [START_REF] Young | Millionaire Migration and the Taxation of the Elite: Evidence from Administrative Data[END_REF].
Estimating Location Choice of Super-rich in Response to Tax Privileges for Wealthy Foreigners: Difference-in-Differences Setting
To quantify the causal impact of the elimination of expenditure-based taxation on the location choices of the super-rich, we conduct two different empirical analyses. In this section, we turn to a Difference-in-Differences (DD) setting and estimate cumulative event studies, showing how the effect of abolishing the policy played out over time. The second, alternative approach, which arises from a spatial equilibrium model, is described in Section 5.5.
Difference-in-Differences Specification
To examine how the removal of expenditure-based taxation affects the location decisions of the super-rich, we first estimate various specifications of a two-way fixed effects DD model of the type

$$\ln N_{c,t} = \beta^{DD} \tau_{c,t} + \theta_c + \theta_t + \delta_c \cdot t + \Gamma X_{c,t} + \varepsilon_{c,t}, \qquad (2)$$

where $\ln N_{c,t}$ is the log number of super-rich living in canton $c$ at time $t$, and $\tau_{c,t}$ is a treatment dummy that equals 1 if canton $c$ has abolished expenditure-based taxation by year $t$.
In all specifications, the treatment is defined in the year of the statutory removal (e.g., $\tau_{ZH,2010} = 1$ for the canton of Zurich). Once a canton is treated, it remains treated forever.
$\theta_c$ and $\theta_t$ capture canton and year fixed effects, respectively, and $\delta_c \cdot t$ denotes a canton-specific linear trend. Thus, the model specified in Equation (2) absorbs (i) all unobservable time-invariant canton-specific characteristics by $\theta_c$, (ii) all unobservable canton-invariant time-specific effects by $\theta_t$, and (iii) stable canton-specific differences in the growth of the number of super-rich by $\delta_c \cdot t$. The vector $X_{c,t}$ adds the following time-varying canton controls in logarithms: (i) the top average net-of-tax rate on wealth, $(1-\tau^w)$, (ii) the top average net-of-tax rate on income, $(1-\tau^y)$, (iii) the top average net-of-tax rate on bequests, $(1-\tau^b)$, (iv) total population (a proxy for urbanization), and (v) the share of foreigners in total population (a proxy for internationalization). 27 $\beta^{DD}$ is our parameter of interest. It captures the effect of the abolition of expenditure-based taxation on the number of super-rich in the treatment cantons compared to the non-treatment cantons that did not abolish the preferential tax scheme. Since one can presumably preserve more of one's income and wealth when being taxed under expenditure-based taxation (i.e., if $\tau_{c,t} = 0$), we expect a negative sign for $\beta^{DD}$.
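A minimal sketch of how Equation (2) could be estimated is given below, assuming a long-format canton-year panel; the file name, column names, and the one-way clustering are illustrative simplifications rather than the paper's actual implementation.

# Sketch of the two-way fixed effects DD regression in Equation (2), assuming a
# DataFrame with columns: canton, year, n_superrich, treated (0/1, switching to 1
# in the year of the statutory removal and staying 1), and the log controls.
# Column names are hypothetical; errors are clustered by canton only here.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("bilanz_canton_panel.csv")          # hypothetical file
df["ln_n"] = np.log(df["n_superrich"])
df["trend"] = df["year"] - df["year"].min()

formula = (
    "ln_n ~ treated + C(canton) + C(year) "            # theta_c, theta_t
    "+ C(canton):trend "                                # canton-specific linear trends
    "+ ln_net_of_wealth_tax + ln_net_of_income_tax "    # (1 - tau^w), (1 - tau^y)
    "+ ln_net_of_bequest_tax + ln_population + ln_foreign_share"
)
res = smf.ols(formula, data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["canton"]}
)
print(res.params["treated"])   # beta^DD: expected to be negative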
We are aware that our analysis has four limitations. (i) From a theoretical perspective, we would ideally want to relate the percentage change in the number of super-rich to the percentage change in the effective net-of-tax rate. Specifically, dividing $\beta^{DD}$ by the percentage change in the effective net-of-tax rate would yield the implied mobility elasticity with respect to taxation. However, due to the nature of the preferential tax treatment, what changes prima facie is not the legal tax rate but the tax base. Given the lack of any official information on the true income and wealth tax bases under expenditure-based taxation, we cannot calculate the change in the effective net-of-tax rate, and thus the implicit tax elasticity. 28 We are therefore limited to estimating the percentage change in the number of super-rich with respect to the removal of the preferential tax treatment.
(ii) In addition, our estimates are affected by measurement error. Because we do not observe actual expenditure-based super-rich taxpayers, but rather infer an individual's non-Swiss citizenship from the place of birth, we measure our outcome of interest with error. This renders the estimates less precise and hence increases standard errors. The imprecision also affects our independent variable of interest, the policy change, as not all foreign-born may be affected by the treatment. This type of measurement error leads to a downward bias in OLS estimates; our estimate is therefore a lower bound. Given that we nevertheless obtain relatively precise, consistent, and large estimates, we conclude that measurement error in our setting is small compared to the effect size.
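The attenuation argument can be illustrated with a small simulation: misclassifying which individuals are actually exposed to the treatment pulls the OLS coefficient toward zero, so a truly negative effect is estimated as less negative. The numbers below are purely illustrative and unrelated to our data.

# Illustrative simulation (not based on the paper's data): classical
# misclassification of a binary treatment attenuates the OLS coefficient.
import numpy as np

rng = np.random.default_rng(0)
n, beta_true = 100_000, -0.30

d_true = rng.integers(0, 2, n)                 # true exposure to the reform
flip = rng.random(n) < 0.20                    # 20% of statuses recorded wrongly
d_obs = np.where(flip, 1 - d_true, d_true)     # mismeasured treatment indicator
y = beta_true * d_true + rng.normal(0, 1, n)   # outcome depends on true exposure

def ols_slope(x, y):
    x_c, y_c = x - x.mean(), y - y.mean()
    return (x_c @ y_c) / (x_c @ x_c)

print(ols_slope(d_true, y))   # close to -0.30
print(ols_slope(d_obs, y))    # attenuated toward zero, around -0.18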
(iii) As the abolition of expenditure-based taxation in one canton may well lead to within-country migration, our treatment is reasonably likely to spill over to a control canton, which would imply a violation of the stable unit treatment value assumption (SUTVA). Concerns regarding a severe SUTVA violation are alleviated by two points. First, migration to a non-treatment canton is not the main mechanism driving our results (see Table C9; more on this later). Second, in Section 5.5, we employ an alternative estimation approach, arising from spatial equilibrium in a location decision model, that parametrically precludes spillovers to non-treated units. As both estimation procedures yield highly similar results, SUTVA does not seem (seriously) violated in our DD analysis.
(iv) Finally, we are aware that the static two-way fixed effects (TWFE) model described in Equation (2) may suffer from bias if treatment effects are heterogeneous across cantons or over time ([START_REF] Borusyak | Revisiting Event Study Designs[END_REF][START_REF] De Chaisemartin | Two-way fixed e ects estimators with heterogeneous treatment e ects[END_REF] Goodman-Bacon, 2021; see [START_REF] Roth | What's Trending in Di erence-in-Di erences? A Synthesis of the Recent Econometrics Literature[END_REF] and De Chaisemartin and d'Haultfoeuille (2022) for a summary of this fast-growing literature on TWFE estimation). We address these concerns in a robustness analysis applying the novel estimators proposed by Callaway and Sant'Anna (2021) and [START_REF] Sun | Estimating dynamic treatment e ects in event studies with heterogeneous treatment e ects[END_REF].
Event Study Specification
The key identifying assumption of the DD strategy is that the log number of super-rich foreigners would have evolved in parallel in cantons that abolished expenditure-based taxation and in those that did not. Panel a) of Figure 12 indicates that this may not be the case and that we need to correct for canton-specific time trends.
To assess the validity of the parallel trends assumption in our main specification, we turn to event studies. The event study design further allows us to study the dynamics set in motion by the policy changes. In particular, we estimate the following distributed-lag model in logs using OLS:
$$\ln N_{c,t} = \sum_{j=-3}^{5} \gamma_j \tau_{c,t-j} + \theta_c + \theta_t + \Gamma X_{c,t} + \varepsilon_{c,t}. \qquad (3)$$
As before, $\ln N_{c,t}$ is the log number of super-rich living in canton $c$ at time $t$ and $\tau_{c,t}$ is a treatment indicator for the removal of expenditure-based taxation. $\theta_c$ and $\theta_t$ again refer to canton and year fixed effects, respectively, and the vector $X_{c,t}$ adds the same time-varying canton controls as in the DD analysis. As shown by [START_REF] Schmidheiny | On Event Studies and Distributed-Lags in Two-Way Fixed E ects Models: Identification, Equivalence, and Generalization[END_REF], the model in Equation (3) is identical to an event study specification with endpoints binned 4 years before and 5 years after the event.
The cumulative effect $j$ years after the reform can be obtained from the distributed-lag coefficients $\gamma_k$ as
$$\beta^{DD}_j = \begin{cases} -\sum_{k=j+1}^{-1} \gamma_k & \text{if } -4 \le j \le -2 \\ 0 & \text{if } j = -1 \\ \sum_{k=0}^{j} \gamma_k & \text{if } 0 \le j \le 5 \end{cases} \qquad (4)$$
Normalizing to the pre-reform year, i.e., $\beta^{DD}_{-1} = 0$, we show the dynamic reform effect $\beta^{DD}_j$ relative to the year prior to the abolition of expenditure-based taxation, comparing abolisher and non-abolisher cantons.
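The mapping in Equation (4) from distributed-lag coefficients to cumulative event-study effects is simple bookkeeping with partial sums; the sketch below illustrates it with hypothetical placeholder coefficients.

# Sketch of Equation (4): turn distributed-lag coefficients gamma_j (j = -3..5)
# into cumulative event-study effects beta_j normalized at j = -1.
# The gamma values below are hypothetical placeholders.
import numpy as np

lags = list(range(-3, 6))                       # j = -3, ..., 5
gamma = dict(zip(lags, np.zeros(len(lags))))    # replace with estimated coefficients

def beta_dd(j, gamma):
    if j == -1:
        return 0.0
    if j < -1:                                  # pre-reform: minus the sum of leads
        return -sum(gamma[k] for k in range(j + 1, 0) if k in gamma)
    return sum(gamma[k] for k in range(0, j + 1))   # post-reform: cumulative lags

event_time = range(-4, 6)
beta = {j: beta_dd(j, gamma) for j in event_time}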
DD Results
Table 1 shows the two-way fixed effects DD estimates using standard OLS. The abolition should only affect the location choices of foreign super-rich; hence we estimate Equation (2) for foreign-born super-rich in the BILANZ dataset over the period 1999 to 2020 (Panel A).
In the sample of Swiss-born super-rich (Panel B), in contrast, we would not expect to see any effects. Given that the foreign-born make up almost half of all super-rich (see Figure 3), we also run the estimation on the full sample of the super-rich in the BILANZ rich list (Panel C) to see whether responses by foreign-born super-rich are large enough to be reflected in the full sample. In Panel D, finally, we estimate Equation (2) for the period 2003-2017 using an even broader population: all taxpayers with net wealth exceeding 10 million Swiss francs as reported in official wealth tax statistics. 29

Column 1) reports the estimates with only time- and canton-fixed effects. Estimates from this specification suggest that eliminating expenditure-based taxation reduces the number of super-rich by 26-31% across all sub-samples. For Panels B to D, however, this result seems to be driven by canton-specific time trends. Once we control for these trends (Column 2), the effect vanishes: the coefficient changes sign and/or is close to zero with large confidence intervals. Only for the foreign-born super-rich (Panel A) does the coefficient remain large and negative, although it is not statistically significantly different from zero. Sequentially adding controls in Columns 3) through 7) does not significantly change the magnitude of the point estimate. Once we control for the general tax environment in the canton, the share of foreigners, and population growth (Column 7), we find a coefficient of -0.28, significant at the 90% level. Note, however, that because our outcome variable is measured with error, OLS estimates are less precise by definition. In all other samples, which include the non-treated population, canton-specific time trends absorb most of the variation across cantons and the coefficients remain insignificant, even after the inclusion of further controls.
Robustness
Our results are robust to a series of adjustments and alterations in the empirical estimation.
First, note that our results on the foreign-born super-rich are not driven by the inclusion of canton-specific time trends, but hold in all specifications including controls even in the absence of such trends (see Appendix Table C7, Panel A). The estimated coefficient of interest drops to -0.22, but remains statistically significant at the 95% level, even when all controls are included.
In our setting, the timing of treatment varies across cantons, and the treatment could be heterogeneous across cantons, hence the TWFE estimator may be biased. In the Appendix, we therefore re-estimate the model using the heterogeneity-robust estimators proposed by Callaway and Sant'Anna (2021) and Sun and Abraham (2021); our conclusions remain unchanged.

Overall, the DD specifications suggest that super-rich foreigners have been responsive to the abolition of expenditure-based taxation. The removal of expenditure-based taxation in a canton reduces the number of foreign-born super-rich by 20-30%, while it had no effect on the location choices of Swiss-born super-rich, just as one would expect given the nature of the tax policy. The response of rich, foreign-born taxpayers, however, seems to be too small to be detected in larger samples that include all super-rich.
Our results are in line with findings by [START_REF] Moretti | Taxing Billionaires: Estate Taxes and the Geographical Location of the Ultra-Wealthy[END_REF], who find that the number of Forbes 400 individuals fell by 35% in US states that still apply estate taxes compared to those that do not. In contrast, results of Advani et al. (2022a) suggest that less than 3% of previous "non-dom" taxpayers left the UK after their eligibility for the scheme was lifted. These estimates, however, are limited to the year immediately following the policy change, thereby not allowing for a longer response time, and they only include out-migration. Our estimates also include prevented in-migration after the abolition of the preferential tax scheme. As we show in Appendix Table C9, the drop in super-rich foreigners relative to other cantons stems from new arrivals who now choose other cantons for their primary residence. Furthermore, the "non-dom" tax scheme is available to UK citizens and to individuals who earn labor income within the UK. These individuals are likely more attached to the UK than foreign-born residents without labor income in Switzerland.
Event Study Results
Figure 13 presents the event study estimates for the foreign-born super-rich. In Figure 13(a), we show the dynamic effects when the treatment indicator $\tau_{c,t}$ is defined as in the DD analysis (i.e., $\tau_{c,t} = 1$ in the year of the statutory removal in the respective canton).
The pre-treatment estimates are slightly positive but stable and statistically insignificant, even without controlling for potential confounders. Hence, the identifying parallel trends assumption holds in our DD setting. This finding stands somewhat in contrast to the other samples (see Figure C5 in the Appendix), where the evidence points towards negative pre-trends. Abolishing cantons had a declining share of wealthy taxpayers but, interestingly, not a declining share of super-rich foreigners prior to their reforms (in line with the evidence presented in Figure 12). In fact, we find a common, similar negative time pattern in all other samples that seems to be largely independent of the policy reforms. This explains why controlling for canton-specific time trends in the DD analysis profoundly changes the estimates for these other groups of super-rich and wealthy taxpayers, while having no effect on the foreign-born super-rich.
The statutory removals of expenditure-based taxation were typically preceded by public discussions, the political decision-making process, and the popular votes held in the respective cantons. Some affected taxpayers may have anticipated the results and made location choices accordingly, e.g., by not moving to one of the cantons that was considering abolishing expenditure-based taxation. To account for such potential anticipation effects, we additionally perform the analysis normalizing the coefficients to year $t-2$, i.e., the second year prior to the removal of the preferential tax treatment, in Figure 13(b). 31
However, the results hardly change at all. The negative location effect of the foreign-born super-rich with respect to the elimination of expenditure-based taxation does not materialize immediately after the reform.
Table C9 shows that a majority of the treated units in our sample are still present in the treatment regions five years after the respective abolition of the preferential tax scheme. This implies that our estimates are not driven by super-rich who move to a non-treated canton, but rather by new arrivals who settle in non-treated cantons still offering a preferential tax treatment to foreigners. A suggestive interpretation of this finding is that the fixed costs of migrating are high even for super-rich individuals. However, once super-rich households decide to move, tax considerations play a vital role in their location choice.
Overall, the event-study analysis affirms the above findings and suggests that the location decision of the foreign-born super-rich is sensitive to the abolition of expenditure-based taxation, at least for "movers".

Note to Figure 13: In both panels, the red line with circles corresponds to a specification of Equation (3) that contains only year and canton fixed effects. The blue line with diamonds corresponds to a specification that additionally contains the full vector of time-varying cantonal controls $X_{c,t}$. Point estimates are reported with their corresponding 90% confidence intervals based on two-way clustered standard errors by canton and year. Figure C5 displays the analogous results for all super-rich, Swiss-born super-rich, and rich taxpayers.
Estimating Location Choice of Super-rich in Response to Tax Privileges for Wealthy Foreigners: A Spatial Equilibrium Approach
In this section, we turn to a second, alternative estimation approach, arising from spatial equilibrium in a location choice model, as proposed in [START_REF] Moretti | The E ect of State Taxes on the Geographical Location of Top Earners: Evidence from Star Scientists[END_REF] and [START_REF] Agrawal | Relocation of the rich: Migration in response to top tax rate changes from Spanish reforms[END_REF]. 32
Stock Ratio Estimation of Super-rich across Canton-Pairs.
Following the approach presented in Agrawal and Foremny (2019), we compare the number of super-rich across all canton pairs, and estimate how these relationships have been affected by the unilateral abolition of expenditure-based taxation by some cantons.
We first describe the formal empirical model, followed by a discussion of the identifying assumptions. The pivotal idea is to compute for each year the log ratio of the stock of super-rich for each canton pair, which then serves as the dependent variable, using our BILANZ dataset. To compute such canton-pair ratios, we restrict our sample to cantons that consistently host at least one super-rich person per year. The six smaller cantons (UR, SH, AR, AI, NE, and JU) that cannot successfully compete for super-rich foreigners are therefore not considered in this analysis. This leaves a total of 20 cantons and thus 20 × 19/2 = 190 unique canton-pair combinations. With this much larger number of observations relative to the DD analysis in Section 5.4, we can include a larger set of fixed effects and linear time trends, obtain more statistical power, and apply three-way clustered standard errors.
We estimate the following pairwise model

$$\ln\!\left(\frac{N_{d,t}}{N_{o,t}}\right) = \beta^{SR} \tau_{do,t} + \theta_d + \theta_o + \theta_t + \delta_{do} \cdot t + \Gamma X_{do,t} + \varepsilon_{do,t} \qquad (5)$$
where $\ln(N_{d,t}/N_{o,t})$ is the log ratio of super-rich across canton pairs, where $d$ denotes the destination and $o$ the origin canton. 33 This notation uniquely captures all canton-pair combinations. Because of how we define the right-hand side variables, it does not matter whether a canton enters the model in the numerator as destination or in the denominator as origin canton. As such, $\tau_{do,t} \equiv \tau_{d,t} - \tau_{o,t}$ denotes the difference in treatment status between the two cantons. 34 $\theta_d$, $\theta_o$, and $\theta_t$ capture destination, origin, and year fixed effects, respectively. Thus, $\theta_d$ and $\theta_o$ capture amenities and all time-invariant policies in the destination and origin cantons. The term $\delta_{do} \cdot t$ denotes a linear time trend for each canton-pair combination. The vector $X_{do,t}$ adds the same control variables as employed in the DD analysis. Here, however, the controls enter $X_{do,t}$ as log ratios (i.e., log differentials) for each of the canton pairs; for example, in the case of top average wealth-tax rates, as $[\ln(1-\tau^w_{d,t}) - \ln(1-\tau^w_{o,t})]$, which is the log net-of-wealth-tax rate differential between each canton pair.

32 For the theoretical models, the interested reader is referred to [START_REF] Moretti | The E ect of State Taxes on the Geographical Location of Top Earners: Evidence from Star Scientists[END_REF] for a flow model and Agrawal and Foremny (2019) for a stock version of the same model. We confine ourselves here to our modified empirical model.

33 Note that we do not actually have aggregate data on flows, but only on stocks, so there is effectively no origin or destination. However, we stick to the notation of [START_REF] Agrawal | Relocation of the rich: Migration in response to top tax rate changes from Spanish reforms[END_REF]. This phrasing is helpful for discussing the empirical set-up, as we do not have to refer to some arbitrary reference canton.

34 Note that we start from a situation where $\tau_{do,t} = 0$ for all canton pairs, since all cantons grant preferential taxation. Then, for instance, if canton $d$ abolishes expenditure-based taxation in year $t$ but canton $o$ does not, $\tau_{do,t} = 1 - 0 = 1$ (or, vice versa, due to the symmetry imposed, $\tau_{do,t} = 0 - 1 = -1$). Moreover, note that $\tau_{do,t} = 1$ as long as canton $d$ does not offer but canton $o$ does offer expenditure-based taxation. Consequently, $\tau_{do,t}$ switches back to zero if and only if canton $d$ reintroduces expenditure-based taxation in year $t$ or canton $o$ removes it.
$\beta^{SR}$ is our parameter of interest. Its interpretation is as follows: removing expenditure-based taxation in canton $d$ while holding the policy fixed in canton $o$ makes people more likely to move away from canton $d$, or more likely to stay in canton $o$, respectively, as one can preserve more of one's income and net wealth from being taxed. This leads to a decrease in the stock of super-rich in canton $d$ relative to canton $o$. If the canton of origin $o$ abolishes expenditure-based taxation but canton $d$ does not, the interpretation is the reverse. If either both cantons, $d$ and $o$, or neither of them abolish expenditure-based taxation, there is no policy change to differentially affect the stock of super-rich; we define $\tau_{do,t} = 0$, so that $\beta^{SR}$ is zero for such a canton pair by construction. Consequently, by putting more parametric structure on the problem, in contrast to the DD analysis, we rule out spillovers to other, unaffected cantons. 35

Correct identification based on the model presented in Equation (5) relies on the condition that, in the absence of policy reforms (i.e., no changes in $\tau_{do,t}$) and given the set of fixed effects, linear time trends, and control variables, the canton-pair stocks of the super-rich remain constant over time. Any canton-pair-specific unobservable factor correlated with both the elimination of expenditure-based taxation and the migration behavior of the super-rich between a canton pair may jeopardize our identification strategy. Introducing linear time trends for each canton-pair combination separately is a conservative estimation procedure that likely captures much of the variation across canton pairs. Moreover, the event study analysis above provided evidence that the location decisions of the foreign-born super-rich do not precede but follow the policy reforms.

35 As we find highly comparable results in the DD and stock-ratio analyses, we conclude that SUTVA is not (seriously) violated in our DD analysis.
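To fix ideas, the sketch below illustrates how the pairwise dependent variable and the treatment differential entering Equation (5) can be constructed from a canton-year panel; the file and column names are hypothetical.

# Sketch: build the canton-pair panel used in Equation (5) from a canton-year
# panel with columns canton, year, n_superrich, treated (0/1 absorbing).
# Column names are hypothetical.
import itertools
import numpy as np
import pandas as pd

df = pd.read_csv("bilanz_canton_panel.csv")     # hypothetical file
df = df[~df["canton"].isin(["UR", "SH", "AR", "AI", "NE", "JU"])]  # drop small cantons

rows = []
for d, o in itertools.combinations(sorted(df["canton"].unique()), 2):
    pair = pd.merge(
        df[df["canton"] == d], df[df["canton"] == o],
        on="year", suffixes=("_d", "_o"),
    )
    pair["ln_ratio"] = np.log(pair["n_superrich_d"] / pair["n_superrich_o"])
    pair["tau_do"] = pair["treated_d"] - pair["treated_o"]   # 1, 0, or -1
    pair["pair_id"] = f"{d}_{o}"
    rows.append(pair)

pairs = pd.concat(rows, ignore_index=True)       # 190 pairs x years
# `pairs` can then be used to estimate Equation (5) with destination, origin,
# and year fixed effects plus pair-specific linear trends.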
Stock Ratio Results
Table 2 presents the estimation results of Equation (5) for the years 1999 to 2020. As before, the results are presented separately for foreign-born super-rich (Panel A) and other samples (Panels B to D). Column 1) shows the estimates with only destination, origin, and year fixed effects. Again, we find a negative response of approximately 35% across all sub-samples.
The results on the foreign-born super-rich in Panel A are very robust to the inclusion of canton-pair-specific trends, $\delta_{do} \cdot t$ (Column 2), and to the addition of further controls in Columns 3) to 7). The removal of expenditure-based tax privileges led to a 32-37% decline in the stock of super-rich compared to non-reform cantons. We use three-way clustered standard errors, and our results are statistically significant at the 1% level.
For the Swiss-born super-rich (Panel B), all super-rich (Panel C), and the merely wealthy taxpayers (Panel D), we again find that the coefficient of interest flips sign once we include canton-pair-specific linear trends. Considering the Swiss-born super-rich in Panel B, we find a small but positive and significant effect in some specifications. A speculative explanation for this positive coefficient is that with the exodus of some foreign-born super-rich from cantons that abolished expenditure-based taxation, the supply of high-end housing increased in these cantons, which may have induced the positive migration responses of some Swiss-born super-rich. However, these estimates are very sensitive to the inclusion of additional covariates.
Summing up, our results show that the abolition of expenditure-based taxation resulted in a medium- to long-run decline of about 30% in the stock of foreign-born super-rich in reform cantons, while the number of Swiss-born super-rich and wealthy taxpayers in abolishing cantons remained unaffected.
Robustness
While unlikely, the negative estimate found in the stock-ratio analysis could be endogenous or driven by some form of spurious correlation rather than by the policy reforms we analyze. To address this potential threat, we conduct a placebo test (similar to [START_REF] Agrawal | Relocation of the rich: Migration in response to top tax rate changes from Spanish reforms[END_REF], in which we shift the treatment indicator, $\tau_{do,t}$, five years into the pre-treatment period and re-estimate Equation (5). If our identifying assumption holds and the effects we find in our main analysis are indeed driven by the actual policy change, a placebo policy change that did not happen should not be correlated with the number of super-rich foreigners in a canton. Table 3 confirms this. We find no significant correlation between the pre-treatment stock ratio of the foreign-born super-rich and the post-treatment policy changes across all specifications. The last two rows in Table 3 show that the null result for the placebo treatment is not due to simple sample selection, as we continue to find highly significant negative effects for the true policy change.
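The placebo construction itself is mechanical; a minimal sketch, building on the hypothetical canton-year panel introduced earlier, is given below.

# Sketch of the placebo test: give each canton a placebo abolition date five
# years before the true one, rebuild the pairwise indicator, and re-estimate
# Equation (5). Assumes one row per canton and year, as in the panel above.
df = df.sort_values(["canton", "year"])

# Placebo status in year t equals the true status in year t + 5, i.e. the
# abolition is pretended to have happened five years earlier than it did.
shifted = df.groupby("canton")["treated"].shift(-5)
df["treated_placebo"] = shifted.fillna(df["treated"])

# The pairwise placebo differential is then built exactly as before:
# tau_placebo_do,t = treated_placebo_d,t - treated_placebo_o,t.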
Conclusion
We have compiled a new dataset on the super-rich residing in Switzerland based on the BILANZ magazine rich list covering the years 1989-2020, and enhanced it with further biographical information. This dataset allows us (i) to describe the super-rich in Switzerland over the past three decades, which coincide with an increase in income and wealth inequality in the country, and (ii) to estimate the location choices of super-rich foreigners in Switzerland with respect to a preferential tax treatment the country has been offering to the global wealth elite for more than a century.
Our descriptive results reveal two distinctive features of the wealthy elite in Switzerland. First, we have shown the importance of inheritances at the top, and how sluggish the wealth dynamics of the super-rich are in Switzerland, particularly when compared to the US. We estimate that only some 40% of the super-rich in Switzerland are self-made, compared to roughly 70% in the US. While managers are on the rise among the super-rich, they still only make up about 7-8%, and own about 2% of the wealth belonging to the 300 richest individuals and families in Switzerland. Once individuals make it to the very top, they are likely to stay at the top, and intra-generational mobility has even decreased over the first two decades of the 21st century.
Second, we have documented the importance of foreigners at the very top of the wealth distribution. We find that foreign-born individuals make up approximately 50% of the super-rich, and they own 60% of top wealth. Hence, they are on average even wealthier than their Swiss-born peers.
The high share of foreigners at the very top of the distribution can likely be explained by the preferential tax treatment Switzerland offers to super-rich foreigners, who are eligible for expenditure-based taxation. While we cannot quantify the pull effect of this policy at the international level, we provide first-time evidence of how sensitive super-rich foreigners are to this policy when it comes to their choice of where in Switzerland to reside. More specifically, we exploit the abolition of expenditure-based taxation in some cantons, using two alternative identification strategies. Both approaches suggest that the location choices of the super-rich are sensitive to taxation: the abolition of these preferential tax treatments reduces the stock of super-rich in a canton by an estimated 30%. Based on suggestive evidence, the effect is mainly driven by new arrivals, who choose to move to those cantons still offering them tax privileges; the push effect of the abolition appears to be rather small. Besides, we put some earlier estimates of the top 0.01% wealth share into perspective, showing that wealth concentration in Switzerland is likely somewhat higher than previously assumed, and we discuss why existing estimates based on tax statistics tend to underestimate wealth concentration.
A Data Appendix I
In this section, we provide a comprehensive description of all variables and definitions included in our panel dataset.
id_pers. This variable is an individual observation identifier. An id_pers can represent a single individual or a family (or, in exceptional cases, some other kind of collective).
id_fam. This variable is a family identifier that links different individual observations (id_pers) that belong to the same family (collective). This allows family wealth to be tracked in the panel dataset over a longer period of time, since in some cases individuals die and their heirs are subsequently listed by the BILANZ magazine. In some cases, the BILANZ has split or aggregated a family's assets among different members without any change in the family structure being apparent.
name. This variable contains first and last name of individuals or the (family-) name in case of a collective including the description of the type of collective. For example:
Stephan Schmidheiny (id_pers=3); Familie Ringier (id_pers=12); Erben Oscar Weber (id_pers=30).

year. This variable indicates the corresponding year.

n_magazine. This variable gives the number of ranking entries as shown on the magazine cover of the corresponding annual edition. The number of ranking entries recorded by the BILANZ magazine varied considerably, between 100 and 250, in the first 10 years (see Table C1). Since 1999, the BILANZ ranking includes 300 entries each year. In the early years, the number of ranking entries in the BILANZ magazine does not necessarily correspond exactly to the number given on the cover. Moreover, since our panel dataset only covers Swiss residents, the number of observations per year in our panel is always slightly below the number on the cover (see n_panel).

wealth_mean. This variable is the arithmetic mean of the variables wealth_low and wealth_high. wealth_mean is our main variable of interest and shows net wealth per observation (in nominal millions of Swiss francs). We frequently represent wealth_mean as a real variable by deflating it with the Swiss CPI.
ranking. This variable indicates the rank of each observation within the rich list per year.
Note that since BILANZ magazine estimates net worth in intervals, multiple ranking entries have the same net worth estimate and consequently the same position in the ranking.
family. This dummy variable indicates whether the observation represents a single individual (family=0) or whether it is a family or some other collective (family=1).
female. This dummy variable indicates whether an individual observation (family=0) is male (female=0) or female (female=1).

manager. This dummy variable indicates whether an observation is a manager (manager=1) or not.
swiss. This dummy variable indicates if the observations (including family observations)
are Swiss citizens (swiss=1) or not. This information was collected from the texts in the BILANZ magazine and supplemented by manual Internet search. The quality of this variable is limited, as it was difficult in many cases to assign a nationality. The variable indicating whether someone was born outside Switzerland (foreignborn) is certainly more reliable and should preferably be used.
foreignborn. This dummy variable indicates whether the observations (including family observations) were born outside (foreignborn=1) or inside (foreignborn=0) Switzerland.
This information was collected mostly by manual Internet search.
foreigners_in_ranking. This dummy variable indicates whether foreigners (non-Swiss citizens) are also included in the ranking (foreigners_in_ranking=1) or not. Before 1993, only Swiss citizens were covered by the BILANZ magazine. Therefore, this variable is equal to 1 in 1993 and thereafter.
old_wealth_WW2. This dummy variable indicates whether the foundation for wealth was laid before 1945 (old_wealth_WW2=1) or not. This information was recorded by the BILANZ magazine in the 1993 issue. For observations from other years, we have added this information by manual Internet search.

canton. This variable indicates the canton of residence per observation and year. See Table A1 for the canton codes.

industry_1; industry_2; industry_3. For each ranking entry, the BILANZ magazine records information on one or more industries in which the observation is active. We recorded this information in the three variables industry_1, industry_2, and industry_3. We have assigned the information from the BILANZ magazine to one of 26 different industries (see Table A2 for details on the various industries and codes).
industry_main. For many ranking entries, the BILANZ magazine assigns multiple industries. With the information from the magazine, it is impossible to disaggregate net wealth per ranking entry to the different industries in which the observation is operating. In order to investigate how aggregate BILANZ net wealth has evolved across different industries over time, we have assigned a characteristic industry to each observation per year in the variable industry_main. The classification and coding again follow Table A2.

wealth_origin. The variable wealth_origin is a categorical variable that indicates the origin of wealth: wealth_origin=1 stands for wealth acquired through marriage, wealth_origin=2 for inherited wealth, and wealth_origin=3 for self-made wealth.
We follow a definition in the literature (see Kaplan and Rauh, 2013a and [START_REF] Scheuer | Taxing the superrich: challenges of a fair tax system[END_REF][START_REF] Scheuer | Taxation and the Superrich[END_REF] and define self-made wealth as the wealth of first-generation founders. This information was collected from the texts in the BILANZ magazine and supplemented by manual Internet search.
birth_date. This variable indicates the date of birth if the observation is an individual (family=0). For some observations, we were unable to determine the exact date of birth and recorded only the year of birth. This information was collected from the texts in the BILANZ magazine and supplemented by manual Internet search.
death_date. This variable indicates the date of death if the observation is a deceased individual. For some observations, we were unable to determine the exact date of death and recorded only the year of death. Individuals that are still living are coded as alive.
This information was collected from the texts in the BILANZ magazine and supplemented by manual Internet search.
entryreason & exitreason. The variables entryreason and exitreason specify the reason for entering or exiting the ranking as a categorical string. The quality of these variables is limited, as it was challenging to identify a reason for entering or leaving the rankings for many observations. Accordingly, these variables contain many unexplained values and should be used with caution. See Table A3 for details on the definitions of entryreason and Table A4 for exitreason, respectively.
id_link. This variable provides information about which different individual observations systematically belong together by referring to the old id_pers. In some cases, individuals are grouped into collectives in certain years and then listed individually again later. We have created this variable to track such incidents. In this way, it is possible to quickly find out which observations have been grouped differently over time.
Unexplained
The ranking entry of an individual or a family in the specific year cannot be explained.
Entered
The ranking entry of an individual or a family can be explained by an increase in estimated wealth over the threshold triggered by a specific incident within the last year.
Re-entered
The ranking entry of an individual or a family can be explained by an increase in estimated wealth over the threshold triggered by a specific incident within the last year and the individual or family had previously dropped out because of lacking wealth.
Migration
The ranking entry of an individual or a family can be explained by migration into Switzerland within the last year.
Control transfer
The ranking entry of an individual or a family can be explained by a transfer of operative control over wealth which was accounted for in the prior year.
Inheritance
The ranking entry of an individual or a family can be explained by a transfer of ownership of wealth which was accounted for in the prior year.
Family aggregation
The ranking entry of a family can be explained by the aggregation of wealth which was accounted for in the prior year, attributed to multiple individuals or families.
Collective aggregation
The ranking entry of a collective can be explained by the aggregation of wealth which was accounted for in the prior year, attributed to multiple individuals or collectives.
Family aggregation with members previously not in the ranking
The ranking entry of a family can be explained by the aggregation of wealth which was only partly accounted for in the prior year, attributed to one individual or family or multiple individuals or families, and of wealth which was partly not accounted for in the prior year.
Collective aggregation with members previously not in the ranking
The ranking entry of a collective can be explained by the aggregation of wealth which was only partly accounted for in the prior year, attributed to one individual or family or multiple individuals or families, and of wealth which was partly not accounted for in the prior year.
Family split
The ranking entry of an individual or a family can be explained by the splitting of wealth which was attributed to a family in the prior year.
Collective split
The ranking entry of an individual or a family can be explained by the splitting of wealth which was attributed to a collective in the prior year.
Start foreigners
The ranking entry of an individual or a family in 1993 can be explained by the fact that it was the first year including non-Swiss in the data.
Start ranking
An individual or a family entered the ranking in 1989.
Note:
The table displays and describes the various categorical strings of the variable entryreason.
Table A4: Description of Exit Reasons

exitreason | description
Unexplained
The ranking exit of an individual or a family in the specific year cannot be explained.
Not enough wealth
The ranking exit of an individual or a family can be explained by a decrease in estimated wealth under the threshold triggered by a specific incident within the last year.
Emigration
The ranking exit of an individual or a family can be explained by emigration out of Switzerland within the last year.
Control transfer before death
The ranking exit of an individual can be explained by a transfer of operative control over wealth which is accounted for in the coming year.
Inheritance before death
The ranking exit of an individual can be explained by a transfer of ownership over wealth which is accounted for in the coming year.

Death
The ranking exit of an individual can be explained by its death within the last year.
Family aggregation
The ranking exit of an individual or family can be explained by the aggregation of wealth which is attributed to a family in the coming year.
Collective aggregation
The ranking exit of an individual or collective can be explained by the aggregation of wealth which is attributed to a collective in the coming year.

Family split
The ranking exit of a family can be explained by the splitting of wealth which is attributed to another individual or family or multiple individuals or families in the coming year.
Collective split
The ranking exit of a collective can be explained by the splitting of wealth which is attributed to another individual or family or multiple individuals or families in the coming year.
End ranking
An individual or a family included in the 2019 ranking.
Note: The table displays and describes the various categorical strings of the variable exitreason.
B Data Appendix II
The regression results presented in Table 1 include in their full specification the five time-varying canton controls below. All control variables (i) to (v) are introduced in logarithmic form. In the empirical specification presented in Table 2, the five time-varying canton-pair controls are included in the vector $X_{do,t}$ as log ratios or log differentials for each of the canton pairs; for example, in the case of (i), as $[\ln(1-\tau^w_{d,t}) - \ln(1-\tau^w_{o,t})]$, which is the log net-of-wealth-tax rate differential between each canton pair.
(i) Top average wealth-tax rates. This variable contains the average personal wealth tax rate (i.e., including cantonal, municipality, and parish taxes) by canton for an unmarried taxpayer without children with gross wealth of 10 million Swiss francs. 36 Cantonal average wealth tax rates are aggregated from all Swiss municipalities for the period 1998-2018. For the years 2009-2019, these data are available directly from the FTA. 37 Parchet (2019) has computed consolidated tax rates at the municipal level for all municipalities in Switzerland between 1983 and 2012. 38 We are very grateful to Raphaël [START_REF] Parchet | Are Local Tax Rates Strategic Complements or Strategic Substitutes?[END_REF] for providing us with his data for the period 1998-2014. This enables us to construct top average wealth tax rates for all Swiss municipalities for the entire period from 1999 to 2018.
The top average wealth tax rates at the cantonal level are constructed by weighting the tax rates by the number of taxpayers in each municipality. 39

(ii) Top average income-tax rates. This variable contains the average personal income tax rate (i.e., including cantonal, municipality, and parish taxes) by canton for an unmarried taxpayer without children with annual gross income of 1 million Swiss francs. This variable is constructed analogously to the one above and also builds on the data compiled by [START_REF] Parchet | Are Local Tax Rates Strategic Complements or Strategic Substitutes?[END_REF].
(iii) Bequest-tax rates. To control for cantonal differences in bequest taxes, column (4) includes two different tax rates for the entire 1999-2018 period. The first tax rate reflects the percentage of tax due on an inheritance of 500,000 Swiss francs bequeathed to direct descendants. The second tax rate analogously reflects the percentage in the case of an inheritance of 500,000 Swiss francs to an unrelated person. Both tax rates refer to the tax burden at the cantonal capital. We have gathered these data from the annual publication "Steuerbelastung in den Kantonshauptorten". 40

36 For the canton of Basel-Stadt, we had to rely on the wealth tax rate on gross wealth of 5 million Swiss francs due to data limitations. However, since we exploit variation over time, this should not be an issue.
37 See: https://www.estv.admin.ch/estv/de/home/allgemein/steuerstatistiken/fachinformationen/steuerbelastungen/steuerbelastung.html
38 Details on the construction of these tax rates can be found in the online appendix of his paper.
39 The data on taxpayers can be obtained from the FTA: https://www.estv.admin.ch/estv/de/home/allgemein/steuerstatistiken/fachinformationen/steuerstatistiken/direkte-bundessteuer.html
Proxies for (iv) urbanization and (v) internationalization. We approximate urbanization by total cantonal population. Similarly, we use the share of foreigners in the total population as a proxy for internationalization.

Note to Appendix Table C1: This table provides summary statistics of our BILANZ panel dataset. Column (1) indicates the number of ranking entries as shown on the magazine cover of the corresponding annual edition. Columns (2)-(5) show the total, family, male, and female number of ranking entries per year recorded in our panel dataset. Columns (6)-(8) display the mean, median, and standard deviation of real net wealth (in billions of 2020 Swiss francs) per year, while columns (9) and (10) present the mean net wealth of the family and single-individual observations separately. Columns (6)-(10) were deflated using the Swiss CPI.

Note to Appendix Table C3: The minimum taxable income corresponds to seven times the rental value (in the case of owner-occupied property) or seven times the rent (in the case of rental property). For persons without an own household (in case of hotel stays), three times the pension price for accommodation and meals is considered as the minimum taxable income. However, the minimum taxable income must be at least the amount shown in column 3. The higher of these two amounts of minimum taxable income is taxed at the statutory tax rate. In the canton of Thurgau, the following applies: 10 times the rental value or owner-occupied rental value, or 4 times the pension price. The wealth tax base is, however, not statutorily specified, but the sum of the cantonal income and wealth taxes paid must be at least 150,000 Swiss francs per year. In most cantons, the wealth tax base is a simple multiple of the minimum taxable income and is taxed at the ordinary rate. The information shown in the table is taken from cantonal websites and cantonal tax laws, or was in some cases provided to us via email by cantonal tax authorities.

Note to Figure 9: Wealth mobility is shown for 2000 to 2010 (red dots) and for 2010 to 2020 (blue diamonds), respectively. We report slope estimates and the R^2 from OLS regressions in the corresponding color. All regression coefficients are statistically significant at the 1% level. The gray shading surrounding the gradients represents the 95% confidence intervals. The analysis here is based on family observations rather than individual observations (for details on the two panel identifiers see Appendix A). This means that if, for instance, a super-rich individual dies within the observation period but his heir is listed in the last year of the analysis, then this observation does not drop out. We only use observations in the mobility analysis that are present in both the first and last year of the analysis. The small written text under the figures displays the dropout rate.

C.2 Additional Figures

How does the top 0.01% wealth share (the largest fractile we can cover with the rich list data) based on our BILANZ data compare to these existing series? In this extension, we describe the methods to estimate the top wealth share and compare our results to those in [START_REF] Föllmi | Volatile Top Income Shares in Switzerland? Reassessing the Evolution Between 1981 and 2010[END_REF].

D.1 Methodology
To estimate top wealth shares based on our BILANZ data that are comparable to estimates in [START_REF] Föllmi | Volatile Top Income Shares in Switzerland? Reassessing the Evolution Between 1981 and 2010[END_REF] based on tax data, we define the unit of observation, the reference population, and the total wealth denominator as follows.
Total Wealth Denominator. To calculate the top 0.01% wealth share, we set net BILANZ wealth in relation to total aggregate private wealth at market values. 42
Tax Units. From our BILANZ dataset, we do not have any information on whether entries listed as individuals are married or not. If BILANZ observations are married, the estimated net assets are indeed more akin to the net assets of a joint household and, more importantly, of only one tax unit, as in Switzerland married couples have to file taxes jointly. Thus, when calculating the wealth share of the top 0.01% wealth group, our unit of analysis are tax units rather than adults in Switzerland. We do this mainly to increase comparability with the top wealth shares previously estimated based on wealth tax statistics [START_REF] Dell | Income and Wealth Concentration in Switzerland over the Twentieth Century[END_REF][START_REF] Föllmi | Volatile Top Income Shares in Switzerland? Reassessing the Evolution Between 1981 and 2010[END_REF]. In addition, this makes our estimates rather more conservative, as we do not treat every observation as if it referred to a single adult individual. If we used adults as the unit of observation, we would have to include a larger number of entries from the BILANZ rich list to calculate the share of the richest 0.01% of the total adult population (which is larger than the total number of tax units in the country). This could lead to a substantial overestimation of wealth concentration.
We calculate the total number of tax units in the country as the adult population minus half of the married adults, using official population statistics. 43
Accounting for families. As family entries represent multiple tax units, we cannot use the raw BILANZ observations to calculate top wealth shares, as we would overestimate top wealth concentration. To address this issue, we divide all family observations and their corresponding wealth by 5. The overall result is robust to the choice of this divisor (Panel b of Figure D1 shows wealth shares for divisor values of 3, 5, and 7). This approach significantly increases the number of person-year observations in our data.
Estimation of Top 0.01%. After splitting all family observations, we re-rank the entries of the rich list according to their wealth. To calculate the wealth share of the top 0.01%, we then take, for each year, as many rich list entries as there are tax units in the top 0.01%, and divide their summed wealth by the total private net wealth of the economy.
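A compact sketch of this procedure, using hypothetical column names, is given below: family entries are split by the chosen divisor, all entries are re-ranked by wealth, and the combined wealth of the top 0.01% of tax units is related to aggregate private wealth.

# Sketch of the top 0.01% wealth share estimation (column names hypothetical).
import pandas as pd

def top_001_share(bilanz, adults, married_adults, total_private_wealth,
                  family_divisor=5):
    """bilanz: one year of rich-list data with columns 'wealth' and 'family' (0/1)."""
    # Split each family entry into `family_divisor` tax units with equal wealth.
    fam = bilanz[bilanz["family"] == 1].copy()
    fam["wealth"] = fam["wealth"] / family_divisor
    fam = pd.concat([fam] * family_divisor, ignore_index=True)
    entries = pd.concat([bilanz[bilanz["family"] == 0], fam], ignore_index=True)

    # Tax units: adults minus half of the married adults (joint filing).
    tax_units = adults - married_adults / 2
    n_top = int(round(0.0001 * tax_units))

    # Re-rank all entries and relate the top group's wealth to aggregate wealth.
    top_wealth = entries.sort_values("wealth", ascending=False)["wealth"].head(n_top).sum()
    return top_wealth / total_private_wealth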
D.2 Top Wealth Shares in Comparison with Prior Estimates
Panel a) of Figure D1 shows our estimates of the top 0.01% wealth share in comparison to the estimates by [START_REF] Föllmi | Volatile Top Income Shares in Switzerland? Reassessing the Evolution Between 1981 and 2010[END_REF]. According to our preferred specification (where we split all family observations and their net wealth by 5), the top 0.01% owned close to 17% of the economy's total private wealth in 2019. This share has turned out to be remarkably stable, ranging between 16 and 17% over the past decades, except for the strong business cycle effects around the Great Recession. These new estimates are about one-third larger than estimates based on wealth tax statistics. Moreover, while our new estimates show a relatively stable pattern over time, estimates based on tax data show an increase in the wealth share in the hands of the top 0.01% of approximately 50% between 1997 and 2016. As shown theoretically in [START_REF] Atkeson | Rapid Dynamics of Top Wealth Shares and Self-Made Fortunes: What Is the Role of Family Firms?[END_REF], the stability of this top wealth share in Switzerland (compared to, e.g., the US) might well be explained by the empirical fact that the number of self-made super-rich (see Section 4.2), and thus the mobility from the bottom to the top of the wealth distribution, is lower in Switzerland than in the US. Also in terms of levels, these new top shares appear to be relatively high in international comparison (see Saez and Zucman, 2016 for the US; [START_REF] Alvaredo | Top wealth shares in the UK over more than a century[END_REF] for the UK; Garbinti et al., 2020 for France; Albers et al., 2020 for Germany).

Note to Figure D1: Panel a) compares the top 0.01% wealth share based on the BILANZ data with previous estimates using the wealth tax statistics (FTA). The approach and data used to compute the top 0.01% wealth share based on the BILANZ data are described in Section D.1. The wealth share of the top 0.01% based on wealth tax statistics is taken from [START_REF] Föllmi | Volatile Top Income Shares in Switzerland? Reassessing the Evolution Between 1981 and 2010[END_REF] and updated here accordingly. For details on the method and data, we refer to the original paper. Panel b) provides a sensitivity test with respect to the choice of divisor in the treatment of family observations. The numbers along the top row show the number of tax units representing the top 0.01% in each year.
To put the two different top 0.01% wealth shares into perspective, it is important to note the differences in the underlying data sources. On the one hand, expenditure-based taxpayers and double counting lead to a downward bias in top shares based on Swiss wealth tax statistics. On the other, measurement error in the BILANZ rich list is likely to bias the estimates upwards. We discuss these sources of potential bias in turn.
Measurement errors in rich list data. When it comes to the BILANZ data, two sources of measurement error may lead to an inflation of top wealth. First, certain assets, such as art and other collectibles, are included in the BILANZ' wealth estimates but not in total private wealth, our denominator to compute top shares based on BILANZ wealth data.
Second, although the BILANZ cites some evidence that a fairly large number of foreign super-rich on their list have settled in Switzerland for tax reasons, not all of the wealth reported in the BILANZ may be part of total Swiss net wealth. The BILANZ magazine seeks to capture the global wealth of the super-rich residing in Switzerland and, in part, their families, but not all family members necessarily also reside in Switzerland. Such a domestic approach is particularly problematic for the super-rich as they are members of a truly global elite. More generally, various super-rich in the BILANZ list own multiple properties and residences across the globe. Hence, the determination of primary residence and tax domicile may be ambiguous for at least some of the listed super-rich. Due to the large fortunes of the super-rich, a handful of observations wrongfully attributed to the Swiss tax base may considerably affect the results.
Undervaluation in tax data I: double-counting of tax units. As a result of the federal tax system in Switzerland, where wealth is only taxed at the cantonal but not at the federal level, double or multiple counting arises in the wealth tax statistics. This typically occurs when a taxpayer owns real estate in a canton other than their primary residence. The same taxpayer enters the statistic twice: (i) in the canton of primary residence, where all assets are subject to taxation except the out-of-canton real estate; (ii) in the canton where they own real estate, where only that real estate is subject to taxation.
As a result, the statistic dilutes wealth. Since such scenarios are more likely to be the case for taxpayers at the upper end of the wealth distribution, double counting will lead to an underestimation of wealth concentration measured with tax statistics.
Figure 1 :
1 Figure1: The Super-rich by FamilyStructure and Gender, 1989-2020 Note: This figure illustrates the rich list ranking entries by family structure and gender per year. The blue part of the bars shows the number of family observations as a share of all ranking entries. The gray and red parts of the bars show the percentage of female and male observations, respectively.
Figure 2: Age Structure of BILANZ Ranking Entries, 1989-2020
Note: This figure shows the average age per BILANZ ranking entry from 1989 to 2020. The average age is computed based on individual observations only. The temporary decline in the average age in the second half of the 1990s can be explained in part by the entry of several new economy entrepreneurs into the ranking. The number of observations in the BILANZ rich list has remained stable since 1999.
Figure 3: Share of Top Wealth held by Foreign-born Residents, 1989-2020
Note: This figure shows the share of foreign-born residents in relation to the overall number of observations in the Swiss rich list (red line), as well as their share in total BILANZ wealth (black line), from 1989 to 2020. The jump in 1993 is due to the first-time inclusion of foreigners in the Swiss rich list. Even before 1993, a small number of foreign-born super-rich were Swiss nationals and thus ranked. The blue line depicts the percentage of foreigners in the total population. The gray line shows the share of first-generation immigrants in total population, for all people aged 15 and older. The population data are available for download from the FSO: https://www.bfs.admin.ch/bfs/en/home/statistics/population.html
Figure 4: The Rise of Top Managers, 1995-2020
Note: This figure shows the managers' share in the overall panel data set for the years 1995 to 2020. The upper black line indicates the relative frequency of managers in total observations. The lower red line represents the share of total BILANZ wealth held by managers. The sharp increase in managers' share of wealth in 2011 is the result of Glencore's IPO, which turned the four Swiss-resident Glencore managers Ivan Glasenberg, Daniel Mate, Aristotelis Mistakidis, and Tor Peterson into billionaires overnight.
Figure 5: Share of Top Wealth by Industry, 1989-2020
Figure 6: Share of Super-Rich by Category of Wealth Origin, 1989-2020
Note: This figure categorizes the observations of our BILANZ data set by origin of wealth into three categories: wealth acquired through marriage (gray), wealth inherited (blue), or self-made wealth (red).
Figure 7: The Inheritance Share in Top Wealth, 1989-2019
Figure 8: One- to Ten-year Survival Rates at the Top of the Wealth Distribution
Figure 9 illustrates the results for the periods 2000-2010 and 2010-2020, respectively. The red dots show real log wealth of an individual or family in 2010, relative to their wealth in 2000. Similarly, the blue diamonds indicate the change in real log wealth from 2010 to 2020.
Figure 9: Top Wealth Mobility, 2000-2020
Figure 10: Evolution of Top Wealth and Total Private Wealth, 1995-2020
Note: This figure shows the development of BILANZ wealth clustered by different ranking entries compared to the total private net wealth of the Swiss economy. The Top 10, Top 100, Top 200 represent respectively the first 10, 100 and 200 entries in the BILANZ rich list. For 1995 and 1996, our panel does not include enough observations to show the evolution of the Top 200 (see Table C1). For details on the total net private wealth series, see [START_REF] Baselgia | Wealth-Income Ratios in Free Market Capitalism: Switzerland, 1900-2020[END_REF].
Figure 11: Regional Distribution and Net Changes of the Super-rich, 1999-2019
(a) Stock 2009 (per 100,000 inhabitants); (b) Net absolute changes in the super-rich, 1999-2019
Note: Panel a) of this figure shows the number of BILANZ ranking entries by canton of residence (per 100,000 inhabitants) in 2009. Panel b) shows the net changes in the number of super-rich between 1999 and 2019. Panel c) indicates cantons that abolished expenditure-based taxation, all between 2010 and 2014 (for details on the cantonal reforms and their timing, see Table C4).
Figure 12: Share of Super-rich living in an "Abolition" Canton
Note: Panel (a) shows the share of the foreign-born super-rich living in cantons (AR; BS; BL; SH; ZH) that eventually abolished expenditure-based taxation. Likewise, Panel (b) shows the share of Swiss-born super-rich. The dashed vertical line in 2010 indicates the year in which the first canton (ZH) abolished expenditure-based taxation; further abolitions took place in 2012 and 2014 (see Table C4 for details). The solid red lines are best linear fits before and after 2010.
30
Comparing the standard OLS with the PPML estimation results based on Panel D-which includes only observations
Figure 13: Cumulative Event Study - Foreign-born Super-rich
Note: This figure shows the cumulative effects given by Equation (4) for the foreign-born super-rich. Panel a) presents the estimation results when the treatment indicator •c,t is defined as in the DD-analysis (i.e., for instance • ZH,2010 = 1 for the canton of Zurich). Panel b) reports the analogous estimation results when the treatment indicator •c,t is introduced with a one-year lead (i.e., for instance • ZH,2009 = 1 for the canton of Zurich). In both panels, the red line with circles corresponds to a specification of Equation (3), which contains only year and canton fixed effects. The blue line with diamonds corresponds to a specification that additionally contains the full vector of time-varying cantonal controls Xc,t. Point estimates are reported with their corresponding 90% confidence intervals based on two-way clustered standard errors by canton and year. Figure C5 displays the analogous results for all super-rich, Swiss-born super-rich, and rich taxpayers.
n_panel. This variable indicates the number of ranking entries covered in our panel dataset per year.
wealth_low & wealth_high. The BILANZ magazine estimates net wealth per ranking entry in intervals. The two variables wealth_low and wealth_high capture the interval limits. The two variables thus indicate the lower and upper bounds, respectively, of the net wealth estimate per ranking entry in nominal millions of Swiss Francs.
Figure C1: Real Wealth Billionaires in Switzerland, 1989-2020
Note: This figure shows the number of real wealth billionaires (measured in 2020 Swiss Francs) in Switzerland between 1989 and 2020. Nominal net wealth is deflated by the Swiss CPI. Note that the leap from 1991 to 1993 is due to the first-time inclusion of foreigners in the Swiss rich list.
Figure C2: Share of Top Wealth originating before WW II, 1989-2020
Note: This figure shows the share of today's top wealth whose origins predate World War II. The vertical red line indicates the first-time inclusion of foreigners in the Swiss rich list. The first sharp drop in this share from 64% in 1992 to 53% in 1993 is attributable to the first-time inclusion of foreigners in the Swiss rich list. It seems that super-rich foreigners who entered the sample in 1993 were less likely than Swiss nationals to have laid the foundation for their fortunes before the mid-20th century. Since 1995, this share kept declining (with fluctuations over the business cycle) from 50% to some 40% in 2010. How large these fortunes were at that time cannot be concluded from the figure shown, nor do we have any information that would allow us to do so.
Figure C3: One- to Ten-year Survival Rates at the Top of the Wealth Distribution
Note: This figure shows, for the four different periods indicated, the persistence rates of those included in the Swiss rich list. Note that these survival rates are based on individual observations rather than family observations (for details on the two panel identifiers see Appendix A). For more detailed explanations see Figure 8.
Figure C4: Top Wealth Mobility, 2000-2020
Note: Panel a) shows a scatter plot for real log net wealth for the period 2000 to 2005 (red dots) and for 2000 to 2010 (blue diamonds). Analogously, Panel b) shows the scatter plot for real log net worth for the period 2010 to 2015 (red dots) and for 2010 to 2020 (blue diamonds), respectively. We report slope estimates and the R² from OLS regressions in the corresponding color. All regression coefficients are statistically significant at the 1% level. The gray shading surrounding the gradients represents the 95% confidence intervals. The analysis here is based on family observations rather than individual observations (for details on the two panel identifiers see Appendix A). This means that if, for instance, a super-rich individual dies within the observation period, but his heir is listed in the last year of the analysis, then this observation does not drop out. We only use observations in the mobility analysis that are present in both the first and last year of the analysis. The small written text under the figures displays the dropout rate.
Figure C5: Cumulative Event Study
Note: This figure shows the cumulative effects given by Equation (4) for the Swiss-born super-rich (Panels a & b), all super-rich (Panels c & d), and rich taxpayers (Panels e & f). The left figures present the estimation results when the treatment indicator •c,t is defined as in the DD-analysis (i.e., for instance • ZH,2010 = 1 for the canton of Zurich), whereas the right figures report the analogous estimation results when the treatment indicator •c,t is introduced with a one-year lead (i.e., for instance • ZH,2009 = 1 for the canton of Zurich). The red lines with circles always correspond to a specification of Equation (3), which contains only year and canton fixed effects. The blue lines with diamonds correspond to a specification that additionally contains the full vector of time-varying cantonal controls Xc,t. Point estimates are reported with their corresponding 90% confidence intervals based on two-way clustered standard errors by canton and year. Figure 13 displays the analogous results for the foreign-born super-rich.
Figure D1: Top 0.01% Wealth Share in Switzerland, 1997-2019
4 The Super-rich in Switzerland
4.1 Who are the Super-rich?
In the sample of [START_REF] Moretti | Taxing Billionaires: Estate Taxes and the Geographical Location of the Ultra-Wealthy[END_REF], covering the period 1982-2017, mean real wealth was 3.02 billion (in 2017 dollars) and median real wealth was 1.6 billion. As expected, family observations, which have increased over time, tend to be richer on average than individuals, although there is some variation over time. 12 From previous research, we know relatively little about who the super-rich in Switzerland are. In this section, we provide descriptive statistics from our newly compiled BILANZ dataset.
11 1 CHF is roughly equivalent to 1 US Dollar. Absolute values of net wealth are at constant prices of 2020. To deflate the different nominal wealth series we use the Swiss consumer price index (CPI), available for download from the FSO: https://www.bfs.admin.ch/bfs/en/home/statistics/prices/consumer-price-index.html
In Table C7, we therefore report results for the foreign-born based on the novel alternative estimators developed by [START_REF] Callaway | Difference-in-differences with multiple time periods[END_REF] (Panel B) and [START_REF] Sun | Estimating dynamic treatment effects in event studies with heterogeneous treatment effects[END_REF] (Panel C). The estimator by [START_REF] Callaway | Difference-in-differences with multiple time periods[END_REF] produces similar results as in the TWFE case without linear trend (Panel A). In Panel C, the magnitude of the estimate shrinks to -0.18. Overall, we conclude that the treatment effect appears to be relatively constant across cantons and over time. Note in particular that all of these treatments took place within 4 years, roughly in the middle of our 21-year time span, and that only 5 out of 26 cantons were treated, leaving us with 21 never-treated control units.
Our dataset on the super-rich contains true zeros, i.e., there are a few cantons where no super-rich resides. Apparently, these cantons cannot compete in tax competition for the super-rich. Therefore, when estimating Equation (2) with the log number of super-rich as outcome in a standard OLS model, these observations (cantons) with zeros are dropped from the model, since the logarithm of zero is undefined. As a robustness check, we therefore employ the Poisson pseudo-maximum likelihood (PPML) estimator, which can incorporate observations with zeros into the estimation (for a detailed discussion of PPML estimation, see the seminal contribution by [START_REF] Silva | The log of gravity[END_REF]). The results are shown in Appendix Table C8. They generally confirm the findings of Table 1. However, in the PPML estimation, the coefficients on the foreign-born super-rich are somewhat smaller and not statistically significant at any conventional level.30
Table 1: The Abolition of Expenditure-Based Taxation - DD-Estimation
This table shows the estimation results of the model presented in Equation (2) using OLS. Panel A uses the number of foreign-born super-rich in our BILANZ dataset as the dependent variable. More detailed results for this sub-sample, including estimation coefficients on the control variables, are shown in Appendix Table C5. Analogously, Panel B employs the number of Swiss-born super-rich (detailed results reported in Table C6 in the Appendix). Panel C utilizes the full sample of super-rich and Panel D the number of rich taxpayers (i.e., taxpayers with net wealth greater than CHF 10 million), respectively. The number of observations drops from model (2) to (3) because population-weighted tax controls are only available for the period 1999-2018. Two-way clustered standard errors by canton and year are shown in parentheses, below the coefficients.
Model (1) (2) (3) (4) (5) (6) (7)
Panel A: Foreign-born Super-rich, 1999-2020
-DD ≠0.31*** ≠0.30 (0.09) (0.18) ≠0.32 (0.30) ≠0.33** ≠0.31 (0.15) (0.34) ≠0.29* ≠0.28* (0.15) (0.15)
No. of obs. adj. R 2 411 0.922 411 0.952 375 0.952 375 0.952 375 0.952 375 0.953 375 0.953
Panel B: Swiss-born Super-rich, 1999-2020
-DD ≠0.27*** (0.09) 0.11 (0.08) 0.10 (0.10) 0.11 (0.08) 0.07 (0.10) 0.08 (0.09) 0.07 (0.10)
No. of obs. adj. R 2 466 0.925 466 0.952 421 0.954 421 0.954 421 0.954 421 0.954 421 0.954
Panel C: All Super-rich, 1999-2020
-DD ≠0.26*** (0.09) 0.04 (0.13) 0.04*** (0.00) 0.04 (0.07) 0.02 (0.07) 0.04 (0.05) 0.02 (0.06)
No. of obs. adj. R 2 506 0.955 506 0.970 460 0.969 460 0.969 460 0.969 460 0.970 460 0.970
Panel D: Rich Taxpayers, 2003-2017
-DD ≠0.27*** (0.08) 0.00 (0.04) ≠0.01 (0.04) 0.00 (0.04) 0.00 (0.05) 0.00 (0.04) ≠0.03 (0.04)
No. of obs. adj. R 2 390 0.984 390 0.995 390 0.995 390 0.995 390 0.995 390 0.995 390 0.996
Controls (1) (2) (3) (4) (5) (6) (7)
Canton Fixed E ects Yes Yes Yes Yes Yes Yes Yes
Year Fixed E ects Yes Yes Yes Yes Yes Yes Yes
Canton-specific linear trend No Yes Yes Yes Yes Yes Yes
Top average wealth-tax rates No No Yes Yes No No No No Yes
* p < 0.10, ** p < 0.05, *** p < 0.01.
Note:
• do,t is an indicator variable that equals 1 if destination canton d does not offer expenditure-based taxation in year t, but canton o does. Conversely, • do,t equals -1 if the destination canton d still provides expenditure-based taxation, but canton o does not. And third, • do,t equals 0 if either both cantons d and o offer expenditure-based taxation in year t or neither of them does, which is the empirically more rare case. Hence, our empirical model imposes full symmetry in the estimated effects. 34
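To make this definition concrete, the following minimal sketch shows one way the symmetric canton-pair indicator could be constructed; the data structure (a mapping from each canton to its abolition year, if any) is hypothetical, and this is only an illustration of the definition above, not the code used for estimation.

def treatment_indicator(abolished_year, d, o, t):
    # abolished_year: dict mapping canton -> year expenditure-based taxation was
    # abolished (None or missing if the canton never abolished it).
    d_abolished = abolished_year.get(d) is not None and t >= abolished_year[d]
    o_abolished = abolished_year.get(o) is not None and t >= abolished_year[o]
    if d_abolished and not o_abolished:
        return 1    # destination d no longer offers the regime, origin o still does
    if o_abolished and not d_abolished:
        return -1   # origin o no longer offers it, destination d still does
    return 0        # both or neither offer expenditure-based taxation in year t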
Table 2: Stock Ratio Estimation across Canton-Pairs
This table shows the estimation result of the model presented in Equation (5). Panel A uses the number of foreign-born super-rich in our BILANZ dataset as the dependent variable. More detailed results for this sub-sample, including estimation coefficients on the control variables, are presented in Table C10 in the appendix. Analogously, Panel B employs the number of Swiss-born super-rich. Again, for more detailed results, see Table C11 in the appendix. Panel C utilizes the full sample of super-rich and Panel D the number of rich taxpayers (i.e., taxpayers with net wealth greater than CHF 10 million), respectively. The number of observations drops from model (2) to (3) because population-weighted tax controls are only available for the period 1999-2018. Standard errors allow for three-way clustering (canton-pair, origin-year, destination-year) and are shown in parentheses beneath the estimates.
Model (1) (2) (3) (4) (5) (6) (7)
Panel A: Foreign-born Super-rich, 1999-2020
-SR ≠0.33*** (0.09) ≠0.36*** (0.11) ≠0.37*** (0.10) ≠0.37*** (0.10) ≠0.34*** (0.10) ≠0.33*** (0.09) ≠0.32*** (0.09)
No. of obs. 3'198 3'198 2'926 2'926 2'926 2'926 2'926
No. of canton-pairs adj. R 2 171 0.920 171 0.952 171 0.953 171 0.953 171 0.954 171 0.954 171 0.954
Panel B: Swiss-born Super-rich, 1999-2020
-SR ≠0.38*** (0.06) 0.13* (0.07) 0.13** (0.06) 0.13** (0.06) 0.07 (0.06) 0.07 (0.06) 0.04 (0.06)
No. of obs. 3'659 3'659 3'279 3'279 3'279 3'279 3'279
No. of canton-pairs adj. R 2 190 0.919 190 0.947 190 0.950 190 0.950 190 0.952 190 0.952 190 0.952
Panel C: All Super-rich, 1999-2020
-SR ≠0.36*** (0.05) 0.03 (0.06) 0.02 (0.06) 0.02 (0.06) 0.01 (0.06) 0.01 (0.06) 0.01 (0.06)
No. of obs. 4'180 4'180 3'800 3'800 3'800 3'800 3'800
No. of canton-pairs adj. R 2 190 0.946 190 0.967 190 0.967 190 0.967 190 0.967 190 0.968 190 0.968
Panel D: Rich Taxpayers, 2003-2017
-SR ≠0.35*** (0.04) 0.02 (0.02) 0.02 (0.02) 0.02 (0.02) 0.02 (0.02) 0.01 (0.03) ≠0.02 (0.03)
No. of obs. 2'850 2'850 2'850 2'850 2'850 2'850 2'850
No. of canton-pairs adj. R 2 190 0.980 190 0.993 190 0.993 190 0.993 190 0.993 190 0.994 190 0.994
Controls (1) (2) (3) (4) (5) (6) (7)
Destination Fixed E ects Yes Yes Yes Yes Yes Yes Yes
Origin Fixed E ects Yes Yes Yes Yes Yes Yes Yes
Year Fixed E ects Yes Yes Yes Yes Yes Yes Yes
Canton-pair-specific linear trend No Yes Yes Yes Yes Yes Yes
Top average wealth-tax rates No No Yes Yes No No No No Yes
Note: * p < 0.10, ** p < 0.05, *** p < 0.01.
Table 3: Placebo Test - Foreign-born Super-rich, 1999-2015
Model (1) (2) (3) (4) (5) (6) (7)
-P BO ≠0.05 (0.08) ≠0.05 (0.11) ≠0.04 (0.10) ≠0.04 (0.10) 0.00 (0.10) ≠0.01 (0.11) ≠0.01 (0.10)
No. of obs. 2'518 2'518 2'518 2'518 2'518 2'518 2'518
No. of canton-pairs adj. R 2 171 0.942 171 0.958 171 0.958 171 0.958 171 0.958 171 0.958 171 0.959
-SR ≠0.17** (0.08) ≠0.27*** (0.10) ≠0.26*** (0.10) ≠0.26*** (0.09) ≠0.25*** (0.09) ≠0.24*** (0.09) ≠0.22** (0.09)
Controls (1) (2) (3) (4) (5) (6) (7)
Destination Fixed E ects Yes Yes Yes Yes Yes Yes Yes
Origin Fixed E ects Yes Yes Yes Yes Yes Yes Yes
Year Fixed E ects Yes Yes Yes Yes Yes Yes Yes
Canton-pair-specific linear trend No Yes Yes Yes Yes Yes Yes
Top average wealth-tax rates No No Yes No No No No Yes
Note: This table shows the result of estimating a simple placebo test of the model shown in Equation (5) for the foreign-born super-rich. Instead of the true treatment indicator, we use a placebo treatment indicator lagged by 5 years. The effective treatment effects are shown in Panel A of Table 2. Standard errors allow for three-way clustering (canton-pair, origin-year, destination-year) and are shown in parentheses beneath the estimates. * p < 0.10, ** p < 0.05, *** p < 0.01.
Table A1: Swiss Cantons
The coding of the cantons follows the standard numbering of the Swiss cantons.
Canton Number Canton Name
1 Zürich
2 Bern
3 Luzern
4 Uri
5 Schwyz
6 Obwalden
7 Nidwalden
8 Glarus
9 Zug
10 Fribourg
11 Solothurn
12 Basel-Stadt
13 Basel-Landschaft
14 Schaffhausen
15 Appenzell Ausserrhoden
16 Appenzell Innerrhoden
17 St. Gallen
18 Graubünden
19 Aargau
20 Thurgau
21 Ticino
22 Vaud
23 Valais
24 Neuchâtel
25 Genève
26 Jura
Note:
Table A2: Industry Coding and Labeling
The table shows the name and coding of the industry variables industry_1, industry_2, industry_3 and industry_main as specified in our panel dataset.
Industry Code Industry Name
1 pharmaceuticals; chemistry; biotechnology; synthetics; fertilizers
2 trade; retail
3 commodities; commodity trading
4 shareholdings; investments
5 art; various collections (incl. car collections); horse breeding
6 industry; manufacturing
7 food, drinks and tobacco industry
8 banking; insurance; finance industry
9 services
10 construction (incl. construction materials)
11 machinery
12 media (incl. publishing)
13 real estate
14 watches; jewelry; luxury goods
15 athletes
16 musicians; writers
17 ICT; telecommunications; internet
18 sports industry
19 high-tech industry; electronics
20 restaurants; hospitality; hotels
21 perfumes; cosmetics; beauty care products
22 fashion and textile industry
23 other consumer goods
24 shipping; transportation; distribution; logistics
25 energy and oil industry
26 other
Note:
Table A3: Description of Entry Reasons
entryreason description
40 See: https://www.estv.admin.ch/estv/de/home/allgemein/steuerstatistiken/fachinformationen/steuerbelastungen/steuerbelastung.html
41 The data are taken from the FSO: https://www.bfs.admin.ch/bfs/de/home/statistiken/bevoelkerung.html
C Additional Tables and Figures
C.1 Additional Tables
Table C1: Summary Statistics of the BILANZ Panel Dataset, 1989-2020
Sample size BILANZ real net wealth (in billions of 2020 Swiss Francs)
n (magazine) n (panel dataset) all obs. family obs. individuals obs.
(1) (2) (3) (4) (5) (6) (7) (8) (9) (10)
Year all obs. all obs. family obs. male obs. female obs. mean median std. dev. mean mean
1989 100 95 42 52 1 0.94 0.48 1.42 1.01 0.89
1990 175 166 61 95 10 0.66 0.33 0.82 0.66 0.66
1991 200 192 68 109 15 0.61 0.31 0.78 0.56 0.64
1992 200 183 75 94 14 0.59 0.30 0.81 0.50 0.65
1993 250 228 88 119 21 0.85 0.29 1.31 0.68 0.96
1994 50 46 16 28 2 2.53 2.84 1.48 2.03 2.80
1995 200 184 74 94 16 1.08 0.39 1.69 0.97 1.16
1996 200 189 75 97 17 1.05 0.39 1.76 1.18 0.97
1997 250 211 81 112 18 1.29 0.39 2.40 1.49 1.17
1998 250 231 94 120 17 1.34 0.39 2.60 1.48 1.24
1999 300 281 108 155 18 1.38 0.38 2.88 1.56 1.26
2000 300 281 105 158 18 1.47 0.48 2.42 1.65 1.36
2001 300 284 99 165 20 1.36 0.48 2.25 1.67 1.20
2002 300 281 98 158 25 1.20 0.37 1.93 1.56 1.02
2003 300 284 99 160 25 1.23 0.47 2.01 1.57 1.06
2004 300 283 103 156 24 1.31 0.47 2.12 1.61 1.13
2005 300 286 103 159 24 1.38 0.57 2.27 1.71 1.20
2006 300 289 104 161 24 1.55 0.56 2.69 1.75 1.43
2007 300 289 109 157 23 1.79 0.66 3.22 1.90 1.72
2008 300 292 111 163 18 1.51 0.64 2.84 1.66 1.42
2009 300 291 110 162 19 1.49 0.55 2.71 1.64 1.39
2010 300 292 112 159 21 1.54 0.64 2.86 1.64 1.47
2011 300 294 120 155 19 1.58 0.64 2.75 1.59 1.57
2012 300 293 123 151 19 1.69 0.65 3.15 1.72 1.67
2013 300 292 123 153 16 1.87 0.75 3.55 2.31 1.55
2014 300 291 126 150 15 1.95 0.75 3.84 2.36 1.64
2015 300 289 127 146 16 2.00 0.75 4.03 2.46 1.65
2016 300 288 130 140 18 2.08 0.76 4.08 2.50 1.73
2017 300 287 131 139 17 2.28 0.75 4.45 2.72 1.92
2018 300 288 137 133 18 2.26 0.75 4.48 2.82 1.75
2019 300 289 133 138 18 2.34 0.74 4.81 2.86 1.89
2020 300 288 134 138 16 2.36 0.75 4.91 2.93 1.88
Table C2: Distribution of BILANZ Net Wealth, 1999-2020
This table shows selected percentiles of the wealth distribution in our BILANZ panel for the period 1999-2020. All net wealth figures are expressed in real terms (in billions of 2020 Swiss Francs). Net wealth was deflated using the Swiss CPI.
Year 10th 25th 50th 75th 99th
1999 0.16 0.27 0.38 1.37 3.83 10.39
2000 0.16 0.27 0.48 1.35 3.77 12.38
2001 0.16 0.27 0.48 1.33 3.73 12.27
2002 0.16 0.26 0.37 1.32 3.71 10.06
2003 0.16 0.26 0.47 1.32 3.69 11.06
2004 0.16 0.26 0.47 1.31 3.65 13.05
2005 0.15 0.26 0.57 1.29 3.61 9.81
2006 0.15 0.26 0.56 1.79 3.57 14.81
2007 0.15 0.25 0.66 1.77 4.56 15.72
2008 0.15 0.25 0.64 1.73 3.47 12.38
2009 0.15 0.25 0.55 1.74 3.48 10.45
2010 0.15 0.25 0.64 1.73 3.46 12.35
2011 0.15 0.25 0.64 2.46 3.45 12.32
2012 0.15 0.25 0.65 2.48 3.47 16.38
2013 0.17 0.27 0.75 2.24 4.23 20.40
2014 0.17 0.27 0.75 2.24 3.73 25.37
2015 0.18 0.33 0.75 2.26 4.28 25.66
2016 0.18 0.33 0.76 2.27 4.30 23.75
2017 0.18 0.38 0.75 2.26 4.78 24.64
2018 0.17 0.32 0.75 2.24 5.48 21.42
2019 0.17 0.32 0.74 2.23 5.46 23.32
2020 0.18 0.33 0.75 2.25 5.50 25.50
Mean 0.16 0.27 0.64 1.77 3.77 14.53
Note:
Table C3: Expenditure-based Taxation across Swiss Cantons
This table shows in which cantons expenditure-based taxation is applicable and which tax base is taxed at which rates. Five Swiss cantons abolished expenditure-based taxation in the post-2009 period. *In all cantons (except the canton of Thurgau), minimum taxable income is seven times the owner-occupied rental value (in the case of residential property).
canton taxation min. taxable income* wealth tax
(in 1'000 Swiss Francs)
Zürich abolished (Jan. 2010)
Bern Yes 400 real estate within the canton (ord. tariff)
Luzern Yes 600 min. 20 x taxable income (ord. tariff)
Uri Yes 400 min. 20 x taxable income (ord. tariff)
Schwyz Yes 600 min. 20 x taxable income (ord. tariff)
Obwalden Yes 400 min. 10 x taxable income (ord. tariff)
Nidwalden Yes 400 min. 20 x taxable income (ord. tariff)
Glarus Yes 400 min. 20 x taxable income (ord. tariff)
Zug Yes 500 min. 20 x taxable income (ord. tariff)
Fribourg Yes 250 min. 4 x taxable income (ord. tariff)
Solothurn Yes 400 min. 20 x taxable income (ord. tariff)
Basel-Stadt abolished (Jan. 2014)
Basel-Landschaft abolished (Jan. 2014)
Schaffhausen abolished (Jan. 2012)
Appenzell A.Rh. abolished (Jan. 2012)
Appenzell I.Rh. Yes 400 min. 20 x taxable income (ord. tariff)
St. Gallen Yes 600 min. 20 x taxable income (ord. tariff)
Graubünden Yes 400 min. 20 x taxable income (ord. tariff)
Aargau Yes 400 min. 20 x taxable income (ord. tariff)
Thurgau Yes * *
Ticino Yes 400 min. 5 x taxable income (ord. tariff)
Vaud Yes 415 15% of income tax liability
Valais Yes 250 min. 4 x taxable income (ord. tariff)
Neuchâtel Yes 400 min. 5 x taxable income (ord. tariff)
Genève Yes 400 10% of the income tax base
Jura Yes 200 min. 8 x taxable income (ord. tariff)
Note:
Table C5: Detailed Results: DD-Estimation, Panel A
This table shows the detailed estimation results for the sample of foreign-born super-rich (Panel A) shown in condensed form in Table 1. Analogously, Table C6 presents the detailed estimation results for the Swiss-born super-rich (Panel B). Two-way clustered standard errors by canton and year are shown in parentheses, below the coefficients. * p < 0.10,
Panel Foreign-born Super-rich, 1999-2020
Model (1) (2) (3) (4) (5) (6) (7)
-DD ≠0.31*** ≠0.30 (0.09) (0.18) ≠0.32 (0.30) ≠0.33** ≠0.31 (0.15) (0.34) ≠0.29* ≠0.28* (0.15) (0.15)
ln top average net- 2.90 21.72 23.59 31.41 20.73
of-wealth-tax rate (66.88) (66.95) (61.08) (67.71) (63.62)
ln top average net-of-income-tax rate ≠1.79 (2.81) ≠1.88 (2.97) ≠1.61 (2.90) ≠0.77 (2.96)
ln net-of-bequest-tax rate 1.62 1.84 1.72
(direct descendants) (3.70) (2.62) (2.65)
ln net-of-bequest-tax rate (unrelated individual) 0.27 (0.93) 0.07 (0.76) ≠0.01 (0.61)
ln share of foreigners ≠2.26 (2.53) ≠2.22 (2.37)
ln total population ≠4.55 (4.29)
No. of obs. 411 411 375 375 375 375 375
adj. R 2 0.922 0.952 0.952 0.952 0.952 0.953 0.953
Controls (1) (2) (3) (4) (5) (6) (7)
Canton Fixed E ects Yes Yes Yes Yes Yes Yes Yes
Year Fixed E ects Yes Yes Yes Yes Yes Yes Yes
Canton-specific linear trend No Yes Yes Yes Yes Yes Yes
** p < 0.05, *** p < 0.01.
Note:
Table C6: Detailed Results: DD-Estimation, Panel B
This table shows the detailed estimation results for the sample of Swiss-born super-rich (Panel B) shown in condensed form in Table 1. Analogously, Table C5 presents the detailed estimation results for the foreign-born super-rich (Panel A). Two-way clustered standard errors by canton and year are shown in parentheses, below the coefficients. * p < 0.10,
Panel B: Swiss-born Super-rich, 1999-2020
Model (1) (2) (3) (4) (5) (6) (7)
-DD ≠0.27*** (0.09) 0.11 (0.08) 0.10 (0.10) 0.11 (0.08) 0.07 (0.10) 0.08 (0.09) 0.07 (0.10)
ln top average net-of-wealth-tax rate ≠4.09 (64.28) ≠19.63 (67.59) ≠2.94 (57.93) (62.78) (65.58) 0.53 ≠1.51
ln top average net- 1.78 1.51 1.80 1.65
of-income-tax rate (2.89) (2.83) (3.20) (3.43)
ln net-of-bequest-tax rate (direct descendants) ≠4.83 (3.44) ≠4.72 (3.34) ≠4.72 (3.24)
ln net-of-bequest-tax rate (unrelated individual) ≠0.16 (0.42) ≠0.22 (0.29) ≠0.18 (0.30)
ln share of foreigners ≠0.72 (1.17) ≠0.74 (1.21)
ln total population 1.30
(4.40)
No. of obs. 466 466 421 421 421 421 421
adj. R 2 0.925 0.952 0.954 0.954 0.954 0.954 0.954
Controls (1) (2) (3) (4) (5) (6) (7)
Canton Fixed E ects Yes Yes Yes Yes Yes Yes Yes
Year Fixed E ects Yes Yes Yes Yes Yes Yes Yes
Canton-specific linear trend No Yes Yes Yes Yes Yes Yes
** p < 0.05, *** p < 0.01.
Note:
Table C8: Robustness: DD-Estimation with Poisson Pseudo-Maximum Likelihood (PPML)
Model (1) (2) (3) (4) (5) (6) (7)
Panel A: Foreign-born Super-rich, 1999-2020
-DD ≠0.20* (0.12) ≠0.16 (0.14) ≠0.18 (0.14) ≠0.19 (0.14) ≠0.19 (0.16) ≠0.20 (0.17) ≠0.20 (0.15)
No. of obs. 484 484 438 438 438 438 438
pseudo R 2 0.679 0.700 0.702 0.702 0.702 0.703 0.703
Panel B: Swiss-born Super-rich, 1999-2020
-DD ≠0.17* (0.09) 0.11 (0.08) 0.13 (0.08) 0.14 (0.09) 0.13 (0.10) 0.13 (0.10) 0.11 (0.12)
No. of obs. 550 544 497 497 497 497 497
pseudo R 2 0.701 0.718 0.727 0.727 0.727 0.727 0.727
Panel C: All Super-rich, 1999-2020
-DD ≠0.18** (0.09) 0.05 (0.13) 0.07 (0.09) 0.07 (0.08) 0.06 (0.07) 0.05 (0.06) 0.04 (0.06)
No. of obs. 550 544 497 497 497 497 497
pseudo R 2 0.764 0.775 0.782 0.782 0.782 0.782 0.782
Panel D: Rich Taxpayers, 2003-2017
-DD ≠0.30*** (0.06) 0.02 (0.03) 0.03 (0.03) 0.04* (0.02) 0.04 (0.03) 0.05 (0.03) 0.04 (0.03)
No. of obs. 390 390 390 390 390 390 390
pseudo R 2 0.978 0.985 0.985 0.985 0.985 0.985 0.985
Table C9: Mobility of Treated Individuals in the 5 Years After Treatment
This table shows where the super-rich who lived in a treated canton (one year prior to treatment) resided five years after the treatment: (i) in a canton that eventually abolished expenditure-based taxation; (ii) in a canton that did not abolish expenditure-based taxation; or (iii) the share of super-rich who fell out of the sample over this 5-year period.
5 Years After Treatment
Share of Individuals remained in moved to
Canton across Treatment Cantons Treatment Canton Non-Treatment Canton left sample
Zürich 78% 67% 9% 24%
Basel-Stadt 8% 83% 0% 17%
Basel-Landschaft 11% 88% 0% 12%
Scha hausen 1% 100% 0% 0%
Appenzell Ausserrhoden 0% - - -
Switzerland 100% 70% 7% 23%
Note:
Table C10: Stock Ratio Estimation across Canton-Pairs
Panel A: Foreign-born Super-rich, 1999-2020
Model (1) (2) (3) (4) (5) (6) (7)
-SR ≠0.33*** (0.09) ≠0.36*** (0.11) ≠0.37*** (0.10) ≠0.37*** (0.10) ≠0.34*** (0.10) ≠0.33*** (0.09) ≠0.32*** (0.09)
ln top average net-of-wealth-tax rate ≠32.63 (33.24) ≠4.55 (33.22) ≠2.55 (32.59) 5.47 (32.96) ≠4.50 (32.67)
ln top average net-of-income-tax rate ≠2.68 (1.67) ≠2.90* (1.68) ≠2.69 (1.71) ≠1.93 (1.79)
ln net-of-bequest-tax rate 3.02 2.93 2.85
(direct descendants) (2.01) (2.00) (1.96)
ln net-of-bequest-tax rate 0.47 0.34 0.23
(unrelated individual) (0.38) (0.38) (0.38)
ln share of foreigners ≠1.37 (1.01) ≠1.40 (0.98)
ln total population ≠4.07* (2.07)
No. of obs. 3'198 3'198 2'926 2'926 2'926 2'926 2'926
No. of canton-pairs 171 171 171 171 171 171 171
adj. R 2 0.920 0.952 0.953 0.953 0.954 0.954 0.954
Controls (1) (2) (3) (4) (5) (6) (7)
Destination Fixed E ects Yes Yes Yes Yes Yes Yes Yes
Origin Fixed E ects Yes Yes Yes Yes Yes Yes Yes
Year Fixed E ects Yes Yes Yes Yes Yes Yes Yes
Canton-pair-specific linear trend No Yes Yes Yes Yes Yes Yes
Note: This table shows the detailed estimation results for the sample of foreign-born super-rich (Panel A) shown in condensed form in Table 2. Analogously, Table C11 presents the detailed estimation results for the Swiss-born super-rich (Panel B). Standard errors allow for three-way clustering (canton-pair, origin-year, destination-year) and are shown in parentheses beneath the estimates. * p < 0.10, ** p < 0.05, *** p < 0.01.
Table C11: Stock Ratio Estimation across Canton-Pairs
This table shows the detailed estimation results for the sample of Swiss-born super-rich (Panel B) shown in condensed form in Table 2. Analogously, Table C10 presents the detailed estimation results for the foreign-born super-rich (Panel A).
Panel B: Swiss-born Super-rich,
Model (1) (2) (3) (4) (5) (6) (7)
-SR ≠0.38*** (0.06) 0.13* (0.07) 0.13** (0.06) 0.13** (0.06) 0.07 (0.06) 0.07 (0.06) 0.04 (0.06)
ln top average net-of-wealth-tax rate ≠19.21 (31.69) ≠34.78 (33.79) ≠17.49 (32.11) ≠16.21 (32.00) ≠11.03 (32.08)
ln top average net- 1.91 1.83 2.13 1.70
of-income-tax rate (1.40) (1.43) (1.47) (1.56)
ln net-of-bequest-tax rate (direct descendants) ≠5.37*** (1.89) ≠5.39*** (1.87) ≠5.38*** (1.82)
ln net-of-bequest-tax rate (unrelated individual) ≠0.09 (0.17) ≠0.14 (0.17) ≠0.03 (0.18)
ln share of foreigners ≠0.76 (0.67) ≠0.76 (0.67)
ln total population 3.67
(2.41)
No. of obs. 3'659 3'659 3'279 3'279 3'279 3'279 3'279
No. of canton-pairs 190 190 190 190 190 190 190
adj. R 2 0.919 0.947 0.950 0.950 0.952 0.952 0.952
Controls (1) (2) (3) (4) (5) (6) (7)
Destination Fixed E ects Yes Yes Yes Yes Yes Yes Yes
Origin Fixed E ects Yes Yes Yes Yes Yes Yes Yes
Year Fixed E ects Yes Yes Yes Yes Yes Yes Yes
Canton-pair-specific linear trend No Yes Yes Yes Yes Yes Yes
Note: Standard errors allow for three-way clustering (canton-pair, origin-year, destination-year) and are shown in parentheses beneath the estimates. * p < 0.10, ** p < 0.05, *** p < 0.01.
Table C12: Industry Composition Forbes 400
1982 1992 2002 2012 1982-2012
Retail and Restaurant 5.5 11.4 12.8 16.3 10.8
Technology -computer 3.0 5.1 10.2 12 9.0
Technology -medical 0.5 1.8 2.3 2.8 2.3
Consumer goods 13.5 18.4 13.8 11.3 -2.2
Media 14.2 13.9 16 8.8 -5.4
Diversified 19.8 18.7 15.3 11.3 -8.5
Energy 21.8 9.9 6.8 9.8 -12.0
Finance and Investments
Hedge funds 0.5 1.0 2.5 8.3 7.8
Private equity and LBO 1.8 3.3 4.5 6.8 5.0
Money management 2.0 6.1 6 4.3 2.3
Venture capital 0.3 0.5 1 1.3 1.0
Real estate 17.2 10.1 8.8 7.3 -9.9
Note: This table shows the share of total wealth of the Forbes 400 by industry between 1982 and 2012. This table is taken from [START_REF] Korom | The enduring importance of family wealth: Evidence from the Forbes 400, 1982 to 2013[END_REF].
To assess the quality of the rich list wealth estimates, we compute top 0.01% wealth share series using our newly assembled data, and compare these series to earlier estimates by [START_REF] Föllmi | Volatile Top Income Shares in Switzerland? Reassessing the Evolution Between 1981 and 2010[END_REF] based on wealth tax statistics; see Appendix Section D.
We discuss some shortcomings of the Swiss wealth tax statistics in Appendix Section D.2.
Besides Swiss residents, the BILANZ magazine covers a small number of Swiss citizens living abroad, as well as a few entries from the Principality of Liechtenstein. We exclude those observations from our panel dataset as we are interested in the top wealth dynamics of Swiss residents, which is why our sample is always slightly below 300 (see Table C1 for details).
We thank Simon[START_REF] Handreke | Who is How Rich, and Why? -Investigating Swiss Top Wealth 1989-2017[END_REF], an undergraduate student to whom we provided our data for his bachelor thesis, for carefully documenting various weaknesses in the BILANZ data.
It should be noted, however, that according to BILANZ journalists, net wealth tends to be under-rather than overestimated when there is uncertainty in the valuation of assets.
Table C2 in the Appendix further displays selected percentiles of the BILANZ wealth distribution.
In Appendix D, we discuss the implications of expenditure-based taxation for the study of wealth inequality using tax data.
The increase is only to a small extent due to the rise in real estate investments. The top wealth share of real estate increased from 1.0% in 2000 to 2.0% in 2019.
See: https://www.forbes.com/forbes-400/; accessed February 4, 2021.
The new economy (industry 17; see Table A2) is included in the industry "other" in Figure 5.
Note that the survival rates in Figure 8 are based on family observations rather than individual observations (for details on the two panel identifiers, see Appendix A). This implies, for instance, that if a super-rich individual dies and their heir is newly listed the next year, this observation does not drop out (i.e., the dynasty survives). The persistence is lower when the same analysis is performed for individuals instead (see Appendix Figure C3). However, the structural pattern hardly changes.
Note that this finding is not a mere data artifact arising from the fact that the BILANZ magazine simply continues to record the same people over and over again. In fact, according to the journalists, a major part of their work consists of finding new super-rich, which is attributed to their interest in constantly presenting fresh faces so as to keep the magazine entertaining.
The law has a provision according to which the tax base is replaced with the sum of all capital incomes earned in Switzerland, namely rental incomes, financial investments, revenue on patents and intellectual property, and pensions from Swiss sources, if the named sum is larger than the sum of expenses or the stipulated minima. The Federal Law on Expenditure-based Taxation can be found here: https://www.admin.ch/opc/de/official-compilation/2013/779.pdf. For additional explanations see: https://www.efd.admin.ch/efd/en/home/steuern/steuern-national/lump-sum-taxation.html.
22 In Appendix D, we discuss the implications of this preferential tax treatment for the study of wealth and income inequality in Switzerland based on tax data.
23 Unfortunately, we lack data that would allow us to quantify by how much the true tax bases are undervalued under this preferential tax treatment. Anecdotal evidence suggests that the undervaluation is substantial in certain cases: when the richest Swiss-based billionaire, Ingvar Kamprad, left the country in 2013, it became public that he was not even among the top 15 taxpayers in his longtime tax domicile of Epalinges (a village of less than 10,000 inhabitants), because he was taxed according to his expenditures. See: https://www.nzz.ch/schweiz/minus-ein-pauschalbesteuerter-1.18106985.
24 Table C4 in the Appendix lists dates and further details on all the popular votes held and the corresponding results.
We choose this year because it predates the reforms we analyze.
As our BILANZ dataset does not contain information on the nationality of individuals, we proxy nationality by country of birth.
We greatly thank Raphaël[START_REF] Parchet | Are Local Tax Rates Strategic Complements or Strategic Substitutes?[END_REF] for providing us with wealth and income tax rate data; we collected bequest tax rates published annually by the federal tax administration in: Steuern in der Schweiz -Charge fiscale en Suisse. The data used as controls and its sources are described in detail in Appendix B.
Unfortunately, none of the tax administrations we contacted, including the Federal Tax Administration, have been willing to grant us access to the individual data that would allow us to address these questions. Given the considerable uncertainty regarding the extent of the undervaluation of the tax base, back-of-the-envelope calculations of mobility elasticities are not very meaningful, as they lead to an implausibly wide range of elasticity estimates.
The data can be downloaded here: https://www.estv.admin.ch/estv/de/home/die-estv/steuerstatistiken-estv/allgemeine-steuerstatistiken/gesamtschweizerische-vermoegensstatistik-der-natuerlichen-person.html
We shift the time axis in Figure 13(b) to still have four pre- and six post-treatment periods.
For the period 2000-2020, the Swiss National Bank (SNB) provides reliable estimates on aggregate private net wealth at market values as part of the Swiss financial accounts: https://data.snb.ch/en. For years prior to 2000, we use the net private wealth estimates provided in[START_REF] Baselgia | Wealth-Income Ratios in Free Market Capitalism: Switzerland, 1900-2020[END_REF], see Appendix A in[START_REF] Baselgia | Wealth-Income Ratios in Free Market Capitalism: Switzerland, 1900-2020[END_REF] for a detailed description.
The data is available for download from the Federal Statistics Office (FSO): https://www.bfs.admin.ch/bfs/en/home/statistics/population.html
For details, see the explanations in the wealth tax statistics: https://www.estv.admin.ch/estv/de/home/allgemein/steuerstatistiken/fachinformationen/steuerstatistiken/gesamtschweizerische-vermoegensstatistik-der-natuerlichen-person.html
Both authors greatly appreciate financial support through SNSF Grant 176458 "The Influence of Taxation on Wealth and Income Inequality".
The views expressed here are those of the author(s) and not those of the EU Tax Observatory.
Online Appendix
Behavioral Responses to Special Tax Regimes for the Super-Rich: Insights from Swiss Rich Lists
Enea Baselgia, Isabel Z. Martínez
Note: This table compares our main results on the effect of eliminating expenditure-based taxation on the location choices of the foreign-born super-rich using different estimators. Panel A displays the results of the standard TWFE model presented in Equation (2) estimated via OLS, but without including canton-specific linear trends. As in the other TWFE specification we display two-way clustered standard errors by canton and year. Panel B shows the estimates obtained by employing the DD multiple period estimator by [START_REF] Callaway | Difference-in-differences with multiple time periods[END_REF]. We employ the default settings of the csdid Stata-package for these estimates (in particular (i) the control group are never-treated units; (ii) the estimation method is the Sant'Anna and Zhao (2020) doubly robust DD estimator (dripw), and (iii) SE are robust). Panel C presents estimates of a simple static specification of the interaction weighted estimator by [START_REF] Sun | Estimating dynamic treatment effects in event studies with heterogeneous treatment effects[END_REF]. Specifically, we proceeded as follows. First, we estimate a dynamic model with binned endpoints 4 years before and 5 years after the event (as in Fig. 13). Second, we take the mean of the 5 coefficients after treatment to estimate a simple post-treatment effect. Again, we employ the default settings of the eventstudyinteract Stata-package for these estimates (in particular (i) the control group are never-treated units; (ii) standard errors are two-way clustered by canton and year). * p < 0.10, ** p < 0.05, *** p < 0.01.
The tax base for these individuals is not their actual income and wealth, but is instead based on their total annual living expenses. The tax base for such taxpayers is also subject to some minimum thresholds stipulated in cantonal tax laws (see Appendix Table C3). As foreigners can opt for this tax treatment, they will typically do so if their tax base derived from their living expenses is lower than their actual tax base.
As the tax base is assessed by tax authorities on a case-by-case basis, we do not know by how much the true income and wealth tax base is undervalued on average (there is no official information and/or data on this issue). Anecdotal evidence suggests, however, that expenditure-based taxation is likely to result in a significant downward bias in the assessment of a taxpayer's market wealth (see Section 5.1).
In the Canton of Berne, for instance, only real estate owned in the Canton of Berne is subject to wealth taxation for expenditure-based taxpayers, regardless of how many other assets they own elsewhere. 45 In the canton of Zug, on the other hand, the minimum taxable wealth for expenditure-based taxpayers is 10 million Swiss francs (20 × the minimum income tax base of CHF 500'000). Considering that the median super-rich in our dataset owns 640 million Swiss francs, there is potentially still ample room to reduce the tax burden via this preferential tax treatment. Moreover, it should be noted that,
given the minima set in the cantonal tax laws, the share of true wealth that is not taxed due to expenditure-based taxation is likely to increase with true net wealth. Unfortunately, there is no way (with the data available today) to quantify the average (let alone the gradient of) undervaluation of these taxpayers' tax bases. This systematic undervaluation due to expenditure-based taxation, however, is directly translated into the wealth tax statistics, in which such taxpayers appear not with their true but with their estimated wealth, which consequently leads to some downward bias in top wealth share estimates.
Conclusion of comparison.
Given these considerations and the limitations of the BILANZ data discussed in Section 3, we believe that wealth tax statistics remain the most reliable source for measuring wealth concentration at the top end for Switzerland.
Furthermore, only wealth tax statistics allow for a long-run analysis of top wealth shares.
45 See: http://www.taxinfo.sv.fin.be.ch/taxinfo/display/taxinfo/Besteuerung+nach+dem+Aufwand
However, what our estimates from BILANZ rich list data indicate is that top shares from tax data likely understate wealth concentration at the very top. |
04025166 | en | [
"info.info-cr",
"stat.ml"
] | 2024/03/04 16:41:22 | 2022 | https://hal.science/hal-04025166/file/ARES2022-2.pdf | Axel Charpentier
email: [email protected]
Nora Boulahia
Frédéric Cuppens
email: [email protected]
Reda Yaich
email: [email protected]
Deep Reinforcement Learning-Based Defense Strategy Selection
Keywords: Security and privacy → Network security, Systems security; • Computing methodologies → Machine learning, Modeling and simulation; Moving Target Defense, Deception, Deep Reinforcement Learning
Deception and Moving Target Defense techniques are two types of approaches that aim to increase the cost of attacks by providing false information or uncertainty to the attacker's perception. Given the growing number of these strategies and the fact that they are not all effective against the same types of attacks, it is essential to know how to select the best one to use depending on the environment and the attacker. We therefore propose a model of attacker/defender confrontation in a computer system that takes into account the asymmetry of the players' perceptions. To simulate attacks on our model, a basic attacker scenario based on the main phases of the Cyber Kill Chain is proposed. Analytically determining an optimal solution is difficult due to the model's complexity. Moreover, because of the large number of possible states in the model, the Deep Q-Learning algorithm is used to train a defensive agent to choose the best defensive strategy according to the observed attacker's actions.
INTRODUCTION
It is widely acknowledged that in cybersecurity, there is an asymmetry between the attacker and the defender. Indeed, the attacker chooses when and how they will try to penetrate or compromise a network or a machine. The attacker can therefore collect information about their target and come back later to compromise it, while it is very difficult for the defender to assess the threats they face.
To reduce this advantage of the attacker, several approaches have been proposed. The first existed long before computer systems and is at the basis of the warfare strategy. It is called deception. It consists in preventing the attacker from obtaining real information about a system by providing them false information in order to make their reconnaissance phase inefficient and thus reduce the probability of successful attacks. This can be done by several means such as perturbation, obfuscation or the use of honeypots [START_REF] Peter | A practical guide to honeypots[END_REF]. Today, deception strategies have developed a lot. Some surveys list and classify existing deception techniques and strategies [START_REF] Fraunholz | Demystifying deception technology: A survey[END_REF][START_REF] Han | Deception techniques in computer security: A research perspective[END_REF][START_REF] Pawlick | A game-theoretic taxonomy and survey of defensive deception for cybersecurity and privacy[END_REF].
Another approach to reduce the attacker's advantage is Moving Target Defense (MTD). The objective of MTD is to eliminate the attacker's time advantage due to the static nature of network infrastructures. This consists of changing the attack surface as well as the exploration surface to make any attack more difficult. This approach is sometimes considered as a deception strategy. However, while the objective of deception strategies is to provide false information to the attacker, the objective of MTD strategies is to prevent the use of previously obtained information, i.e., to make the information previously obtained by the attacker no longer valid. MTD strategies have been the subject of many research articles. Previous surveys have examined existing MTD strategies, each with its own criteria for analysis and classification [START_REF] Cho | Toward proactive, adaptive defense: A survey on moving target defense[END_REF][START_REF] Sengupta | A survey of moving target defenses for network security[END_REF][START_REF] Zheng | A survey on the moving target defense strategies: An architectural perspective[END_REF].
There are many deception and MTD techniques in the literature. However, not all of them are effective against the same types of attacks, and permanently maintaining a deception strategy has a cost. It is therefore necessary to be able to choose the most appropriate strategy according to the environment and the actions of the attacker.
To address this issue, models of the interactions between the attacker and the defender in a cyber environment have been proposed. A large part of these models is based on game theory (see Section 2). This makes it possible to obtain theoretical results such as the mixed-strategy or pure-strategy Nash equilibrium of the game. However, these models are generally too simple to capture the complexity of an attacker/defender confrontation, in particular the perception of the different players and the multiplicity of possible states of a computer system.
However, Reinforcement Learning (RL), and more precisely Deep Reinforcement Learning (DRL), has emerged to enable the training of agents in complex environments. This allows agents to obtain approximations of optimal solutions in models whose complexity puts a formal resolution out of reach. Moreover, model-free RL algorithms make it possible to obtain solutions without using the transition probabilities associated with a Markov Decision Process (MDP). DRL also makes it possible to find solutions in models where the number of states or the space of actions can be very large.
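As a minimal illustration of the kind of model-free update such an agent relies on, the sketch below shows tabular Q-learning with an epsilon-greedy policy. It assumes a generic environment interface (env.reset() returning a state, env.step(action) returning a next state, a reward, and a done flag), which is a placeholder rather than the environment defined later in the paper; the Deep Q-Learning used in Section 5 replaces the table with a neural network approximator, but the update principle is the same.

import random
from collections import defaultdict

def q_learning(env, n_actions, episodes=500, alpha=0.1, gamma=0.95, eps=0.1):
    # Tabular Q-learning sketch: values are learned from sampled transitions,
    # so the MDP's transition probabilities are never needed explicitly.
    Q = defaultdict(lambda: [0.0] * n_actions)  # Q[state][action]
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # epsilon-greedy action selection
            if random.random() < eps:
                action = random.randrange(n_actions)
            else:
                action = max(range(n_actions), key=lambda a: Q[state][a])
            next_state, reward, done = env.step(action)
            # Temporal-difference update toward the bootstrapped target
            target = reward + (0.0 if done else gamma * max(Q[next_state]))
            Q[state][action] += alpha * (target - Q[state][action])
            state = next_state
    return Q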
This article proposes solutions to the issues outlined above. We summarize the contributions of this work as follows:
• An attacker-defender model considering the asymmetry of the players' perception and the difference in kind between MTD and deception strategies.
• An attack scenario integrating Cyber Kill Chain (CKC) stages and based on the attacker's perception.
• The use of Deep Q-Learning in the model to optimize the choice of defense strategy against an attacker.
• Experiments to measure the performance of the defense strategy in the model.
The remainder of the paper is organized as follows. In Section 2, we discuss related works about modeling a confrontation between a defender and an attacker in a computer system, along with the strategy selection of each player. In Section 3, our attacker/defender model considering the perception of each player is presented. We describe how the confrontation game works. The attacker's scenario is then introduced in Section 4. In Section 5, we discuss the use of a DRL algorithm to train agents on our model to find an optimal defender strategy. In Section 6, we provide the data required for the experiments, the results obtained and an analysis of them.
RELATED WORKS
2.1 Game Modeling
To model deception, the perception of the different players must be considered. A common approach is to use game theory to model the interactions between players. There are several types of games to achieve this. We present the most common types of games and some of their uses in the context of protecting a computer system using deception.
A first type of game mentioned in the literature is the Stackelberg Game. This is a type of game in which one player is the leader, often the defender, and the other player is the follower, often the attacker. The leader chooses their actions first; these are observed by the second player, who plays next. Clark et al. [START_REF] Clark | Deceptive routing in relay networks[END_REF] use Stackelberg Games to study a jamming defense. They model the interactions between a defender and a jammer in a two-stage game. The defender will generate a false traffic flow. The defender will seek to maximize throughput while minimizing delay, and the attacker will choose the fraction of each flow to jam. The existence of a pure-strategy Stackelberg equilibrium is shown by the authors. Clark et al. [START_REF] Clark | A gametheoretic approach to IP address randomization in decoy-based cyber defense[END_REF] propose to use a Stackelberg Game to analyze the interactions between an attacker and a network with decoy nodes. The attacker seeks to identify real nodes by looking at the response times of the nodes and the protocols used. The defender chooses when to randomize the IP addresses of devices in the network. This game admits a unique threshold-based Stackelberg equilibrium. Feng et al. [START_REF] Feng | A stackelberg game and markov modeling of moving target defense[END_REF] model defender-attacker interactions by combining a Stackelberg game with an MDP. At the beginning of each round, the defender chooses an MTD strategy for several periods. The attacker chooses a state to attack. An algorithm is designed to choose the best strategy under the worst case. Sengupta and Kambhampati [START_REF] Sengupta | Multi-agent reinforcement learning in bayesian stackelberg markov games for adaptive moving target defense[END_REF] propose a Bayesian Stackelberg Game to model the interactions between an attacker who can be of different types and a system using MTD. A Q-Learning algorithm is used to find an optimal solution.
A second type of game often used to model the interaction between an attacker and a defender in a computer system is the Signaling Game. This is a class of two-player games of incomplete information. A first player performs an action and transmits information to the other player at a certain cost, which will be higher if the information is false. The second player does not know the nature of the information they received but chooses an action based on it. Pawlick and Zhu [START_REF] Pawlick | Deception by design: evidence-based signaling games for network defense[END_REF] use Signaling Games to analyze the effectiveness of honeypot deployments in a computer network. In particular, they show that the defender's utility can sometimes increase when the attacker is able to detect the deception. La et al. [START_REF] Duy | Deceptive attack and defense game in honeypot-enabled networks for the internet of things[END_REF] propose the use of a Signaling Game to model the interactions between a defender and an attacker in a honeypot-enabled network. This time, the attacker plays first by choosing among several types of attacks, while the defender plays second and can use honeypots to trap the attacker. Çeker et al. [START_REF] Çeker | Deception-based game theoretical approach to mitigate DoS attacks[END_REF] use Signaling Games to model the interactions between an attacker and a defender whose goal is to protect a server from Denial-of-Service (DoS) attacks while providing a service to legitimate users. The defender can disguise a normal system as a honeypot and vice versa. Rahman et al. [START_REF] Mohammad Ashiqur Rahman | A game-theoretic approach for deceiving remote operating system fingerprinting[END_REF] propose a mechanism to limit Remote Operating System Fingerprinting. They use a Signaling Game to model the interactions between the fingerprinter and its target.
Stochastic games are a type of multi-state game where the game is played as a sequence of states. In each state, each player chooses an action which has an impact on the new state of the system. A few articles use this type of game to study the impact of a defensive deception strategy on the perception of the attacker in different environments [START_REF] Ahmed H Anwar | Honeypot allocation over attack graphs in cyber deception games[END_REF][START_REF] Horák | Manipulating adversary's belief: A dynamic game approach to deception by design for proactive network security[END_REF]. Anwar et al. [START_REF] Ahmed H Anwar | Honeypot allocation over attack graphs in cyber deception games[END_REF] study the optimal placement of honeypots in an attack graph in order to slow down or prevent an attack. They are interested in the trade-off between security cost and deception reward for the defender. Horák et al. [START_REF] Horák | Manipulating adversary's belief: A dynamic game approach to deception by design for proactive network security[END_REF] propose the analysis of an active deception against an adversary that seeks to infiltrate a computer network to exfiltrate data or cause damage. They use a one-sided partially observable Stochastic Game to study the impact of deception on the attacker's perception. Here, it is assumed that the defender is able to detect the attacker's progress while the attacker lacks information about the system and is therefore vulnerable to deception.
Strategy Selection
Once the game is modeled, the choice of player actions must be optimized. In game theory, some tools are available for this purpose, such as the Mixed Strategy Nash Equilibrium (MSNE). This is an equilibrium in which no player can improve their expected utility by being the only one to choose another strategy, because each player chooses their optimal strategy assuming that the opponent does the same. These equilibria correspond to the optimal solutions when the decisions of the other player are taken into account. This requires a good knowledge of the opponent's strategies and their outcomes. Hu et al. [START_REF] Hu | SOCMTD: selecting optimal countermeasure for moving target defense using dynamic game[END_REF] study the selection of the best countermeasure to maximize the defense payoff. This approach uses game theory, and more precisely signaling game theory. The perfect Bayesian equilibrium is calculated and an algorithm is proposed for the selection of the optimal defense strategy. Experiments in a small environment are done to show the efficiency of the proposed approach.
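For a 2x2 bimatrix game, a fully mixed Nash equilibrium can be obtained directly from the indifference conditions, as in this toy attacker/defender example; the payoff matrices are invented for illustration.

```python
import numpy as np

def msne_2x2(A, B):
    """Fully mixed Nash equilibrium of a 2x2 bimatrix game via the
    indifference conditions (A: row player payoffs, B: column player payoffs).
    Assumes an interior equilibrium exists."""
    # Row player's probability of row 0 makes the column player indifferent.
    p = (B[1, 1] - B[1, 0]) / (B[0, 0] - B[1, 0] - B[0, 1] + B[1, 1])
    # Column player's probability of column 0 makes the row player indifferent.
    q = (A[1, 1] - A[0, 1]) / (A[0, 0] - A[0, 1] - A[1, 0] + A[1, 1])
    return np.array([p, 1 - p]), np.array([q, 1 - q])

# Toy defender/attacker game: protect/attack one of two assets.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])   # defender payoffs
B = -A                                      # attacker payoffs (zero-sum here)
print(msne_2x2(A, B))                       # both players mix 50/50
```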
Some papers propose to find the Nash equilibrium to optimize the choice of a MTD strategy [START_REF] Lei | Optimal strategy selection for moving target defense based on Markov game[END_REF][START_REF] Zhang | Strategy Selection for Moving Target Defense in Incomplete Information Game[END_REF]. Zhang et al. [START_REF] Zhang | Strategy Selection for Moving Target Defense in Incomplete Information Game[END_REF] study the selection of the MTD strategy of a defender in a web application environment. The confrontation between the attacker and the defender is modeled as an incomplete information game. The Nash-Q Learning algorithm is used to find the optimal strategy, and its performance in this environment is compared to other algorithms such as the Minimax-Q learning and Naive-Q learning algorithms. The experimental results show the efficiency of the Nash-Q learning algorithm compared to the others. L. Cheng et al. [START_REF] Lei | Optimal strategy selection for moving target defense based on Markov game[END_REF] use a Markov Game to propose a model considering the multi-phase and multi-stage nature of the confrontation.
Tan et al. [START_REF] Tan | Optimal strategy selection approach to moving target defense based on Markov robust game[END_REF] draw a parallel between MTD transformations and changes of the attack surface and the exploration surface. They propose a MTD model based on a Markov robust game. Robust Games can reduce the dependence on prior knowledge for the optimal strategy selection present in other models. As shown by L. Cheng et al. [START_REF] Lei | Optimal strategy selection for moving target defense based on Markov game[END_REF], the use of such a model makes it possible to take into account the multi-stage and multi-state nature of the confrontation. The existence of a robust equilibrium and an optimal solution for this model is proven under certain conditions. An algorithm is proposed to find them, and simulations are done to show the efficiency of this approach.
Some articles propose models that are not strictly based on game theory. As MTD strategies are often effective against one type of attack, H. Zhang et al. [START_REF] Zhang | Efficient strategy selection for moving target defense under multiple attacks[END_REF] study the impact of using MTD strategies with one or multiple mutant elements against several attacks. An algorithm based on the Genetic Algorithm is proposed to find the most efficient combination of MTD strategies. It is shown that even in an environment with limited resources it is still important to use multiple mutant elements.
However, as a model becomes more complex, finding equilibria becomes increasingly difficult and computationally intractable. Equilibria such as Nash equilibria do not consider the past behavior of players, which can often help predict their future behavior. Moreover, the fact that the optimal solution found is often a mixed strategy makes its practical use difficult. DRL makes it possible to obtain a pure strategy that approximates the best strategy while limiting the computational cost. It also makes it possible to find an approximation of the optimal solution in models that are not based on game theory [START_REF] Mnih | Playing atari with deep reinforcement learning[END_REF].
Chai et al. [START_REF] Chai | DQ-MOTAG: deep reinforcement learning-based moving target defense against DDoS attacks[END_REF] propose to use DRL to improve MOTAG [START_REF] Jia | Motag: Moving target defense against internet denial of service attacks[END_REF], a MTD mechanism to counter Distributed DoS (DDoS) attacks. MOTAG uses proxies to transmit traffic between legitimate users and protected servers. By using these proxies, it is able to isolate the external attacker from the innocent clients while shuffling the client/proxy assignments. DQ-MOTAG provides a self-adaptive shuffle period adjustment ability based on reinforcement learning. Experiments show the efficiency of the method used.
T. Eghtesad et al. [START_REF] Eghtesad | Adversarial deep reinforcement learning based adaptive moving target defense[END_REF] propose to find an optimal MTD strategy in the model proposed by Prakash and Wellman [START_REF] Prakash | Empirical game-theoretic analysis for moving target defense[END_REF]. For this purpose, a two-player general-sum game between the adversary and the defender is created. In this model, the attacker and the defender oppose each other for the control of servers, and the mechanism of server compromise is simplified. A compact representation of the memory of each player is proposed, which enables the agents to act better in the partially observable environment. This paper succeeds in solving this game using a multi-agent RL framework based on the double oracle algorithm, which makes it possible to find mixed-strategy Nash equilibria in games.
S. Wang et al. [START_REF] Wang | An intelligent deployment policy for deception resources based on reinforcement learning[END_REF] study the deployment policy for deception resources and especially the position of these resources. A model is proposed to represent the attacker/defender confrontation and the attacker's strategy is provided. A Threat Penetration Graph (TPG) is used to preselect the locations of deception resources. A Q-learning algorithm is provided to find the optimal deployment policy. A real-world network environment is used to demonstrate the effectiveness of the proposed method.
The problem with games such as Signaling Games is that they are defined with two players who each have two possible actions. These are games with incomplete information taking into account the asymmetry of the players' perceptions, but they do not support multiple sources of deception as there can be in a computer system. Moreover, this type of game is not adapted to modeling a multi-state and multi-stage situation. Other Bayesian games take into account multi-state and multi-stage characteristics but, to our knowledge, there is no game model in the literature that takes into account the different perceptions of the players and that captures both the effects of deception and the effects of MTD strategies on a computer system. Another difficulty is to find an optimal solution in a cyber environment model where the number of states can be very large. The use of Deep Reinforcement Learning helps to address this problem.
CONFRONTATION MODEL
In this section, we present our model of a computer system with several types of potential vulnerabilities. A single machine is modeled, but we could model a network of machines in a similar way; we come back to this in Section 6.4. Our modeling takes into account the perception of the players, and the actions allowed for each of them are real-world actions: the attacker can launch scans and attacks on different types of vulnerabilities while the defender can deploy deception or MTD strategies. Due to their different characteristics, these strategies have a different effect on the environment. MTD strategies affect the attacker's prior knowledge, while deception strategies provide false information to the attacker's scans. Deception strategies are deployed over several time steps but are not deployed permanently, because maintaining a deception strategy has a cost.
In Section 3.1, we present the main components of the model. We define the different states of the game in Section 3.2 and the observations of the two players in Section 3.3. The game process is presented in Section 3.4 and the reward system in Section 3.5. These sections are useful to understand how DRL is used in this context (see Section 5).
Environment
In this section, we define our attacker/defender model using the players' perceptions. For this, we consider a machine, an attacker and a defender. The attacker seeks to compromise the machine while the defender seeks to defend it by using MTD and deception strategies. The machine is modeled by a vector of vulnerabilities V and by a compromise score C. For each available vulnerability i, V[i] = 1 if it is present on the machine and V[i] = 0 otherwise.
The compromise index C is between 0 and 1. If the machine is not compromised then C is 0. Its score can then evolve up to 1, meaning that the machine is fully compromised.
Attacker.
A first player of the model is the attacker whose goal is to compromise the machine. For this, they have several actions available: a GlobalScan, Scans or Attacks. For each vulnerability in V , present or not on the machine, a scan and an attack are available. To each of these actions is associated a DI score indicating the information damage, a DC score indicating the compromise damage and a Cost indicating the cost of this action. The compromise damage of Scans is zero while the information damage is higher for Scan actions than for Attack actions. The greater the information damage for an action against a certain vulnerability, the faster it will be for the attacker to acquire the information needed to exploit it with an attack. In contrast, if a vulnerability is not present on the machine, a large DI score for an action will make it faster to remove that vulnerability from potential vulnerabilities. In addition to these available actions, there is a Global Scan that scans all vulnerabilities at once. In return, the information obtained on each vulnerability by a GlobalScan is lower than for a singular Scan.
Other variables are associated with the attacker and allow measuring the attacker's progress in compromising the machine. We define two vectors pV and perceivedpV containing, for each vulnerability v in V, respectively a score measuring the amount of real information the attacker has about this vulnerability and a score measuring the amount of information the attacker thinks they have about it. If perceivedpV[v] = 0.5, the attacker has no information about the vulnerability v. The closer perceivedpV[v] is to 1, the more information the attacker has about the vulnerability v and the more they think the machine has it. The closer perceivedpV[v] is to 0, the more information the attacker has about the vulnerability v and the more they think the machine does not have it. perceivedpV and pV are not necessarily equal: if the attacker is deceived, these two vectors may differ. Since pV represents a real amount of information, if the machine is vulnerable to a vulnerability v then pV[v] will be between 0.5 and 1, and if it is not then pV[v] will be between 0 and 0.5. The variable C_a measures the state of compromise of the machine as perceived by the attacker. An assumption of our model is that C = C_a. This means that the deception will not concern the status of compromise but will focus on the information needed for attacks. Another variable is the phase of the CKC in which the attacker is located. It can take three values (0, 1 or 2) because we consider only 3 phases of the CKC to keep the model simple: Reconnaissance, Intrusion and Privilege Escalation/Exploitation. This parameter is updated at each step according to the values of pV, I_a and C_a.
It is considered that some attacks will only be available at certain phases of the CKC.
Finally, two other variables will measure the attacker's overall knowledge of the machine. The first one is System Info Perception I a (See Equation 1) which measures the overall amount of information perceived by the attacker about the observable vulnerabilities of the machine. This means that in CKC 0 and 1, we do not consider vulnerabilities that can only be used in CKC 2. The second is Useful System Info Perception Iu a (See Equation 2) which measures the amount of useful information for attacks perceived by the attacker on the observable vulnerabilities of the machine.
I_a = (2/K) · Σ_{v ∈ V} max(0, 0.5 − |V[v] − pV[v]|)   (1)
Iu_a = (2/K) · Σ_{v ∈ V} max(0, perceivedpV[v] − 0.5)   (2)
where K is the number of available attack strategies considered in the model.
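For concreteness, Equations (1) and (2) can be computed as in the sketch below; here K is taken as the number of vulnerabilities in V (one per available attack strategy) and the toy values are ours.

```python
import numpy as np

def info_perception(V, pV, perceived_pV):
    """Equations (1) and (2): overall and useful information perceived
    by the attacker over the K observable vulnerabilities."""
    K = len(V)
    I_a = 2.0 / K * np.sum(np.maximum(0.0, 0.5 - np.abs(V - pV)))
    Iu_a = 2.0 / K * np.sum(np.maximum(0.0, perceived_pV - 0.5))
    return I_a, Iu_a

# Toy example: two vulnerabilities, one present (V = 1), one absent (V = 0).
V = np.array([1.0, 0.0])
pV = np.array([0.8, 0.4])            # real information held by the attacker
perceived_pV = np.array([0.9, 0.2])  # information the attacker thinks they have
print(info_perception(V, pV, perceived_pV))
```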
Defender.
The second player is the defender. Their goal is to prevent the machine from being compromised while minimizing the cost of achieving this goal. Among their possible actions, there are MTD strategies and deception strategies. The defender can also choose not to use any action. MTD strategies influence the knowledge already acquired by the attacker: if the attacker has collected information about the system and the defender successfully uses a MTD strategy, then some of the information acquired by the attacker becomes incorrect.
On the other hand, deception strategies do not modify the validity of the information already obtained by the attacker. They provide the attacker with erroneous information. If the attacker acquires information about the system in the steps following the use of a deception strategy, this information may be erroneous and thus deceive the attacker.
Each of the possible strategies is associated with a cost that considers the difficulty to implement it and the software or hardware resources needed to do so.
Model parameters.
A table Eff defines for each action of the defender its effectiveness against an attacker's strategy. This value includes both the ability to prevent an attack and the ability to prevent information about the system from being obtained. We will come back to this in Section 6.1.
In our model, the defender does not necessarily detect all attacks. They detect them with probability P_Detection, which is a parameter to be specified in the model. To simplify, it is the same for all attacks. In addition, when an attacker takes an action, the defender can detect which potential vulnerability has been targeted but is not able to distinguish a scan from an attack. This has an effect on the defender's observations. Deception strategies do not necessarily achieve their objectives. There is a probability of deception success (see Equation 3) depending on the attacker's information about the machine and on the phase of the CKC:
P_Deception = exp(−λ_Deception · (CKC + 1) · (I_a + 0.3))   (3)
Moreover, attacks are not necessarily successful. The probability of success of an attack targeting a vulnerability on the machine against a defense strategy (see Equation 4) depends on the attacker's knowledge of the targeted vulnerability and on the effectiveness of the defensive strategy against the attack.
P_Attack(A, D) = V[A] · exp(−λ_Attack · (Eff(A, D) + 0.3) / max(0.01, (pV[A] − 0.5) · 2))   (4)
where A represents an attacker's action and D represents a defender's action. If the vulnerability is absent from the machine then the probability of success of the attack is zero. λ_Deception and λ_Attack are parameters that calibrate the above probabilities to make them more consistent with the real environment we want to model. Similarly, the value of certain offsets in these probabilities could also be adjusted to better reflect the values of a real situation.
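The two probabilities can be evaluated directly from Equations (3) and (4); the numerical values below, including the λ parameters, are illustrative calibration choices rather than measured data.

```python
import numpy as np

def p_deception(lambda_deception, ckc_phase, I_a):
    """Equation (3): probability that a deception strategy succeeds."""
    return np.exp(-lambda_deception * (ckc_phase + 1) * (I_a + 0.3))

def p_attack(lambda_attack, eff, pV_target, vulnerable):
    """Equation (4): probability that an attack on the targeted vulnerability
    succeeds against the currently deployed defense."""
    if not vulnerable:          # V[A] = 0 -> the attack cannot succeed
        return 0.0
    knowledge = max(0.01, (pV_target - 0.5) * 2.0)
    return np.exp(-lambda_attack * (eff + 0.3) / knowledge)

print(p_deception(0.5, ckc_phase=1, I_a=0.4))
print(p_attack(0.5, eff=0.6, pV_target=0.9, vulnerable=True))
```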
States of the game
Our model uses a discrete time scale. The game is a succession of stages. At each stage, the game is characterized by its state. At a given time step t, the state of the game is:
s_t = ⟨V, C, pV, CKC⟩   (5)
Each of these parameters has been defined previously.
Observations
For each of the two players, the attacker and the defender, we define their observations. These are the pieces of information held by each player that can be used at each time step of the game to choose a strategy. The state of the game for the attacker is defined by the tuple O_a:
O_a = ⟨perceivedpV, C_a⟩   (6)
This provides to the attacker both information about the vulnerabilities of the machine and about the state of compromise of the machine.
On the other hand, the defender has information about the different vulnerabilities that have been targeted by the attacker in the past, if the attacks have been detected. They also have information about the defense strategies used in the past. The state of the machine for the defender is defined by the tuple O_d:
O_d = ⟨last_attacks, last_defenses, times_since_last_attack, times_since_last_defense, attack_counts⟩   (7)
where
• last_attacks is a vector that contains, for the last k time steps, the vulnerabilities targeted by the attacker (if detected) in One-Hot encoding, i.e. each element of the vector is 1 if the corresponding attack has been detected and 0 otherwise.
• last_defenses is a vector that contains, for the last k time steps, the strategies used by the defender in One-Hot encoding.
• times_since_last_attack is a vector containing, for each vulnerability, the number of time steps elapsed since the last action (Scan or Attack) detected from the attacker on it.
• times_since_last_defense is a vector containing, for each defense strategy, the number of time steps elapsed since its last use.
• attack_counts is a vector containing, for each vulnerability, the total number of detected actions of the attacker on it.
k is a parameter of the model to be defined. It represents the number of time steps considered in the last_attacks and last_defenses vectors of the observations.
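A minimal sketch of how this observation could be maintained in code is given below; the class and attribute names are ours, K is the number of potential vulnerabilities and D the number of defense strategies.

```python
import numpy as np
from collections import deque

class DefenderObservation:
    """Sketch of the defender observation O_d described above."""

    def __init__(self, K, D, k):
        self.K, self.D, self.k = K, D, k
        self.last_attacks = deque([np.zeros(K) for _ in range(k)], maxlen=k)
        self.last_defenses = deque([np.zeros(D) for _ in range(k)], maxlen=k)
        self.time_since_attack = np.zeros(K)
        self.time_since_defense = np.zeros(D)
        self.attack_counts = np.zeros(K)

    def update(self, detected_vuln, defense_action):
        """detected_vuln: index of the targeted vulnerability if the attacker's
        action was detected, else None; defense_action: index of the defense used."""
        self.time_since_attack += 1
        self.time_since_defense += 1
        attack_onehot = np.zeros(self.K)
        if detected_vuln is not None:
            attack_onehot[detected_vuln] = 1
            self.time_since_attack[detected_vuln] = 0
            self.attack_counts[detected_vuln] += 1
        defense_onehot = np.zeros(self.D)
        defense_onehot[defense_action] = 1
        self.time_since_defense[defense_action] = 0
        self.last_attacks.append(attack_onehot)
        self.last_defenses.append(defense_onehot)

    def vector(self):
        # Flat vector fed to the learning agent.
        return np.concatenate([*self.last_attacks, *self.last_defenses,
                               self.time_since_attack, self.time_since_defense,
                               self.attack_counts])
```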
Course of a game step
In this section, we explain how the game proceeds. For that, we introduce the Information Gathering function (see Algorithm 1). It corresponds to the unfolding of an information gathering strategy of the attacker and its impact on the model. It modifies the information the attacker holds on the vulnerabilities, taking into account the decoy strategies set up by the defender. This function is called at different moments of the game, for example when the attacker performs a scan, a global scan or an attack that fails. Indeed, it is considered that even when an attack fails, the attacker still obtains information about the targeted vulnerability.
This Information Gathering function takes as parameters the vulnerability targeted by the attacker and µ, a parameter quantifying the potential impact of the information gathering on the vulnerability information. Indeed, the larger µ is, the more pV and perceivedpV will be modified by the information gathering. In the algorithms, V[A_A] is short for V[v], where v is the vulnerability targeted by the attack or scan A_A. The same holds for pV[A_A] and perceivedpV[A_A].
Algorithm 2 provides the pseudocode of the unfolding of a game time step given the actions chosen by the defender and by the attacker. To summarize its operation, the first part checks whether the defense strategy is a MTD strategy; if this is the case, pV is updated accordingly. Then the nature of the attacker's action is checked. If it is an individual scan, the Information Gathering function is used with a rather high µ, i.e. a strong potential impact of the information gathering. If it is a Global Scan, we do the same for all the available vulnerabilities but with a lower µ. Otherwise, we check whether it is an attack. If it is and the attack is successful, we update V, pV, perceivedpV and C. If it is an attack that fails, we use the Information Gathering function with a low µ.
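Algorithm 1 itself is not reproduced in this text, so the following is only a hedged sketch of what an information gathering step could look like under our reading of the description above: pV[v] moves toward the true status of the vulnerability with strength µ, while perceivedpV[v] follows it unless an active deception succeeds, in which case the perceived information is pushed toward the wrong value. The function name and the exact update rule are ours.

```python
import random

def information_gathering(v, mu, V, pV, perceived_pV, deception_active, p_deception):
    """Sketch of one information-gathering step on vulnerability v."""
    truth = 1.0 if V[v] == 1 else 0.0
    if deception_active and random.random() < p_deception:
        # Deception succeeds: the attacker collects misleading information,
        # so the perceived information drifts toward the opposite belief
        # while the real information does not improve.
        perceived_pV[v] += mu * ((1.0 - truth) - perceived_pV[v])
    else:
        pV[v] += mu * (truth - pV[v])
        perceived_pV[v] += mu * (truth - perceived_pV[v])
```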
This process will repeat itself to form an episode. An episode is a sequence of states, actions and rewards that ends in a final state. In our model, an episode ends if the machine is compromised or if the number of time steps exceeds 100 because we consider that the attacker will be dissuaded or discouraged from attacking and will stop the attack.
Reward
To optimize the defender's strategy selection at each step, we need to define a utility function U_D that specifies the defender's objective. Their objective is to delay the compromise of the machine as long as possible and thus to minimize both the information held by the attacker on it and its level of compromise. U_D is defined as follows:
U_D = w · (1 − I_a) + (1 − w) · (1 − C)   (8)
The larger w3 is, the more compromise damage will be promoted in the utility and therefore the more attacks will be promoted. The larger w1 is, the more the attacker will try to fill their lack of information about the global system; in this case, it is rather the Global Scan that will be promoted. The cost of each attacker strategy also has its influence. The reward given to the attacker at time t is given by:
r^A_t = U_A − Cost[A_A]   (10)
where U_A is the attacker utility at time t and Cost[A_A] is the cost of the attacker action used at time t. The attacker chooses at each step the action that maximizes the reward they can get.
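As a compact illustration of the reward system, the sketch below evaluates Equations (8) to (10) for one time step; the numerical values are purely illustrative and the function names are ours.

```python
def defender_utility(w, I_a, C):
    # Equation (8): trade-off between hiding information (1 - I_a)
    # and avoiding compromise (1 - C).
    return w * (1.0 - I_a) + (1.0 - w) * (1.0 - C)

def defender_reward(U_D, cost_defense):
    # Equation (9): utility minus the cost of the chosen defense action.
    return U_D - cost_defense

def attacker_reward(U_A, cost_attack):
    # Equation (10): attacker utility minus the cost of the chosen action.
    return U_A - cost_attack

# Illustrative step with w = 0.3 (Table 3), an IP Random defense (cost 0.1)
# and a scan on a vulnerability (cost 0.2).
U_D = defender_utility(w=0.3, I_a=0.4, C=0.1)
print(defender_reward(U_D, cost_defense=0.1))
print(attacker_reward(U_A=0.35, cost_attack=0.2))
```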
STRATEGY SELECTION OPTIMIZATION
In this section, we present the approach used to find an optimal policy. A policy is a function that associates with each state of the game the action that the agent will take in this state. Because of the complexity of the game, the large number of possible states and the stochastic nature of the rewards and transitions between states, it is difficult to find an optimal policy analytically. That is why we decided to use the Deep Q-Learning algorithm [START_REF] Mnih | Playing atari with deep reinforcement learning[END_REF].
Firstly, we define the discounted return at time t for the defender:
G_t = r^D_{t+1} + γ·r^D_{t+2} + · · · + γ^{T−t−1}·r^D_T   (11)
where episodes are considered finite, T is the length of the episode and γ is a discount factor that balances the weights of future and current rewards.
A policy is a function that takes as input a state s and returns the probability of using each action in this state. We also define the Q-function Q^π(s, a), which returns the value of taking action a in state s under policy π:
Q^π(s, a) = E_π[G_t | s_t = s, a_t = a]   (12)
         = E_π[r^D_{t+1} + γ·G_{t+1} | s_t = s, a_t = a]   (13)
The optimal state-action value function is:
Q*(s, a) = max_π Q^π(s, a)   (14)
         = E[r^D_{t+1} + γ·max_{a′} Q*(s′, a′) | s, a, s_{t+1} = s′]   (15)
This equation is called Bellman Optimality Equation [START_REF] Mnih | Playing atari with deep reinforcement learning[END_REF].
Table 1: Prerequisites and weights of the utility function for each CKC phase.
CKC Phase | Prerequisites | Weights
Reconnaissance | Ø | w1 = 0.2, w2 = 0.6, w3 = 0.2
Intrusion | Iu_a > 0.3 OR any perceivedpV[i] > 0.8 | w1 = 0.2, w2 = 0.4, w3 = 0.4
Exploitation / Privilege Escalation | C > 0.3 | w1 = 0.4, w2 = 0.1, w3 = 0.5

Table 2: Attacker utility according to the type of attacks.
Strategy | Attacker Utility U_A(A_A)
Scan | w1 · (0.5 − |perceivedpV[A_A] − 0.5|) + (w2 · DI[A_A] + w3 · DC[A_A]) · perceivedpV[A_A]
Attack | (w2 · DI[A_A] + w3 · DC[A_A]) · perceivedpV[A_A]
Global Scan | Σ_{A_i is Scan} U(A_i) / 2
In our context, a state s corresponds to the defender's perception of the state of the game, i.e. its observations O d .
Temporal Difference Learning offers an iterative process that updates the Q-values for each state-action pair based on the Bellman Optimality Equation. This process converges to the optimal Q-function.
Q(s, a) = (1 − α) · Q(s, a) + α · (r^D_{t+1} + γ·max_{a′} Q(s_{t+1}, a′))   (16)
where α is a learning rate.
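For a small state and action space, update (16) can be applied directly on a Q-table, as in this short sketch (the dimensions and values are illustrative):

```python
import numpy as np

n_states, n_actions = 4, 3
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.99

def td_update(s, a, reward, s_next):
    # Update (16): move Q(s, a) toward the bootstrapped target.
    target = reward + gamma * np.max(Q[s_next])
    Q[s, a] = (1 - alpha) * Q[s, a] + alpha * target

td_update(s=0, a=1, reward=0.5, s_next=2)
```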
In classic Reinforcement Learning, a table is used to store the mapping between each state-action pair and its corresponding Q-value. In Deep Reinforcement Learning, a neural network is used instead: the input of the network is the state, and the output is the estimated Q-value of each action in this state.
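A possible PyTorch sketch of such a Q-network, mirroring the two 256-unit Tanh layers of Table 3, is shown below; the observation size of 64 and the number of actions are only illustrative.

```python
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    def __init__(self, obs_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.Tanh(),
            nn.Linear(256, 256), nn.Tanh(),
            nn.Linear(256, n_actions),
        )

    def forward(self, obs):
        # One Q-value per defense action for the given observation.
        return self.net(obs)

q_net = QNetwork(obs_dim=64, n_actions=9)
q_values = q_net(torch.zeros(1, 64))
best_action = int(q_values.argmax(dim=1))
```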
We use the Deep Q-Learning algorithm (DQN) of Mnih et al. [START_REF] Mnih | Playing atari with deep reinforcement learning[END_REF] with a replay buffer and a target network to train our model. This algorithm uses many hyperparameters that must be defined before learning. They are listed in Table 3.
EXPERIMENTS
Data for Experiments
In Sections 3 and 4, we describe our model and a basic scenario for the attacker. Several parameters are required for the experiments. First, concerning the machine, we need to define the vulnerabilities considered in the model. These are not Common Vulnerabilities and Exposures (CVEs); they are rather families of vulnerabilities. Then, for each of them, we have to define the information damage and the compromise damage, as well as their cost, in the case of a scan or an attack. These values have been defined arbitrarily based on information from Mitre CAPEC [START_REF]Common Attack Pattern Enumeration and Classification (CAPEC)[END_REF] and Mitre CVE [START_REF]Common Vulnerabilities and Exposures (CVE)[END_REF]. Table 4 shows the data used in the experiments concerning the vulnerabilities and the attack strategies. We have selected a set of known vulnerability families that can be separated into two categories: those accessible from outside the machine and those requiring local access on the machine. For simplicity, we consider that the costs of the scans are the same. The same goes for the cost of the attacks.
Implementation
The experiments were done on an Intel Core i7-9750H CPU with an NVIDIA GeForce RTX 2070 graphics card.
Our model has been implemented in an OpenAI Gym environment [START_REF] Brockman | Openai gym[END_REF]. Stable Baselines3 is a Python library providing implementations of Reinforcement Learning algorithms [START_REF] Raffin | Stable-Baselines3: Reliable Reinforcement Learning Implementations[END_REF]. We use it to train our agents on the model.
We sought to optimize the hyperparameters used to train our agents. For this, we trained the model with many different parameters. The model parameters and the hyperparameters finally used for the tests are located in the Table 3.
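As an indication, a training run with Stable Baselines3 and the hyperparameters of Table 3 could look like the sketch below. The environment id is hypothetical (our environment would have to be registered locally), and depending on the Stable Baselines3 version, gymnasium may have to be used instead of gym.

```python
import torch as th
import gym
from stable_baselines3 import DQN

# "CyberDefense-v0" is a hypothetical id for the custom environment of Section 3:
# observation = defender observation vector, action = index of the defense strategy.
env = gym.make("CyberDefense-v0")

model = DQN(
    "MlpPolicy",
    env,
    learning_rate=5e-4,
    buffer_size=1_000_000,
    batch_size=32,
    gamma=0.99,
    exploration_fraction=0.1,
    target_update_interval=5000,
    train_freq=(1, "episode"),
    policy_kwargs=dict(net_arch=[256, 256], activation_fn=th.nn.Tanh),
    verbose=1,
)
model.learn(total_timesteps=500_000)   # training budget is illustrative
model.save("dqn_defender")
```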
Results
We trained agents in different environments with different parameters and measured the influence of the parameters on the performance of our trained agents. Figure 1 shows the training curves of different defenders for different values of λ_attack. We can thus compare the influence of the probability of attack success on the performance of the agents. Our model includes significant uncertainty, so the reward has a large variance. In Figure 2 we compare the performance of our DQN-trained agent with the performance of agents using only one defensive strategy or randomly choosing the defensive strategy. We evaluate each strategy over 1000 episodes. We compare the average episode reward for each strategy, but also the average episode length.
The agent trained with the DQN performs better than the other "naive" strategies in terms of both episode reward and episode length. This means that our agent is able to use its observations to deduce a defense strategy to use. Moreover, we can observe the influence of strategy costs in the reward because for some strategies the episode length is higher than for others but the corresponding reward is lower. For example, IP Random Strategy has a higher reward than Honey Service but a shorter episode length.
We compared the performance obtained by an agent trained with the DQN in different environments. We varied the probability of detection of attacks P_Detection, λ_attack and λ_deception. The lower the probability of detection of attacks, the lower the rewards for the defender. This makes sense because if the defender does not observe the attacks, they cannot choose an appropriate strategy. λ_attack, which inversely influences the probability of success of attacks, has a strong impact on the performance of our agent. λ_deception inversely impacts the probability of deception success: the larger the value of λ_deception, the lower the probability of deception success.
The problem with the DQN algorithm is that the resulting policy is not necessarily optimal; it may be a local optimum. This explains some of the results in Figure 3: for example, for P_Detection = 0.8 and λ_attack = 0.7, the performance of the agent for λ_deception = 0.5 is lower than for λ_deception = 0.7, which is anomalous according to the explanations in the previous paragraph.
Discussion
The proposed model is multi-state and multi-stage. It makes it possible to consider a large number of possible states for the system. Moreover, the representation of the vulnerabilities in the V vector makes it possible to consider different types of machines, such as servers or workstations, which have different types of vulnerabilities. Furthermore, thanks to our representation of the defender's observations, we are able to use the attacker's past behavior to better predict their future actions and thus choose an optimal strategy. Moreover, our modeling takes into account the difference in nature between MTD strategies and deception strategies, so they act differently on the system. Deception and MTD strategies should be seen as complementary. Experiments show that to counteract a variety of potential attacks, it is necessary to have access to a variety of defensive strategies. The experiments also show the importance of good coordination of these strategies and the need to limit the deployment of certain strategies to critical moments, because their cost to the system can be very high.
Regardless of the parameters used, we manage to train an agent that performs better than the unitary strategies against our attacker following our scenario detailed in Section 4. The advantages of this scenario are that it is based on the perception of the attacker and it separates remote attacks and local attacks such as privilege escalation.
The realism of the model could be improved by using data from experiments. To do this, we could implement several defensive strategies on a system with known vulnerabilities in order to measure the ability of these strategies to prevent or slow down their exploitation by an attacker. In particular, as the model is sequential, it would be interesting to integrate the time of each action for both the attacker and the defender in the calculation of their cost. Moreover, the effectiveness of defensive strategies against attacks should not be static but depend on the attacker's progress in their attacks. For example, the IP Random defense strategy will not be equally effective against an attacker located outside the network and against an attacker who already has access to the internal network.
In this paper, the model seeks to represent the confrontation of a defender and an attacker around a single machine. However, to better represent reality, it would be necessary to extend this model to a network of machines. This would multiply the number of possible actions for each player, the number of states of the model and the number of entry points for the attacker. It would then be necessary to consider the conflicts that can exist in the deployment of defensive strategies on several machines. This would allow the simulation of more realistic attack scenarios. It would then be interesting to know if the DQN algorithm is still efficient in a model of this type. This is ongoing research work that will be the subject of a future article.
CONCLUSION
Deception and MTD strategies are two types of strategies that consist in bringing false information or uncertainty into the opponent's perception in order to increase the cost of attacks. In this paper, we proposed a model of an attacker/defender confrontation in a computer system considering the asymmetry of perceptions and the impact of the two players' strategies on them. We then proposed an attacker scenario based on the CKC and using their perception. Thanks to this, we performed simulations and trained a defensive agent with the DQN algorithm. It is able to choose the most appropriate defensive strategies to prevent the compromise of the machine by using the observed past actions. This simulation framework could be used to optimize the use of MTD and deception strategies in a real context by using data from experiments for the model parameters.
Future work would be to develop an emulation environment allowing the deployment of different MTD and deception strategies. This would provide data such as the cost or effectiveness of the latter. It could also allow for the training of a defending DQN agent directly on the emulation environment. Another part of the work would be to improve the attacker scenarios. To do this, we could try to train an attacking DQN agent based on its observations.
Figure 1: Learning curve of different agents in different environments with different λ_attack and λ_Deception = 0.5, P_Detection = 1
Figure 3 contains the comparison of the rewards obtained with a DQN agent for different values of these parameters. It contains one heatmap per P_Detection value. Each heatmap shows the rewards obtained for 3 different values of λ_attack and λ_deception.
Table 6: Efficiency of the defensive strategies against the attacker actions
Figure 2: Comparison of the performance of different defense strategies in different environments with λ_attack = 0.5 and λ_Deception = 0.5, P_Detection = 1
Figure 3: Comparison of the reward of the defender using a DQN agent in environments with different values of detection probability P_Detection, λ_attack and λ_deception
Table 1: Prerequisites and weights of the utility function for each CKC phase.
Table 3: List of model parameters and DQN hyperparameters
Parameter Value
Weight in Defender Utility 0.3
k for Defender Observations 3
Exploration Fraction 0.1
Target Update Interval 5000
Neural Network Layers [256x256]
Replay Buffer Size 1000000
Activation Function Tanh
Train Frequence One episode
Batch Size 32
Learning Rate 0.0005
Discounted Factor 0.99
Table 4: List of considered vulnerabilities

Vulnerability | Type | CKC phase required | Cost | DI | DC
Identity Spoofing | Scan | 0 | 0.2 | 0.3 | 0
Identity Spoofing | Attack | 1 | 0.3 | 0.15 | 0.4
Traffic Injection | Scan | 0 | 0.2 | 0.2 | 0
Traffic Injection | Attack | 1 | 0.3 | 0.1 | 0.4
Brute Force | Scan | 0 | 0.2 | 0.2 | 0
Brute Force | Attack | 1 | 0.3 | 0.1 | 0.4
Command Injection | Scan | 0 | 0.2 | 0.4 | 0
Command Injection | Attack | 1 | 0.3 | 0.2 | 0.4
Code Injection | Scan | 0 | 0.2 | 0.3 | 0
Code Injection | Attack | 1 | 0.3 | 0.15 | 0.4
Privilege Abuse | Scan | 2 | 0.2 | 0.2 | 0
Privilege Abuse | Attack | 2 | 0.3 | 0.1 | 0.6
Authentication Bypass | Scan | 2 | 0.2 | 0.1 | 0
Authentication Bypass | Attack | 2 | 0.3 | 0.05 | 0.6
Privilege Escalation | Scan | 2 | 0.2 | 0.2 | 0
Privilege Escalation | Attack | 2 | 0.3 | 0.1 | 0.6

Concerning the defender, we consider eight possible defense strategies: four MTD and four deception strategies. They can be found in Table 5 with the cost of each strategy.
Strategy Number | Strategy | Type | Cost
0 | IP Random | MTD | 0.1
1 | Port Random | MTD | 0.2
2 | Rekeying Keys Random | MTD | 0.1
3 | Language Random | MTD | 0.3
4 | Honey Service | Deception | 0.4
5 | Honey Credentials/Accounts | Deception | 0.3
6 | Honey Process | Deception | 0.2
7 | Honey Files/Logs | Deception | 0.2
Table 5: Considered defense strategies
Another essential element in the experiments is the table of efficiencies of the defensive strategies against the actions of the attacker, mentioned in Section 3.1.3 and shown in Table 6. These are chosen arbitrarily based on our knowledge of the different strategies, but one way to improve the model could be to use data from real experiments.
w is a numeric parameter that gives more weight to the information or to the compromise in the defender's utility function.
The cost of each strategy also has its influence. The reward given to the defender at time t is given by:
r^D_t = U_D − Cost[D_A]   (9)
where U_D is the Defender Utility at time t and Cost[D_A] is the cost of the defender action used at time t. The defender seeks to maximize the total reward over an entire episode.
ATTACKER SCENARIO
Now that we have presented our Attacker/Defender model, we present our attacker scenario. Indeed, we will train our defender to react to the attacker's actions in order to delay the machine's compromise as long as possible, so we need an attack scenario for the attacker. This scenario is based on a simplified version of the Cyber Kill Chain. Only three steps of the CKC are considered, the same as those used for the model (see Section 3.1.1): Reconnaissance, Intrusion and Exploitation or Privilege Escalation.
To compromise the machine, the attacker will have to go through each of these phases. The actions available to the attacker depend on the phase in which they are. For example, if the attacker is in the third phase of the CKC, it means that they have infiltrated the machine and therefore there may be new potential vulnerabilities reachable only from inside the machine. Moving from one phase of the CKC to another requires certain prerequisites, which for this example scenario can be found in Table 1.
Algorithm 2: Model - Run one step.
Attacker Strategy Selection
At each step, the attacker has to choose an action among those available. For this purpose, we use a very common function in game theory referred to as a utility function. This function associates with each available action a utility representing the satisfaction of the player choosing this action. The attacker then chooses the action that maximizes this function.
In our case, the satisfaction depends on the objective of the attacker and thus on the phase of the CKC in which they are. For example, in the reconnaissance phase, it is the collection of information and therefore the scans that are promoted, while in the intrusion phase it is the attacks that are promoted. To this end, some parameters of the utility function depend on the phase of the CKC in which the attacker is (see Table 1).
The utility function is not the same depending on the type of action of the attacker (see Table 2). Each parameter influences the utility function in a different way. Indeed, the larger w2 is, the more information damage will be promoted in the utility and therefore the more scans will be promoted.
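To make the greedy selection concrete, here is a small sketch of the utilities of Table 2 and of the resulting action choice; the function names are ours and, as described above, the cost of each action is subtracted before the attacker picks the maximum.

```python
def scan_utility(a, w1, w2, w3, DI, DC, perceived_pV):
    # Table 2, Scan row.
    return (w1 * (0.5 - abs(perceived_pV[a] - 0.5))
            + (w2 * DI[a] + w3 * DC[a]) * perceived_pV[a])

def attack_utility(a, w2, w3, DI, DC, perceived_pV):
    # Table 2, Attack row.
    return (w2 * DI[a] + w3 * DC[a]) * perceived_pV[a]

def global_scan_utility(scan_utilities):
    # Table 2, Global Scan row: half the sum of the individual scan utilities.
    return sum(scan_utilities) / 2.0

def select_action(candidates):
    """candidates: list of (action_name, utility - cost); the attacker is greedy."""
    return max(candidates, key=lambda c: c[1])[0]
```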
00019983 | en | [
"spi.meca.vibr"
] | 2024/03/04 16:41:22 | 2005 | https://hal.science/hal-00019983/file/44.pdf | Raphaël De Matos
email: [email protected]
Emmanuel Foltête
Noureddine Bouhaddi
Catherine Guyard
Total Catalytic Converter Modelling for Durability Improvement
Keywords: Exhaust gas purification, dynamic modeling, non-linear damping, mechanical ageing, reliability
From simply evacuating the exhaust flow to cleaning the gases, exhaust systems of automotive vehicles have become complex modules playing a decisive role in the control of pollutant emissions and in acoustic comfort. In terms of depollution, the key elements are the catalysts and other gas purification systems such as particle filters, NOx or SOx traps, etc. The catalyst is a subset in which chemical reactions are initiated with the aim of maintaining pollutant emissions below the rates fixed by standards. Evolving emission control standards address the service life of catalytic converters, and mechanical durability should not restrict the lifetime of catalysts. This aspect has led us to develop a prediction tool, with the aim of preselecting design solutions and reducing testing time. To reach such requirements, a total catalytic converter model is needed. Many efforts have been made on both numerical modelling and experimental identification. Advanced measurement techniques such as 1D pulsed ESPI have been used. A non-linear damping identification approach based on the wavelet transform of time responses has been developed and validated.
Introduction
The need to strongly reduce the air pollution generated by vehicles has led to the elaboration of antipollution regulations. These gave rise to well-known exhaust system components: the exhaust catalysts.
They generally consist of the following sub-components (FIG. 1):
-a ceramic substrate, location of chemical reactions.
-a metallic box.
-a holding mat which maintains the ceramic substrate within the steel shell. Such a device is subjected to different kinds of mechanical loads: on the one hand, all the vibrations generated by the engine or due to the road, as well as shocks, are transmitted to the steel shell. On the other hand, high thermal variations may reduce the service life of catalysts. As a consequence, designers must take all these factors into account to minimise their influence on durability.
Mat is a fibrous material which plays several significant roles:
-First of all, it maintains the substrate by compensating for geometrical defects and protects it from possible blows and impacts.
-Secondly, it offers thermal insulation of the metallic shell.
-Lastly, it guarantees that the gas flow crosses the substrate almost in its entirety (at a rate higher than 99%).
For all these reasons, mats constitute essential elements of catalysts. Their ageing directly determines the service life of the catalytic function. The current state of knowledge shows that failures observed on exhaust catalysts are mainly due to the fact that the mat no longer ensures its holding function.
The experimental characterization as well as the modelling of mats are essential to improve the durability of the catalytic function. This paper does not deal with ceramic substrates, as measurements show that they behave as rigid bodies.
Shell characterization using speckle interferometry
Electronic Speckle Pattern Interferometry (ESPI) is a recent optical technique which allows the displacement field of a harmonically vibrating structure to be measured. Based on the holographic interferometry principle, it provides a contactless, full-field and instantaneous measurement, along one, two or three directions, by using a powerful pulsed laser and a specific CCD camera. In the past few years, a technique has been developed in order to extract quantitative data for model updating from this kind of dynamic measurement [START_REF] Foltête | Quantitative dynamical measurements for model updating using electronic speckle interferometry[END_REF].
In order to perform model updating under the best possible conditions, the huge amount of measured data (about one million pixels) is reduced to the image points which correspond to the FE nodes. By using the video image, the relation between the 3D coordinates and the 2D coordinates in the image, as well as the measurement direction (called the sensitivity direction), are identified for each FE node.
The extraction of the measured data from the displacement image is made up of two operations: the data condensation on the finite element (FE) nodes which are visible from the observation point, and the computation of the sensitivity directions. A measured value is then associated with each visible node.
In order to perform efficient and rapid measurements, avoiding a large amount of non-pertinent data, we first perform a classical accelerometric measurement, which provides the modal frequencies and damping. Then ESPI measurements are performed at the resonant frequencies [START_REF] Lepage | High spatial resolution vibration analysis using hybrid method : Accelerometers -double pulse espi[END_REF]: some mode shapes are presented in FIG. 2.
Solving the problem of multiple eigenvalues
Since most catalytic converters are cylindrical, we must take care of multiple eigenvalues. Let us write Y^(a) for the matrix constituted by the N calculated eigenvectors. The matrix Y^(a) is divided into sub-spaces: the first sub-space includes the rigid body modes in the case of a free-free boundary condition study; each following sub-space is constituted by a pair of eigenvectors (same modal shape, but not the same principal direction) (Equation 1).
Y^(a) = [ SE_1  SE_2  SE_3  …  SE_P ]   (1)
This division is based on the eigenfrequencies: if the difference between two eigenfrequencies is below a predefined tolerance, we consider that they are associated with a pair of eigenvectors. We now have two sets of eigenvectors:
-the eigenvectors extracted from measurements: Y^(e),
-the analytical eigenvectors: Y^(a).
For each eigenvector extracted from measurements y^(e)_ν, a combination of coefficients c^i_ν is calculated by applying the least squares method to Equation 2 below (minimizing ε_i):
SE_i(Y^(a)) · c^i_ν = y^(e)_ν + ε_i   (2)
The set of coefficients c^i_ν leads to a transformation matrix [T]. The last operation consists in calculating the matrix Ỹ^(a) = Y^(a)[T], where Ỹ^(a) is a new modal matrix representing the same sub-spaces as Y^(a) but respecting the principal directions of the experimental eigenvectors Y^(e). Finally, a normalization with respect to the mass is performed:
Ỹ^(a)T [M] Ỹ^(a) = [ I ]
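A small numerical sketch of this sub-space alignment is given below, using a least-squares solve for Equation 2 and a mass normalization; the function names and the toy data are ours.

```python
import numpy as np

def align_subspace(SE_i, y_exp):
    """Least-squares coefficients c (Equation 2) expressing the measured
    eigenvector y_exp in the analytical sub-space SE_i (columns = paired
    analytical modes), and the resulting aligned analytical vector."""
    c, _, _, _ = np.linalg.lstsq(SE_i, y_exp, rcond=None)
    return c, SE_i @ c

def mass_normalize(Y, M):
    """Scale each column of Y so that Y.T @ M @ Y has a unit diagonal."""
    gen_masses = np.einsum('ij,jk,ki->i', Y.T, M, Y)
    return Y / np.sqrt(gen_masses)

# Toy example: a two-column analytical sub-space and one noisy measured shape.
rng = np.random.default_rng(0)
SE_1 = rng.normal(size=(10, 2))
y_meas = SE_1 @ np.array([0.7, -0.3]) + 0.01 * rng.normal(size=10)
c, y_aligned = align_subspace(SE_1, y_meas)
```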
Comparison between experimental and analytical results
The experimental setup consists of an empty pipe suspended with free-free boundary conditions. The excitation force is generated by a shaker; acceleration and force are measured at the excitation point. ESPI measurements are performed under harmonic excitation at the resonant frequencies.
Because of multiple eigenvalues, accelerometric measurements do not give enough information for model updating. The MAC matrix between experimental and analytical modes shows that each experimental mode could be represented by at least two analytical modes (FIG. 3-(a)).
On the other hand, we can illustrate the improvement provided by ESPI measurements, as well as the sub-space division for multiple eigenvalues (FIG. 3-(b)(c)). As said before, mats play a decisive role in the durability of catalytic converters. Moreover, the dynamics of mats are not well known, because of the complexity of their structure.
In the following development, we present an original method to extract the modal parameters of mats [START_REF] Mallat | A Wavelet Tour of Signal Processing[END_REF][4].
Experimental setup
Mats are made of ceramic fibers. An organic binder ensures good cohesion between these fibers in order to facilitate handling. When submitted to heat, this binder flows away: on a vehicle, this occurs during the first kilometers.
For this reason, we have chosen to work on binder-free mats, that is to say mats that cannot be manipulated safely. The experimental characterization bench has been designed in order to be able to study the dynamic response of the 1D oscillator constituted with such a material.
A thermally aged mat, considered as isotropic, is packaged in working-like conditions. A harmonic force is applied to the 1D oscillator and suddenly stopped. By observing the dynamic free response, we are able to extract a large amount of information by using the wavelet transform.
Data processing
Despite the short measurement time (due to the high damping ratio), the wavelet transform gives good results, as shown in FIG. 4-(a). Frequency and damping ratio are represented vs. displacement: the behavior shown is viscoelastic. Since the experimental bench can be assimilated to a single-degree-of-freedom system, it can be idealized as the system shown in FIG. 5-(a). Therefore, we can use the concept of viscous damping to represent the damping mechanism of the viscoelastic material (FIG. 5-(b)). Mathematically, by considering the real and imaginary parts of the equations of motion of the two equivalent systems, it yields:
K_R = M·ω_0²    K_I = 2ζ·M·ω_0²   (3)
where ζ is the damping factor of the system. By expressing the displacement field as a function of the geometric parameters, the Lamé coefficient µ is known. As a consequence, the complex Young's modulus can be computed (FIG. 4-(b)).
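For illustration, instantaneous frequency and damping can be extracted from a free-decay record along the following lines. This simplified sketch uses the analytic signal (Hilbert transform) instead of the wavelet ridge used in the paper, which is acceptable away from the edges of the signal; the function names and test values are ours.

```python
import numpy as np
from scipy.signal import hilbert

def free_decay_parameters(x, fs):
    """Instantaneous frequency (Hz) and damping ratio from a free-decay response."""
    analytic = hilbert(x)
    envelope = np.abs(analytic)
    phase = np.unwrap(np.angle(analytic))
    inst_freq = np.gradient(phase, 1.0 / fs) / (2 * np.pi)
    # For a viscously damped oscillator, ln(envelope) decays with slope -zeta*omega0.
    decay_rate = -np.gradient(np.log(envelope), 1.0 / fs)
    omega = 2 * np.pi * inst_freq
    zeta = decay_rate / np.sqrt(decay_rate**2 + omega**2)
    return inst_freq, zeta, envelope

# Synthetic check: 50 Hz oscillator with 5% damping.
fs, f0, z0 = 5000.0, 50.0, 0.05
t = np.arange(0, 1.0, 1 / fs)
wd = 2 * np.pi * f0 * np.sqrt(1 - z0**2)
x = np.exp(-z0 * 2 * np.pi * f0 * t) * np.cos(wd * t)
freq, zeta, env = free_decay_parameters(x, fs)
```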
Conclusion
ESPI is a recent measurement technology which provides a high spatial resolution. The correlation between the model and the real structure has been improved by using ESPI measurements. In addition, we have shown how to solve the problem of multiple eigenvalues: as a consequence, we would be able to perform model updating on cylindrical entities.
Concerning mats, measurements have underlined a non-linear response. The wavelet transform method allows us to extract the eigenfrequency and damping ratio of mats as a function of the amplitude. Moreover, it provides a quick testing method: if classical testing methods were used, we would have to apply the Fourier transform technique to dozens of measurements at different amplitudes. This long and tedious step is favorably replaced by a single measurement followed by a single data processing.
The extraction of the complex stiffness is done as the characterization bench can be considered as a single-degree-of-freedom system. As a consequence, the Young modulus of the material is computed.
This will lead to a dynamic representation of a whole catalytic converter at ambient temperature. The next step will consist in the evaluation of a wide range of mat types, at different temperatures.
FIG. 1 - Constitutive elements of a catalytic converter
FIG. 2 - Mode shapes extracted from ESPI measurements
FIG. 3 - MAC matrix for a pipe, considering accelerometric and holographic measurements
FIG. 4 - Instantaneous resonant frequency, damping factor and Young's modulus vs. displacement
FIG. 5 - Viscous vs. viscoelastic systems
Acknowledgments
This work was sponsored in part by the French Agency for Environment and Energy Management (ADEME). Agence de l'Environnement et de la Maîtrise de l'Energie (ADEME) Sophia Antipolis -06560 Valbonne, France |
04103754 | en | [
"shs.eco"
] | 2024/03/04 16:41:22 | 2023 | https://shs.hal.science/halshs-04103754/file/WP13.pdf | Samira Marti
Isabel Martínez
Florian Scheuer
Samuel Ammann
Enea Baselgia
Marius Brülhart
Joël Bühler
Vera Frei
James Hines
Katrine Jakobsen
Niels Johannesen
Marko Köthenbürger
Wojtek Kopczuk
Simone Leibundgut
Raphael Parchet
Yasmine Perrinja- Quet
Laura Salathé
Sebastian Siegloch
Joel Slemrod
Dario Tortarolo
Does A Progressive Wealth Tax Reduce Top Wealth Inequality? Evidence From Switzerland *
Like in many other countries, wealth inequality has increased in Switzerland over the last fifty years. By providing new evidence on cantonal top wealth shares for each of the 26 cantons since 1969, we show that the overall trend masks striking differences across cantons, both in levels and trends. Combining this with variation in cantonal wealth taxes, we then estimate an event study model to identify the dynamic effects of reforms to top wealth tax rates on the subsequent evolution of wealth concentration. Our results imply that a reduction in the top marginal wealth tax rate by 0.1 percentage points increases the top 1% (0.1%) wealth share by 0.9 (1.2) percentage points five years after the reform. This suggests that wealth tax cuts over the last 50 years explain roughly 18% (25%) of the increase in wealth concentration among the top 1% (0.1%).
Introduction
Many advanced countries have experienced rising income and wealth inequality over the past decades. These trends have spurred discussions about potential institutional responses. In particular, the tax treatment of capital gains has been at the front and center of the policy debate across the globe in the last few years. One reason is that they make up the largest component of income at the top of the distribution, including notably the payoffs to the founders of successful businesses (Scheuer and Slemrod 2020). At the same time, capital gains are treated favorably by most countries' tax systems at the moment, mostly because of lower rates, taxation only based on realization, and various tax exemptions at death. Taken together, these two facts have been recognized as a key reason for the erosion of tax progressivity, so that the average tax rate of millionaires and billionaires can be lower than that of individuals further down the distribution (Leiserson and Yagan 2021).
To address this, the introduction of wealth taxes has been prominently discussed in several countries including the United States (Saez and Zucman 2019). This would ensure that rich households bear some tax burden even when not realizing any capital gains, which may dampen the rise in wealth inequality over time. However, we currently lack systematic evidence on how wealth taxation (and its progressivity) affects the evolution of wealth concentration. Our aim in this paper is to shed light precisely on this effect.
To do so, we exploit the decentralized structure of the Swiss wealth tax as a laboratory setting. While twelve European countries levied an annual tax on net wealth in the 1990s, by now only three-Norway, Spain, and Switzerland-still levy such a tax, and only Switzerland raises a level of government revenue comparable to recent proposals such as those put forward by Senators Bernie Sanders and Elizabeth Warren in the US (Scheuer and Slemrod 2021). The Swiss example is therefore of particular interest for the policy debate elsewhere.
Our first contribution is to construct novel time series, based on data from cantonal archives, for top wealth concentration in each of the 26 Swiss cantons since 1969. In particular, we calculate the top 10%, 5%, 1%, 0.1% and 0.01% wealth shares using data on the number of taxpayers and their total wealth in various brackets and by estimating a local Pareto distribution at the top. We find that the overall increase in wealth concentration at the national level masks striking differences across cantons, in terms of both levels and trends in within-cantonal inequality. Whereas some cantons (such as Zurich) have seen a reduction in their top 1% wealth share over the last 50 years, others (such as Schwyz) have seen theirs almost double. Since the cantons have freedom in designing their wealth tax schedules, this raises the question to what degree these diverging trends are driven by policy heterogeneity, and in particular by differences in wealth tax rates in the top bracket.
We therefore complement our information on cantonal wealth distributions with the cor-responding panel data on top marginal wealth taxes, going back to 1964. Cantons have frequently changed their top tax rates with an overall downward trend but significant variation. For instance, the highest rate in our data is 1.34% in Glarus in 1970, and the lowest is 0.13% in Nidwalden in 2014.
Combining these data sets, we then explore the link between the two. Our event study design allows us to estimate the dynamic effect of wealth tax reforms on the subsequent evolution of top wealth shares. Focusing on large tax reforms and controlling for income and bequest taxes, we find that cuts to the top marginal wealth tax rate in a given canton increase wealth concentration in that canton over the course of the following decade, and that tax increases reduce it. The effect is strongest at the very top of the distribution. For the top 1% and 0.1%, for instance, a reduction in the top marginal wealth tax rate by 0.1 percentage points increases their wealth share by 0.9 and 1.2 percentage points, respectively, five years after the reform (compared to an average wealth share of 34% for the top 1%, and 16% for the top 0.1%). This implies that the overall reduction in the progressivity of the wealth tax in the Swiss cantons over the last decades explains roughly a fifth (a quarter) of the increase in concentration among the top 1% (0.1%) over this time horizon.
While this is a sizeable portion, it is also clear that other factors must have played a more prominent role in shaping wealth inequality in Switzerland. This is not surprising because, despite the variation across cantons, the wealth tax is not very progressive in any of them, with moderate top rates especially compared to recent proposals in the US, and relatively low exemption amounts, which imply that a large swath of the population is subject to it.
Whereas we are interested in the relationship between progressive wealth taxation and wealth inequality, the growing literature on the behavioral response of declared wealth to wealth taxes has focused on the absolute effect (see Scheuer and Slemrod 2021 for an overview).1 Closest to our study is Brülhart et al. (2022), who also take advantage of variations in the wealth tax rate across Swiss cantons and over time using a similar event study design. Apart from the fact that we consider the distributional effects of wealth taxation rather than its effect on the total amount of reported wealth, our main contribution is that our data covers a much longer time period. Since 2003, the federal tax administration in Switzerland has published yearly wealth statistics for all cantons and Brülhart et al.'s (2022) panel analysis is based on this data. Instead, by collecting data from the cantonal archives directly, our time series go back to 1969. During the decades before 2003, there was more variation in tax rates across cantons, including notably some significant tax hikes, which became much rarer later on. Moreover, the overall level of tax rates was significantly higher in the 1970s than since the 2000s, and the degree of wealth concentration has changed substantially since this earlier period. Thus, looking at a longer historical evolution is crucial to tackle our research question.
Our paper is organized as follows. Section 2 gives an overview of the Swiss wealth tax system. Section 3 describes our data construction, notably how we compute cantonal top wealth shares based on the archive data and using Pareto interpolations. Section 4 discusses our novel findings on inter-cantonal differences in wealth inequality since 1970. Finally, Section 5 presents the results from our cross-cantonal event study and Section 6 concludes.
Wealth taxation in Switzerland
The Swiss tax system is generally structured in three layers: the federal, cantonal and municipal level. There are 26 cantons and about 2,300 municipalities. The Swiss constitution gives the cantons considerable autonomy over taxation and public spending decisions. In 2018, total tax revenues at the federal level amounted to $70 billion, while all cantons and municipalities together raised another $77 billion in fiscal revenues (corresponding to 10% and 11% of GDP, respectively).
The wealth tax has a long tradition in Switzerland and in fact predates the modern income tax. The cantons have been taxing wealth since the early 18th century and this was their main source of revenue until World War I. Between 1915 and 1959, there was also a wealth tax at the federal level. Since then, there has been no federal wealth tax, but all cantons must levy a comprehensive wealth tax, over whose design they have significant freedom. 2 In the 1990s, twelve European countries levied an annual tax on net wealth. By now, only three (Norway, Spain, and Switzerland) still levy such a tax, with Switzerland raising more than three times as much revenue as a fraction of total revenues (3.9%) as any of the other countries (Scheuer and Slemrod 2021). In 2018, 9.6% of the total tax revenues of all Swiss cantons and municipalities was raised by the wealth tax ($7.5 billion).
Tax base
The base of the Swiss wealth tax is broad: in principle, all assets, including those held abroad, are taxable. Only common household assets and foreign real estate are exempt from taxation; moreover, pension wealth such as occupational pensions (the so-called second pillar of the Swiss retirement system, which complements the national social security system) and the balances held on some voluntary retirement savings accounts (the so-called third pillar of the retirement system) are exempt until the date of the payout. 3 The tax liability is based on net wealth, so taxpayers can deduct mortgages and other debt. All residents aged 18 and over are legally bound to submit an annual tax filing (children's wealth must be included in the parents' tax returns). Net wealth is self-reported, which constrains tax enforcement. However, there is a 35% withholding tax on the return to domestic financial assets, which can only be claimed back when those assets are declared in the wealth tax base.
Tax schedules
Each canton designs its wealth tax schedule. Eight cantons impose flat rates (above some exemption level) and the other 18 feature progressive schedules with multiple tax rate brackets. Each municipality then chooses a multiplier that is applied proportionally to the cantonal tax rate schedule. Hence, an individual's overall tax liability depends on both the canton and municipality of residence.
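As a purely illustrative arithmetic example of how the multiplier enters the liability (the numbers are invented, not taken from any actual schedule): if the cantonal schedule implies a basic tax of $\tau_c(W) = \text{CHF } 6{,}000$ on a given net wealth $W$ and the municipal multiplier is $m = 1.2$, the final liability is
$$ m \times \tau_c(W) = 1.2 \times 6{,}000 = \text{CHF } 7{,}200. $$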
Exemption levels differ by canton but are relatively low (Scheuer 2020). For instance, in 2018, it ranged from about $55,000 in the canton of Jura to $250,000 in the canton of Schwyz (for married couples).
Tax rates have declined over time. In 16 of the 26 cantonal capitals, the annual top wealth tax rate was below 0.5% in 2018. Hence, the Swiss wealth tax is targeted at a large share of the population and is only moderately progressive. Indeed, it is not intended to redistribute the stock of wealth but to be payable out of the resulting income (Schweizerische Steuerkonferenz 2021). In the next section, we provide more detailed information on the historical evolution of the wealth tax burden across cantons.
Data and methodology
Top wealth tax rates
Our first data set includes the cantonal wealth tax rates for all 26 Swiss cantons. Due to the highly decentralized tax system in Switzerland, it offers substantial variation in wealth tax rates since cantons have frequently changed their tax schedules over the decades. Our data includes the average and the marginal wealth tax rates for each canton and year going back to 1955 (including the municipal multiplier for the cantonal capital or main municipality). Since our main interest is the concentration of wealth at the top, we focus on the top marginal tax rates. Moreover, because our data on cantonal top wealth shares only begin later, we use information on top marginal wealth tax rates since 1964 for the present analysis.
In 2018, the combined cantonal and municipal marginal wealth tax rates in the top bracket ranged between 0.1% (canton of Nidwalden) and 1.1% (canton of Geneva). In 1969, three cantons imposed top marginal wealth tax rates above one percent: 1.34% in the canton of Glarus, 1.11% in Graubünden and 1.0% in Basel-Landschaft. Figure 1 shows the average of the cantonal top marginal wealth tax rates over time. While, in 1969, the average of all cantonal top marginal wealth tax rates was 0.73%, it had decreased to 0.49% in 2018.
Overall, the 26 Swiss cantons can be divided in roughly three groups according to the trends in their wealth tax rates. In the first group, top tax rates have hardly changed over the last 50 years (see Figure 2 for some examples). Cantons in the second group have gradually lowered their wealth tax rates over time (see Figure 3). Finally, the third group consists of cantons that have changed tax rates more extensively, including notably some significant tax cuts over a short period of time in the recent past (see Figure 4).
Top wealth holdings
Our second data set collects information on reported wealth holdings from 1969 to 2018. Since 2003, the federal tax administration (Eidgenössische Steuerverwaltung, ESTV) has published yearly net wealth statistics for all cantons. For the prior years, we have collected data directly from the cantonal archives, and combined it with irregularly reported tabulations by the ESTV. All statistics report the number of taxpayers in different wealth brackets and the corresponding sum of total wealth. The brackets mostly range from a bracket for zero net wealth to one for a net wealth of more than $1 million. Not all cantonal time series are recorded identically; notably, the bracket thresholds differ across cantons and change over time. Most importantly, some cantonal sources exclude taxpayers with (nearly) no wealth (less than $1000). Before 1969, only 3 out of the 26 cantonal archives provided information about this lowest wealth bracket. Therefore, the available data for this period is insufficient to plausibly approximate the missing wealth at the bottom of the distribution. As a result, we confine our analysis to the years since 1969.
Some cantonal statistics report taxable wealth instead of net wealth. Taxable wealth is defined as net wealth minus the tax exemption amount. The taxable wealth statistics indicate a considerably more pronounced wealth concentration than those based on net wealth. There are two reasons: first, the exemption level matters relatively less for top wealth holders; and second, the statistics based on taxable wealth include a larger share of taxpayers with no wealth at all. Both effects increase the top wealth shares and thus lead to an overestimate of wealth inequality. To correct this break in the series, we rescale taxable wealth to match net wealth where both are available. We then apply the same scaling factor to the years where only taxable wealth is available. 4 The statistics published by the ESTV since 2003 are based on net wealth, so this issue only concerns the earlier years.
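A minimal formalization of this rescaling, under the stated assumption of a constant canton-specific factor, is
$$ s_c = \frac{W^{\text{net}}_{c,t_0}}{W^{\text{taxable}}_{c,t_0}}, \qquad \widehat{W}^{\text{net}}_{c,t} = s_c \, W^{\text{taxable}}_{c,t}, $$
where $t_0$ is a year in which both series are available for canton $c$, and the second expression is applied to the years in which only taxable wealth is reported.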
While the wealth statistics based on tax returns are the best available data to study the long-run evolution of wealth inequality in Switzerland, it is important to keep in mind some caveats. First, because net wealth is self-reported, and despite the withholding tax, misreporting and tax evasion cannot be excluded, especially for the wealthiest households. Indeed, Brülhart et al. (2022) argue that misreporting is the most important component to the overall behavioral response of reported wealth to changes in taxes. A second issue concerns the number of taxpayers. All adult residents have to submit a tax return each year. However, married and officially registered same-sex couples (since 2007) are jointly tax liable and show up as only one tax unit in the tax statistics. To make sure our time series are consistent across time and cantons, we calculate the total number of tax units using register data (adult population minus one half of the married adult population).
Pareto interpolation of top wealth shares
Our data sources report net wealth as well as the number of taxpayers in absolute brackets, defined between thresholds in Swiss Francs. Instead, our object of interest are the wealth shares of the various top quantiles of the wealth distribution. To estimate the wealth of a given percentile, we make use of the fact that wealth holdings at the top of the distribution approximately follow a Pareto distribution. The cumulative distribution function is given by
$$ F(x) = 1 - \left(\frac{k}{x}\right)^{\alpha}, \qquad k > 0, \; \alpha > 1, \; x \geq k, $$
where the parameters α and k need to be estimated. The probability density function takes the form
$$ f(x) = \frac{\alpha k^{\alpha}}{x^{\alpha+1}}. $$
Since $f(z \mid z \geq x) = f(z)/(1 - F(x))$, the average wealth $w(x)$ of tax units with wealth larger than or equal to $x$ is given by
$$ w(x) = \int_{x}^{\infty} z \, f(z \mid z \geq x) \, dz = \alpha x^{\alpha} \int_{x}^{\infty} z^{-\alpha} \, dz = \frac{\alpha}{\alpha - 1}\, x. $$
Hence, mean wealth above a given threshold $x$ exceeds that threshold by the constant factor $\beta \equiv \alpha/(\alpha - 1)$, independently of the threshold $x$. Conversely, we can estimate the local Pareto parameter at any given threshold $x$ from the equation
$$ \beta = \frac{w(x)}{x}. $$
Using these estimates, we then calculate the respective wealth shares of each of the top percentiles of interest.
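As a sketch of how such an interpolation can proceed in practice (the exact routine used for the series may differ in its details), let $N_x$ and $W_x$ denote the number of tax units above a bracket threshold $x$ and their total wealth, and let $N$ and $W$ denote the corresponding cantonal totals. Then
$$ \beta = \frac{W_x / N_x}{x}, \qquad \alpha = \frac{\beta}{\beta - 1}, \qquad x_q = x \left(\frac{N_x}{qN}\right)^{1/\alpha}, \qquad \text{top-}q\text{ wealth share} = \frac{qN \, \beta \, x_q}{W}, $$
where $x_q$ is the estimated wealth threshold of the top fraction $q$ (e.g., $q = 0.01$ for the top 1%) and the last expression uses the fact that average wealth above any threshold equals $\beta$ times that threshold.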
Wealth inequality in Switzerland
Dell et al. (2007) provided the first estimates of the wealth distribution in Switzerland during the last century. For the years 1915 to 1957, these were based on the federal wealth tax, which (as described in Section 2) was levied irregularly as a war tax and eliminated after 1959. For 1940, Dell et al. (2007) extrapolated data from the canton of Thurgau. For some other years (1913, 1919, 1969, 1981, 1991 and 1997), they relied on wealth statistics published by the federal administration. Föllmi and Martínez (2017) updated these national top wealth share series using the wealth tax statistics published annually by the ESTV since 2003. Based on our data collected from the cantonal archives, we are able to paint a more fine-grained and more comprehensive picture of the evolution of wealth inequality in Switzerland over the past five decades. More importantly, rather than constructing only the national wealth distribution, we provide new evidence on inequality trends at the cantonal level. This variation across cantons will be crucial for analyzing the effect of wealth taxes on inequality.
The end result of our data construction is a panel data set including the wealth shares of various top percentiles (10%, 5%, 1%, 0.1% and 0.01%) of the adult population in each canton between 1969 and 2018. When some cantonal archives are missing information for some years, we linearly interpolate the top wealth shares between the years for which we have information for the corresponding canton.5
Historical evolution at the national level
Figure 5 shows the (wealth-weighted) average of the cantonal top wealth shares for the top 1%, 0.1% and 0.1-1%. 6 It indicates increasing wealth concentration in Switzerland since the mid 1970s: the average wealth share of the top 1% across cantons has risen from 30% to 42% most recently. The increase has been even more pronounced for the top 0.1%: their average wealth share has more than doubled over the course of the same time period from 11% to 23%. This suggests that the rising upward trend in wealth inequality is mainly rooted in the steep wealth growth at the very top of the wealth distribution. Indeed, the average share of the top 0.01% more than tripled from less than 4% to 12.5%, whereas the average top 10% share has increased more moderately from 65% to 75%.
It is important to keep in mind that pension accounts are tax exempt and therefore not included in the wealth statistics in our analysis. The mandatory wage deduction for savings in occupational retirement accounts, as well as the caps for voluntary savings in private retirement accounts, have constantly grown over the last fifty years. Since our calculations ignore these trends, we tend to over-estimate the increase in wealth inequality. Föllmi and Martínez (2017) provide approximate adjustments taking into account pension wealth for the national wealth distribution. While these corrections decrease the wealth share of the top 10%, the difference is much smaller for the top 1% and negligible for the top 0.1%. 7 Another caveat is that some cantons offer foreigners who live but do not work in Switzerland an exemption from regular taxation, subjecting them instead to a flat-rate tax based on their living expenses. Hence, the wealth held by these foreign nationals is not included in our data either. The rules for this alternative tax regime have been tightened recently, and it currently affects fewer than 5,000 individuals (Scheuer and Slemrod 2021). For Switzerland as a whole, Baselgia and Martínez (2022) estimate, based on data from the Swiss rich lists published by the business magazine BILANZ, that the top 0.01% wealth share is 16% instead of 12%. They caution, however, that the wealth reported by BILANZ may be systematically too high. Moreover, the bias from ignoring the wealth of foreigners subject to expenditure-based taxation varies by canton. For instance, most of those individuals live in the canton of Vaud, whereas several other cantons (such as Zurich, Schaffhausen, Basel-Landschaft and Basel-Stadt) have completely abolished the flat-rate tax regime.
Cantonal differences
Our data uncovers remarkable differences across the Swiss cantons in top wealth inequality and its evolution over the last 50 years. For illustrative purposes, we focus on the top 1%. Figure 6 presents the corresponding wealth shares for the six cantons Zürich, Nidwalden, Solothurn, Basel-Landschaft, Aargau and Schwyz. We pick these cantons because their time series are relatively complete and because they provide a good overview of the different inequality trends over time. It is immediately apparent that the top 1% wealth share in Nidwalden is of a different magnitude than in the other cantons. After falling over the course of the 1970s and then remaining constant until the end of the 1990s at the (already high) level of nearly 50%, it increased sharply to almost 70% most recently. This is the highest value across all cantons and years. Recall that Nidwalden is also the canton with the lowest top wealth tax rate for most of our study period.
Although starting out from a much lower level, the canton of Schwyz has experienced, in relative terms, an even more striking rise in wealth concentration. While its top 1% wealth share was 32% in 1969 and remained between 30 and 40% until the late 1990s, it has almost doubled since then and has now reached 60%.
By contrast, in the canton of Zürich, the top 1% share fell between 1969 and 1975 and has since then remained strikingly constant over time, always at a level slightly below 40% including most recently. We find similarly flat overall trends for the cantons of Bern, Graubünden, St. Gallen and Uri, although they have gone through larger fluctuations in between.
The most complete data series is available for the canton of Basel-Landschaft, with information for each year since 1969. Its top 1% wealth share has followed a pronounced U-shaped pattern over time: from 50% in 1969 to less than 30% in the 1990s and back to 44% in 2018. The canton of Solothurn has followed a similar trajectory over time.
Finally, the canton of Aargau has featured relatively low levels of concentration (around
It is also useful to put these numbers in international comparison, such as relative to measures of wealth inequality in the United States. Compared to the U.S. top 1% wealth share of 35% in 2018 (World Inequality Database), 19 of the 26 Swiss cantons feature higher degrees of concentration, with Nidwalden (69%), Schwyz (60%), Basel-Stadt (57%), Obwalden (56%), Geneva (55%), and Zug (51%) coming out on top of the list. In view of the fact that the U.S. exhibits a higher degree of wealth concentration than many other advanced countries, we conclude that, from an international perspective, wealth inequality is relatively high in most Swiss cantons.
Wealth taxation and top wealth inequality
In this section, we combine our panel data on top wealth tax rates and top wealth shares to shed light on the relationship between the two: Do changes in the progressivity of the wealth tax affect the concentration of wealth down the road? For this purpose, we exploit the variation in the timing of wealth tax reforms across cantons.
Cross-canton event study model
We estimate an event study model of the following form (Schmidheiny and Siegloch 2020):
$$ W_{i,t} = \sum_{k=1}^{K} \sum_{j=-4}^{12} \beta^{k}_{j}\, D^{k}_{i,t-j} + \sum_{j=-4}^{12} \gamma_{j}\, X_{i,t-j} + \theta_{t} + \omega_{i} + \mu_{i}\, t + \varepsilon_{i,t}, $$
where $W_{i,t}$ is the top wealth share in canton $i$ and year $t$, $D^k_{i,t}$ an event indicator for a wealth tax reform of type $k$ in canton $i$ and year $t$, $X_{i,t}$ a set of controls, $\theta_t$ a time fixed effect, $\omega_i$ a canton fixed effect, and $\mu_i$ a canton-specific time trend. Since we aim at isolating the effect of wealth tax reforms on wealth concentration, the controls $X_{i,t}$ include both the top marginal income and the average estate net-of-tax rate in canton $i$ and year $t$. 8 We distinguish $K = 4$ types of reform events to allow for potentially heterogeneous effects: tax cuts and hikes, as well as small and large tax changes, defined as smaller or larger than a cutoff of 0.05 percentage points in absolute value. The coefficients $\beta^k_j$ capture the dynamic effects of the events of interest (namely, small and large wealth tax cuts and hikes) on top wealth shares $j$ years after the reform. We set $\beta^k_{-1} = 0$ for all $k$ to express the dynamic effects relative to the year prior to the reform. Moreover, we specify the effect window up to twelve years after the event and consider pre-trends up to four years prior to the event. This also corresponds to the leads and lags of the included controls.
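For concreteness, the event indicators can be read as
$$ D^{k}_{i,t-j} = \mathbf{1}\{\text{canton } i \text{ implemented a wealth tax reform of type } k \text{ in year } t-j\}, $$
so that, for example, $\beta^{k}_{5}$ measures the change in the top wealth share five years after a type-$k$ reform, relative to the year immediately preceding the reform (the omitted category $\beta^{k}_{-1} = 0$).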
Figure 7 displays the coefficients $\beta^k_j$ for the top 1% wealth share as the dependent variable and for large reforms. The results suggest that cantonal tax cuts increase the cantonal top 1% wealth share up to 7 years after the reform, whereas tax hikes reduce it. The point estimates indicate a change in the top 1% wealth share of between one and two percentage points (in either direction), but the statistical significance is only marginal (the effect of small reforms is insignificant). This needs to be compared to an average top 1% wealth share of roughly 34% in our estimation sample.
Figure 8 shows the corresponding results for the top 0.1% wealth share. In this case, the effects of wealth tax reforms display the same sign but are slightly bigger, notably when put in relation to the average top 0.1% wealth share of 16% across time and cantons. This suggests that the effect of wealth taxation on wealth inequality is concentrated at the very top of the distribution. Indeed, our estimated coefficients for the top 0.01% (not shown) are of a similar absolute magnitude (and thus twice as large relative to the baseline wealth share of that group of 8%), whereas the event study model with the top 10% wealth share as the dependent variable produces insignificant results.9 Of course, these results cannot necessarily be given a causal interpretation since tax policy decisions at the cantonal level could in principle have been anticipatory in nature or part of broader tax reforms. However, the insignificant pre-trends up to 4 years prior to a wealth tax reform reduce such potential endogeneity concerns. Tables 2 and 3 in the Appendix contain the regression coefficients and standard errors (for the event study models with the top 1% and top 0.1% wealth share as the dependent variable, respectively), including the $\gamma_j$ corresponding to the control variables $X_{i,t-j}$, namely the top marginal income and the average estate net-of-tax rates.
Interpreting the magnitudes
To put these findings into perspective, it is useful to relate them to the magnitude of the typical wealth tax reforms in our data. Figure 9 shows the histogram of tax rate changes (in percentage points) in our sample period. Overall, there were 714 tax reforms (with more tax cuts than tax hikes). Recall that Figures 7 and 8 display the coefficients for reforms associated with a change in the tax rate of at least 0.05 percentage points, which effectively isolates approximately the largest 10% of all reforms. 10 The mean tax cut (hike) among this subset of reforms is a reduction (increase) in the top rate by 0.17 (0.1) percentage points. Taken together with the point estimates from our event study model, the implied magnitude of the effect on top wealth shares is therefore quite sizeable. For instance, it predicts that a 0.1 percentage point reduction in the top wealth tax rate would increase the top 1% wealth share by 0.9 percentage points five years after the reform, and the top 0.1% wealth share by 1.2 percentage points.
Indeed, the average top marginal wealth tax rate decreased from 0.73% in the mid-1970s to 0.49% in 2018. At the same time, the wealth share of the top 1% increased on average from 30% to 42%, and the top 0.1% wealth share more than doubled on average from 11% to 23%. Hence, this back-of-the-envelope calculation suggests that historical wealth tax cuts explain roughly 18% of the increase in wealth concentration among the top 1% over this time horizon, and 25% of the increase in wealth concentration among the top 0.1%. While this is a substantial portion, especially in view of the limited progressivity of the wealth tax in Switzerland, it means that other factors must have been more important in shaping the evolution of wealth inequality over the past few decades.
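Spelling out this back-of-the-envelope calculation (figures rounded):
$$ \frac{0.73 - 0.49}{0.1} \times 0.9 \text{ pp} \approx 2.2 \text{ pp}, \qquad \frac{2.2 \text{ pp}}{(42 - 30) \text{ pp}} \approx 18\% \quad \text{for the top 1\%}, $$
$$ \frac{0.73 - 0.49}{0.1} \times 1.2 \text{ pp} \approx 2.9 \text{ pp}, \qquad \frac{2.9 \text{ pp}}{(23 - 11) \text{ pp}} \approx 24\% \quad \text{for the top 0.1\%}. $$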
A potential concern with this interpretation is that reforms to the top marginal wealth tax rate may be correlated with simultaneous reforms to features of the tax schedule further down the distribution, notably the exemption amounts below which no wealth tax is due. For example, if cantonal policy makers aim for largely revenue-neutral reforms, they might lower the exemption amounts whenever they cut the top tax rate. In this case, our results might be driven by both components of the reforms rather than only by changes to the marginal wealth tax: If households further down the distribution who become subject to the wealth tax as a result of the reform (due to the lowered exemption amount) respond by accumulating less wealth, this would further (indirectly) increase concentration at the top. To address this, Figure 10 shows a scatter plot of the change to the top wealth tax (on the horizontal axis) and the change to the exemption amount (on the vertical axis) associated with each of the cantonal tax reforms in our sample. It reveals no systematic relationship between the two dimensions. In fact, exemption amounts were mostly increased over time (in real terms) and hardly ever reduced, despite considerable reductions in the top wealth tax rates. Thus, our results are likely due to changes in the progressivity of the wealth tax at the top rather than at the bottom of the distribution.
Conclusion
In this paper, we first present novel evidence on wealth inequality in the 26 Swiss cantons since 1969. We find that the overall increase in wealth concentration at the national level masks striking differences across cantons, in terms of both levels and trends of within-cantonal inequality. Our second contribution is to explore the effect of wealth taxation on wealth inequality exploiting policy heterogeneity across cantons over the last fifty years. Our event study design, based on large tax reforms and controlling for income and bequest taxes, shows that cuts to the top marginal wealth tax in a given canton increase wealth concentration in that canton over the course of the following decade, and that tax increases reduce it.
The effect is strongest at the very top of the distribution. For the top 0.1%, for instance, a reduction in the top marginal wealth tax rate by 0.1 percentage points increases their wealth share by 1.2 percentage points seven years after the reform. This implies that the overall reduction in the progressivity of the wealth tax over the last decades explains roughly a quarter of the increase in concentration among the top 0.1% over this time horizon. Note that, in 2018, this group only included roughly 5000 tax units. Thus, it is an impressively small number of the wealthiest households who benefited most from reduced wealth tax rates at the top.
Since our analysis is based on aggregate data at the cantonal level, it remains silent on the economic mechanisms underlying this effect. Uncovering the anatomy of the response of wealth inequality to wealth taxation-in terms of its mechanical, real savings, inter-cantonal mobility, misreporting, and asset pricing components, for example-would require more detailed micro data and is therefore beyond the scope of this paper. In their analysis of how declared wealth responds to changes in wealth taxes, Brülhart et al. (2022) make use of precisely such data for the cantons of Bern and Luzern. They find that about a quarter of the total response is due to mobility, a fifth due to house price capitalization, and the rest likely due to misreporting. Whether a similar decomposition holds for the response of top wealth concentration, rather than absolute wealth levels, is an interesting question for future research.
If a sizeable fraction of the effect arises from taxpayer mobility, then this also raises a conceptual question: At which level of geographical aggregation should we measure inequality? Suppose, for instance, that canton A lowers its top wealth tax rate, thereby inducing some wealthy households in another canton B to move to canton A. This would increase the top wealth share in canton A and reduce it in canton B without any change in the national top wealth share. This example illustrates that aggregating our estimates based on comparisons across cantons is not straightforward. Indeed, in the presence of a significant mobility response, our estimates would need to be scaled down correspondingly when used to predict the effect of a simultaneous reduction in wealth taxation in all cantons on wealth concentration at the national level.
Relatedly, it also raises the question whether we should care at all about inequality at the cantonal level, or rather only at the federal level. In fact, one may argue that there is no reason to stop at the level of an individual country, and that, ultimately, world-wide inequality is all that matters from a normative standpoint. Still, there is significant interest in within-country inequality. One reason is that many political decisions, not least about tax policy and redistribution, are taken at the country level, and that there may be a feedback loop between within-country inequality and political decision making. By the same argument, since the Swiss cantons have a large degree of political autonomy, studying the evolution and determinants of wealth inequality at the sub-national level is equally important. Our paper takes a first step in this direction.
Our results also make clear that changes to wealth taxation are not the most important driver of the recent rise in wealth inequality in Switzerland. Indeed, the Swiss wealth tax was never intended to achieve a major redistribution of wealth, but rather to generate stable revenues for the cantons and municipalities. This is evident in the moderate tax rates, which even at the top are likely smaller than the rates of return to the wealth of the very rich, and the fact that a large portion of the population is subject to the wealth tax.
Other changes to the Swiss tax system over the last 50 years could have played a role. In particular, most cantons have abolished bequest taxes for direct descendants (Brülhart and Parchet 2014) and there is no bequest tax at the federal level. At the same time, bequests account for a considerable part of the wealth of the superrich in Switzerland. For instance, Baselgia and Martínez (2022) show that, most recently, 75% of the 300 richest individuals in Switzerland have been heirs. This is extremely high compared to the Forbes 400, the corresponding list for the United States. In 2018, 69% of the wealthiest Americans were self-made founders of their own businesses (Scheuer and Slemrod 2020). Quantifying the degree to which the erosion of the cantonal bequest taxes has contributed to long-run wealth inequality in Switzerland would be an interesting topic for further investigation.
Figure 1: Top marginal wealth tax rates 1964-2018, average of cantonal rates
Figure 2: Top marginal wealth tax rates in Fribourg, Basel-Stadt, Vaud, Neuchâtel and Geneva
Figure 3: Top marginal wealth tax rates in the cantons that gradually lowered their rates over time
Figure 4: Top marginal wealth tax rates in the cantons with more extensive rate changes, including significant recent cuts
Figure 5: Top 1%, 0.1% and 1-0.1% wealth shares and top marginal wealth tax rates 1969-2018 (wealth-weighted averages across cantons)
Figure 6: Top 1% wealth shares of the cantons Zurich, Nidwalden, Solothurn, Basel-Landschaft, Aargau and Schwyz 1969-2018
Figure 7: Cross-canton event study, top 1% wealth share. Note: # small cuts = 294, # small hikes = 175, # large cuts = 34, # large hikes = 6, N = 1020, cantons: 26, years: 1976-2015. Model includes canton and time FE, canton-specific trends, lags and leads of log top net-of-inheritance-tax and top net-of-income-tax rates. 90% confidence intervals, SEs clustered at canton level. Dependent variable: top 1% wealth share; average in estimation sample: 33.9%.
Figure 8: Cross-canton event study, top 0.1% wealth share. Note: # small cuts = 294, # small hikes = 175, # large cuts = 34, # large hikes = 6, N = 1020, cantons: 26, years: 1976-2015. Model includes canton and time FE, canton-specific trends, lags and leads of log top net-of-inheritance-tax and top net-of-income-tax rates. 90% confidence intervals, SEs clustered at canton level. Dependent variable: top 0.1% wealth share; average in estimation sample: 15.9%.
Figure 9: Cantonal wealth tax changes 1964-2018 (histogram of changes in the top marginal wealth tax rate, in pp)
Figure 10: Cantonal changes to the top wealth tax and the exemption amount 1969-2018
Note: Model includes canton and time FE as well as canton-specific trends, SEs clustered at canton level. Dependent variable: top 1% wealth share.
Year relative to wealth tax change Small cut Small hike Large cut Large hike log top net-of-estate-tax rates log top net-of-income-tax rates
-4 0.0022976 (0.0020295) (0.0038144) (0.0050576) (0.0083876) (0.0891568) (0.103685) -0.0031858 -0.001189 0.00271 0.1889398 0.1096511
-3 0.0027329 (0.0021233) (0.003756) -0.0016244 -0.0010082 (0.0039427) (0.009724) 0.0052672 0.053156 (0.0634622) (0.0637724) 0.0253212
-2 0.0020839 (0.0020935) (0.0031495) (0.0037763) (0.0093627) (0.0617059) (0.0832014) -0.0017135 0.0044265 0.0047558 0.064948 -0.1142434
0 0.0010645 (0.0023383) (0.0036396) (0.0053442) (0.0104915) (0.0506904) (0.0533242) -0.00262 0.0116087 -0.0027763 0.0154428 -0.0171117
1 0.0020149 (0.0024915) (0.003564) -0.0041061 0.0158724 (0.0071165) (0.0074566) (0.0814433) (0.0630007) -0.0080446 -0.0166484 -0.019266
2 0.003395 (0.002812) -0.0045465 (0.0034218) (0.0070065) (0.006087) 0.0144571 -0.0177016 0.0263441 (0.0640897) (0.0375767) -0.0479522
3 0.0045326 (0.0026801) (0.0040457) (0.0066075) (0.0052283) (0.062072) -0.0035743 0.0107281 -0.0116437 0.009336 -0.0199642 (0.0304443)
4 0.0039425 (0.0026098) (0.0030684) (0.0065124) (0.0045683) (0.0549504) (0.0241641) -0.0026286 0.014968 -0.0054516 0.0390477 -0.0032574
5 0.0022235 (0.0025769) (0.0029955) (0.0085046) (0.0050247) (0.0475782) (0.038835) -0.002887 0.0156372 -0.0061575 -0.0134754 -0.0038553
0.0025558 (0.0024987) (0.0030709) (0.0114197) (0.0067028) (0.0720875) (0.0229728) -0.0023962 0.0142353 -0.0025123 0.0632166 0.0167414
7 0.0016832 (0.002224) -0.0029766 (0.0032538) (0.0132075) (0.007296) 0.01908 0.0040213 0.2333809 (0.0792453) (0.0350742) -0.0141396
8 0.001469 (0.0025894) (0.0029395) (0.0099914) (0.0070839) (0.0723615) (0.0242661) -0.001831 0.0109595 0.0013128 0.1514018 0.0302078
9 0.0008653 (0.0026548) (0.0030323) (0.0093716) (0.0067707) (0.0853943) (0.0151072) -0.0004304 0.0070992 0.0013345 0.1061328 -0.0044352
10 0.0001582 (0.0023999) (0.0034623) (0.0070931) (0.0051138) (0.065598) -0.0023406 -0.005262 -0.0005021 0.0892894 0.0475557 (0.0409621)
11 -0.0007746 (0.0025855) (0.0036668) (0.0065559) (0.007485) -0.0024195 -0.0047227 0.0037486 0.0295935 (0.0682379) (0.0164015) -0.0232601
12 -0.0009869 (0.0023105) (0.0033076) (0.0064668) (0.0079657) (0.0645107) (0.0550869) -0.0006997 0.0015392 0.0021267 0.1090395 0.0408898
Constant 0.992283 1.189494
N Groups N Observations 1002 26 R 2 within R 2 between 0.4699 0.7727 R 2 overall 0.2653
Table 2: Cross-canton event study model for the top 1% wealth share
Note: Model includes canton and time FE as well as canton-specific trends, SEs clustered at canton level. Dependent variable: top 0.1% wealth share.
Year relative to wealth tax change Small cut Small hike Large cut Large hike log top net-of-estate-tax rates log top net-of-income-tax rates
-4 0.001881 (0.0019647) (0.0039929) (0.0050858) (0.0086046) (0.0904797) (0.093234) -0.0022456 -0.0022969 0.00483 0.1429525 0.160743
-3 0.002222 (0.0021907) (0.0038691) (0.0037324) (0.0099277) (0.061513) 0.0001204 0.0009048 0.0058096 0.0501811 0.0216165 (0.0529901)
-2 0.0019531 (0.0021851) (0.0032527) (0.0039572) (0.0082074) (0.0600775) (0.0652993) -0.0003361 0.0050203 0.0026091 0.0982664 -0.0893154
0 0.0013246 (0.0021654) (0.0043348) (0.0054106) (0.013202) -0.0006556 0.0137974 -0.0143376 0.0491401 (0.0347417) (0.0512172) -0.0074544
1 0.002462 (0.0022067) (0.0041318) (0.0075811) (0.0107536) (0.0839255) (0.070348) -0.0021194 0.0192765 -0.0194086 0.0207446 0.0161065
2 0.0031735 (0.0024697) (0.0036029) (0.0075215) (0.0096587) (0.0686057) (0.0361945) -0.0031897 0.0174902 -0.031182 0.0487051 -0.0473359
3 0.0043674 (0.002497) -0.0024796 (0.004441) 0.0129753 (0.0068345) (0.0099439) (0.0505065) (0.0361399) -0.0232158 0.0188918 -0.0671123
4 0.0046294 (0.0023955) (0.0030902) (0.0073241) (0.0080322) (0.0483108) (0.0243231) -0.0016716 0.017926 -0.0155879 0.013602 0.021173
5 0.0026645 (0.002486) -0.0021085 (0.0026502) (0.0090204) (0.008296) 0.0204186 -0.0160419 -0.0655603 (0.047687) 0.0046424 (0.0490397)
6 0.0032979 (0.0027402) (0.0028592) (0.0124937) (0.0068713) (0.0621205) (0.0260203) -0.0010331 0.0211501 -0.0120412 0.0035395 0.0053703
7 0.0023027 (0.0024359) (0.0034237) (0.0142148) (0.0074954) (0.0717843) (0.0371136) -0.0010761 0.026211 -0.004494 0.1527247 0.0035248
8 0.0021861 (0.0027077) (0.0031605) (0.0098094) (0.0081067) (0.0696155) (0.0301853) -0.0005634 0.0147989 -0.0075425 0.1000737 0.0200387
9 0.00224 (0.002756) 0.001157 (0.0032542) (0.0085551) (0.0081934) (0.0795746) (0.0154614) 0.009812 -0.0044303 0.143336 0.0011738
10 0.001999 (0.0024002) (0.0038338) (0.0054889) (0.0061475) (0.066082) -0.0003187 -0.0015225 -0.0060394 0.1389383 0.0664139 (0.0502499)
11 0.0013585 (0.0025442) (0.0041113) (0.005228) -0.0004508 -0.0011343 -0.0020627 (0.0071496) (0.0641367) (0.0170583) 0.0405962 -0.0213298
12 0.0011842 (0.002033) 0.0007849 (0.0032469) (0.0057216) (0.0082058) (0.0727604) (0.0570599) 0.0045363 -0.0023967 0.0369275 0.0382706
Constant -2.223493 (1.322405)
N Groups N Observations 1002 26 R 2 within R 2 between 0.4699 0.7659 R 2 overall 0.3158
Table 3: Cross-canton event study model for the top 0.1% wealth share
Recent studies include Seim, Zoutman, Londoño-Velez and Avila-Mahecha (2019), Durán-Cabré et al. (2019), Agrawal et al., Jakobsen et al. and Brülhart et al. (2022).
Bequests are taxed at the cantonal level, typically in the form of inheritance taxes. In the 1990s and early 2000s, however, most cantons abolished inheritance taxes for direct descendants (Brülhart and Parchet 2014). So far, all attempts to introduce an inheritance tax at the federal level have been unsuccessful.
While foreign real estate is not subject to the Swiss wealth tax, it is included when determining the relevant tax bracket and thus the individual tax rate.
Alternatively, we could add the exemption levels to taxable wealth. However, this would introduce measurement error because exemption amounts depend on marital status and the number of dependents of each taxpayer, and we do not have information on the composition of these characteristics by wealth bracket.
This only affects some years prior to 2003. Since then, the ESTV provides yearly wealth distribution statistics for each canton. See Table 1 in the Appendix for a detailed list of sources from which we have collected cantonal wealth statistics.
6 The population-weighted averages across cantons feature similar levels and trends.
Accounting for pension wealth in our cantonal wealth statistics is beyond the scope of this paper, but an important task for future research.
Apart from wealth taxes, both bequest and income taxes at the cantonal level are potential drivers of wealth accumulation, and the corresponding rates have changed multiple times during our sample period. For instance, the average top marginal income tax rate across cantons rates rose from 20.5% in 1970 to 25.7% in 1982 and then decreased to 21.2% in 2018. In turn, the average inheritance tax rate across cantons first fluctuated between 1.5% and 2.5% and, since 2000, has decreased to 0.2% in 2018.
We obtain similar results when using the local Pareto parameter of the wealth distribution instead of top wealth shares as the dependent variable.
When using smaller cutoffs (such as 0.03 or 0.01 percentage points) to define large reforms, the point estimates decrease in magnitude, and the standard errors decrease at the same time due to the increasing sample size. As a result, our estimates remain statistically significant under these alternative definitions of large reforms.
Appendix
Canton(s) Year(s) Source All cantons 1 |
00342984 | en | [
"info.info-hc",
"info.info-au",
"info.info-ma"
] | 2024/03/04 16:41:22 | 2005 | https://hal.science/hal-00342984/file/Pocheville2001.pdf | Table of Contents
Note:
The information in this document is provided as is and no guarantee or warranty is given that the information is fit for any particular purpose. The user thereof uses the information at its sole risk and liability.
Introduction
In the current virtual reality field, the different displays (auditory, visual and touch) are often studied independently. A more interesting approach consists in blending them to create a multi-modal virtual reality. However, putting more than one modality together does not simply result in the sum of the modalities. The experience can be enriched by such cooperation, but the user can also perceive destructive interactions between the modalities. This creates new challenges for the virtual reality designer. In most cases, the modalities cannot be handled in the same way, which creates difficulties for integration. In the following, we focus on such challenges.
In order to be able to prototype multimodal integration schemes and models, the LSC developed a generic framework called I-Touch [POC04a][POC04b], which also serves as the rigid-body prototyping demonstrator of deliverables D7.5 and D7.6. This part describes the components of the I-Touch framework. The fact that the framework is multimodal is of great importance, as it makes it possible to evaluate the contribution of each feedback. We tried to give each of the modalities the same 'importance' from an integration point of view, precisely in order to be able, afterwards, to modulate the user experience.
The I-Touch framework
Design of the I-Touch framework
The I-Touch framework is designed to be modular from its core to its interfaces. Although it is still under development, it already allows plugging in (by static linking into the program) different behavior models and different collision detection algorithms. The framework architecture is given in Figure 1. As can be noticed, haptic interfacing, among other rendering capabilities, takes an important place. This particular focus meets the Touch-HapSys objective of better understanding haptics and its relation to the other modalities. However, this modality has the same 'rights' and 'duties' as the other modalities: the haptic modality can be added to or removed from the simulation as easily as any other.
From a purely technical point of view, the framework is divided into three main modules; each of them is further subdivided into as many sub-modules as needed. The I-Touch framework is completely object-oriented, which allows easy part replacement and improvement. It is implemented in pure standard C++ and, apart from the driver libraries, does not use platform-dependent code. It can easily be ported to Linux or MacOS X, although it is designed to be best suited to the MS Windows OS. This object orientation also makes it easy to subclass parts of the framework in order to create completely new applications. The fact that we can devise new applications "on demand" is of considerable importance: we are also envisaging its use for dedicated psychophysical investigations.
The core system
The core system is responsible for handling the operating system (the platform-specific code is inserted here), the configuration, and the basic functionalities of a physically-based simulation. It provides a basic scene graph for managing the various objects that compose the virtual scene. This core system can accept many simulation algorithms along with different input methods. Classical mathematical methods and structures, such as geometric algebra (Dorst) [2], are also provided for the easy prototyping and evaluation of new or existing algorithms. The core system also provides a very flexible XML configuration, which parametrizes any aspect of the simulation, thus allowing test-beds to be created rather easily.
In addition to the configuration tools, a file format for holding together "geometrical" object properties has been devised. This format is open and flexible; moreover, additional data can be included and is ignored if not necessary to the simulation, even if it is unknown data. An importer and an exporter have been written for 3DSMax, along with C++ and C# libraries for loading these files efficiently. The exporter can, for now, export geometry, normal, binormal and texture information; however, it does not save the relationship between the texture information and the texture map (this link has to be recreated in the configuration files, but the result is of course exactly the same). This data format, exporter, importer and the C++ libraries have been jointly developed by A. Pocheville and M. Brunel. Details concerning the implementation can be found in Brunel's report (Rendu multimodal haute fidélité, rapport de fin d'étude).
The input and output system
While the input system needs to be flexible and able to manage many different inputs, the output system should ensure high-fidelity rendering along with refresh rates adequate for the addressed modality. Of course, the output system is tightly related to multi-modal integration. One of the most important points here is that the output system should be very flexible and accept almost any information as its input, in order to make it possible to prototype new integration schemes. However, there are limits to this flexibility: in the end, each modality still requires special handling.
The simulation system
The simulation is composed of the simulation manager and a set of simulation virtual objects.
The simulation system uses the core system for standard interaction with the computer and the user, and the input/output system for multi-modal interaction. Collision detection algorithms are part of the simulation system. The simulation system, like any other part of the framework, can be replaced or modified rather easily. It can be decomposed into two subsystems: the collision detection subsystem and the behavior model (the physics step). The collision detection step can also be benchmarked to a certain extent: for example, a penalty-based or constraint-based physics resolution has known inputs, so we can select collision detection algorithms that provide such information. The following schema illustrates this interaction between the collision detection output and penalty-, constraint- or impulse-based methods.
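To make the information exchanged at this interface concrete, the following is a minimal C++ sketch of a contact structure and a collision query that a penalty- or constraint-based behavior model could consume. The type and member names (Vec3, SimulationObject, ContactPoint, query) are illustrative assumptions, not the actual I-Touch interfaces.

#include <vector>

struct Vec3 { double x, y, z; };          // minimal stand-in for the framework's vector type
class SimulationObject;                    // simulation objects as described above (assumed)

// A contact is reported as a pair of surface points plus a normal and separation;
// penetration never occurs, so the separation is non-negative (within the tolerance).
struct ContactPoint {
    Vec3   onFirst;    // closest point on the first object's surface
    Vec3   onSecond;   // closest point on the second object's surface
    Vec3   normal;     // contact normal, oriented from the first towards the second object
    double distance;   // separation between the two points, 0 <= distance <= tolerance
};

// Abstract collision-detection interface: a penalty model can turn each contact into a
// spring-damper force, a constraint model into one constraint row.
class CollisionDetector {
public:
    virtual ~CollisionDetector() {}
    virtual std::vector<ContactPoint> query(const SimulationObject& a,
                                            const SimulationObject& b) = 0;
};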
Multimodal integration
One of the most challenging issues of I-Touch is multimodal integration. The visual, auditory and haptic senses have different refresh rates: from as low as 30 Hz for visual interaction up to as high as 10 kHz for 3D sound. Integrating each of these modalities is not a trivial task. Other attempts have addressed this by parallelizing the computation over several computers.
Here, we decided to push the limits by focusing on the use of a single computer, but this uncovers some problems, as exposed later.
Simulation engine flexibility
The fact that the simulation engine is completely flexible and modular allows different behavior models to be integrated with the same multimodal rendering. This is done through abstraction of the output methods. For example, the haptic rendering can easily be derived to create a new rendering, and these renderings can then be switched easily. Moreover, since the simulation engine uses the simulation objects as placeholders, almost any information can be provided to the output. For example, the simulation objects hold the whole normal map used in visual bump mapping and haptic bump mapping; this information can then be accessed at will.
However, the simulation engine has to provide some specific information to the output routines. For example, real-time 3D sound rendering requires contact information (and changes in contact through time). The immediate benefit of the ability to switch between renderings is that we can benchmark how well a simulation engine behaves with multimodal rendering. For example, bounce models have difficulties in rendering contact information with sound, while they provide excellent rendering of bounce sounds. It can be observed that
Collision detection integration
The I-Touch framework has a certain number of requirements for the collision detection module. It should be noted that, due to the architecture of the framework, interpenetration never occurs: for I-Touch, objects can only be touching. Objects are considered to be touching when they are less than a certain predefined distance away from each other. We will refer to this distance as the tolerance of the collision system. When a contact occurs, the contact area should be represented by a group of point pairs, each of which is on the surface of one object. The points on each object should represent the contact region of this object. No point pairs should be duplicated in the list, since each pair is later used to calculate the reaction force.
The collision detection module should be robust to handle polygon soups where duplicated vertices can be found and thus neighborhood information between triangles might be erroneous. The collision results should be calculated fast enough to leave the required time for the dynamical model to calculate the reaction forces at interactive rates.
Our proposed system provides the required information. It is composed of three layers:
The first is an acceleration layer that determines the triangle pairs having a high probability of colliding, thus needing further processing. The second layer performs our topology determination test on each of the pairs obtained from the first layer. This test obtains for each pair, the list of point pairs that defines the contact between these triangles. The last layer is where all the contact point pairs from the second layer are gathered, while eliminating duplicates. In the following, we will explain each of the three layers of our system.
Layer1
The first layer is an acceleration phase based on oriented bounding box (OBB) hierarchies. Each hierarchy is constructed top-down, starting with the first box at the object level, then dividing the triangles into two groups and constructing a bounding box for each group. Each leaf of a hierarchy tree contains one triangle. This construction produces 2n boxes for each object, where n is the number of triangles of that object. In order to account for the tolerance, we have to enlarge each box of the tree by half the value of the tolerance in the direction of each of the three axes of the box. This step is crucial since we consider a pair of triangles to be in contact when they are within the tolerance of each other. An example is shown in the following figure.
When traversing the trees, if the current nodes are both leaves, we add the pair of triangles that they contain to the list of triangle pairs that we will process in layer 2. If the nodes are not both leaves, we process their child boxes together. The output of this layer is the list of triangle pairs that need further processing.
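As an illustration of this first layer, the following C++ sketch shows the tolerance enlargement and the recursive traversal described above. All type names (Box, Node, TrianglePair) are simplified stand-ins, not the actual I-Touch classes, and the box-overlap test (a separating-axis test) is only declared, not shown.

#include <vector>

struct Box   { double halfExtent[3]; /* center and axes omitted for brevity */ };
struct TrianglePair { int first, second; };            // indices of the two candidate triangles

struct Node {
    Box   box;
    int   triangle = -1;                               // valid only for leaves
    Node* left  = nullptr;
    Node* right = nullptr;
    bool  isLeaf() const { return left == nullptr && right == nullptr; }
};

bool boxesOverlap(const Box& a, const Box& b);         // separating-axis test, not shown

// Account for the tolerance: grow each box by half the tolerance along its three local axes.
void enlargeForTolerance(Box& box, double tolerance)
{
    for (int i = 0; i < 3; ++i)
        box.halfExtent[i] += 0.5 * tolerance;
}

// Recursive traversal of two hierarchies, collecting candidate triangle pairs for layer 2.
void traverse(const Node* a, const Node* b, std::vector<TrianglePair>& out)
{
    if (!boxesOverlap(a->box, b->box))
        return;
    if (a->isLeaf() && b->isLeaf()) {
        out.push_back({ a->triangle, b->triangle });
    } else if (a->isLeaf()) {
        traverse(a, b->left, out);
        traverse(a, b->right, out);
    } else {
        traverse(a->left, b, out);
        traverse(a->right, b, out);
    }
}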
Layer2
The input of the second layer is the output of the first layer. We need to determine the contact type for each triangle pair in the list, which can be a point, edge or surface contact, and for each case find the points that delimit the contact zone for each triangle. We define the "margin" as the variation in distance within which we consider two vertices to lie in the same plane. For example, with v11, v12 and v13 the vertices of the first triangle, p2 the plane of the second triangle, the tolerance fixed to 10 and the margin fixed to 2, then if
d(v11, p2) = 11 d(v12, p2) = 5 d(v13, p2) = 6
we consider that we are potentially in a case of edge-plane contact, because v12 and v13 both lie in p2 within the tolerance and the margin.
Our algorithm starts by calculating the distance from each vertex of one triangle to the plane of the other. The distance is unsigned and does not depend on the direction of the normal. This is done to account for the case where the vertices of the triangles are given in reverse order.
This first part of the algorithm has the following pseudo-code, where t1 and t2 are the first and second triangles respectively, v11, v12, v13 and v21, v22, v23 are the vertices of t1 and t2 respectively, and p1 and p2 are the planes of t1 and t2 respectively.
For each vertex of t1
    Calculate the distance from this vertex to p2
End for
For each vertex of t2
    Calculate the distance from this vertex to p1
End for
If at least one vertex of t1 is in the plane of t2 or at least one vertex of t2 is in the plane of t1
    Determine the type of potential contact of t1 using the calculated distances
    Determine the type of potential contact of t2 using the calculated distances
    Call PlanContact(HighestContactOrder(t1, t2))
Else
    Call EdgeEdge()
End if
In order to determine the type of potential plane contact, we start by identifying the vertex with the smallest distance, and then verify whether the other two vertices of its triangle are within the margin. If both are, we are in a potential plane-plane contact. If only one is, it is a potential edge-plane contact for this triangle. If none is, it is a potential vertex-plane contact. The vertices within the margin, together with the closest one, are called active.
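As an illustration, this classification could be written as follows (a sketch only; classify_plane_contact is a hypothetical name, distances are the unsigned vertex-to-plane distances computed above, and the margin is interpreted here as the maximum allowed difference with the smallest distance, which reproduces the worked example with tolerance 10 and margin 2):

def classify_plane_contact(distances, tolerance, margin):
    # distances: unsigned distances of the triangle's three vertices to the
    # other triangle's plane. Returns the potential contact type and the
    # indices of the active vertices.
    closest = min(range(3), key=lambda i: distances[i])
    if distances[closest] > tolerance:
        return None, []                    # no plane contact for this triangle
    others = [i for i in range(3) if i != closest]
    within = [i for i in others
              if distances[i] <= tolerance
              and distances[i] - distances[closest] <= margin]
    active = [closest] + within
    kind = {2: "plane-plane", 1: "edge-plane", 0: "vertex-plane"}[len(within)]
    return kind, active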
PlanContact()
A plane contact can be a vertex-plane, an edge-plane, or a plane-plane contact. In order to determine all the point pairs that define the contact, this function starts by checking whether the active vertices of each triangle are inside all the planes defined by the edges and the normal of the other triangle. If a vertex is, we are sure it is a contact vertex, since we already verified that it is within the tolerance, and it and its projection onto the other triangle are added to the result pair list.
We then check the edges of the two triangles that have both end points active against each other. If their closest points are less than the tolerance away from each other, we add the closest points of the two edges to the result pair list. Some conditions have to be verified to ensure that no duplicate pairs are added to the list; they are omitted here for simplicity.
For each active vertex v of t1
If v is inside the triangular prism of t2
Add v and its projection onto t2 to the result list
End if
End for
The symmetric test is applied to the active vertices of t2.
EdgeEdge()
If no vertex of either triangle lies in the plane of the other, we must verify that we do not have edge-edge contacts before concluding that there is no contact.
This test checks all pairs of edges against each other, and adds the closest points on the two edges to the result list if the distance between them is less than the tolerance. We also have to check certain conditions to ensure that no duplicates are added to the list.
The pseudocode of this edge-edge test is the following:
For every edge e1 of t1
For every edge e2 of t2
If d(e1, e2) < tolerance
Add the closest points on e1 and e2 to the result list
End if
End for
End for
(Figure: an example of an edge-edge contact case.)
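The non-trivial primitive in this test is the computation of the closest points between two segments. The following sketch shows one standard way to implement it (clamped segment-segment closest points); edge_edge_contacts is our own illustrative wrapper, not the I-Touch function.

import numpy as np

def closest_points_on_segments(p1, q1, p2, q2, eps=1e-12):
    # Closest points between segments [p1,q1] and [p2,q2] (3D numpy vectors).
    d1, d2, r = q1 - p1, q2 - p2, p1 - p2
    a, e, f = d1.dot(d1), d2.dot(d2), d2.dot(r)
    if a <= eps and e <= eps:
        return p1, p2
    if a <= eps:
        s, t = 0.0, np.clip(f / e, 0.0, 1.0)
    else:
        c = d1.dot(r)
        if e <= eps:
            t, s = 0.0, np.clip(-c / a, 0.0, 1.0)
        else:
            b = d1.dot(d2)
            denom = a * e - b * b
            s = np.clip((b * f - c * e) / denom, 0.0, 1.0) if denom > eps else 0.0
            t = (b * s + f) / e
            if t < 0.0:
                t, s = 0.0, np.clip(-c / a, 0.0, 1.0)
            elif t > 1.0:
                t, s = 1.0, np.clip((b - c) / a, 0.0, 1.0)
    return p1 + s * d1, p2 + t * d2

def edge_edge_contacts(tri1, tri2, tolerance):
    # tri1, tri2: arrays of three 3D vertices; returns the closest point
    # pairs of the edges that are within the tolerance of each other.
    edges = lambda t: [(t[i], t[(i + 1) % 3]) for i in range(3)]
    out = []
    for a0, a1 in edges(tri1):
        for b0, b1 in edges(tri2):
            ca, cb = closest_points_on_segments(a0, a1, b0, b1)
            if np.linalg.norm(ca - cb) < tolerance:
                out.append((ca, cb))
    return out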
An important remark is that the case of an edge penetrating the face of the other triangle while being farther than the tolerance from all three of its edges is not detected.
This should not be a problem for I-Touch, since this situation is not allowed to happen.
When the end point of such an edge reaches the plane of the triangle, the I-Touch engine prevents penetration, and thus the above-mentioned situation never occurs.
Layer3
The last layer of the system deletes redundant pairs from the result list. This can be done by a brute-force algorithm of order O(n²), where n is the number of result pairs. Since the number of result pairs is usually not very high, such complexity is acceptable in most cases. However, we designed an optimized duplicate-pair remover, which checks only the results of neighboring triangles for duplicate pairs. Such a strategy is possible since duplicates can only exist among adjacent triangles.
Such an optimized version should only be used when the models are well constructed and the neighborhood information of the triangles is correct. If that is not the case, it is better to use the brute-force method.
The result of this layer is a list of point pairs, defining the contact between the two objects, and having no duplicates.
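A possible shape for the duplicate removal described above is sketched below (illustrative only; the Contact record and the neighbors dictionary are assumptions about how contacts and mesh adjacency are stored, and the neighbor-aware version only checks the adjacency of the first object's triangle for brevity):

from collections import namedtuple
import numpy as np

# A contact: ids of the two triangles and the contact point on each of them.
Contact = namedtuple("Contact", "tri1 tri2 point1 point2")

def same_pair(p, q, eps=1e-6):
    # Two contacts are duplicates when both points coincide within eps.
    return (np.abs(p.point1 - q.point1).max() < eps and
            np.abs(p.point2 - q.point2).max() < eps)

def dedup_brute_force(contacts, eps=1e-6):
    kept = []
    for c in contacts:                     # O(n^2) comparisons
        if not any(same_pair(c, k, eps) for k in kept):
            kept.append(c)
    return kept

def dedup_with_neighbors(contacts, neighbors, eps=1e-6):
    # Compare each contact only with those produced by the same triangle or
    # one of its neighbors; valid when neighborhood information is correct.
    kept, by_tri = [], {}
    for c in contacts:
        nearby = [k for t in (c.tri1, *neighbors.get(c.tri1, ()))
                  for k in by_tri.get(t, ())]
        if not any(same_pair(c, k, eps) for k in nearby):
            kept.append(c)
            by_tri.setdefault(c.tri1, []).append(c)
    return kept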
Sound integration
3D positional audio enhances the immersion of the operator in the simulation. We have two methods for rendering 3D sound: real-time rendering and semi-real-time rendering. The real-time rendering uses information directly provided by the simulation, such as changes in the friction map, to produce sound. It also uses object properties such as resonance frequencies to compute contact sounds [START_REF] Van Den Doel | FOLEYAUTOMATIC : Physicallybased Sound Effects for Interactive Simulation and Animation[END_REF]. While this is the correct method for producing friction and bounce sounds, it suffers from several drawbacks. First of all, it is very expensive in computation time and, on a system with only one processor, it can become the bottleneck of the simulation (and take the place of the collision detection!). Relocating the sound computations could perhaps solve this problem. Second, the sounds generated are, for now, less "realistic" than the ones produced by the second approach.
The following is an example of analysis of modal parameters of a material.
The semi-real-time sound rendering approach uses off-line recorded sounds of different materials in contact. These sounds are stored in a database according to material properties. They are used by the simulation as they are; only amplitude and/or frequency modulation (pitch, volume, ...) is applied. This method can be seen as the analogue of a vertex transform followed by a texture pass in visual rendering.
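A minimal sketch of this second approach is given below (illustrative only; the sample database, the reference values and the 0.1 pitch factor are placeholders, and the resampling is deliberately naive):

import numpy as np

# Hypothetical material database: one pre-recorded contact sample per material.
SAMPLES = {
    "wood":  {"data": np.zeros(44100), "ref_velocity": 1.0},
    "metal": {"data": np.zeros(44100), "ref_velocity": 1.0},
}

def contact_sound(material, impact_velocity, sliding_speed):
    # Volume is scaled by the impact velocity, pitch by the sliding speed;
    # the stored sample itself is otherwise used as is.
    s = SAMPLES[material]
    gain = min(impact_velocity / s["ref_velocity"], 1.0)
    pitch = 1.0 + 0.1 * sliding_speed
    n = max(1, int(len(s["data"]) / pitch))
    idx = np.linspace(0.0, len(s["data"]) - 1.0, n)   # naive resampling
    return gain * np.interp(idx, np.arange(len(s["data"])), s["data"])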
The following figure stresses the difference between the two methods:
Visual integration
Compared with the sound and haptic rendering, the visual one is the easiest. We can use the same geometry as the one used for the physics calculations, or a higher-level, smoother one for better rendering. Objects are linked to rendering information, such as geometry, material and alpha information, and pixel and vertex shaders. This allows almost any rendering of the objects, from a standard Gouraud-shaded plastic look to advanced Phong-shaded semi-reflecting materials with bump mapping. Dynamic lighting is also supported. The visual rendering is completely configuration controlled, so there is great flexibility in the rendering process. The fact that visual rendering is well understood and offers numerous techniques is very interesting here, because we can take inspiration from these techniques in the other renderings.
Haptic integration
Our approach differs from the previous ones. Indeed, we conceptually consider that the haptic devices (interfaces) interact with the simulation and not the reverse way, i.e. the haptic device does not drive the simulation. The point here is that a haptic device can be removed at will. Moreover, more than one haptic device does not pose any problem to this type of simulation, since the haptic rendering is totally decorrelated from the simulation loop. Obviously, haptic devices can induce a change in the course of the simulation, but they cannot compromise its integrity. To be clearer, the simulation does not take for granted what is requested by the input device and, in extreme cases, these particular inputs are ignored. In fact, this considerably enhances the stability of the interaction. For example, when the operator's actions tend toward violating a given non-penetration constraint, they are not taken into account integrally (as is the case for classical computer haptics APIs). This is better for engine stability, and allows many new algorithms to be used. However, haptic feedback has to be considered from another point of view. The basic principle behind this is explained in the following figure, in which the simulation loop answers ASK/ANSWER requests from Haptic Device 1 and Haptic Device 2 (Figure: principle of force handling).
The fact that haptic integration is not considered as a special rendering allows new synergies between renderings to be investigated. One example of this is the recently implemented haptic bump. As for visual bump mapping, we can simulate rough haptic surfaces through haptic bumps. We tried two different approaches: height-based forces and normal-based forces. The basic principle is the same: the force computed by the simulation engine is slightly modulated by a term which depends either on the height or on the normal. In our current implementation, the haptic bump only works with one contact point, but we are working on an extension to multiple points.
The following figure explains the principle behind the haptic bump:
As far as the "bump sensation" is concerned, the normal-based force gives superior results. The bump map used for the haptic bump is exactly the same as the one used for the visual bump, thus the two modalities match perfectly and the rendering is coherent.
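The following sketch illustrates the two modulation strategies (hypothetical function names; the bump map is assumed to be a 2D array of heights indexed by the texture coordinates of the contact point, and the gain values are arbitrary):

import numpy as np

def bump_force_height(base_force, bump_map, u, v, gain=0.2):
    # Height-based variant: scale the force magnitude by the local height.
    i = int(v * (bump_map.shape[0] - 1))
    j = int(u * (bump_map.shape[1] - 1))
    return base_force * (1.0 + gain * bump_map[i, j])

def bump_force_normal(base_force, bump_map, u, v, gain=0.2):
    # Normal-based variant: tilt the force along the local slope of the map,
    # which gives the stronger bump sensation reported above.
    i = int(v * (bump_map.shape[0] - 1))
    j = int(u * (bump_map.shape[1] - 1))
    gy, gx = np.gradient(bump_map)         # slopes along rows and columns
    tilt = np.array([gx[i, j], gy[i, j], 0.0])
    return base_force + gain * np.linalg.norm(base_force) * tilt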
Thermal integration
While there have been advances in thermal integration, difficulties remain in some areas. First of all, we have to be able to reproduce any material from its thermal properties.
For now, we can only approximate the law of the thermal interaction between a human finger and a material through a Peltier pump, and much progress is still to be made in order to deliver an almost exact replication of thermal sensations in the real world. Secondly, while the integration of the thermal sense is not very difficult in the simulation (mainly because the human thermal sense has a low refresh rate), it poses great difficulty from a practical point of view, where thermal feedback should be combined with kinaesthetic feedback. An efficient device is yet to be devised.
Putting them together
Many synchronizing algorithms exist but, for now, we have not experimented with them, because they are time consuming when added to the simulation engine. The computation of the different contacts, and then of the contact forces, takes most of the CPU time. As these computations are the main bottleneck of the simulation, the rendering is sufficiently fast. Moreover, multithreading does not harm this process, since the refresh rates are too high for a human to notice small differences.
The data is given to the renderers in this order (immediately one after the other): haptics, sound and vision. In the future it would be interesting to have scalable algorithms that are totally disconnected from the physical simulation; we will then need synchronization algorithms. This would prevent a new bottleneck (say, sound rendering) from harming the other displays. However, it also means that one of the modalities may have degraded performance; we will need to investigate how far we can degrade one modality without ruining the interaction.
Evaluation tools
Testing research projects is made easy with I-Touch; however, such testing requires analyzing data from the simulation. In I-Touch, every simulation variable can be "tagged" for recording; this allows after-run analysis of the simulation through dedicated tools. For example, the evolution of FPS data, or of the time taken to compute a frame (much more informative than FPS with regard to performance), can be viewed easily. The tagging is done by providing a pointer to the variable. Then, each frame (or at a given time), the variable is recorded. At the end of the simulation, a file that can be viewed in a special viewer is written. The current viewer is written in .Net and is provided with the I-Touch framework.
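A possible minimal implementation of such a recorder is sketched below (illustrative only; in the C++ framework the tag holds a pointer to the variable, which is emulated here with a zero-argument getter, and the CSV output merely stands in for the viewer's file format):

import csv

class Recorder:
    def __init__(self):
        self.tags = {}   # variable name -> getter returning its current value
        self.rows = []

    def tag(self, name, getter):
        self.tags[name] = getter

    def record_frame(self, time):
        self.rows.append({"time": time, **{n: g() for n, g in self.tags.items()}})

    def dump(self, path):
        with open(path, "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=["time", *self.tags])
            writer.writeheader()
            writer.writerows(self.rows)

# Usage: rec.tag("frame_time_ms", lambda: sim.last_frame_ms); call
# rec.record_frame(t) every frame and rec.dump("run.csv") at the end.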
Also, the debugging facilities and text functions in I-Touch make it easy to dump data. However, we want to go further, and new real-time tools are in development. Such tools will render the evolution of variables in real time, in numerous manners (time graphs, bars, standard text, etc.). Analysis tools specific to the simulation should also be included, such as automatic reporting of the number of contacts, the physics calculation time, or the time spent in rendering or in other tasks - this step is almost done, but we have to link it with the visual rendering. This will create a complete and easy-to-use evaluation tool, in order to further accelerate the development of test cases. This step is very important for the relationship that this project can have with psychophysical and other experiments.
Conclusion and future work
We have shown that we have built a prototype software framework that allows the creation of many test cases. They all have in common experiments with an operator, in a multimodal way. The haptic rendering is not given special treatment, which facilitates the evaluation of the haptic impact in a multimodal context. Integrating all these modalities in one framework is not an easy task. We have chosen the modularity path, which allows testing and imagining new scenarios. However, care is taken to provide each modality with coherent information, in order to allow a real symbiosis between the different senses.
We have succeeded in creating sample applications that use the flexibility of the I-Touch framework. However, integration with real psychophysical tests is yet to be done. Further work is oriented toward refining I-Touch through its multimodal component so that it can also serve as a psychophysics evaluation tool. Progressively, our aim is to evolve it into a complete piece of software that can serve haptic research. Future improvements will include more intimate collaboration between the different modalities, and new ways of defining what a modality is from a software point of view.
Figure 1: I-Touch framework architecture.
Figure 2: I-Touch interaction between collision detection and response.
Figure: Principle of force handling.
Figure: Example of haptic bump (combined with visual bump). The surface is in fact a plane.
04103793 | en | [
"shs.eco"
] | 2024/03/04 16:41:22 | 2023 | https://shs.hal.science/halshs-04103793/file/WP15.pdf | Manon François
Vincent Vicard
email: [email protected]
Manon Francois
email: [email protected]
Tax Avoidance and the Complexity of Multinational Enterprises
Keywords: Complexity, Firm organization, Multinational enterprises, Profit shifting, Tax avoidance. JEL Classification: F23, H2, L22
Does the complexity of the ownership structure of multinational enterprises' (MNEs) enable tax avoidance? We characterize as complex an MNE's ownership structure in which the headquarter owns its subsidiaries through a chain of intermediaries and we build a measure defined as the mean number of layers between affiliates and the headquarter. We use firm-level cross-country data to show that affiliates belonging to more complex MNEs are more likely to report zero profit, which is consistent with complexity enabling tax avoidance by multinationals. Our results underline that only the more complex MNEs shift profits away from their high-tax affiliates, while MNEs with flat ownership structures do not display such pattern.
Introduction
Multinational enterprises (MNEs) organize their production through a network of tens or even hundreds of affiliates located in different countries. While MNEs and their foreign affiliates are a building block of global value chains and account for a large share of international trade and production, 1 little is known about how MNEs organize the ownership of their network of affiliates and its consequences.
MNEs may implement a flat ownership structure in which the headquarter holds affiliates directly, or more complex structures involving chains of ownership in which subsidiaries are owned by intermediaries that may be located in different countries. 2 The choice of ownership structure is shaped by the organization of production, i.e. the industry diversification of the MNE, its geographic footprint or the degree of fragmentation of its production process and its outsourcing decisions. It also reflects other determinants including internal financing, expropriation risks or past M&A history (UNCTAD, 2016). 3 Tax motives also shape the organisation of MNE, as evidenced by the central role played by conduit entities in offshore financial centers or tax havens in tax planning strategies. 4 In this paper, we posit that a complex ownership structure may serve tax avoidance and investigate whether being part of more or less complex MNEs affects profit shifting between subsidiaries. We build a measure of complexity based on the number of layers of ownership between affiliates and the headquarter within an MNE. Comparing the reported profitability of affiliates belonging to different MNEs, we document which type of firms are more prone to shift profits away from high tax affiliates. The complexity of MNE appears to complement the instruments of profit shifting in tax avoidance schemes aiming at minimizing the overall tax liability of the group.
MNEs shift profits from high to low-tax subsidiaries using three main instruments: the manipulation of their transfer prices in trade in goods, debt shifting and the location of intangibles in tax havens and export of associated services. 5 These instruments however do not require specific forms of ownership and can occur between any two affiliates of the MNE, directly or indirectly related. The organization of the ownership structure 1 UNCTAD (2016) underline that less than 1% of MNEs have more than 100 affiliates, but these MNEs account for almost 60% of global MNE value added.
2 Alabrese and Casella (2020) find that the country of location of the direct and ultimate owners differs for 40% of foreign affiliates, involving complex ownership structures with investment chains crossing borders.
3 [START_REF] Lewellen | Internal Ownership Structures of U.S. Multinational Firms[END_REF] find that 51% of US multinationals have a flat ownership structure and 39% a highly complex one, but observable firm characteristics, such as size, age, industry, or diversification, explain only up to 37% of the variation in complexity across firms.
4 Garcia- [START_REF] Garcia-Bernardo | Uncovering Offshore Financial Centers: Conduits and Sinks in the Global Corporate Ownership Network[END_REF] show how different offshore financial centers (OFCs) are used in complex structure of multinationals' ownership as conduit or sink for locating capital and revenues. [START_REF] Phillips | Group subsidiaries, tax minimization and offshore financial centres: Mapping organizational structures to establish the 'inbetweener' advantage[END_REF] distinguish two types of affiliates in OFCs, stand-alone and in-betweeners, the later having control of a large share of the network of affiliates and allowing for aggressive tax planning strategies. See also [START_REF] Damgaard | What is real and what is not in the global fdi network? IMF Working Paper[END_REF] and [START_REF] Delatte | Grey zones in global finance: The distorted geography of cross-border investments[END_REF] for evidence of the role of tax havens as intermediaries using FDI data.
5 See section 2.
of the subsidiary network can however further support tax avoidance once profits have been shifted to low-tax affiliates. More complex structures facilitate tax treaty shopping through the use of intermediaries in conduit countries ( [START_REF] Van't Riet | Optimal tax routing: network analysis of FDI diversion[END_REF][START_REF] Hong | Tax treaties and foreign equity holding companies of multinational corporations[END_REF] or the use of hybrid financial instruments to reduce tax liabilities [START_REF] Johannesen | Tax avoidance with cross-border hybrid instruments[END_REF][START_REF] Hardeck | Assessing the Tax Benefits of Hybrid Arrangements -Evidence from the Luxembourg Leaks[END_REF], allowing to design tax-minimizing routing of dividends from subsidiaries up to the ultimate owner. As such, complexity would be a complement to the instruments of profit shifting. Ownership complexity is also associated with lower transparency over the activity of MNEs and more discretionary power to managers over other stakeholders [START_REF] Balakrishnan | Tax aggressiveness and corporate transparency[END_REF][START_REF] Atwood | The Complementarity Between Tax Avoidance and Manager Diversion: Evidence from Tax Haven Firms[END_REF]. From the perspective of tax authorities, such opacity increases the information burden for tracking tax obligations and the costs of coordination among national tax authorities.
We build two different databases. We first use information on the network of affiliates owned by 66,539 MNEs drawn from Orbis, including more than 1.3 million affiliates worldwide. We define as complex an MNE organization in which the headquarter owns its subsidiaries through a chain of intermediaries, possibly spanning several countries. Our measure of complexity of the ownership structure is so defined as the average number of layers of ownership between each affiliate and the ultimate owner of the MNE. The raw data show a large heterogeneity in complexity, increasing in the size of MNEs, the number of industries in which they operate and their presence in tax havens. Interestingly, more complex MNEs exhibit a lower effective tax rate but are not more profitable overall according to consolidated financial accounts.
We then use micro-data on unconsolidated accounts of some 212,000 European affiliates of multinational firms to investigate how their reported profitability varies with corporate taxation and MNE complexity. Our identification strategy aims to compare affiliates from the same sector and located in the same country that differ in the complexity of the group they belong to [START_REF] Johannesen | Are Less Developed Countries More Exposed to Multinational Tax Avoidance? Method and Evidence from Micro-Data[END_REF]; [START_REF] Bilicka | Organizational Capacity and Profit Shifting[END_REF]). Such multi-country setting allows to control for all country characteristics, including the corporate tax rate, likely to affect reported profit in different countries, using fixed effects. We focus on the incidence of zero reported profit, which has been shown to be an important margin of profit shifting. 6We first show a larger bunching around zero reported profit for affiliates belonging to more complex multinational enterprises, consistent with tax avoidance being enabled by complex ownership structures. Such effect is economically significant when compared to other measures of profit shifting using the tax rate differential. Our results are not driven by other dimensions of complexity of the network of affiliates, such as the number of countries or industries in which the MNE operates, nor by the size of the MNE or its mere presence in tax havens. We also find that the relevant dimension of complexity is at the MNE level, while the layer of the subsidiary itself in the network does not matter.
We then ask whether complexity works as an enabler of profit shifting away from high-tax affiliates and show that profit shifting is magnified within complex MNEs. Indeed, high-tax affiliates belonging to complex MNEs tend to bunch more than hightax affiliates belonging to less complex MNEs. We further show that only more complex MNEs shift profits away from their high-tax affiliates, while the probability of reporting zero profit does not depend on the tax rate differential for MNEs with flat ownership structures. Complexity therefore enables profit shifting activities.
Finally, looking at the pattern of profit shifting within MNEs, our analysis shows that affiliates directly held through tax havens are more likely to report zero profit than other affiliates in more complex MNEs. This suggests that the profit allocation within MNEs partly rests on the development of the ownership network in tax havens.
This paper contributes to the literature on profit shifting using firm-level data, which focuses on the incentives to shift profit, measured as the tax wedge between a given affiliate and other affiliates of the MNE (see e.g. Huizinga et al. (2008); [START_REF] Johannesen | Are Less Developed Countries More Exposed to Multinational Tax Avoidance? Method and Evidence from Micro-Data[END_REF]). 7 We add to this literature by providing evidence on the types of MNEs more likely to shift profits.8 Our results also emphasize that a relevant dimension of heterogeneity in profit shifting is the complexity of the ownership network at the MNE level, and not the position of the subsidiary alone in the network, in line with tax strategies being decided at the headquarter level. In this respect, our paper complements the existing literature that has focused on subsidiary-level characteristics likely to affect profit shifting: firms' organizational capacity [START_REF] Bilicka | Organizational Capacity and Profit Shifting[END_REF] or size [START_REF] Davies | Knocking on Tax Haven's Door: Multinational Firms and Transfer Pricing[END_REF][START_REF] Wier | The dominant role of large firms in profit shifting[END_REF].
We also connect the literature on profit shifting to the literature on treaty shopping. [START_REF] Hong | Tax treaties and foreign direct investment: A network approach[END_REF] and [START_REF] Petkova | On the relevance of double tax treaties[END_REF] both provide evidence on the positive relationship between the existence of a tax-minimizing route and FDI and [START_REF] Hong | Tax treaties and foreign equity holding companies of multinational corporations[END_REF] shows that favorable tax treaty networks are positively associated with the use of equity holding companies. [START_REF] Van't Riet | Optimal tax routing: network analysis of FDI diversion[END_REF] show that treaty shopping reduces the tax liability on dividends by about 6 percentage points. We add to this literature by showing that ownership complexity supports profit shifting in line with complexity driven by chains of ownership allowing treaty shopping.
This paper is also related to the literature on MNEs organization. [START_REF] Altomonte | Business groups as knowledge-based hierarchies of firms[END_REF] propose a knowledge-based model of business groups in which the optimal organizational structure depends on production and problem solving efficiency. [START_REF] Altomonte | TNCs' global characteristics and subsidiaries' performance across European regions[END_REF] show that affiliates belonging to MNEs whose network is more geographically widespread but less diversified exhibit better performance. We add to this literature by showing that beyond economic determinants, the choice of ownership structure by MNEs affects the reported profitability of affiliates located in different jurisdictions with different corporate tax rates. Directly related to ours, two papers in the accounting literature have investigated the tax determinants of foreign affiliate ownership. [START_REF] Dyreng | The effect of tax and nontax country characteristics on the global equity supply chains of U.S. multinationals[END_REF] explore how US multinationals use foreign holding companies, and show that both tax and non-tax determinants affect the choice to use such intermediary and its location. [START_REF] Blouin | Does Tax Planning Effect Organizational Complexity: Evidence from Check-the-Box[END_REF] investigate the introduction of the check-the-box regulation in the US in 1997 and find that it incentivized MNEs to alter their organizational structure to take advantage of the new regulation tax planning potential.9 Our analysis complements those papers by showing the consequences of alternative ownership choices at the MNE level on profit shifting.
The paper is organized as follow. We define our measure of complexity in Section 2 and introduce a conceptual framework to understand the relationship between complexity and tax avoidance. Section 3 provides descriptive evidence of the level of complexity across firms and the characteristics of complex MNEs. Section 4 presents our methodology and main results, with associated robustness exercises gathered in Section 5.
Conceptual framework
We propose that the complex ownership structure of multinational subsidiaries supports profit shifting by MNEs through reallocating profits away from high-tax affiliates toward low-tax affiliates. In this section, we discuss how ownership complexity relates to tax avoidance strategies and present our measure of complexity.
How complexity matters for tax avoidance strategies
The literature has convincingly shown that tax avoidance by multinational enterprises is significant globally: [START_REF] Tørsløv | The missing profits of nations[END_REF] find that 36% of MNEs profits are shifted to tax havens in 2015, reducing tax revenues in high-tax countries accordingly. MNEs engage in profit shifting through three main channels. They can use transfer prices on trade in goods [START_REF] Bernard | Transfer pricing by u.s.-based multinational firms[END_REF][START_REF] Cristea | Transfer pricing by multinational firms: New evidence from foreign firm ownerships[END_REF][START_REF] Vicard | Profit shifting through transfer pricing: evidence from French firm level trade data[END_REF][START_REF] Davies | Knocking on Tax Haven's Door: Multinational Firms and Transfer Pricing[END_REF][START_REF] Wier | Tax-motivated transfer mispricing in south africa: Direct evidence using transaction data[END_REF][START_REF] Liu | International Transfer Pricing and Tax Avoidance: Evidence from Linked Trade-Tax Statistics in the United Kingdom[END_REF], intra-firm debt shifting (Huizinga et al., 2008;[START_REF] Fuest | International debt shifting and multinational firms in developing economies[END_REF] and the location of intangibles assets in tax havens [START_REF] Karkinsky | Corporate taxation and the choice of patent location within multinational firms[END_REF][START_REF] Dischinger | The role of headquarters in multinational profit shifting strategies[END_REF][START_REF] Hebous | At your service! The role of tax havens in international trade with services[END_REF] to shift profits from high to low-tax subsidiaries. Profits can be shifted between any two subsidiaries or between a subsidiary and the headquarter of the MNE. Using those three instruments of profit shifting therefore does not directly require any specific ownership structure of the firm or a direct ownership link between the two affiliates involved.
Whatever their location inside the firm, profits are then routed as dividends to the headquarter and shareholders. Here the ownership structure of the MNE can be designed so as to minimize the tax incurred to remit dividends to the parent firm. By intermediating a conduit entity in-between the parent and its subsidiary, and locating it in a jurisdiction with favorable tax treaties with both the source country of the subsidiary and the country of the ultimate parent, an MNE can take advantage of specific provisions reducing its tax liability on dividends or other passive incomes. Such treaty shopping enables MNEs to take advantage of lower taxation by redirecting investment through a third country. 10 By doing so, they increase the complexity of their ownership network by setting foreign holding subsidiaries in countries that offer favorable tax regimes, instead of directly holding subsidiaries in which they operate real activities.
Tax treaties are bilateral agreements aiming at preventing double taxation and facilitating cross-border activities. At the global level, more than 3,500 bilateral tax treaties are in force, covering more than 80% of ownership links (UNCTAD, 2016). They provide for specific provisions defining the tax treatment of income earned abroad by residents of the two contracting authorities. By setting specific tax provisions at the bilateral level, tax treaties enable MNEs to design tax efficient chains of investments that minimize their tax liability when routing dividends up to the ultimate owner and managing cash within the firm. It is worth noting that routing dividends through a number of intermediary countries may generate tax liability at each stage, except for routes through countries that specifically provide tax provisions to avoid taxation, such as tax havens characterized by a network of favorable tax treaties [START_REF] Palan | Tax Havens: How Globalization Really Works[END_REF].
Of particular interest in our case, tax treaties define the scope, rate and applicability of withholding taxes. Withholding taxes are taxes applied in the source country on dividend, interest, and royalty payments made to residents of a foreign country. Since withholding tax rates are defined by tax treaties, they depend on the location of the parent and the subsidiary. 11 Additionally, the EU parent-subsidiary directive imposes a zero withholding tax rate on dividends distributed by a subsidiary to its parent when both are EU residents, and provides for dividend participation exemption. Multinational corporations may therefore organize indirect ownership chains to exploit specific tax treaty provisions on withholding taxes. [START_REF] Van't Riet | Optimal tax routing: network analysis of FDI diversion[END_REF] show treaty 10 One famous example of such treaty shopping is the Double Irish with a Dutch Sandwich scheme orchestrated by several US multinationals in the 2000s and 2010s. It involved two Irish subsidiaries, one tax resident in Bermuda and one tax resident in Ireland and fully owned by the former, and a Dutch subsidiary [START_REF] Jones | Tax haven networks and the role of the big 4 accountancy firms[END_REF].
11 For instance, the Netherlands have double tax treaties with over 90 countries. Although the number of tax treaties is not very different from the EU average, the main difference lies in its generosity. Over 80% of all tax treaties signed by the Netherlands offer a zero withholding tax rate on dividends, royalties and interests, against a standard withholding tax rate of 15% for dividends and of up to 25.8% for interests and royalties in 2022 (https://www.ey.com/en_gl/tax-guides/ worldwide-corporate-tax-guide). In 2019, the Netherlands had approximately 12,400 conduit companies representing approximately 550% of their Gross Domestic Product (GDP). The amount related to interest, royalty and dividend payments that flow through these conduit companies annually represented about A C170 billion between 2015 and 2019 (https://www.government.nl/documents/reports/ 2021/10/03/the-road-to-acceptable-conduit-activities).
shopping gains on dividend repatriation for two thirds of country pairs in their sample, and an average reduction of the tax liability on dividends by 6 percentage points thanks mainly to lower withholding taxes on indirect repatriation routes. Similarly, [START_REF] Hong | Tax treaties and foreign direct investment: A network approach[END_REF] finds a tax minimizing indirect route for 39% of country pairs in a smaller sample of 70 countries, and a treaty shopping reduction of 9.4 percentage points corresponding to three quarters of the withholding tax rate on dividends. Controlled Foreign Corporation (CFC) rules are another relevant dimension of MNEs taxation. They are applied by the country of the parent company and attribute some passive income -e.g. dividends, interests, and royalties -of a low-taxed foreign subsidiary to its parent company for purpose of taxation. Depending on specific national legislations, CFC rules may not apply to subsidiaries located in countries with a double taxation treaty or to subsidiaries performing substantial economic activity. MNEs have therefore incentives to locate holding affiliates in countries that do not impose CFC rules. Within the EU, the applicability of CFC rules has been restricted to purely artificial schemes since 2006, as per the Cadbury-Schweppes ruling [START_REF] Schenkelberg | The Cadbury Schweppes judgment and its implications on profit shifting activities within Europe[END_REF].
A separate dimension in which complex ownership structures may facilitate profit shifting is through the creation of opacity in the functioning of the MNE. Creating opaque schemes offers more discretionary power to managers that can implement corporate diversion (i.e., a transfer of wealth from shareholders to managers). At the same time, corporate diversion is associated with tax avoidance, especially when the cost of diversion is low [START_REF] Desai | The demand for tax haven operations[END_REF][START_REF] Desai | Theft and taxes[END_REF]. Cross-border ownership chains also blur the investor nationality [START_REF] Alabrese | The Blurring of Corporate Investor Nationality and Complex Ownership Structures[END_REF] and make the identification of indirect ownership links more difficult. These phenomenon are reinforced by the use of tax havens that provide secrecy and thus increase the burden of information on tax and regulatory authorities and reduce their effectiveness.12 As such, complex ownership structures may directly facilitate the use of regular instruments of profit shifting.
Measuring complexity
We are interested in the complexity of affiliate ownership at the level of a multinational enterprise. A complex ownership structure refers to an MNE organization in which the headquarter owns its subsidiaries through a chain of intermediaries, possibly spanning several countries, contrary to a flat or horizontal structure in which subsidiaries are held directly by the headquarter.
We use a simple measure of complexity defined as the mean number of layers of ownership over all affiliates of a multinational company, following UNCTAD (2016). The number of layers is the number of ownership links between the global ultimate owner (GUO) of the MNE and the affiliate. The ultimate owner is the individual or entity at the top of the corporate ownership structure. 13 An affiliate held directly by the ultimate owner is at layer 1 while a subsidiary held through one intermediary is at layer 2. Figure 1 illustrates the computation of our complexity measure.
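As an illustration, the measure can be computed directly from the list of parent-subsidiary links. The following sketch (ours, not the paper's code; it uses the networkx package and hypothetical entity labels) reproduces the two polar cases discussed in the text, a flat MNE and a single vertical chain of 10 affiliates:

import networkx as nx

def mean_layers(guo, ownership_links):
    # ownership_links: list of (parent, subsidiary) pairs; with cross-ownership
    # the shortest path from the ultimate owner to each affiliate is kept.
    g = nx.DiGraph(ownership_links)
    depths = nx.single_source_shortest_path_length(g, guo)
    layers = [d for node, d in depths.items() if node != guo]
    return sum(layers) / len(layers)

flat = [("GUO", "A%d" % i) for i in range(10)]
chain = [("GUO", "A0")] + [("A%d" % i, "A%d" % (i + 1)) for i in range(9)]
print(mean_layers("GUO", flat))    # 1.0
print(mean_layers("GUO", chain))   # 5.5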
Such a measure accounts for both the length of the chain of holdings and the number of affiliates at each layer of ownership, as can be seen from Figure 1. The more vertical the structure, the higher the value of our complexity measure. A perfectly horizontal MNE (with at most one layer between the ultimate owner and each affiliate) will always have a complexity measure of 1, irrespective of the number of affiliates, while an MNE with 10 affiliates held through a single ownership chain with 10 vertical ownership links will have a score of 5.5. The complexity score also increases the further away from the ultimate owner the mass of subsidiaries is located in the ownership structure.
Orbis provides information on the ownership structure of affiliates up to 10 layers. In case of cross-ownership, we keep the shortest path from the affiliate to the ultimate owner. Table C16 in Appendix C shows that on average, multinational enterprises have 1.4 layers of ownership between their headquarter and their affiliates. We can however observe a large heterogeneity in the complexity of ownership structures. More than half of the MNEs in our sample have only one layer, which implies that they have a horizontal structure: one parent firm directly owns all of its affiliates. The top 25% of firms, on the contrary, have on average 1.5 layers, with the most complex firms in our sample having a maximum of 10 layers of ownership.14
We also use three alternative measures of complexity of the ownership network as robustness checks.15 First, we use the maximum layer within the MNE (as in [START_REF] Wagener | The Relevance of Complex Group Structures for Income Shifting and Investors' Valuation of Tax Avoidance[END_REF]), which considers only the verticality of the ownership structure. Second, we build on [START_REF] Ajdacic | The wealth defence industry: A large-scale study on accountancy firms as profit shifting facilitators[END_REF] and use an entropy measure that takes into account the maximum number of layers within the MNE, but also the number of subsidiaries at each layer. Such a measure increases with the number of layers but also with a more even distribution of affiliates across layers, reflecting the idea that an MNE with one parent owning 99 affiliates directly and one indirectly is less complex than an MNE with one parent owning 50 affiliates, each owning another affiliate. Finally, we use a skewness measure of the distribution of affiliates across layers for MNEs that have at least two layers (from [START_REF] Altomonte | Business groups as knowledge-based hierarchies of firms[END_REF]).16
Descriptive evidence
Data source
We use micro data from the Orbis database, maintained by Bureau van Dijk. It provides cross-country ownership and financial information (from the balance sheet and profit and loss accounts) for corporations, allowing us to measure the global reach of MNEs through their network of affiliates and to analyze cross-country micro-data from financial accounts at the affiliate level. The information is adapted by Bureau van Dijk to make it comparable across countries.
First, we focus on consolidated accounts and information on the MNEs' ownership network. We build a database with information for the year 2018 at the group level of multinational firms that have at least one affiliate in the European Union. We retrieve information on the networks of affiliates and the level of ownership for each affiliate so that we have a view on the structure of the group. The final sample includes 66,539 groups owning a total of 1,330,423 affiliates worldwide.
Table 1 shows descriptive statistics for the MNEs in our sample. The average MNE has 16 affiliates, spread over 4 different countries and 3 different industries. The largest MNEs are present in more than 150 countries, emphasizing the importance of crossborder ownership links in such firms. Most MNEs do not have any presence in a tax haven: in our sample, 27% of all MNEs have at least one affiliate in a tax haven, in line with 20% for German MNEs reported by [START_REF] Gumpert | Multinational Firms and Tax Havens[END_REF]. This also suggests the role of tax havens and OFCs as determinants of ownership structure.
Second, we build a database at the unconsolidated level. We focus on the affiliates of the MNEs in the consolidated database located in the European Union. We only consider affiliates that have unconsolidated accounts and basic financial information shareholdings where one entity is owned (fully or partially) by the same entity in which it owns a stake. UNCTAD (2016) documents that the vertical dimension of complexity captured by our measure dominates for the largest MNEs.
16 Appendix C provides additional details on their computation; the results are presented in Section 5.1. available. The final sample includes 212,516 affiliates.17 We use the return on assets, measured as earnings before interest and tax plus financial profits divided by total assets, as our measure of profitability at the subsidiary level.
There are some limitations related to the use of Orbis: the coverage is uneven by country for financial information since reporting requirements differ across countries. 18In particular, the coverage of Orbis is limited for balance sheet information in tax havens, some of which have no credit registry. Note however that Orbis does report information on the ownership structure of the MNE including tax haven affiliates even when no balance sheet information is available [START_REF] Garcia-Bernardo | Multinational corporations and tax havens: evidence from country-by-country reporting[END_REF].
Regarding the ownership information, we have the cross-sectional information as it stands at the date of download of the data. Therefore, we do not have information about mergers, acquisitions or the creation of affiliates. Finally, we use balance sheet data instead of tax return data. [START_REF] Bilicka | Comparing UK Tax Returns of Foreign Multinationals to Matched Domestic Firms[END_REF] shows that there exists a reporting difference for MNEs between their accounting information and their tax return information. We might therefore be underestimating the profitability effect of complexity.
Statutory tax rate information comes from the Tax Foundation and the list of tax havens is from [START_REF] Hines | Fiscal Paradise: Foreign Tax Havens and American Business[END_REF].
Complexity and size
The number of layers mechanically depends on the number of affiliates, especially when the number of affiliates is low. An MNE with one subsidiary has a maximum number of layers of one, an MNE with two subsidiaries has a maximum number of layers of two, etc., as shown in table B14 in Appendix B. Yet, figure 2 shows that there is heterogeneity in the complexity of MNEs within bins of number of affiliates. The median complexity increases with the number of affiliates, from 1 for MNEs with less than four affiliates to just below 2 for MNEs with more than 100 affiliates, underlying that most affiliates are owned through at most 2 layers of control. The distribution of MNEs' complexity is however heterogeneous within bins of size: the 90th percentile exceeds 2 for firms with at least six affiliates and reaches 4 for the largest MNEs. The complexity of MNEs' ownership structure as measured by the number of layers is therefore conditioned by the size of the network of affiliates. In the following, to account for this relationship between size and complexity, we will systematically condition on the number of affiliates using fixed effects by bins of size. 19 Table B15 shows that size fixed effects explain close to one third of the variance in complexity across MNEs, while industry and country fixed effects have a limited explanatory power.
The determinants of complexity of corporate structure
In this section, we explore how the complexity of corporate structure varies across MNEs along several characteristics. We regress complexity on a set of variables measuring different dimensions of subsidiary networks, including tax-related factors (a dummy for having at least one subsidiary in a tax haven) and non-tax factors (the geographical footprint of the firm and the sectoral diversification of a group). Diversification is defined here as the number of different industries (NACE 2-digit code) in which affiliates of the group operate. Results reported in column (1) of Table 2 show that, as expected, MNEs with a presence in a tax haven have a more complex ownership structure, in line with tax haven affiliates serving as conduit entities. More diversified MNEs also exhibit larger complexity levels, while the opposite is true regarding the number of countries in which the MNE operates.
In column (2), we add characteristics of the country of location of the global ultimate owner: an indicator variable equal to one when the origin country is a tax haven and its corporate tax rate. While the latter is not significantly associated to complexity, MNEs whose global ultimate owner is incorporated in a tax haven are significantly more complex.
Finally, in columns (3)-( 5), we consider how complexity is correlated with different measures of performance of the firm using the consolidated financial information. More complex MNEs are more productive (measured as labor productivity in column ( 6)) but are not more profitable, as measured by the return on assets (column ( 7)). They also exhibit a lower effective tax rate (measured as tax paid divided by profits; column (8)), in line with our assumption that complexity may serve tax avoidance. While such relationships are correlations, and do not imply causation, they are informative in showing that both tax and non-tax factors are related to the complexity of the ownership network. Our empirical methodology below focuses on the reported profitability of affiliates of MNEs and the distribution of profits within the MNE. In interpreting our results, it is interesting to keep in mind that, at the consolidated level, more complex MNEs do not exhibit a lower return on assets.
Complexity and tax avoidance
Methodology
Our empirical strategy focuses on the propensity to report zero profit for a firm. When the cost to shift profit is fixed, or when the cost is variable but low enough, MNEs have incentives to shift all their profits away from high-tax affiliates to reduce their overall tax liabilities [START_REF] Bilicka | Comparing UK Tax Returns of Foreign Multinationals to Matched Domestic Firms[END_REF][START_REF] Johannesen | Are Less Developed Countries More Exposed to Multinational Tax Avoidance? Method and Evidence from Micro-Data[END_REF]. Such bunching at zero profit may also result from non-tax incentives, so that we need to compare firms located in the same country and in the same industry that face different incentives or opportunities to shift profit, because of the characteristics of the MNE they belong to.
We estimate the probability of bunching around zero profit of firm i held by a multinational enterprise j, located in a country c and operating in sector k:
\mathbb{1}^{zero}_{i} = \beta_0 + \beta_1 Tax^{for}_{ij} + \beta_2 Complex_{j} + \theta_k + \theta_c + \theta_s + \epsilon_i \qquad (1)
where 1^{zero}_i is a dummy variable for firms reporting zero profit, measured as a return on assets between -0.5% and 0.5%. Complex_j is our measure of complexity of the MNE j to which affiliate i belongs. Tax^{for}_{ij} is the unweighted average tax rate of all affiliates and the GUO but affiliate i within the MNE. θ_c and θ_k are country and industry (NACE 2-digit) fixed effects respectively. Finally, θ_s are fixed effects by bins of MNE size measured as the number of affiliates, with bins s = [1; 2; 3; 4; 5; 5/10; 11/25; 26/100; > 100].20 The model is a cross-sectional specification for the year 2018. The introduction of country fixed effects controls for all country-specific characteristics likely to explain the average profitability of firms and probability of reporting zero profit (including the effective tax rate, specific tax regimes or the average book-tax difference in a country). Equation 1 is estimated through OLS, with robust standard errors clustered at the MNE level in line with our variable of interest.
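For concreteness, this specification can be estimated as a linear probability model with standard tools. The sketch below is ours, with assumed column names in a pandas DataFrame df holding one row per affiliate (and no missing values); it uses statsmodels and clusters standard errors at the MNE level:

import statsmodels.formula.api as smf

def estimate_eq1(df):
    # Linear probability model with country, industry and size-bin fixed effects.
    model = smf.ols(
        "zero_profit ~ tax_foreign + complexity"
        " + C(country) + C(industry) + C(size_bin)",
        data=df,
    )
    # Standard errors clustered at the MNE level.
    return model.fit(cov_type="cluster", cov_kwds={"groups": df["mne_id"]})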
We focus on affiliates of MNEs, disregarding domestic firms that have no opportunity to shift profits to foreign affiliates. Our identification comes from comparing affiliates in a given country and industry belonging to MNEs of similar size in terms of the number of affiliates but differing in terms of their average tax rate (and so incentives to shift profits) and complexity (opportunity to shift profits).
Before turning to empirical results, we present graphical evidence of bunching at zero profit specific to more complex MNEs. Figure 3 reports the distribution of returns on assets for our sample of European affiliates, distinguishing more complex MNEs (complexity above the median) and less complex MNEs (complexity below the median). It shows a clear bunching at zero of the reported return on asset for all corporations: about 10.2% of affiliates on the whole sample report close to zero profits (defined as above as a return on assets between -0.5% and 0.5%).21 But bunching at zero profit is more prevalent among affiliates of more complex MNEs: 10.6% of them report zero profit against 9.8% for affiliates in less complex MNEs. Note: Less (more) complex MNEs are affiliates belonging to a MNE below (above) the median of complexity. Sample restricted to ROAs between -5% and 5%.
Complexity and bunching at zero profit
Results from estimating equation (1) are shown in Table 3. In columns (3) and (4), we augment the model by adding our measure of ownership network complexity. Complexity has a positive and significant impact on the likelihood of reporting zero profits. Corporations belonging to more complex MNEs are therefore more likely to report zero profits than other corporations in the same country and industry. Note that adding our complexity measure leaves the coefficient on the parent tax rate (column (3)) or the average tax rate (column (4)) broadly unchanged but improves the precision of their estimates.
Our point estimate in column (4) implies that increasing average complexity of the MNE by one layer increases the likelihood of reporting zero profits by 0.5pp. Considering that the baseline rate of bunching around zero profits is 10.2%, increasing average complexity of the MNE by one layer increases bunching at zero by about 5.4%. As a benchmark, a 10 percentage point decrease in the average foreign tax rate increases the share of corporations reporting zero profits by around 10 percent. The effect of complexity on tax avoidance is thus economically significant.
In columns ( 5) and ( 6), we add a variable measuring the subsidiary position in the ownership tree (Subsidiary Level) of the MNE to assess whether the relevant dimension of complexity pertains to the ownership organization of the MNE as a whole or to the distance of the subsidiary itself to the headquarter. The insignificant coefficient on the subsidiary level confirms that the relevant dimension is the complexity at the MNE level, which does not systematically affect affiliates further away from the ultimate owner. This is consistent with the idea that tax planning is a strategic decision of the MNE and is defined for the whole group and not subsidiary by subsidiary.
Complexity as an enabler of profit shifting
We then ask the question: does complexity in ownership network work as an enabler of profit shifting away from high-tax affiliates? To assess this, we add an interaction term between complexity and the average tax rate of all affiliates but affiliate i as follows:
\mathbb{1}^{zero}_{i} = \beta_0 + \beta_1 Tax^{for}_{ij} + \beta_2 Complex_{j} + \beta_3 Tax^{for}_{ij} \times Complex_{j} + \theta_k + \theta_c + \theta_s + \epsilon_i \qquad (2)
We expect the MNE complexity to magnify the impact of the tax rate of foreign affiliates, i.e. β 3 to be negative. Results are reported in column (1) of Table 4. It shows a negative coefficient on the interaction between tax rate of other affiliates and complexity: complexity magnifies the difference in the probability to report zero profit between high and low-tax affiliates. Such result is consistent with complexity playing a facilitating role in profit shifting. To illustrate the impact of complexity on profit shifting, we plot in Figure 4 the linear prediction of reporting zero profit depending on the average tax rate of other affiliates within the group, and the 95% confidence intervals. We consider 3 cases: affiliates belonging to an MNE at the 10th percentile of complexity (Complex j = 1), at the median level (Complex j = 1.6) and at the 90th percentile (Complex j = 3.15). Predicted zero profit decreases sharply with the average tax rate faced by other affiliates within the MNE for MNEs at the 90th percentile of complexity. More complex MNEs report significantly more zero profits in high-tax affiliates than in low-tax affiliates. While such negative relationship, although flatter, is still present for MNEs at the median of complexity, it flattens completely for less complex MNEs at the 10th percentile.
Figure 4 therefore shows that complexity in the ownership network of affiliates works as an enabler of profit shifting away from high-tax affiliates, and that profit shifting between affiliates prevails only in sufficiently complex MNEs.
Note (Figure 4): 95% confidence intervals. The complexity levels correspond to the 10th, 50th and 90th percentile levels of complexity in the sample.
Profit shifting within complex MNEs
Finally, in this section, we investigate how the structure of affiliate ownership affects the allocation of profits within the MNE. Given the role of intermediaries located in tax havens in tax avoidance schemes, we hypothesize that, within more complex MNEs, affiliates held through a tax haven are more likely to report zero profit than other affiliates. To test this, we estimate the following:
1^{zero}_i = \beta_0 + \beta_1 Tax^{for}_{ij} + \beta_2 Complex_j \times THhold_{ij} + \beta_3 THhold_{ij} + \theta_c + \theta_k + \theta_j + \epsilon_i,   (3)
where THhold_{ij} ("Held by a TH" in Table 4) is a dummy equal to one if firm i is held directly by an affiliate located in a tax haven. Note that this specification controls for any potential omitted variable bias at the level of the MNE through the MNE fixed effects \theta_j. Here we exploit differences among affiliates within MNEs to fully control for the characteristics of the multinational enterprise, and consider the role played by chains of ownership through tax havens, together with complexity, in enabling tax avoidance.
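A within-MNE version of this specification can be sketched along the same lines. Including MNE fixed effects absorbs every group-level characteristic, including complexity itself, so only the interaction with the tax-haven-holding dummy remains identified; as before, all column names are placeholders.

```python
import statsmodels.formula.api as smf

# 'th_held' = 1 if the affiliate is directly held by an entity located in a tax haven.
# C(mne_id) plays the role of the MNE fixed effects theta_j in equation (3).
spec = (
    "zero ~ tax_for + th_held + th_held:complexity "
    "+ C(country) + C(nace2) + C(mne_id)"
)
res = smf.ols(spec, data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["mne_id"]}
)
print(res.params[["th_held", "th_held:complexity"]])
```

With tens of thousands of groups, creating one dummy per MNE is slow; absorbing the fixed effects with a high-dimensional fixed-effects routine gives the same point estimates more efficiently.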
Results are presented in columns (2)-(4) of Table 4, where we introduce THhold_{ij} and its interaction with complexity in turn. Column (2) shows that, on average, affiliates held through tax havens are not different from other affiliates. Column (3), however, shows that not all MNEs are alike in their internal allocation of profits: affiliates held through a tax haven are more likely to bunch at zero profit when they belong to a complex MNE. The coefficient on the interaction term is positive and significant. Finally, column (4) confirms that this result is robust to the inclusion of MNE fixed effects, i.e. it is not driven by other unobserved characteristics of more complex MNEs.
Such allocation of profit between affiliates of more complex MNEs underlines that chains of ownership going through tax havens are central to tax avoidance strategies.
Confounding factors at the MNE level
The complexity of the ownership network may also be correlated with other tax and non-tax characteristics of the MNE, as underlined in Section 3.3. In Table 5, we control for such confounders at the MNE level and show that complexity is the relevant MNE characteristic with regard to tax avoidance.
We first test whether the location of the global ultimate owner matters. In particular, Table 2 shows that MNEs whose global ultimate owner is located in a tax haven are more complex. In column (1), we control for a dummy indicating whether the global ultimate owner is located in a tax haven. We find that subsidiaries of such MNEs indeed have a larger probability of reporting zero profit. This effect does not, however, drive the impact of complexity on the propensity to report zero profit.
Next, we control for the size of MNEs: owing to the fixed-cost nature of profit shifting [START_REF] Bilicka | Comparing UK Tax Returns of Foreign Multinationals to Matched Domestic Firms[END_REF][START_REF] Davies | Knocking on Tax Haven's Door: Multinational Firms and Transfer Pricing[END_REF][START_REF] Wier | The dominant role of large firms in profit shifting[END_REF], larger firms are more likely to shift profits, and they are also more complex on average. We control for size using the logarithm of the number of employees (column (2)) and the logarithm of total assets (column (3)) at the MNE level. Size has a negative effect on the propensity to bunch at zero profit, suggesting that larger MNEs are also more profitable on average. The coefficient on complexity remains qualitatively and quantitatively similar. This confirms that the complexity of the ownership structure is the relevant dimension of heterogeneity in profit shifting across MNEs, and that our result is not driven by a correlation with the size of MNEs, which by itself does not appear to be a relevant dimension of heterogeneity in profit shifting behavior.
Finally, in column (4), we control for the labor productivity of the MNE. Even though more productive MNEs have on average a more complex ownership structure (as shown in Table 2), we want to control more fully for the impact of productivity (measured as value added divided by the number of employees) on reported profitability. We observe that affiliates of more productive MNEs tend to bunch less at zero profit. Again, controlling for this, our result on complexity remains similar.
Robustness and alternative specifications
In this section, we provide robustness analysis for our benchmark results.
Omitted variable -Other dimensions of complexity
We assess whether our results are driven by other dimensions of complexity of the MNE and its network. A complex ownership structure may reflect other determinants of firm organization, which could explain the larger bunching at zero profit of more complex MNEs to the extent that such dimensions affect their overall profitability. To deal with this potential omitted variable bias, we introduce, in Table 6, several alternative measures of complexity as controls: the presence in tax havens, the number of different industries in which the MNE operates, and the number of different countries in which affiliates are located, to account for the geographical spread of the MNE. Among these variables, sectoral diversification is positively associated with bunching at zero, reflecting the fact that more productive firms may be more likely to manage activities in diversified industries. On the contrary, the geographical footprint and, surprisingly, the presence of the MNE in a tax haven and the number of tax havens in which the MNE is present are negatively associated with the probability of reporting zero profit. The negative coefficient associated with the number of tax havens is, however, reversed once we control for the geographical and sectoral spread of the MNEs (column (6)). In all specifications of Table 6, the coefficient on our complexity measure remains significant and of similar magnitude.
Note (Table 6): This table reports OLS estimates of eq. (1) on cross-sectional data for the year 2018. Av. foreign tax rate is the unweighted average tax rate across the GUO and all subsidiaries of multinational firm j but subsidiary i. TH is a dummy variable equal to 1 if MNF j has at least one affiliate located in a tax haven. Nb diff. TH is the number of different tax havens in which an MNF is located. Nb diff. ind. represents the sectoral diversification of the MNF. Nb diff. countries represents the geographical spread of the network of subsidiaries of the MNF. Standard errors in parentheses robust to heteroscedasticity and clustered at the multinational corporation level. All specifications include country FE, NACE 2-digit sector FE and size bins FE. * p < 0.10, ** p < 0.05, *** p < 0.01.
Alternative measures of ownership network complexity
Another robustness check relates to our measure of the complexity of the ownership structure of the MNE. We use the average number of layers of ownership as our benchmark measure of complexity, due to its simplicity and the fact that it accounts for both the length of ownership chains as well as the number of affiliates at each layer. We use here alternative measures of complexity: the maximum level of layers of ownership within the MNE ownership network, a Shannon entropy measure and an (inverse) skewness measure (see Appendix C). Results are presented in columns (1)-(3) of Table 7.
All complexity measures display a positive and significant coefficient, in line with our benchmark results. The significance level is, however, lower for the skewness measure (column (3)), which is the least correlated with our benchmark complexity measure (the correlation is 40%, against more than 80% for the other two measures). This confirms that the number of layers of ownership is a relevant dimension of complexity for tax purposes.
Other robustness tests
In Table 7, we also test whether our results are robust to controlling for common shocks across industries and countries by including country×sector fixed effects (column (4) of Table 7). We also analyze the effect of changing the level of clustering. In the benchmark results, standard errors are clustered at the multinational firm level, since the likelihood that one affiliate reports zero profit is likely to be correlated with the likelihood that another affiliate within the same group bunches around zero profit. We show that our results are robust to clustering at the country level and at the country×MNE level in columns (5) and (6).
As a third robustness check, we remove the size fixed effects and focus on affiliates belonging to MNEs with more than 10 (respectively 50) affiliates in columns (7) and (8), to show that our treatment of the systematic relationship between the number of affiliates and complexity, through fixed effects by size bins, does not drive our results. Focusing on MNEs that are large in terms of number of affiliates ensures that complexity is not systematically affected by MNE size (see Table B14). In all these instances, results remain qualitatively similar to the benchmark estimates.
A fourth robustness check examines the linearity of the effect of complexity on bunching at zero profit. We estimate Equation 1 using deciles of complexity of the MNE. Results are presented in Figure 5, which plots the estimated coefficients by decile of increasing complexity together with their confidence intervals. It shows that the impact of complexity on the probability of reporting zero profit is insignificant at low levels of complexity but increases with the level of complexity, especially for subsidiaries belonging to highly complex MNEs. The impact of MNE complexity is, however, not driven solely by these highly complex MNEs.
In Appendix D, we additionally show that our results are robust to changing the definition of our dependent variable 1^{zero} using alternative thresholds or definitions (Table D17). We also show in Table D18 that our results are robust to removing outliers.
Note (Figure 5): This figure shows the estimated coefficients on the variable "Complexity" divided into deciles. The regression results are reported in column (4) of Table D18.
Note (Table 7): This table reports OLS estimates on cross-sectional data for the year 2018. Av. foreign tax rate is the unweighted average tax rate across the GUO and all subsidiaries of multinational firm j but subsidiary i. Skewness is defined as the opposite of the standard skewness measure of the distribution of affiliates across layers. A positive value of skewness implies that more affiliates are located at layers further away from the GUO. Standard errors in parentheses robust to heteroscedasticity and clustered at the multinational corporation level in columns (1), (2), (3), (4), (7) and (8), at the subsidiary country level in column (5) and at both the multinational corporation and subsidiary country levels in column (6). All specifications include country FE and NACE 2-digit sector FE. * p < 0.10, ** p < 0.05, *** p < 0.01.
Conclusion
This paper investigates the impact of complex ownership structures of multinational enterprises on tax avoidance. Using a cross-country dataset of European affiliates, we show that corporations belonging to more complex groups report lower profits than similar affiliates in the same country and industry. This pattern holds for high-tax affiliates, confirming that complexity facilitates profit shifting between affiliates.
Our analysis extends the literature on profit shifting using micro-data by documenting a dimension of heterogeneity in profit shifting behavior across MNEs: only the more complex MNEs shift profits away from their high-tax affiliates, while MNEs with a flat ownership structure do not show such tax sensitivity in their reported profits. Our results complement papers showing other dimensions of heterogeneity, across countries [START_REF] Johannesen | Are Less Developed Countries More Exposed to Multinational Tax Avoidance? Method and Evidence from Micro-Data[END_REF] and across corporations depending on the quality of their management [START_REF] Bilicka | Organizational Capacity and Profit Shifting[END_REF] or their size [START_REF] Wier | The dominant role of large firms in profit shifting[END_REF].
Such evidence of heterogeneity in profit shifting provides relevant insights for designing anti-avoidance policies. Tax authorities need quality information on the ownership structure of multinational enterprises to better understand profit shifting schemes. A first improvement is the introduction of Country-by-Country reporting, which requires all MNEs with previous-year consolidated revenues above €750 million to disclose some financial information for each country in which they have an affiliate. However, there is no information regarding the ownership structure of the MNEs and its complexity. Our results show that such information would be valuable to tax administrations.

We also asked for the total number of affiliates. When several levels are requested on Orbis, the number of affiliates is limited to 1,000 per company. For those companies, it is impossible for us to compute the mean number of layers in the corporate group since we cannot observe all affiliates. We therefore decide to focus on companies for which we believe we have the right number of affiliates. We count the number of affiliates at each layer by company. We store the number of affiliates at level n - 1, assuming that the maximum number of layers is n.
In the analysis, we only consider groups for which the sum of n -1 level affiliates and the total number of affiliates declared by Orbis is lower than 1,000.
A.1.2 Unconsolidated data:
We start from the 1,330,423 affiliates. We keep those that are in the EU (570,678). We retrieve information for 328,785 affiliates for which we have unconsolidated accounts. We drop firms if total assets, employment, sales or tangible fixed assets are negative in one of the years studied. We also drop affiliates for which the number of affiliates in their MNE is uncertain. The final sample contains 212,516 affiliates. Figure 8 reports how complexity varies with measures of firm size other than the number of affiliates: the number of employees and total assets. While larger firms tend to be more complex, the relationship between complexity and size is less direct with these alternative measures of size. The average number of layers is somewhat stable across employment size deciles up to the 8th decile, and increases only for large MNEs belonging to the 9th and 10th deciles of employment; it is increasing with total assets, particularly from the 5th decile.
We also investigate the sources of variation in complexity, by country of origin, sector or size bin in terms of number of affiliates. Table B15 reports the R2 from regressing our complexity measure on fixed effects by size bin, sector and country. Columns (1)-(3) show that country and sector fixed effects have little explanatory power (R2 of 4% and 2%, respectively), while the size fixed effects explain 31% of the variation in complexity.
D Robustness
D.1 Changing the definition of zero
We consider different definitions of our main dependent variable zero_i in Table D17.
In the benchmark results, zero_i is a dummy variable equal to 1 if the return on assets is between -0.005 and 0.005; ROA below -0.005 and above 0.005 are thus treated the same way. We first consider only MNEs that exhibit positive or null profits in column (1). In column (2), we set the dummy zero_i equal to 1 if the subsidiary has a ROA below 0.5%, therefore treating a negative profit as a zero profit. This allows us to account for the possibility that firms use loss carryovers as an instrument to optimize taxes. Finally, in columns (3) and (4), we set the dummy zero_i equal to 1 if the ROA is between -0.01 and 0.01, and between -0.001 and 0.001, respectively. Table D17 shows that our results are robust to all the above specifications of our dependent variable.
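The alternative dependent variables can be constructed directly from the return on assets; the snippet below is a minimal pandas sketch with hypothetical column names ('profit', 'total_assets').

```python
import pandas as pd

df["roa"] = df["profit"] / df["total_assets"]

df["zero_benchmark"] = df["roa"].between(-0.005, 0.005).astype(int)   # benchmark definition
df["zero_incl_losses"] = (df["roa"] < 0.005).astype(int)              # column (2): losses count as zero
df["zero_wide"] = df["roa"].between(-0.01, 0.01).astype(int)          # column (3)
df["zero_narrow"] = df["roa"].between(-0.001, 0.001).astype(int)      # column (4)
```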
D.2 Outliers
Our measure of complexity is highly skewed. We therefore address potential outlier issues in Table D18. We first run a Cook's distance test, which measures the aggregate change in the estimated coefficients when each observation is left out of the estimation, and re-run the estimation after removing the top 1% of data points that have the most weight in the estimation. We then focus on the role of affiliates belonging to the most complex MNEs. In column (2), we remove from the sample the affiliates that are in the top 10% in terms of complexity. In column (3), we interact our complexity measure with a dummy variable equal to 1 if the affiliate belongs to an MNE in the top 10% in terms of complexity. The coefficient on complexity is not affected by these changes. Finally, in column (4), we compute the effect of complexity on profit shifting by deciles of complexity. We observe that the effect of complexity on tax avoidance is larger for affiliates belonging to more complex groups than for affiliates belonging to less complex MNEs.
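A possible implementation of this Cook's-distance screening, using the same hypothetical DataFrame and column names as above, is sketched below; the exact influence measure and trimming rule used in the paper may differ in detail.

```python
import numpy as np
import statsmodels.formula.api as smf

spec = "zero ~ tax_for + complexity + C(country) + C(nace2) + C(size_bin)"
base = smf.ols(spec, data=df).fit()

# Cook's distance: change in all fitted values when one observation is left out.
cooks_d = base.get_influence().cooks_distance[0]
keep = cooks_d < np.quantile(cooks_d, 0.99)   # drop the top 1% most influential points

trimmed = smf.ols(spec, data=df[keep]).fit(
    cov_type="cluster", cov_kwds={"groups": df.loc[keep, "mne_id"]}
)
```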
Figure 1: Measuring the complexity of ownership networks
Figure 2: Distribution of complexity by size bins
Figure 3: Distribution of return on assets (percent, 2018)
Figure 4: Profit shifting depending on complexity
Figure 5: Bunching at zero profit by deciles of complexity
Figure 8: Complexity by alternative measures of firm size
Table 1: Summary statistics
Note: Diversification is the share of subsidiaries working in the same sector as the Ultimate Owner. The average tax rate is the unweighted average tax rate across all subsidiaries of an MNE. The share of MNEs in low-tax countries is based on the bottom 25% of the distribution of tax rates of GUOs in our sample (tax rate around 21%). Tax haven presence is the share of MNEs that have at least one subsidiary in a tax haven.
Obs Mean Std. Dev. Min Max
Number of subsidiaries 66,539 16.09 48.57 1 949
Number of different countries 66,539 3.86 6.28 1 151
Number of different tax havens 66,539 0.46 1.08 0 21
Number of different industries 66,539 3.13 2.57 1 21
Diversification 66,525 0.27 0.34 0 1
Avg. tax rate 66,539 0.25 0.05 0 0.35
Tax rate GUO 66,514 0.25 0.05 0 0.35
Share in low-tax countries 66,514 0.28 0.45 0 1
Tax haven presence 66,539 0.27 0.45 0 1
Table 2: Determinants of MNE complexity
Note: The dependent variable is complexity at the MNE level. Standard errors in parentheses are robust to heteroscedasticity. Diversification is defined as the number of different industries (NACE 2-digit code) in which affiliates of the group operate. The return on assets, effective tax rate and labor productivity variables are trimmed at the 1st and 99th percentiles. * p < 0.10, ** p < 0.05, *** p < 0.01.
(1) (2) (3) (4) (5)
Complex. Complex. Complex. Complex. Complex.
Tax haven presence 0.08*** 0.06*** 0.06*** 0.06*** 0.04***
(0.01) (0.01) (0.01) (0.01) (0.01)
Nb of diff. countries -0.01*** -0.01*** -0.01*** -0.01*** -0.01***
(0.00) (0.00) (0.00) (0.00) (0.00)
Diversification 0.14*** 0.14*** 0.11*** 0.12*** 0.12***
(0.00) (0.00) (0.00) (0.00) (0.00)
Tax rate GUO 0.01
(0.04)
GUO in a TH 0.10***
(0.01)
Labor prod. (log) 0.02***
(0.00)
Return on assets 0.01
(0.01)
Effective tax rate -0.07***
(0.01)
Observations 66,539 66,514 15,436 36,215 30,727
R-squared 0.31 0.31 0.28 0.27 0.27
Table 3: Zero profit and complexity
Note: This table reports OLS estimates of eq. (1) on cross-sectional data for the year 2018. Av. foreign tax rate is the unweighted average tax rate across the GUO and all subsidiaries of multinational firm j but subsidiary i. Subsidiary Level is the layer at which the subsidiary is located, i.e. the number of ownership links between the Ultimate Owner and subsidiary i. Standard errors in parentheses robust to heteroscedasticity and clustered at the subsidiary country level in columns (1) and (2) and at the multinational corporation level in other columns. * p < 0.10, ** p < 0.05, *** p < 0.01.
Table 4: Complexity and profit shifting
(Columns (1)-(4); dependent variable: 1^{zero}.)
Table 5: Confounding factors
Note: This table reports OLS estimates on cross-sectional data for the year 2018. Av. foreign tax rate is the unweighted average tax rate across the GUO and all subsidiaries of multinational firm j but subsidiary i. Standard errors in parentheses robust to heteroscedasticity and clustered at the subsidiary country level in columns (1) and (2) and at the multinational corporation level in other columns. All specifications include country FE, NACE 2-digit sector FE and size bins FE.
(1) (2) (3) (4)
1 zero 1 zero 1 zero 1 zero
Avg. foreign tax rate -0.115*** -0.065* -0.099*** 0.032
(0.030) (0.035) (0.033) (0.042)
Complexity 0.005*** 0.004** 0.006*** 0.004*
(0.002) (0.002) (0.002) (0.002)
GUO in a TH 0.010***
(0.004)
MNE nb of employees (log) -0.001**
(0.000)
MNE total assets (log) -0.002***
(0.001)
MNE labor productivity (log) -0.002
(0.001)
Observations 212,516 139,165 180,078 90,267
R-squared 0.045 0.037 0.044 0.033
CountryFE Yes Yes Yes Yes
IndustryFE Yes Yes Yes Yes
SizeBinFE Yes Yes Yes Yes
Note: * p < 0.10, ** p < 0.05, *** p < 0.01.
Table 6: Zero profit and different dimensions of complexity
(1) (2) (3) (4) (5) (6)
1 zero 1 zero 1 zero 1 zero 1 zero 1 zero
Avg. foreign tax rate -0.131*** -0.135*** -0.133*** -0.125*** -0.132*** -0.128***
(0.030) (0.030) (0.030) (0.029) (0.030) (0.030)
Complexity 0.006*** 0.005*** 0.006*** 0.005*** 0.005*** 0.004**
(0.002) (0.002) (0.002) (0.002) (0.002) (0.002)
TH -0.013*** -0.012*** -0.014***
(0.003) (0.003) (0.003)
Nb diff. TH -0.002** -0.001 0.002*
(0.001) (0.001) (0.001)
Nb diff. ind. 0.002*** 0.003***
(0.001) (0.001)
Nb diff. countries -0.000*** -0.001***
(0.000) (0.000)
Observations 212,516 212,516 212,516 212,516 212,516 212,516
R-squared 0.045 0.045 0.045 0.045 0.045 0.046
CountryFE Yes Yes Yes Yes Yes Yes
IndustryFE Yes Yes Yes Yes Yes Yes
SizeBinFE Yes Yes Yes Yes Yes Yes
Table 7: Robustness tests
(1) (2) (3) (4) (5) (6) (7) (8)
1 zero 1 zero 1 zero 1 zero 1 zero 1 zero 1 zero 1 zero
Avg. foreign tax rate -0.123*** -0.125*** -0.138*** -0.124*** -0.127* -0.127* -0.151*** -0.222***
(0.029) (0.029) (0.037) (0.029) (0.066) (0.066) (0.040) (0.067)
Complexity (max) 0.004***
(0.001)
Shannon Entropy 0.014***
(0.003)
Skewness 0.002*
(0.001)
Complexity 0.006*** 0.006** 0.006** 0.004** 0.006**
(0.002) (0.003) (0.003) (0.002) (0.002)
Observations 212,516 212,516 179,580 212,500 212,516 212,516 158,197 86,676
R-squared 0.045 0.045 0.045 0.055 0.045 0.045 0.046 0.056
Country FE Yes Yes Yes No Yes Yes Yes Yes
Industry FE Yes Yes Yes No Yes Yes Yes Yes
Size Bin FE Yes Yes No No No No
Cluster MNE MNE MNE MNE Country MNE MNE MNE
& Country
Table 8: Profits and complexity - alternative specification
Note: This table reports OLS estimates of eq. (4) on cross-sectional data for the year 2018. Av. foreign tax rate is the unweighted average tax rate across the GUO and all subsidiaries of multinational firm j but subsidiary i. The ROA variable is trimmed to remove the 1st and 99th percentiles. Standard errors in parentheses are robust to heteroscedasticity and clustered at the multinational corporation level. All specifications include country FE, NACE 2-digit sector FE and size bins FE. * p < 0.10, ** p < 0.05, *** p < 0.01.
(1) (2) (3) (4)
Profit (log) Profit (log) ROA ROA
Complexity -0.028*** -0.028*** -0.004** -0.003**
(0.011) (0.011) (0.002) (0.002)
Tax rate GUO 0.119 0.105***
(0.174) (0.024)
Avg. foreign tax rate -0.641** 0.162***
(0.250) (0.030)
Nb of employees (log) 0.443*** 0.442*** 0.017*** 0.017***
(0.006) (0.006) (0.001) (0.001)
Fixed assets (log) 0.292*** 0.292***
(0.004) (0.004)
Observations 86,540 86,538 125,640 125,636
R-squared 0.517 0.517 0.021 0.021
CountryFE Yes Yes Yes Yes
IndustryFE Yes Yes Yes Yes
SizeBinFE Yes Yes Yes Yes
Table A11: Affiliate database
Final list in the ownership network 1,330,423
Keep only affiliates in the EU27 570,781
Information available in Orbis 328,785
Cleaning 212,516
Figure 6: Distribution of MNEs across number of affiliates
(Histogram; x-axis: number of affiliates (1, 2, 3, 4, 5, 5-10, 11-25, 26-100, >100); y-axis: number of MNFs.)
Table A12: Descriptive statistics of the main variables
Note: Data is for the year 2018. Profits is the sum of EBIT and financial profit/loss. All financial variables marked with (*) are in millions of euros. The effective tax rate, the return on assets (ROA) and the labor productivity variables are trimmed for the top and bottom 1%. The average foreign tax rate is the unweighted average tax rate across all subsidiaries and the GUO in MNE j except firm i. Zero profit is a dummy equal to 1 if a subsidiary declares a return on assets between [-0.005; 0.005].
Obs. Mean Std. Dev. Min Max
Consolidated data -GUO information
Labor prod. 16,223 0.15 0.37 -0.53 4.74
Effective tax rate 30,727 0.16 0.28 -1.81 2.01
Return on assets 36,215 3.37 18.43 -143.34 78.09
Tax rate GUO 66,514 0.25 0.05 0 0.35
Diversification 66,539 3.13 2.57 1 21
Tax haven presence 66,539 0.27 0.45 0 1
Nb of diff. tax havens 66,539 0.46 1.08 0 21
Nb of diff. countries 66,539 3.86 6.28 1 151
Complexity 66,539 1.34 0.60 1 10
Nb of affiliates 66,539 16.09 48.57 1 949
Unconsolidated data -Subsidiary information
Profits* 212,517 2.74 91.29 -4,757.94 20,017.72
Costs of employees* 132,522 5.2 31.4 -13.86 4,494.4
Total assets* 212,517 54.45 596.86 0 72,062.94
Fixed assets* 193,424 37.20 489.62 -0.2 56,050.48
EBIT* 212,517 2.08 80.09 -2,589.3 19,817.61
Number of employees 133,553 110.69 650.71 0 92,768
Share of affiliates in a TH 212,517 0.14 0.34 0 1
Zero profit (dummy) 212,517 0.1 0.3 0 1
ROA 208,265 0.90 28 -277.78 80.75
Avg. foreign tax rate (including the GUO) 212,517 0.25 0.04 0 0.35
Tax rate GUO 212,497 0.25 0.05 0 0.35
Tax rate SUB 212,517 0.25 0.06 0.09 0.35
Complexity 212,517 1.89 1 1 9.53
Table A13: Distribution of subsidiaries by country - Subsidiary database
B Measuring complexity
Country Freq. Percent
Austria 2,932 1.38
Belgium 16,164 7.61
Bulgaria 2,845 1.34
Cyprus 160 0.08
Czech Republic 7,523 3.54
Germany 9,640 4.54
Denmark 10,904 5.13
Estonia 2,715 1.28
Spain 23,130 10.88
Finland 4,753 2.24
France 28,878 13.59
Greece 957 0.45
Croatia 2,475 1.16
Hungary 5,613 2.64
Ireland 4,416 2.08
Italy 26,315 12.38
Lithuania 918 0.43
Luxembourg 4,058 1.91
Latvia 2,005 0.94
Malta 1,381 0.65
Netherlands 2,605 1.23
Poland 11,006 5.18
Portugal 8,639 4.07
Romania 6,883 3.24
Sweden 18,818 8.85
Slovenia 1,418 0.67
Slovakia 5,366 2.52
Total 212,517 100
Figure 7: Number of MNEs by maximum number of layers
(Histogram; x-axis: Complexity (max), from 1 to 10; y-axis: number of MNEs.)
Table B14: Maximum number of layers by number of affiliates
Max. nb Number of affiliates
of layers 1 2 3 4 5 6-10 11-25 26-100 >100 Total
1 15,370 7,495 4,150 2,419 1,504 2,676 1,055 210 11 34,890
2 0 2,423 2,502 2,118 1,831 4,665 2,866 855 42 17,302
3 0 0 330 373 442 1,841 2,426 1,472 224 7,108
4 0 0 0 34 66 436 1,118 1,326 348 3,328
5 0 0 0 0 7 105 344 752 396 1,604
6 0 0 0 0 0 25 138 419 374 956
7 0 0 0 0 0 6 50 180 238 474
8 0 0 0 0 0 1 17 102 200 320
9 0 0 0 0 0 3 8 60 118 189
10 0 0 0 0 0 0 23 89 256 368
Total 15,370 9,918 6,982 4,944 3,850 9,758 8,045 5,465 2,207 66,539
Table C16: Summary statistics - Complexity
Skewness Entropy Complexity Complexity
(Max) (Mean)
p25 0 0 1 1
Median 0.52 0 1 1
Mean 0.68 0.36 1.94 1.34
p75 1.33 0.67 2 1.5
Obs. 31637 66539 66539 66539
Table D17: Alternative definitions of zero profit
Note: This table reports OLS estimates on cross-sectional data for the year 2018. In column (1), we restrict the sample to firms that exhibit positive or null profits. In column (2), we set the dummy zero_i equal to 1 if the subsidiary has a ROA below 0.5%, therefore treating a negative profit as a zero profit. In columns (3) and (4), zero_i equals 1 if ROA is in [-0.01, 0.01] and [-0.001, 0.001], respectively. Standard errors in parentheses are robust to heteroscedasticity and clustered at the multinational level. All specifications include country FE, NACE 2-digit sector FE and size bins FE. *** p<0.01, ** p<0.05, * p<0.1.
(1) (2) (3) (4)
1 zero 1 zero 1 zero 1 zero
Avg. foreign tax rate -0.176*** -0.430*** -0.186*** -0.120***
(0.039) (0.055) (0.035) (0.025)
Complexity 0.005** 0.020*** 0.007*** 0.004***
(0.002) (0.004) (0.002) (0.002)
Observations 137,937 212,516 212,516 212,516
R-squared 0.056 0.055 0.048 0.050
CountryFE Yes Yes Yes Yes
IndustryFE Yes Yes Yes Yes
SizeBinFE Yes Yes Yes Yes
Table D18: Dealing with outliers
Note: This table reports OLS estimates of eq. (2) on cross-sectional data for the year 2018. Av. foreign tax rate is the unweighted average tax rate across the GUO and all subsidiaries of multinational firm j but subsidiary i. Standard errors in parentheses robust to heteroscedasticity and clustered at the multinational corporation level. All specifications include country FE, NACE 2-digit sector FE and size bins FE. *** p<0.01, ** p<0.05, * p<0.1.
(1) (2) (3) (4)
1 zero 1 zero 1 zero 1 zero
Cook's test Top 10 Top 10 Non-par.
Avg. foreign tax rate -0.115*** -0.086*** -0.127*** -0.126***
(0.028) (0.030) (0.030) (0.029)
Complexity 0.006*** 0.007*** 0.007***
(0.002) (0.002) (0.002)
Top10 0.004
(0.019)
Top10×Complexity -0.002
(0.005)
Complexity D2 0.000
(0.006)
Complexity D3 0.004
(0.004)
Complexity D4 0.008*
(0.004)
Complexity D5 0.009**
(0.004)
Complexity D6 0.010**
(0.004)
Complexity D7 0.007
(0.005)
Complexity D8 0.012**
(0.005)
Complexity D9 0.014***
(0.005)
Complexity D10 0.021***
(0.006)
Constant 0.214*** 0.110*** 0.121*** 0.125***
(0.018) (0.008) (0.008) (0.008)
Observations 210,390 191,406 212,516 212,516
R-squared 0.053 0.042 0.045 0.045
CountryFE Yes Yes Yes Yes
IndustryFE Yes Yes Yes Yes
SizeBinFE Yes Yes Yes Yes
[START_REF] Bilicka | Comparing UK Tax Returns of Foreign Multinationals to Matched Domestic Firms[END_REF] reports that most of the difference in reported profits between MNEs and domestic companies in the UK is related to the former reporting zero taxable profit.
See Heckemeyer and Overesch (2017) for a literature review.
At the MNE level,[START_REF] Wagener | The Relevance of Complex Group Structures for Income Shifting and Investors' Valuation of Tax Avoidance[END_REF] show that complexity correlates with incentives to shift profits measured as presence in tax havens or the difference between the maximum and minimum statutory tax rate within the MNE, the more so for income-mobile firms.
Additionally, Gumpert et al. (2016) provide evidence that German multinationals have an increasing probability of tax haven presence when their affiliates face higher corporate tax rates in their country of operation.[START_REF] Devereux | Taxes and the location of production: evidence from a panel of US multinationals[END_REF] and[START_REF] Barrios | International taxation and multinational firm location decisions[END_REF] show that the statutory tax rate has a negative impact on the decision of setting up a subsidiary in a country. However, these papers do not address directly firm organization.
[START_REF] Ajdacic | The wealth defence industry: A large-scale study on accountancy firms as profit shifting facilitators[END_REF] show that MNEs audited by one of the Big Four accountancy firms have a higher network complexity and that it increases the use of OFC and holding and management affiliates.
The minimum percentage of control in the path from a subject company to its GUO must be at least 50.01%.
It is possible for an MNE to have more than 10 layers. However, we do not observe those affiliates in our data.
We do not consider more specific dimensions of ownership complexity, such as shared ownership or joint ventures, ownership hubs where an affiliate controls several other affiliates, or cross-ownership.
Further information on data collection and cleaning is reported in Appendix A.
The coverage of financial information in the Orbis database is notably better for EU countries. TableA13in Appendix A provides the detailed geographic allocation of affiliates with financial information included in our sample.
Figure8in Appendix B shows that complexity is also increasing in the size of the MNE as measured by the number of employees and total assets, although less so than in the number of affiliates, but the relationship is not mechanical.
This dimension of fixed effect accounts for the relationship between the maximum number of layers and the number of affiliates as emphasized in section 3. We therefore compare MNE with similar size in terms of number of affiliates but different levels of complexity.
See Table A12. The figure differs slightly from what appears on the figure due to the size of the bins on figure (3).
Note that the lower and less precisely estimated coefficients in columns (2) and (4) are related to the restricted sample on which the number of employees and labor productivity are available.
CEPII and OFCE seminars for helpful discussions. The authors also thank Katazyna Bilicka and Dhammika Dharmapala for helpful comments. This work is co-funded by a French government subsidy managed by the Agence Nationale de la Recherche under the framework of the Investissements d'avenir programme ANR-17-EURE-0001. Manon Francois acknowledges support of the grants GA No. TAXUD/2022/DE/310 and of the Research Council of Norway No. 325720 and 341289. The views expressed here are those of the author(s) and not those of the EU Tax Observatory. EU Tax Observatory working papers are circulated for discussion and comment purposes. They have not been subject to peer-review or to the internal review process that accompanies official EU Tax Observatory publications. EU Tax Observatory - Paris School of Economics - Paris 1 University,
Alternative specification
In this section, we estimate two alternative specifications, using as the dependent variable either the log of profits or the return on assets, as follows:
Y_i = \beta_0 + \beta_1 Tax^{for}_{ij} + \beta_2 Complex_j + \gamma' X_i + \theta_k + \theta_c + \theta_s + \epsilon_i,   (4)
where Y_i is either the logarithm of reported profits or the return on assets. X_i is a set of non-tax factors - the logarithm of employment and the logarithm of fixed assets - which determine profits under the assumption of a Cobb-Douglas production function and no differences in productivity across MNEs. The tax factor Tax^{for}_{ij} represents the incentive to engage in profit shifting and Complex_j is our complexity variable. As in Equation 1, we add fixed effects by size bins (number of affiliates), country c and industry k (NACE 2-digit). Standard errors are clustered at the multinational firm level.
Equation 4 is the standard framework used to study profit shifting when log profits_i is the dependent variable (e.g. [START_REF] Hines | Fiscal Paradise: Foreign Tax Havens and American Business[END_REF], Huizinga and Laeven (2008), [START_REF] Dharmapala | What Do We Know About Base Erosion and Profit Shifting? A Review of the Empirical Literature[END_REF]). Note, however, that log profits_i is not defined for negative or zero reported profits, so that results using this variable may suffer from a selection bias (in line with the results of the previous section showing that the probability of reporting zero profit depends on the complexity of the MNE network as well as on the average tax rate of the MNE). We improve on this standard specification by using the return on assets (ROA) as the dependent variable instead, which can be estimated in levels and is defined over positive as well as negative or zero profits [START_REF] Vicard | Profit shifting, returns on foreign direct investments and investment income imbalances[END_REF][START_REF] Bilicka | Organizational Capacity and Profit Shifting[END_REF]. In the sample with information on employment, 31% of observations have zero or negative values for profits. ROA is computed as profits divided by total assets and is trimmed for the top and bottom 1%. When using ROA as the dependent variable, the logarithm of fixed assets is excluded from the control variables X_i.
Table 8 reports the results. We alternatively use the unweighted average tax rate across all affiliates of multinational firm j but subsidiary i, and the tax rate of the GUO of the MNE. In all specifications, complexity has a significant and negative coefficient, as expected: affiliates of more complex MNEs report lower profits or returns on assets than other affiliates in the same country and sector. The point estimates imply that an increase in complexity by one layer decreases profits by 2.8% and the return on assets by 0.03 to 0.04 percentage points, which represents a reduction of 7% to 9%.
A Data
A.1 Orbis data: extraction A.1.1 Consolidated data:
We download financial information on Global Ultimate Owners (GUOs) that have affiliates in the European Union. We drop MNEs with no financial information and duplicates. All information was downloaded on March 4th and 5th, 2021. We then downloaded the ownership information for these 75,505 GUOs. We ask for the ownership level of affiliates that are owned up to level 10. Information was downloaded on March 17th, 2021. The initial database contains about 4 million affiliates for the 75,505 GUOs. We remove MNEs that either have no affiliates or have one affiliate that is in the same country as the GUO. We remove duplicates for affiliates that have the same ID, same GUO and same level. We also drop them if they have the same GUO but different levels of ownership; in that case, we keep the lowest level of ownership. For affiliates that appear several times with different GUOs, we download directly from Orbis the ID of the GUO and the main shareholders and match this information with our initial information. We only keep observations for which we have a match. Finally, we drop firms for which the maximum number of layers is larger than the number of affiliates that we observe in the database. The final sample includes 1,330,423 affiliates owned by 66,539 different GUOs. When we downloaded the ownership information of the 75,505 GUOs, we asked Orbis to give us the list of affiliates of each GUO up to level 10, which means that the ownership chain could contain up to 10 layers.
C Alternative measures of complexity
We consider three alternative measures of the complexity of the MNEs' ownership network. First, we consider the highest layer per group, which we call "Complexity (max)". Figure 7 confirms the finding above and shows that most multinational firms have a flat ownership structure: half have a fully horizontal ownership structure with only one layer of ownership (52.4%), while another quarter have a maximum of 2 layers of ownership (25.8%). On the other end of the spectrum, 6.0% of multinationals hold their affiliates through 5 layers or more. Second, we use a Shannon entropy measure as in [START_REF] Ajdacic | The wealth defence industry: A large-scale study on accountancy firms as profit shifting facilitators[END_REF], computed as follows:
Shannon entropy = -\sum_i F_i \log(F_i),
where i is the layer number and F_i the fraction of affiliates at level i. This measure makes it possible to operationalize complexity by combining the width and depth of the structure. A firm can own affiliates through an ownership tree, where it directly holds subsidiary A, which itself owns subsidiary B, and so on. But complexity can also occur horizontally: the relations within the group do not only occur between a parent (or immediate shareholder) and its affiliate, but also between affiliates that are at the same level. This measure increases with the number of layers and with a more even distribution of the number of affiliates per layer. Third, we consider the skewness of the distribution of affiliates across layers, as in [START_REF] Altomonte | Business groups as knowledge-based hierarchies of firms[END_REF]. A positive skewness implies a right-skewed distribution, and therefore more affiliates at layers close to the GUO. When skewness is negative, the distribution is left-skewed and there are more affiliates far (in terms of the number of layers) from the GUO. This measure is defined only for MNEs that have at least 2 layers and is therefore not available for all MNEs.
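The three measures can be computed from the list of layers at which each affiliate of a given MNE sits. The sketch below uses natural logarithms for the entropy (the base is not specified above) and SciPy's sample skewness; it is an illustration, not the paper's exact code.

```python
import numpy as np
from scipy.stats import skew

def complexity_measures(layers):
    """`layers[k]` is the layer of affiliate k within one MNE (1 = directly below the GUO)."""
    layers = np.asarray(layers)
    counts = np.bincount(layers)[1:]              # affiliates per layer (index 0 unused)
    frac = counts[counts > 0] / counts.sum()      # F_i: fraction of affiliates at layer i
    return {
        "mean_layers": layers.mean(),             # benchmark complexity measure
        "max_layers": int(layers.max()),          # Complexity (max)
        "shannon_entropy": float(-(frac * np.log(frac)).sum()),
        # Inverse skewness: positive when more affiliates sit far away from the GUO.
        "inv_skewness": float(-skew(layers)),
    }

print(complexity_measures([1, 1, 1, 2, 2, 3]))
```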
The maximum number of layers and the Shannon entropy measure correlate with the mean number of layers at over 80%; the skewness correlates somewhat less, at about 40%. |
04103795 | en | [
"info.info-ai",
"sdv.neu.sc"
] | 2024/03/04 16:41:22 | 2023 | https://inria.hal.science/hal-04103795/file/Symbolic_RL.pdf | Waris Radji
Corentin Léger
Lucas Bardisbanian
Early Empirical Results on Reinforcement Symbolic Learning
Keywords: Reinforcement Symbolic Learning, Ontology, Edit Distances, Learning Sciences
Reinforcement learning is a subfield of machine learning that is concerned with how agents learn to make decisions in an environment in order to maximize some notion of reward. It has shown a great promise in a variety of domains, but it often struggles with generalization and interpretability due to its reliance on black-box models. One approach to address this issue is to incorporate knowledge representation into reinforcement learning. In this work, we explore the benefits of using symbolic representation in reinforcement learning and present preliminary results on its performance compared to standard reinforcement learning techniques. Our experiments show that the use of symbolic representation can significantly improve the generalization capabilities of reinforcement learning agents. We also discuss the potential challenges and limitations of this approach and suggest avenues for future research.
Early empirical results on reinforcement symbolic learning
Abstract: Reinforcement learning is a subfield of machine learning concerned with how agents learn to make decisions in an environment in order to maximize a reward. It has shown great promise in a variety of domains, but it often struggles with generalization and interpretability due to its reliance on black-box models. One approach to address this issue is to incorporate knowledge representation into reinforcement learning. In this work, we explore the benefits of using symbolic representation in reinforcement learning and present preliminary results on its performance compared to standard reinforcement learning techniques. Our experiments show that the use of symbolic representation can significantly improve the generalization capabilities of reinforcement learning agents. We also discuss the potential challenges and limitations of this approach and suggest avenues for future research.
Keywords: Reinforcement symbolic learning, Edit distances, Ontology, Learning sciences.
Introduction
The human brain has evolved to process and interpret the world symbolically [START_REF] Alexander | The human mind: The symbolic level[END_REF], by utilizing abstract concepts such as language and reasoning, and by drawing connections between seemingly dissimilar observations, such as perceiving oranges and lemons as similar due to their shared sensory characteristics. However, classical artificial intelligence agents operate differently by processing information through numerical values and statistical patterns, which may not always be intuitively understandable to humans. One popular approach to reinforcement learning is Q-learning [START_REF] Christopher | Learning from delayed rewards[END_REF], which uses a value function known as the Q-function to estimate the expected cumulative reward for each action in a given state.
The Q-function is updated iteratively using the Bellman equation, which expresses the expected value of the Q-function in terms of the expected reward for taking a given action in a given state, plus the discounted expected value of the Q-function in the resulting state. The resulting update rule is:
Q[s_t, a_t] += \alpha (r_{t+1} + \gamma \max_a Q(s_{t+1}, a) - Q[s_t, a_t])   (1)
Here, s_t and a_t represent the current state and action, r_{t+1} represents the immediate reward received after taking action a_t in state s_t, \gamma is the discount factor (used to give less weight to future rewards), and \alpha is the learning rate (used to control the rate at which the Q-function is updated).
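For concreteness, a tabular version of this update can be written in a few lines of Python; the state and action counts and the hyperparameter values below are only illustrative.

```python
import numpy as np

n_states, n_actions = 6400, 9          # e.g. 256 internal states x 25 entities, 9 actions
Q = np.zeros((n_states, n_actions))

def q_update(s_t, a_t, r_next, s_next, alpha=0.1, gamma=0.99):
    # Equation (1): move Q[s_t, a_t] toward the reward plus the discounted best next value.
    Q[s_t, a_t] += alpha * (r_next + gamma * Q[s_next].max() - Q[s_t, a_t])
```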
Q-learning has proven to be a powerful technique for a variety of problems, including game playing, robotics, and control systems. However, one limitation of Q-learning is that it can struggle to generalize to new and unseen states. This is because Q-learning relies on an explicit representation of the state space, which can be difficult to achieve for complex and high-dimensional environments.
In order to address this limitation and better understand how humans learn, the Inria Mnemosyne team has introduced a paper presenting an alternative to Q-learning for symbolic data, called reinforcement symbolic learning [START_REF] Mercier | Reinforcement Symbolic Learning[END_REF]. This technique is based on the fact that it is possible to define an edit distance between two symbolic objects under certain representation constraints, and therefore to use the knowledge acquired from previous experiences to adapt faster and better to new states.
In the context of symbolic objects, the edit distance is a metric that measures the minimum number of operations (insertions, deletions, or substitutions) required to transform one symbolic object into another. It is a way of quantifying the similarity between two symbolic objects.
Consider a space of symbolic states S in a reinforcement learning environment, with a distance d defined for every pair of states: for all s_i, s_j in S, d(s_i, s_j). The Q-value update proposed by the reinforcement symbolic learning approach, applied at each step in the environment, is:
Q[s, a_t] += \alpha e^{-d(s, s_t)/\rho} (r_{t+1} + \gamma \max_a Q(s_{t+1}, a) - Q[s, a_t])   (2)
where ρ is the exponential weighting radius, α the learning rate and γ the discount factor. In classical Q-learning, a single Q-value is updated at each iteration. In this approach, all Q-values are updated at each step, with the radius ρ controlling the intensity of the update: states s closest to s_t according to the edit distance are updated most strongly. This theoretically yields agents with better generalization capabilities, which can be compared to embedding generation in deep reinforcement learning.
In this paper we present some preliminary results of the application of reinforcement symbolic learning on a self-defined environment and edit distance. Some details of the implementation of reinforcement symbolic learning will also be given. For the reproduction of the results the code can be found on this GitHub repository.
2 The Symbolic Environment: A creature that must survive in the unknown
For the experiments, we created an environment called SymbolicEnv, built on top of an ontology. This gives us a standard way of representing our symbolic data (OWL in our case) and lets us take advantage of the power of symbolic reasoners. Figure 1 graphically illustrates the ontology of the SymbolicEnv.
In the SymbolicEnv, an agent has to use its four senses (sight, taste, touch, smell) to choose the most appropriate actions and survive for as long as possible, driven by emotions and sensations. To illustrate this, several parts can be identified in the ontology:
Entity The instances that inherit from this class are the entities that the agent may encounter in the environment, e.g. apple and stone.
ExternalSense The external senses that the agent can perceive, based on these 4 senses, e.g. the texture or colour of an entity.
InternalSense What the agent feels internally, e.g. his mood, his health level or his hunger.
Action Actions that can be taken in the environment, such as eating or attacking.
At each step, the agent receives an observation which is composed of an integer that represents the internal state of the agent, and another integer that represents the entity that the agent encounters.
To create the internal state of an agent, statistics that can evolve naturally over time, or through events, are maintained. Positive stats (energy, health, joy) have a value between 0 and 20 and negative stats (anger, sadness, fear) have a value between 0 and 10. A function discretizes the value of a stat into 4 bins and returns an integer between 0 and 3, representing the bin to which the value belongs.
From each positive statistic and the average of the negative ones, it is possible to construct a vector of dimension 4 that represents each discretized value. Since each vector is unique, we can compute a unique identifier by associating each vector with a value between 0 and 4^4 - 1.
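A possible encoding of this internal state, with the exact binning and digit order left as assumptions, is sketched below.

```python
def discretize(value, max_value):
    """Map a stat in [0, max_value] to one of 4 bins, i.e. an integer in {0, 1, 2, 3}."""
    return min(int(4 * value / max_value), 3)

def internal_state_id(energy, health, joy, anger, sadness, fear):
    # One bin per positive stat, plus one bin for the average of the negative stats.
    bins = [
        discretize(energy, 20),
        discretize(health, 20),
        discretize(joy, 20),
        discretize((anger + sadness + fear) / 3, 10),
    ]
    # Read the 4 bins as digits of a base-4 number -> unique id in [0, 4**4 - 1].
    return sum(b * 4 ** i for i, b in enumerate(bins))

assert 0 <= internal_state_id(20, 10, 5, 0, 3, 9) <= 255
```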
The SymbolicEnv is composed of 6400 observation combinations (256 internal states and 25 entities).
The agent must then choose an action, which corresponds to an integer between 0 and the number of actions - 1, and the agent's statistics are updated accordingly (with an internal logic defined by reading properties in the ontology).
Edit distance
The edit distance used in this study quantifies the similarity between two observations, each consisting of an internal state i and an associated entity e. The distance between two observations is defined as the average of the distances between their internal states and their associated entities.
d(s_1, s_2) = d((i_1, e_1), (i_2, e_2)) = (d(i_1, i_2) + d(e_1, e_2)) / 2
The distance between two internal states, d(i_1, i_2), is computed as the Euclidean distance between their two representation vectors, normalized between 0 and 1.
For the entities, each property p is associated with an ExternalSense, and an empirical "distance" is defined between ExternalSense of the same type. For example, the distance between yellow and orange might be defined as 1, while the distance between yellow and red might be defined as 2.
To calculate the distance between two entities, d(e_1, e_2), the method combines a "tree distance" and a "properties distance". The tree distance is defined as the sum of the length of the ancestors of each individual in the ontology. The properties distance is calculated as the sum of the distances between the properties of the two entities, normalized by the maximum distance value of ExternalSenses of the same type. Finally, the distances are normalized by the maximum distance value.
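Putting the pieces together, the distance between two observations could look like the following sketch, where entity_dist is assumed to be a pre-computed, normalized entity-to-entity distance table (combining the tree and properties distances) and internal states are the 4-dimensional bin vectors described above.

```python
import numpy as np

MAX_INTERNAL_DIST = np.linalg.norm([3, 3, 3, 3])   # all four bins differ maximally

def observation_distance(obs1, obs2, entity_dist):
    (vec1, e1), (vec2, e2) = obs1, obs2
    # Internal part: Euclidean distance between bin vectors, normalized to [0, 1].
    d_internal = np.linalg.norm(np.asarray(vec1) - np.asarray(vec2)) / MAX_INTERNAL_DIST
    # Entity part: looked up in the pre-computed table, already normalized to [0, 1].
    d_entity = entity_dist[e1][e2]
    return (d_internal + d_entity) / 2
```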
Implementation details
The ontology is initially written in the YAML format and translated into the OWL format using Python scripts. This makes it possible to automate the creation of some objects in the ontology, because even with very few entities the amount of information to be specified in the ontology becomes very large. We also use SPARQL queries to retrieve information efficiently from the database.
The SymbolicEnv is built on top of OpenAI Gym [START_REF] Brockman | Openai gym[END_REF] (a standard API for reinforcement learning), which makes testing agents simple.
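Because the environment follows the Gym interface, interacting with it reduces to the usual reset/step loop. The registration id below is hypothetical, and the step signature shown is the older 4-tuple Gym API, which may differ depending on the Gym/Gymnasium version used.

```python
import gym

env = gym.make("SymbolicEnv-v0")           # hypothetical id; see the project repository
obs = env.reset()
done, total_reward = False, 0.0
while not done:
    action = env.action_space.sample()     # random policy, for illustration only
    obs, reward, done, info = env.step(action)
    total_reward += reward
print(total_reward)
```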
The Reinforcement Symbolic Agent
The SymbolicAgent is built on top of a classical Q-learning agent, with a modification of the method that updates the Q-table at each step in the environment. In the case of the SymbolicEnv, the Q-table has dimension 256×25×9 (number of internal states × number of entities × number of actions).
The agent takes as argument the distance function to compute the distance between two observations, and for faster computation, distances between each observation are pre-computed and stored in a matrix of size 256 2 .
Equation 2 (symbolic Q-learning) is implemented in the agent using vectorized operations. Operations likely to be repeated multiple times are stored in a cache.
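A minimal NumPy sketch of this vectorized update is given below, assuming the Q-table has been flattened to shape (n_states, n_actions) and that D[i, j] stores the pre-computed edit distance between observations i and j; the actual implementation in the repository may organize the table and the cache differently.

```python
import numpy as np

def symbolic_q_update(Q, D, s_t, a_t, r_next, s_next, alpha=0.1, gamma=0.99, rho=0.04):
    # Equation (2), applied to every state at once for the chosen action a_t.
    td_error = r_next + gamma * Q[s_next].max() - Q[:, a_t]   # one error term per state
    weights = np.exp(-D[:, s_t] / rho)                        # states close to s_t get larger updates
    Q[:, a_t] += alpha * weights * td_error
```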
It would take at least 57,600 steps in the SymbolicEnv for a classical Q-learning agent to update every value in its Q-table at least once, assuming that the agent visits a different state and takes a different action at each step. The SymbolicAgent should have much better generalization capabilities and should need far fewer steps in the environment to perform well.
Preliminary Results
We trained our agents in the SymbolicEnv for 10,000 steps, using a linear epsilon-greedy policy, and evaluated the agents' performance every 10 training steps. The evaluation involved performing 100 rollouts in 100 SymbolicEnvs with fixed seeds, and the score of an agent was calculated as the sum of the cumulative rewards obtained during each rollout.
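This evaluation protocol can be sketched as follows; agent.best_action and make_env are hypothetical helpers standing in for the greedy policy of the trained agent and for the construction of a fixed-seed SymbolicEnv.

```python
def evaluate(agent, make_env, n_rollouts=100):
    """Score = sum over fixed-seed environments of the cumulative reward of one rollout."""
    score = 0.0
    for seed in range(n_rollouts):
        env = make_env(seed)
        obs, done = env.reset(), False
        while not done:
            obs, reward, done, _ = env.step(agent.best_action(obs))
            score += reward
        env.close()
    return score
```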
All agents share the same hyperparameters, except for the radius ρ, which we vary in the case of the SymbolicAgents. The experiment consists in observing the differences in performance between a classical Q-learning agent and SymbolicAgents with several values of ρ. Figure 2 shows the evolution of the scores of the different agents throughout training. Unsurprisingly, the Q-learning agent struggles to learn, simply because the Q-table is too large, and updating its elements one by one would take much longer to obtain a good agent. The results of the SymbolicAgents are interesting and we can propose several hypotheses:
• With a very small ρ (for example 0.01), symbolic Q-learning will not update the Q-table significantly, so the performance will be close to that of classical Q-learning.
• With a slightly larger radius (0.04), it is possible to obtain performance that is significantly better than classical Q-learning. This shows that the hyperparameter ρ is very sensitive: a small change can drastically alter the results.
• When ρ is a bit too large (0.06), the SymbolicAgent seems to try to generalize too much, and probably updates values that it should not, which hurts its learning and can even make it perform worse than Q-learning.
It should be noted that these results are quite stable, which is not the case with larger radius values, as shown in Figure 3. Figure 3 shows the same experiment but with agents whose radius ρ is greater than 0.1. The results are very stochastic and difficult to interpret, but the irregularity is clearly considerable. In these cases, many elements of the Q-table are updated, whether they are relevant or not. This means that the choice of a bad action by the agent can be very penalizing.
These agents are able to reach very high scores early in the learning process, but these are rarely maintained over time.
We can imagine a variant of reinforcement symbolic learning that lowers the radius over time in order to obtain an agent that becomes more and more stable.
Conclusion
In conclusion, we have presented a framework to create a symbolic environment based on an ontology (with OWL) and to connect it to a Gym environment, making it easier to train agents in it. With this tool, we have created a symbolic environment in which a creature needs to adapt to a complex environment in order to survive. We have compared the performance of Q-learning and symbolic Q-learning with varying radius values in this environment, and the results show that the performance of the symbolic Q-learning approach can be significantly improved with an appropriate choice of the radius parameter.
It is important to note that although the results we obtained with Symbolic Q-Learning are promising, we still do not fully understand how the radius parameter affects the learning process. As we have seen, small changes in the radius value can have a significant impact on the agent's performance and the stability of the learning process, which suggests that the relationship between radius and performance may not be linear. One possible direction for further investigation is to explore ways of dynamically adjusting the radius parameter over time or allowing the agent to learn the optimal radius value. These approaches could potentially improve the adaptability and robustness of the Symbolic Q-Learning algorithm, making it more effective in a wider range of environments.
Despite this, our work has demonstrated the potential of using symbolic environments in reinforcement learning, and we have provided a framework that makes it easy to create symbolic environments using an ontology. We have also shared the Symbolic environment used and the code for our experiments on our open source GitHub repository, which can be used by other researchers and developers to create their own symbolic environments and test new algorithms.
Overall, our work highlights the potential benefits of incorporating symbolic reasoning into reinforcement learning and provides a tool for creating symbolic environments that can be used by the research community. The results we obtained with Symbolic Q-Learning suggest that this approach could be a promising direction for future research in the field of RL.
Figure 1: Ontology of the SymbolicEnv.
Figure 2: Comparison of Q-Learning and Symbolic Q-Learning performance with varying ρ (< 0.1): Symbolic Q-Learning with ρ = 0.01, 0.04 and 0.06.
Figure 3: Comparison of Q-Learning and Symbolic Q-Learning performance with varying ρ (> 0.1).
Acknowledgment
We would like to thank the Inria research team Mnemosyne and AEx AIDE for their support during this project. Special thanks go to our supervisors Chloé Mercier, Thierry Viéville, and Axel Palaude for their guidance and expertise, which were instrumental in the development of our work. We also acknowledge their contribution in proposing a version of symbolic Reinforcement Learning that we adapted for our experiments, and without which this project wouldn't have existed.
Work done during a Master project at ENSEIRB-MATMECA. |
04103916 | en | [
"shs.eco"
] | 2024/03/04 16:41:22 | 2022 | https://shs.hal.science/halshs-04103916/file/Note4_Effecttive-sanctions-against-oligarcs-and-the-role-of-a-European-asset-Registry_EU-Tax_-WIL_March-2022.pdf | Theresa Neef
Panayiotis Nicolaides
Lucas Chancel
Thomas Piketty
Gabriel Zucman
Effective sanctions against oligarchs and the role of a European Asset Registry
This note provides data on wealth inequality in Russia and advocates for a European Asset Registry. Russia exhibits the highest wealth inequality in Europe. Further, Russia's wealthiest nationals conceal a large share of their wealth through tax havens. The current architecture of the global financial system impedes comprehensive knowledge on beneficial ownership across asset types and jurisdictions. Under the roof of a European Asset Registry, the already existing but currently dispersed information could be gathered. This would change the state of play, resulting in better-targeted sanctions and effective tools to curb money laundering, corruption and tax evasion. The European Union could have a pioneering role in taking the next step towards more financial transparency.
Wealth Inequality in Russia and Offshore Wealth
Russia exhibits the highest wealth inequality in Europe. The top 10% wealthiest Russian residents own about 74% of total household wealth. In comparison, the 10% wealthiest in France own about 59%. In the U.S. this share amounts to about 71%, in Poland to about 62%, and in China to about 68%. The wealthiest 1% in Russia own about half of Russian household wealth (48%). In comparison, this share is about 27% in France, 35% in the U.S., 31% in China and 30% in Poland (see Figure 1). The foundations of this strong wealth concentration among a small group were laid by the rapid and highly unequal privatization process in Russia in the early 1990s after the collapse of the Soviet Union. i

In addition, a 37% devaluation of the rouble (against the Euro) since the start of the war has destabilized the economy substantially. v With at least half of Russia's foreign reserves frozen, the central bank was forced to increase its key interest rate from 9.5% to 20% overnight, with significant impact on borrowers. vi Moreover, the instability in the rouble works its way into consumers' and firms' purchasing power through import-led inflation, while the imposition of capital controls in Russian banks erodes trust, strangles the real economy and restricts the population's access to their savings. The combination of these factors will most likely affect savers, wage-earners and pensioners in Russia disproportionately. vii Undoubtedly these measures increase the pressure on the Russian government to change course on Ukraine, but they come at the expense of lower standards of living and economic hardship for the general Russian population.
In contrast, part of the wealth of rich Russian nationals is hidden in tax havens.
Sanctions directed towards the wealthiest Russians are targeting the so-called "oligarchs".
Offshore wealth and financial transparency
Hiding wealth is more pervasive in Russia due to the high concentration of wealth, but by no means does it constitute a Russian-specific phenomenon.
Alstadsaeter, Johannesen and Zucman (2018) estimate that the equivalent of 10% of global GDP is held as offshore wealth. This share has been relatively stable throughout the 2000s and 2010s. About $8.3 trillion were held in tax havens in 2016. xii Recent leaks from offshore financial institutions underline that the users and beneficiaries of shell companies come from all over the world. xiii The international dimension of this practice requires a broader, more systematic approach: EU countries can improve the effectiveness of sanctions by addressing long-standing problems in the financial governance structure. Namely, addressing the existence and role of tax havens, the use of financial and asset secrecy, and the use of shell companies to hide company ownership and wealth. In recent years, EU laws and a number of OECD policies have improved the framework of financial transparency in Europe:
• The 5 th Anti-Money Laundering Directive introduced public beneficial ownership registries in Member States for companies, trusts and other legal arrangements.
However, in practice gaps in the ownership records remain. For instance, evidence from the Open Lux investigation indicates that the identity of the final beneficial owner often continues to remain hidden from the public. xx In addition, a number of exceptions exist, such as on the ownership structure of listed companies in the stock market. To date, not all EU Member States have made their registries available.
• New regulations are currently being discussed on setting up a new EU Anti-Money Laundering Authority, complemented by a 6 th Anti-Money Laundering Directive.
This will seek to harmonise rules and procedures while enhancing cooperation between the Member States' Financial Intelligence Units. Importantly, the scope of the new regulations extends beyond Member States since they will require "foreign entities" (i.e. shell companies in tax havens outside the EU) to register their beneficial owners when purchasing real estate in the EU. xxi
• At the EU level, there is a renewed effort to tackle the misuse of shell companies by ensuring real economic activity and automatic exchange of information. The "Unshell" proposal seeks to establish more rigorous criteria to identify shell companies, and to exclude them from tax reliefs and benefits of Member States' tax treaty networks. xxii
Figure 1: Own visualization based on data from the World Inequality Database (https://wid.world).
"
oligarchs". The EU list comprises about 680 Russian nationals & 53 entities. viii Part of these oligarchs' wealth is held in foreign financial & non-financial assets (therefore, not in roubles), which are hard to trace due to financial secrecy and lack of a comprehensive record of beneficial owners. In terms of targeting, the group of rich Russian nationals or residents benefiting from keeping wealth offshore is considerably larger than the 680 sanctioned individuals: Alstadsaeter, Johannesen and Zucman (2018) estimate that in 2007 the richest 0.01% of Russian nationals (about 10,000 individuals) owned more than 12% of the total Russian household wealth and held 60% of their wealth in offshore tax havens (see Figure2). According to the latest data from the World Inequality Database, there were about 20,000 individuals in Russia (0,02% of the adult population) with net wealth above 10 million euros in 2021 ix and about 50,000 individuals (0,05% of the adult population with net wealth above 5 million euros. x About one-half to two-thirds of these individuals' assets are held offshore.xi Effective sanctions against oligarchs and the role of a European Asset Registry | 4
Figure 2: Visualization by the EU Tax Observatory reproducing results by Alstadsaeter et al. (2018).
Recent sanctions, which aim at isolating the Russian economy, have a large impact on the general population. The latest projections on the macroeconomic impact of the sanctions point to a modest economic contraction (about 1.5% in 2022 and 2.5% in 2023) under current measures. iv Prospects might worsen further as more sanctions are announced in the coming weeks. This impact, in combination with the ongoing exodus of Western firms, the international isolation of Russian banks and the severance of their financial ties from the rest of the international financial system, is expected to worsen labour market conditions and lead to a gradual deterioration in standards of living in the Russian population.

Novokmet, Piketty, and Zucman (2018) estimate that about half of total Russian household wealth is held abroad. To give a sense of scale, "there is as much financial wealth held by rich Russians abroad-in the United Kingdom, Switzerland, Cyprus, and similar offshore centers-than held by the entire Russian population in Russia itself". ii This offshore wealth of Russian nationals amounted in 2015 to roughly 85% of national income; that is, 85% of what Russian residents earn in an entire year. iii
Systematic collection of asset ownership information would close gaps further and facilitate the ongoing development of national registers.
Evidence of worldwide interlinkages of Russian wealth to tax havens is abundant. In recent years the use of different schemes, many of which in European countries, was common practice. Firstly, [START_REF] García-Bernardo | Uncovering Offshore Financial Centers: Conduits and Sinks in the Global Corporate Ownership Network[END_REF] find that Russian companies used chains of shell companies via the British Virgin Islands and Cyprus to invest. xiv Secondly, evidence draws a link between Russian nationals investing in real estate in many European countries, often via shell companies. xv And thirdly, Russian nationals (among many other nationalities) utilised golden visa schemes offered by several European countries, e.g. Malta, Cyprus and the UK, thereby acquiring residence or citizenship in exchange for investments. xvi With the current sanctions in place and with European countries hardening their stance, Russian wealth might start shifting to non-European tax havens. Currently, anecdotal evidence points to Russian investors reallocating their wealth towards destinations outside of the sanctioning countries. xvii

A comprehensive database tracking where and by whom wealth is held could increase the efficiency of targeted sanctions. Information on asset ownership of different asset types is held in private companies, banks, national beneficial ownership registries (yet incomplete), central securities depositories, and financial authorities. Governments find themselves in a disadvantaged position since these databases are neither comprehensively linked nor freely accessible. As a result, they lack key information that can prove crucial in designing and implementing sanctions. Access to this information must be gained first in order to make sanctions more effective. The dispersion of information across different locations and institutions currently facilitates tax avoidance and evasion as well as money laundering.

For instance, information on real estate is typically held in land registers, but in many countries reporting obligations do not require the individual beneficiary of a property. The use of shell companies is regularly employed to report only the legal entity and to disguise the individual controlling it. Moreover, national commercial registers record the ownership of companies registered in the country. However, their access is sometimes restricted and the registered ownership structures remain obscure (for instance, due to a lack of reporting obligation for minor shareholders). Lastly, information on securities (bonds and stocks) is recorded by national and private central securities depositories (e.g. the Depository Trust Company in the U.S. for securities issued by U.S. companies). xviii
Guide for a European Asset Registry

A Task Force for Asset Ownership could be set up to collect information on hidden wealth in a unified structure in the EU.
At the OECD level, the automatic information exchange under the Common Reporting Standard provides information to tax authorities regarding certain financial assets, namely bank accounts, securities, and interest income in investment entities. However, the current Common Reporting Standard leaves several loopholes that allow beneficial owners to keep their wealth undetected. xxiii The task force will collect, crosscheck and analyse all available information on wealth and assets held in EU Member States by wealthy individuals (above a given threshold). To utilise information in the Member States, the task force can be mandated by EU leaders and supervised by the Eurogroup. Its form can follow that of the Task Force for Coordinated Action -a special working group of the Eurogroup that has coordinated the setup of even more ambitious EU structures such as the Banking Union and the European Stability Mechanism.
This task force could pave the way for establishing a permanent European Asset Registry, tasked with systematically gathering and linking wealth information across all asset types.
The ultimate objective of the registry would be to record comprehensively the ownership and movement of assets. Thus, the registry would provide highest-quality data on the total amount of wealth held by individuals. It would combine information from all available national and international sources, while establishing new sources in the case where information is currently neither present nor sufficiently recorded. Importantly, it should leave no gaps in the type of assets held and it should rely on automatic exchange of information procedures between the different sources. The medium-term goal in the EU would be a new institution (based on EU law or created by a new inter-governmental treaty) tasked with collecting and analysing information centrally.
Information gathered in the new structure could be cross-checked by specialised personnel and be made available to all Member States to fight financial crime.
If beneficial ownership information includes information on nationality and country of residence, hidden wealth would be revealed and sanctions targeting wealthy individuals could become more efficient.
The work of the task force could be complementary to an ongoing feasibility study. The study was ordered by the European Commission last autumn, with expected deliverables in April 2023. xxiv The task force could create a fast-track pilot project that will implement the registry and run in parallel to the findings of the feasibility study.
A European Asset Registry could prove key in addressing long-standing issues of financial secrecy such as tax avoidance and evasion, money laundering and corruption.
By and large, most recent developments against tax evasion and tax avoidance have rested on increased transparency and cooperation between countries. The current juncture provides a unique opportunity for European countries to play a pioneering role in establishing this powerful instrument.programs e.g. in Spain. Sunak (2020) finds based on data mainly requested from national immigration offices that "Of the total applications approved in the EU, China accounts for the greatest demand, with nearly 50%, followed by Russians with 27%".Scherrer and Thirion (2018) give an overview of Citizenship-by-Investment and Residency-by-Investment programmes in the EU but underline that much of the evidence remains anecdotal. For Latvia, see this journalistic piece presenting data from the Latvian Immigration and Citizenship Office: https://www.occrp.org/en/goldforvisas/latvias-once-golden-visas-lose-their-shine-but-why.xvii Reported e.g. in the Financial Times, and Reuters. xviii For more information on existing data sources, see Zucman (2015): The hidden wealth of nations. University of Chicago Press, Chapter Four; Tax Justice Network (2022): 10 measures to expose sanctioned Russian oligarchs' hidden assets. xix See for a comprehensive assessment Harari et al. (2020). xx https://www.occrp.org/en/openlux xxi See European Commission (2022): https://ec.europa.eu/info/publications/210720-anti-money-launderingcountering-financing-terrorism_en. xxii https://ec.europa.eu/taxation_customs/taxation-1/unshell_en xxiii Several behavioral reactions were reported such as the relocation of financial assets to uncooperative tax havens(Johannesen & Zucman, 2014; Johannesen, 2014) or to jurisdictions not participating in the agreement such as the U.S.(Casi et al. 2020). Further, its coverage excludes for example real estate assets. This loophole has probably led to shifts from financial to more real estate wealth to avoid detection (see Bomare, Le Guern Herry (2021): Automatic Exchange of Information and Real Estate Investment. Mimeo.) xxiv See https://etendering.ted.europa.eu/cft/cft-display.html?cftId=8792. |
00410366 | en | [
"spi.meca.geme",
"phys.meca.geme"
] | 2024/03/04 16:41:22 | 2009 | https://hal.science/hal-00410366v2/file/Bisu-Gerard-Knevez-Laheurte-Cahuc-09.pdf | Claudiu F Bisu
email: [email protected]
Alain Gérard
email: [email protected]
Jean-Yves K'nevez
Raynald Laheurte
email: [email protected]
Olivier Cahuc
Self-excited vibrations in turning : Forces torsor analysis
The present work deals with determining the parameters needed for a three-dimensional model to simulate the turning process on a machine tool in a realistic way.
This paper is dedicated to the study of the influence of self-excited vibrations on the main mechanical characteristics of the workpiece / tool / material system. The measurement of the mechanical actions (forces and moments) using a six-component dynamometer confirms the existence of moments at the tool tip.
The fundamental frequency of 190 Hz proves to be common to the tool tip displacements, to the application point of the resultant force and to the torque exerted at the tool tip. The comparison of the results concerning displacements and forces shows that the application points of these elements follow similar ellipses located in nearly identical planes. The large and small axes of these ellipses increase with the feed rate, in accordance with the mechanical power injected into the system. Conversely, the ratio of the axes of each ellipse decreases with the feed rate, while the ratio of these ratios remains constant when the feed rate increases.
In addition, some chip characteristics are given, like the thickness variations, the width or the hardening phenomenon.
α κ(xy) Angle of main tool displacements direction included in the plane (x,y) (degree);
α κ(yz) Angle of main tool displacements direction included in the plane (y,z) (degree);
∆t Time corresponding to the phase difference between two signals (s);
Φ Primary shear angle (degree);
ϕ c Chip width slope angle between each undulation (degree);
ϕ f ui Phase difference between the tool tip displacement components, i = x, y, z (degree);
γ Cutting angle (degree);
λ s Inclination angle of the tool edge (degree);
κ r Direct angle (degree);
θ e(xy) Stiffness principal direction angle related to the plane (x,y) (degree);
θ e(yz) Stiffness principal direction angle related to the plane (y,z) (degree);
ξ c Chip hardening coefficient;
1 Introduction
In the three-dimensional cutting case, the mechanical actions torsor (forces and moments), is often truncated: the moments part of this torsor is neglected fault of adapted metrology [START_REF] Mehdi | Dynamic behavior of thin wall cylindrical workpiece during the turning process, Part 2: Experimental approach and validation[END_REF]. However, efforts and pure moments (or torque) can be measured [START_REF] Couétard | Capteurs de forces à deux voies et application à la mesure d'un torseur de forces[END_REF]. Recently, an application consisting in six components measurements of the actions torsor in cutting process was carried out for the case of high speed milling [START_REF] Couétard | Mesure des 6 actions de coupe en fraisage grande vitesse[END_REF], drilling [START_REF] Laporte | A parametric model of drill edge angles using grinding parameters[END_REF], [START_REF] Yaldiz | Design, development and testing of a four-component milling dynamometer for the measurement of cutting force and torque[END_REF], etc. Cahuc et al., in [START_REF] Cahuc | Experimental and analytical balance sheet in turning applications[END_REF] presents another use of this six components dynamometer in an experimental study: the taking into account of the cut moments allows a better machine tool power consumption evaluation. The present paper is dedicated to the six components dynamometer use to reach with fine accuracy a vibrations dynamic influence evalu-ation in turning on the system Workpiece/Tool/Machine tool (WTM). The concepts of slide block system, moments and torsor (forces and moments) are directly related to the work, undertaken to the previous century beginning, on the mathematical tool "torsor" [START_REF] Ball | A treatise on the theory of screws[END_REF]. Unfortunately, until now the results on the cutting forces are almost still validated using platforms of forces (dynamometers) measuring the three components of those [START_REF] Lian | Self-organizing fuzzy control of constant cutting force in turning[END_REF], [START_REF] Marui | Chatter vibration of the lathe tools. Part 2 : on the mechanism of exciting energy supply[END_REF]. The actions torsor is thus often truncated because the torsor moment part is probably neglected fault of access to an adapted metrology [START_REF] Yaldiz | Design, development and testing of a turning dynamometer for cutting force measurement[END_REF], [START_REF] Yaldiz | A dynamometer design for measurement the cutting forces on turning[END_REF].
However, recent experimental studies showed the existence of cutting moments to the tool tip [START_REF] Cahuc | Metrology influence on the cutting modelisation[END_REF], [START_REF] Laheurte | Metrological devices in cutting process[END_REF], [START_REF] Laheurte | Evaluation de l'énergie mise en jeu et du comportement des outils de coupe dans l'usinage[END_REF], [START_REF] Toulouse | An experimental method for the cutting process in three dimensions[END_REF]. Their taking into account allows thus the best machine tools output approaches [START_REF] Cahuc | Experimental and analytical balance sheet in turning applications[END_REF], [START_REF] Darnis | Energy balance with mechanical actions measurement during turning process[END_REF]. Nowadays, the use of a dynamometer measuring the mechanical actions torsor six components [START_REF] Bisu | Nouvelle analyse des phénomènes vibratoires en tournage[END_REF], [START_REF] Couétard | Caractérisation et étalonnage des dynamomètres à six composantes pour torseur associé à un système de forces[END_REF], [START_REF] Laporte | A parametric model of drill edge angles using grinding parameters[END_REF] allows a better cut approach and should enable to reach new system WTM vibrations properties in the dynamic case.
Moreover, the tool torsor has the advantage of being transportable in any space point and in particular at the tool tip in O point. The study which follows about the cut torsor carries out in several stages including two major; the first relates to the forces analysis and second is dedicated to a first moments analysis to the tool tip during the cut.
After this general information on the torsor concept, an analysis of the forces exerted in the cutting action is carried out (section 2). Thanks to this analysis and to experimental tests, we establish that the application points of the variable force exerted on the tool tip lie in a plane and, more precisely, follow an ellipse. Then, in section 3, a first analysis of the moments obtained at the tool tip point O is performed. The torsor central axis is then determined (section 4), and the beams of central axes deduced from the multiple tests strongly confirm the presence of moments at the tool tip.
Some chip characteristics are presented in section 5 and compared to the experimental database. Before concluding, a correlation between displacements and cutting forces (section 6) shows that these two dual quantities evolve in a similar way, following ellipses with similar properties.
Forces analysis
Test results
The experiments are performed within a framework similar to that of Cahuc et al [START_REF] Cahuc | Metrology influence on the cutting modelisation[END_REF]. For each test, except the feed rate values, all the turning parameters are constant. The mechanical actions are measured according to the feed rate (f) using the six components dynamometer [START_REF] Couétard | Capteurs de forces à deux voies et application à la mesure d'un torseur de forces[END_REF] following the method initiated in Toulouse [START_REF] Toulouse | Contribution à la modélisation et à la métrologie de la coupe dans le cas d'un usinage tridimensionnel[END_REF], developed and finalized by Couétard [START_REF] Couétard | Caractérisation et étalonnage des dynamomètres à six composantes pour torseur associé à un système de forces[END_REF] and used in several occasions [START_REF] Cahuc | Experimental and analytical balance sheet in turning applications[END_REF], [START_REF] Couétard | Mesure des 6 actions de coupe en fraisage grande vitesse[END_REF], [START_REF] Darnis | Energy balance with mechanical actions measurement during turning process[END_REF], [START_REF] Laheurte | Metrological devices in cutting process[END_REF], [START_REF] Laheurte | Evaluation de l'énergie mise en jeu et du comportement des outils de coupe dans l'usinage[END_REF]. On the experimental device (Fig. 1) the instantaneous spindle speed is permanently controlled (with an accuracy of 1%) by a rotary encoder directly coupled with the workpiece. During the tests the insert tool used is type TNMA 16 04 12 carbide not covered, without chip breeze. The machined material is an alloy of chrome molybdenum type 42CrMo24. The test-workpieces are cylindrical with a diameter of 120 mm and a length of 30 mm. They were designed starting from the Finite Elements Method being coupled to a procedure of optimization described in [START_REF] Bisu | Optimization and dynamic characterization system part in turning[END_REF]. Moreover, the tool geometry is characterized by the cutting angle γ, the clearance angle α, the inclination angle of edge γ s , the direct angle κ r , the cutting edge radius r ǫ , and the sharpness radius R [START_REF] Laheurte | Application de la théorie du second gradient à la coupe des matériaux[END_REF]. In order to limit to the wear appearance maximum along the cutting face, the tool insert is examined after each test and is changed if necessary (Vb ≤ 0.2 mm ISO 3685). The tool parameters are detailed in the Table 1. Two examples of resultant efforts measurements applied to the tool tip are presented: one of these for the stable case, ap = 2 mm (Fig. 2), and other for the case with instability, ap = 5 mm (Fig. 3). In the stable case it appears that the force components amplitudes remain almost independant from time parameter. Thus, the amplitude variation is limited to 1 or 2 N around their nominal values, starting with 200 N for (F x ) and until 600 N for (F y ). These variations are quite negligible. Indeed the nominal stress reached, the component noticed as the lowest value is the (F x ) one, while the highest in the absolute value is (the F z ) one. While taking as reference the absolute value of (F x ) the following relation between these three components comes:
$|F_x| = |F_z|/2 = |F_y|/3$.

Table 1 Tool geometrical characteristics: γ = -6°, α = 6°, λ s = -6°, κ r = 91°, r ǫ = 1.2 mm, R = 0.02 mm.
In the unstable case, we observe that the force component along the cutting axis (F y ) has the largest average amplitude (1 500 N). It is also the most disturbed (± 700 N), with oscillations between -2 200 N and -800 N. Likewise, the force along the feed rate axis (F z ) has a large average amplitude (1 000 N), with oscillations of smaller width in absolute value (± 200 N) and in relative value (± 20%). As for the force in the radial direction (F x ), it is the smallest on average (200 N) but the most disturbed in relative value (± 200 N). These important oscillations are the tangible consequence of frequent ruptures of the tool/workpiece contact and thus demonstrate the vibratory, dynamical behaviour of the WTM system.
Finally, we note that the amplitudes of all these efforts components applied to the tool tip are slightly decreasing functions of time in particular for the component according to the cutting axis.
Frequency analysis
The signals frequency analysis performed by using FFT function enables to note in Fig. 4 the presence of frequencies peaks around 190 Hz. Around this frequency peak, we note for the three forces components, a quite high concentration of energy in a wide bandwidth around 70 Hz (36% of the fundamental frequency). All things considered, this width of frequency is of the same order of magnitude as observed (13% of fundamental) by Dimla [START_REF] Sr | The impact of cutting conditions on cutting forces and vibration signals in turning with plane face geometry inserts[END_REF] for a depth of cut three times lower (ap = 1.5 mm) but for an identical feed rate (f=0.1 mm/rev) and a cutting speed similar.This remark confirms that the efforts components is proportionnal to the depth of cut ap as indicated by Benardos et al., [START_REF] Benardos | Prediction of workpiece elastic deflections under cutting forces in turning[END_REF]. Let us recall that the same frequency of 190 Hz was observed in the tool tip displacements case (in conformity with [START_REF] Bisu | Nouvelle analyse des phénomènes vibratoires en tournage[END_REF]). Consequently, the cutting forces components variations and the self-excited vibrations are influenced mutually, in agreement with [START_REF] Ispas | Vibrations des systèmes technologiques[END_REF], [START_REF] Kudinov | Dinamica Masinilor Unelten[END_REF], [START_REF] Marot | Coefficient dynamique de coupe. Théories actuelles et proposition d'une méthode de mesure directe en coupe[END_REF], [START_REF] Tansel | The chaotic characteristics of three dimensional cutting[END_REF]. Also, in agreement with research on the dynamic cutting process [START_REF] Moraru | Vibratiile si Stabilitatea Masinilor Unelte[END_REF], we note that the self-excited vibrations frequency is different from the workpiece rotational frequency which is located around 220 Hz.
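The kind of spectral analysis used here can be reproduced with a few lines of standard numerical code. The sketch below, written for an evenly sampled force component, is only indicative: the signal name and the sampling rate are assumptions, not values taken from the experimental files.

```python
import numpy as np

def dominant_frequency(signal, fs):
    """Return the frequency (Hz) of the largest FFT peak of a force signal.

    signal : 1-D array, e.g. the variable part of Fy
    fs     : sampling frequency in Hz (assumed value, e.g. 10 kHz)
    """
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()                       # keep only the variable part
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    return freqs[np.argmax(spectrum)]      # expected near 190 Hz here
```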
Forces decomposition
A detailed analysis of the resultant force components highlights a plane in which a variable cutting force F v evolves around a nominal value F n (see below). This variable force is an oscillating action (Fig. 5) which generates the tool tip displacements u and maintains the vibrations of the elastic block-tool system BT [START_REF] Bisu | The regenerative vibration influence on the mechanical actions turning[END_REF].
Thus, the variable cutting force (Fig. 5) and the self-excited vibrations of the elastic WTM system interact, in agreement with previous work [START_REF] Koenigsberger | Machine Tools Structures[END_REF], [START_REF] Moraru | Vibratiile si Stabilitatea Masinilor Unelte[END_REF]. The variable part of the cutting forces can be observed and compared. To keep this section concise, the cutting forces analysis is deliberately restricted below to two different situations:
- stable process, using cutting depth ap = 2 mm (Fig. 7a),
- unstable process (with vibrations), using cutting depth ap = 5 mm (Fig. 7b).

The effects of the vibrations on the evolution of the variable forces are detailed in Fig. 7b. Moreover, the analysis of the variable forces and of the displacements associated with the tool tip during the unstable process shows that the forces variation ratio is equivalent to the tool tip displacements variation ratio (in conformity with [START_REF] Bisu | Etude des vibrations auto-entretenues en coupe tridimensionnelle: nouvelle modélisation appliquée au tournage[END_REF]). This aspect will be quantified further on. We therefore concentrate our attention mainly on the unstable case (ap = 5 mm).
Determination of the plane attached to the forces application points
The analysis of the tests shows that, in the vibratory mode, the load application points describe an ellipse (Fig. 8), which is not the case in the stable mode (without vibrations, Fig. 7a).
The method used in [START_REF] Bisu | Displacements analysis of self-excited vibrations in turning[END_REF] to determine the tool tip displacements plane is used again here (in conformity with the Appendix, section 8.1) to establish the plane P f , locus of the load application points, characterized by its normal n f (Table 2). As for the tool tip displacements study, a new reference system (n fa , n fb ) is associated with the ellipse of the load application points. In this new reference system (in conformity with section 8.2), the large (respectively small) ellipse axis dimensions a f (respectively b f ) are obtained (Fig. 9). The values of a f and b f , as well as their ratio a f / b f , are given in Table 3. It should be noted that a f and b f are increasing functions of the feed rate (f). The same holds for the ellipse surface, which grows with f, in perfect agreement with the mechanical power injected into the system, which is also an increasing function of the feed rate. On the other hand, the ratio a f / b f is a decreasing function of the feed rate. Thus the ellipse elongation evolves coherently with the feed rate. One way to extract these quantities from the measured point cloud is sketched below.

Table 3 Large and small ellipse axes attached to the forces application points depending on the feed rate; case study using ap = 5 mm, f = 0.1 mm/rev and N = 690 rpm.
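A common way to obtain such a plane and the associated ellipse axes is a principal-component analysis of the cloud of application points. The sketch below illustrates that route; it is not the Mathcad procedure of the Appendix, and the 2-sigma convention for the half-axes is an assumption.

```python
import numpy as np

def plane_and_ellipse_axes(points):
    """Fit the mean plane of a 3-D point cloud and estimate in-plane ellipse axes.

    points : (n, 3) array of force application points.
    Returns the unit normal n_f and half-axes (a_f, b_f) taken as 2*sigma
    along the two in-plane principal directions (one possible convention).
    """
    P = np.asarray(points, dtype=float)
    centred = P - P.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    n_f = vt[2]                      # direction of least variance = plane normal
    coords = centred @ vt[:2].T      # coordinates in the plane (n_fa, n_fb)
    a_f, b_f = 2.0 * coords.std(axis=0)
    return n_f, max(a_f, b_f), min(a_f, b_f)
```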
Let us now look at the evolution of the moments at the tool tip.
3 First moment analysis
Experimental results
For each test, the complete mechanical actions torsor is measured according to the method already detailed in section 2.1 [START_REF] Couétard | Caractérisation et étalonnage des dynamomètres à six composantes pour torseur associé à un système de forces[END_REF]. Measurements are taken at the centre O' of the six-component dynamometer transducer and then transported to the tool tip O via the traditional moment transport relations [START_REF] Brousse | Cours de mécanique (1 er cycle et classe prépa[END_REF]. As for the forces, the variable part of the moments is extracted from the measurements. An example of the measurement results is given in Fig. 10, and a zoom on the variable part of the moments is presented in Fig. 11. Taking into account the chaotic aspect of the recordings obtained, an accurate frequency analysis of the moments components is necessary. It is the object of the following section.
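For completeness, the transport relation used to move the measured moment from the transducer centre O' to the tool tip O is the classical one (written here in its usual form, with R the force resultant; this is standard mechanics rather than a formula quoted from the paper):

$\mathbf{M}_{O} = \mathbf{M}_{O'} + \overrightarrow{OO'} \wedge \mathbf{R}$,

where $\overrightarrow{OO'}$ denotes the vector from the tool tip O to the transducer centre O'.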
Moments frequency analysis
An example of moments signals frequency analysis during the vibratory cutting is presented in the Fig. 12. As for the forces analysis, the moments components FFT shows that the most important frequency peak is localized around 190 Hz.
Fig. 11 The variable part of the moments components acting on the tool tip along the three cutting directions (zoom); case study ap = 5 mm, f = 0,1 mm/rev and N = 690 rpm.

Moreover, of all the components, the most important fundamental amplitude is that of the moments component along the cutting axis (y), as its
transport at the tool tip confirms. It should be noted that the force component along this same axis is also the most important, but it obviously has no influence on the moment about this axis because the two elements are colinear. Conversely, the smallest vibration amplitude is that of the moment component along the z axis. However, since the number of revolutions about this axis is large, the contribution of the M z component to the torque power consumption remains significant. The appearance of other peaks, which are harmonics, slightly modifies the three-dimensional representation of the moments (Fig. 13), which is no longer exclusively planar although the essence of the ellipse lies in a plane. This representation approaches a slight figure-of-eight shape, in contrast with the planar elliptic form characteristic of the forces.

4 Central axis
Central axis determination
It is well-known that with any torsor, it is possible to associate a central axis (except the torsor of pure moment), which is the single object calculated starting from the torsor six components [START_REF] Brousse | Cours de mécanique (1 er cycle et classe prépa[END_REF].
A torsor [A] O in a point O is composed of a forces resultant R and the resulting moment M O :
$[A]_O = \{\mathbf{R}, \mathbf{M}_O\}$. (1)
The central axis is the line defined classically by:
$\mathbf{OA} = \dfrac{\mathbf{R} \wedge \mathbf{M}_O}{\|\mathbf{R}\|^{2}} + \lambda \mathbf{R}$, (2)
where O is the point where the mechanical actions torsor was moved (here the tool tip) and A the current point describing the central axis. OA is thus the vector associated with the bi-point [O, A] (Fig. 14).
This line (Fig. 14-(a)) corresponds to the geometric locus of the points where the moment of the mechanical actions torsor is minimal. The central axis calculation consists in determining the set of points (a line) where the torsor can be expressed as a slider (the direction of the line) plus a pure moment (or torque) [START_REF] Brousse | Cours de mécanique (1 er cycle et classe prépa[END_REF].
The central axis is also the locus of the points where the resultant cutting force is colinear with the minimum mechanical moment (pure torque). The test results enable checking, for each measurement point, the colinearity between the resultant cutting force R and the moment M A calculated on the central axis (Fig. 14-(b)).
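Equation (2) translates directly into a few lines of code. The sketch below computes, for one measured torsor, a point of the central axis, the minimum moment and the colinearity residual mentioned above; it is an illustration, not the original processing script.

```python
import numpy as np

def central_axis(R, M_O):
    """Central axis of a torsor (R, M_O) expressed at the tool tip O.

    Returns a point A of the axis (lambda = 0), the minimum moment M_A
    (pure torque, colinear with R) and a residual used as a colinearity check.
    """
    R, M_O = np.asarray(R, float), np.asarray(M_O, float)
    R2 = R @ R
    OA = np.cross(R, M_O) / R2                   # equation (2) with lambda = 0
    M_A = (R @ M_O) / R2 * R                     # projection of M_O on R
    residual = np.linalg.norm(np.cross(R, M_A))  # ~0 when M_A is colinear with R
    return OA, M_A, residual
```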
The meticulous examination of the six components of the mechanical actions torsor shows that the average values of the forces and of the moment are not null. For each measurement point, the central axis is calculated, in the stable (Fig. 15-(a)) and unstable modes (Fig. 15-(b)). Strictly speaking, the case ap = 2 mm should be described as a quasi-stable regime, because vibrations exist but their amplitudes are very low (of the order of a µm), thus almost negligible compared to the other studied cases. For the cutting depth value ap = 3 mm, the recorded amplitude was 10 times larger. In the presence of vibrations (ap = 5 mm), for a workpiece speed of 68 rpm (44 measurement points per revolution), the dispersive character of the beam of central axes can be observed compared to the stable mode, where this same beam is tighter and less tilted with respect to the normal to the plane (x,y). This dispersion of the central axes can be explained by the self-excited vibrations, which cause the generation of a variable moment.
Analysis of the moments related to the central axis
By transporting the moment from the tool tip to the central axis, the minimum moment (pure torque) M A is obtained. From the moment values on the central axis, its constant and variable parts are deduced. As for the forces, the variable part is due to the self-excited vibrations, as shown below (Fig. 16).
Using this decomposition, the contribution of the moments on the tool / workpiece / chip contact zones is expressed. The observations resulting from the analysis show that the tool vibrations generate rotations, cause variations of contact and thus generate variable moments, confirming the forces analysis detailed in section 2.1. This representation enables expressing the moments along the three axes of the machine tool: a swivel moment in the y direction and two rotation moments along the x and z directions. The analysis of the moments components determined at the central axis shows that the moments are localized mainly in two distinct zones. Taking this aspect into account, the variable moments components are divided into two parts, noted d 1 and d 2 , along the three directions related to the machine. The components of these variable moments on the x, y, z axes are noted respectively (M axv ), (M ayv ) and (M azv ).
In the vibratory case (ap = 5 mm), a first family of points is located (Fig. 16) along a line d 2 and a second along d 1 (the large ellipse axis). In the case without vibrations (ap = 2 mm), the two families d 1 and d 2 merge into a single family located along only one line. Thus, the appearance of the elliptic form around the line d 1 seems directly related to the self-excited vibrations (case ap = 5 mm) [START_REF] Bisu | The regenerative vibration influence on the mechanical actions turning[END_REF]. In particular, the frequency associated with the d 1 part is higher than the one associated with the d 2 part. Furthermore, these frequencies belong to the frequency domain found during the FFT analysis of the moments (Fig. 12). Finally, on the central axis, the families d 1 and d 2 seem to correspond to distinct elements of the generated surface.
5 Workpiece and chip geometry
Roughness measurements
In material removal processes by cutting tool, it is well known that the surface quality of the manufactured parts is closely connected to the thrust force imposed on the material [START_REF] Chen | A stability analysis of regenerative chatter in turning process without using tailstock[END_REF]. In particular, the larger the thrust force exerted on the surface, the deeper the imprint left by the tool on this surface. The surface roughness is thus closely related to the thrust force intensity. The self-excited vibrations therefore have an influence on the surface quality of the workpieces. Now, in the self-excited vibrations case examined here, we have just noticed that the forces and moments are maximal at the resonance frequency, which is situated around 190 Hz.
Taking into account the link between roughness and the forces applied to the surface, we should find a maximum of surface roughness around this frequency. For this purpose, we propose to measure the roughness profile along a generator of the cylindrical manufactured part. Indeed, at every rotation of the machined part, the tool leaves an imprint which is a function of the force applied at the observed point of the generator, and which corresponds to a given, known instant. Along a given generator, each point is thus the image of the force applied to the surface at a known instant, because the rotation speed of the workpiece is known.
The examination of the surface roughness along a generator should thus necessarily reveal the roughness amplitudes associated with the forces periodically applied by the tool at these points. The FFT of the roughness profile along a generator of the machined cylindrical part should therefore, like the forces, pass through a maximum around the frequency of 190 Hz. This is indeed what can be observed in Fig. 17, where the FFT analysis of the roughness data shows a frequency peak located around 190 Hz (precisely 191.8 Hz), which is coherent with the previous data.
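In practice this check amounts to taking the FFT of the roughness profile sampled along the generator and converting spatial frequencies into temporal ones through the feed speed. A possible sketch is given below; the profile name, the sampling step and the unit conventions are assumptions, not quantities taken from the measurement files.

```python
import numpy as np

def roughness_peak_frequency(profile, dz_mm, feed_mm_per_rev, spindle_rpm):
    """Dominant temporal frequency (Hz) hidden in a roughness profile.

    profile : roughness heights sampled every dz_mm along a generator
    The feed speed feed_mm_per_rev * spindle_rpm / 60 (mm/s) maps spatial
    frequency (cycles per mm) to temporal frequency (Hz).
    """
    z = np.asarray(profile, float) - np.mean(profile)
    spectrum = np.abs(np.fft.rfft(z))
    spatial_freq = np.fft.rfftfreq(z.size, d=dz_mm)        # cycles per mm
    feed_speed = feed_mm_per_rev * spindle_rpm / 60.0       # mm per second
    return spatial_freq[np.argmax(spectrum[1:]) + 1] * feed_speed
```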
In addition, the surface roughness analysis gives a total roughness value R t = 1,6 µm.
Chip characteristics
Chip measurements under the Scanning Electron Microscope were carried out and enabled determining the thickness variation and the chip width. All chips are of type 1.3 (ISO 3685), with undulations.
The ratio between the maximum (h max ) and minimum (h min ) chip thickness is about 2, independently of the feed rate values. An example is presented (Fig. 18) for a chip sample from a test with feed rate f = 0.05 mm/rev; the values obtained are h max = 0.23 mm and h min = 0.12 mm. The measurement of the chip length corresponding to one undulation enables finding the self-excited vibration frequency from the cutting speed (in conformity with equation 3):
$f_{cop} = \dfrac{V}{l_0 \, \xi_c}$, (3)
where f cop is the chip segmentation frequency, V the cutting speed, l 0 the length of one chip undulation and ξ c the chip hardening coefficient.
To determine the total chip length, it is necessary to measure the wavelength, taking into account the chip hardening phenomenon during the cutting process (in conformity with equation 4) [19] and the primary shear angle φ,
$\xi_c = \dfrac{\cos(\phi - \gamma)}{\sin(\phi)}$. (4)
In our case, the measured chip undulation length is l 0 = 11 mm, with a chip hardening rate ξ c = 1.8 and a cutting speed V = 238 m/min. A frequency of 206 Hz is then obtained, very close to the frequencies of the tool tip displacements and of the load application points during the cutting process.
The chip width is then measured with similar techniques. Substantial width variations are observed, about 0.5 mm. Indeed, the measured maximum width w max is 5.4 mm while the evaluated minimal width w min is 4.9 mm, which means a width variation about 10% -Fig. 19.
The measurement of the chip width slope angle ϕ c between each undulation (Fig. 10), close to 29°, is equal, within measurement errors, to the phase difference measured on the tool tip displacement signals (28°) (cf. [START_REF] Bisu | Etude des vibrations auto-entretenues en coupe tridimensionnelle: nouvelle modélisation appliquée au tournage[END_REF]).
6 Correlation between tool tip displacements and applied forces

A synthesis between the two parts of this work is essential. It is carried out below in order to highlight the various correlations which exist between stiffness and displacements, tool displacements and stiffness centre, or stiffness centre and central axis.
Correlation between the tool tip displacements plane and the applied forces plane
The tool tip displacements are localized (cf. [START_REF] Bisu | Nouvelle analyse des phénomènes vibratoires en tournage[END_REF]) in a tilted plane. Through the stiffness matrix determined in [START_REF] Bisu | Optimization and dynamic characterization system part in turning[END_REF], a correlation exists between the tool tip displacements and the applied cutting forces. In particular, the ratios between the large and small ellipse axes of the tool displacements (a u /b u ) and of the applied forces (a f /b f ) are decreasing functions of the feed rate, while the ratio of these ratios remains constant (equal to 1.64) when the feed rate increases. These elements enable determining accurately the real configuration of the cutting process.

Table 4 The normals (n f , n u ) of the planes of the tool tip displacements and of the applied forces along the three cutting directions, depending on the feed rate.
These correlations are analyzed using the direct normal to the tool tip point displacements plane and the direct normal to the load application points place (Table 4). The existence of these two planes is particularly interesting and adapted to the establishment of a cutting process real configuration. This new aspect is in the course of implementation in order to express and exploit a simplified dynamic three-dimensional model in the reference system associated with these planes [START_REF] Bisu | Dynamic model of the threedimensional cut, 18 th Int[END_REF].
Self-sustained vibrations: experimental validation
From these studies, it comes out that the self-excited vibration domain is around 190 Hz, with an accuracy of a few per cent. It is around this common fundamental frequency that all the major characteristics (displacements, forces, and moments) of the WTM system have their most important amplitude variations. The analysis carried out on the measurements of the tool tip displacements and of the load application points enables evaluating the constant phase difference existing between the force components and the corresponding displacements (Fig. 20, Fig. 21 and Fig. 22).
Because of parts elasticity intervening in the operation of turning, it is logical that the response in displacements of unit BT-BW is carried out with a certain shift compared to the efforts variation applied to the tool, variation induced by the lacks of machined surface cir-cularity in the preceding turn which imply variations of the contact tool/workpiece (Fig. 17). Phase difference between the efforts and displacements thus remains a possible explanation to the self-excited vibrations appearance. Moreover, when the tool moves along the ellipse places e 1 , e 2 , e 3 (Fig. 23), the cutting force carries out a positive work because its direction coincides with the cutting direction. On the other hand, on the side e 3 , e 4 , e 1 , the work produced by the cutting force is negative since its direction is directly opposed to that of displacement. The comparison between these two ellipse parts, as shows that the effort on the trajectory e 1 , e 2 , e 3 is higher as on the trajectory e 3 , e 4 , e 1 because the cutting depth is more important. At the time of this process, work corresponding to an ellipse trajectory remains positive and the increase of result energy thus contributes to maintain the vibrations and to dissipate the energy in the form of heat by the assembly tool/workpiece.
Conclusion
The experimental procedures proposed in this work-paper, as well at the static and dynamic level, enabled to determine the elements necessary to a rigorous analysis of the tool geometry influence, its displacement and evolution of the contacts tool/workpiece and tool/chip on the machined surface.
In particular, by analyzing the efforts resultant torsor applied during turning process, the experimental results enabled to establish an efforts vector decomposition highlighting the evolution of a variable cutting force around a constant value. This variable effort evolves into a plane inclined compared to the machine tool reference system.
This cutting force, whose application point describes an ellipse, is perfectly well correlated with the tip tool displacement which takes place under similar conditions. In particular, the ellipses axes ratios remains in a constant ratio when the feed rate varies with a proportionality factor equal with 1,64.
Moreover, the highlighted coupling between the system elastic characteristics BT and the vibrations generated by cutting enabled to demonstrate that the selfexcited vibrations appearance is strongly influenced, as it was expect to, by the system stiffness, their ratio and their direction. We also established a correlation between the vibratory movement direction of the machine tool elastic structure, the thickness variations and the chip section.
These results enable to now consider a more complete study by completely exploiting the concept of torsor. Indeed, thanks to the six components dynamometer, we confirmed, for an turning operation, the moments existence to the tool tip not evaluated until now by the traditional measuring equipment.
The originality of this work is multiple and in particular consists in a first mechanical actions torsor analysis applied to the tool tip, with an aim of making evolve a model semi analytical cutting 3D. This study allows, considering a turning operation, to establish strong correlations between the self-excited vibrations and the mechanical actions torsor central axis. It is thus possible, thanks to the parameters use defining the central axis, to study the vibrating system tool/workpiece evolution. It also leads to the description of a "plane of tool tip displacements" in correspondence with "the load application points plane".
Thus, using the plane that characterizes the BT behaviour enable to bring back the three dimensional cutting problem, with space displacements, with a simpler model situated in an inclined plane compared with the reference system of machine tool. Nevertheless, that remains a specific model of three dimensional cutting.
Appendix
Determination of the place points plane of load application on the tool
The plane P u definition being load application points geometrical place on the tool starting from the experimental results is carried out using the computation Mathcad c software. We seek to determine the plane which passes by the load application points cloud on the tool (Fig. 8 section 2.4): ax + by + cz + d = 0.
Fig. 1 Experimental device and associated measurement elements.
Fig. 2 Signals related to the resultant components of cutting forces following the three cutting directions; test case using parameters ap = 2 mm, f = 0.1 mm/rev and N = 690 rpm.
Fig. 3 Signals related to the resultant components of cutting forces following the three (x, y, z) cutting directions; test case using parameters ap = 5 mm, f = 0.1 mm/rev and N = 690 rpm.
Fig. 4 FFT of the cutting force magnitude signal on the three (x, y, z) directions; test case using cutting parameters ap = 5 mm, f = 0.1 mm/rev and N = 690 rpm.
Fig. 5 Cutting force Fv evolution around the nominal value Fn; test case using cutting parameters ap = 5 mm, f = 0.1 mm/rev and N = 690 rpm.
Fig. 6 Zoom on the variable components of the resultant cutting force on the three cutting directions; test case using parameters ap = 5 mm, f = 0.1 mm/rev and N = 690 rpm.
Fig. 7 Stable process (a) and unstable process (b).
Fig. 8 The ellipse plane P_f attached to the range of the force application points, considering ap = 5 mm, f = 0.1 mm/rev and N = 690 rpm.
Fig. 9 Ellipse approximation attached to the force application points; case study using parameters ap = 5 mm, f = 0.1 mm/rev and N = 690 rpm.
Fig. 10 Time signals of the moment components acting on the tool tip following the three directions; case study using ap = 5 mm, f = 0.1 mm/rev and N = 690 rpm.
Fig. 12 FFT of the moment signal on the three (x, y, z) directions; case study using ap = 5 mm, f = 0.1 mm/rev and N = 690 rpm.
Fig. 13 Space representation of the locus of the moments.
Fig. 14 Central axis representation (a) and collinearity between the vector sum R and the minimum moment M_A on the central axis (b).
Fig. 15 Central axes obtained for a workpiece speed of 68 rpm and feed rate f = 0.0625 mm/rev; (a) stable process, ap = 2 mm; (b) unstable process, ap = 5 mm.
Fig. 16 Moment representation related to the central axes; case study using ap = 5 mm, f = 0.1 mm/rev and N = 690 rpm.
Fig. 17 Machined surface and FFT data related to the roughness; case study ap = 5 mm, f = 0.1 mm/rev, N = 690 rpm.
Fig. 18 Chip thickness variation evaluation.
Fig. 19 Chip width evaluation.
Fig. 20 Phase difference evaluation between forces / displacements along the x axis.
Fig. 21 Phase difference evaluation between forces / displacements along the y axis.
Fig. 22 Phase difference evaluation between forces / displacements along the z axis.
Table 5 Values of the phase differences between forces and displacements (ap = 5 mm, f = 0.1 mm/rev, N = 690 rpm): phi_fux = 13°, phi_fuy = 23°, phi_fuz = 75°.
Fig. 23 Elliptical trajectory of the tool / workpiece movement.
Keywords Self-excited vibrations • Experimental model • Turning • Torsor measurement • Torsor central axis
Nomenclature
A: Central axis point;
[A]_O: Actions torsor exerted on the tool tip at point O;
ap: Cutting depth (mm);
a_f (b_f): Large (small) axis of the ellipse attached to the geometrical locus of the force application points;
BT: Block Tool;
BW: Block Workpiece;
d_i: Director line of the projection of the moments onto the central axis (i = 1, 2);
e_i: Point of the ellipse attached to the force application points (i = 1-5);
f: Feed rate (mm/rev);
f_cop: Chip segmentation frequency (Hz);
F_v (F_n): Variable (nominal) cutting force (N);
F_x: Force in the radial (cross) direction (N);
F_y: Force on the cutting axis (N);
F_z: Force on the feed rate axis (N);
h_max: Maximum chip thickness (mm);
h_min: Minimum chip thickness (mm);
l_0: Chip undulation length (mm);
M_A: Minimum moment at A of the cutting forces acting on the tool (N.m);
M_O: Moment at point O of the cutting forces acting on the tool (N.m);
N: Spindle speed (rpm);
n_f: Normal direction of P_f;
n_fa (n_fb): Projection of the normal direction of P_f on a_f (b_f);
O: Tool point reference;
O': Centre of the dynamometer transducer;
P_f: Plane attached to the force application points;
P_u: Plane attached to the tool point displacements;
R: Vector sum of the cutting forces acting on the tool (N);
r_eps: Cutting edge radius (mm);
R: Sharpness radius (mm);
R_t: Roughness (µm);
T: Time related to one revolution of the workpiece (s);
u: Tool tip point displacement (m);
V: Cutting speed (m/min);
w_max: Maximum chip width (mm);
w_min: Minimum chip width (mm);
WTM: Workpiece-Tool-Machine tool;
x: Radial direction;
y: Cutting axis;
z: Feed rate direction;
alpha: Clearance angle (degree);
alpha_kappa(xy)
Table 1 Tool geometrical characteristics.
Table 2 Director normal of the plane P_f attached to the force application points on the tool; case study using ap = 5 mm, f = 0.1 mm/rev and N = 690 rpm.
Acknowledgements The authors would like to thank the CNRS (Centre National de la Recherche Scientifique UMR 5469) for the financial support to accomplish the project.
The errors are denoted e_rr and we have:
e_rr(x, y, z, x_p, y_p, z_p, x_n, y_n, z_n) = [M(x, y, z) - P(x_p, y_p, z_p)] . n_f(x_n, y_n, z_n),
where:
Here the superscript t indicates transposition.
Expressing the function E_rr in terms of e_rr and introducing the displacement components (u_xi, u_yi, u_zi) of the tool load application points along the three space directions, we obtain:
e_rr(u_xi, u_yi, u_zi, x_p, y_p, z_p, x_n, y_n, z_n), and:
The vector V is then calculated by minimization:
where
It follows that the components of the normal direction n_f to the plane of the load application points on the tool are given by:
For the case study presented in section 2.4 (Table 2), with ap = 5 mm and f = 0.1 mm/rev, we obtain:
The equation of the required plane is then:
where s and t are constants, and V_p is equal to:
and u_1 is given by:
with the orientation vector u_0 of the plane given by:
Ellipse approximation
Using the ellipse plane determination [START_REF] Bisu | Displacements analysis of self-excited vibrations in turning[END_REF], it is possible to determine the characteristics given in Table 3. |
04103949 | en | [
"shs.eco"
] | 2024/03/04 16:41:22 | 2023 | https://shs.hal.science/halshs-04103949/file/Note6_Tax-Transparcney-by-Multinationals_February2023-1.pdf | Sarah Godar
Manon Francois
Panayiotis Nicolaides
Mona Barake
Country-by-Country Reporting is a key data source for understanding the activities of multinational firms. This note explores public Country-by-Country Reports (CbCRs) published by multinational companies to highlight several important trends. First, while only a small number of large multinationals currently publish their CbCRs, the number of companies is increasing rapidly for both large and smaller multinational firms. However, these reports are scattered across different sets of documents, making collecting and analysing them challenging. Second, CbCR publishing is driven by European companies, especially companies active in the extractive sector. Finally, published reports are generally not complete in terms of variables included but present a satisfactory geographical disaggregation in most cases.
Introduction
A decade of data leaks 1 has revealed the extent of profit shifting conducted by multinational companies. Confidence in these multinationals has eroded, leading the general public, non-governmental organizations, investors and institutions to increasingly request transparency about multinationals' tax contributions around the world. Only last year, shareholders requested the publication of tax information at the annual meetings of major multinationals such as Amazon, Cisco and Microsoft (FACTCOALITION, 2022). In the coming years, public disclosure will become a reality in Europe, 2 and potentially in Australia, 3 while in the United States, the bill requiring the publication of public tax information passed the House in 2021 but stalled in the Senate (United States Congress, 2021).
In this context, Country-by-Country Reports (CbCRs) play a crucial role as a new source of information on multinational firms' activities. These reports are compiled by multinationals and provide for the first time a comprehensive and detailed overview of the country-level distribution of key tax-related financial items such as profit, taxes and economic activities. As such, they will be a central input in evaluating future corporate tax reforms and monitoring the tax avoidance behaviour of multinationals.
The two main initiatives that have accelerated the adoption of Country-by-Country Reporting are led by the public multilateral Organization for Economic Co-operation and Development (OECD) and the Global Reporting Initiative (GRI). The OECD minimum standard requires large firms to privately disclose CbCRs with the appropriate tax authority starting from the fiscal year 2016 for early adopters, while GRI 207 Tax 2019 sets expectations for voluntary public disclosure of CbCRs alongside a tax strategy and governance description.
In this note, we investigate the uptake of voluntary CbCR disclosure by multinationals. We are asking three main questions: How many companies are currently publishing their CbCRs? Which are the characteristics of these companies? How complete are their disclosures?
To investigate the uptake of public disclosures of CbCRs, we rely on the new Public CbCRs database4 built at the EU Tax Observatory which standardizes and compiles the reports of over 100 multinationals into a single dataset. These CbCRs, which are usually scattered throughout annual reports, sustainability reports and other types of reports, are collected in one single place. To the best of our knowledge, this database covers the large majority of multinationals publishing CbCRs on a voluntary basis.
Our main findings are the following: first, overall CbCR publishing rates are low (97 reports for 2020), but increasing rapidly. Second, CbCR publishing is concentrated in European countries and in the extractive sector. Last, there remains significant room for progress on the completion of the information provided and the accessibility of these reports: 55% of the reports do not include all the recommended variables and reports are published in a wide variety of documents. This note is structured as follows: section 2 explains what a CbCR is, section 3 shows trends in CbCR publication, section 4 characterises publishing multinationals, section 5 analyses reports' completeness.
2 What is a Country-by-Country Report?
Country-by-country reporting is the reporting of financial, economic and tax-related information for each jurisdiction in which a multinational operates. Its core item consists of a table where the different columns correspond to the required variables and each row corresponds to each jurisdiction in which the multinational is present. For each jurisdiction, the financial figures of all the resident entities are aggregated in one number, representing the total activities of the multinational in that specific country.
For example, in the CbCR of Shell, Figure 1, the first row aggregates the activities of all Shell's entities present in Albania.
Companies mainly follow two closely related frameworks when providing CbCRs: OECD BEPS Action 13 and GRI 207-4 (GRI, 2020). OECD's Action 13 final report provides a template for multinationals to disclose information for each tax jurisdiction in which they operate. This includes aggregated data on: related party, unrelated party and total revenues, profits before income taxes, income tax paid and accrued, stated capital, accumulated earnings, number of employees and tangible assets other than cash and cash equivalents. It is mandatory for the largest multinationals (with previous year consolidated revenues larger than €750 million) and does not entail public disclosure.
In terms of the required information, GRI 207-4 is closely related to the OECD standard: it requires the same set of variables with the exception of accumulated earnings and stated capital. In terms of scope, the GRI standard is wider as compared to the OECD as there is no size restriction to CbCR disclosure.
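To make the structure of such a report concrete, the following Python sketch builds a minimal CbCR-like table with the OECD column set; the column labels, the two rows and every figure are invented for illustration and do not come from any actual report.

import pandas as pd

# Columns follow the OECD Action 13 template; GRI 207-4 uses the same set
# minus "stated_capital" and "accumulated_earnings".
columns = [
    "jurisdiction", "related_party_revenues", "unrelated_party_revenues",
    "total_revenues", "profit_before_tax", "tax_paid", "tax_accrued",
    "stated_capital", "accumulated_earnings", "employees", "tangible_assets",
]

# Two illustrative jurisdiction rows (all figures are invented).
cbcr = pd.DataFrame(
    [
        ["Albania", 1.0, 12.0, 13.0, 2.0, 0.4, 0.5, 3.0, 1.5, 120, 8.0],
        ["Austria", 0.5, 30.0, 30.5, 4.0, 0.9, 1.0, 6.0, 2.5, 310, 20.0],
    ],
    columns=columns,
)

# The GRI 207-4 view simply drops the two extra OECD columns.
gri_view = cbcr.drop(columns=["stated_capital", "accumulated_earnings"])

# One aggregated figure per jurisdiction, e.g. profit booked per employee.
print((cbcr["profit_before_tax"] / cbcr["employees"]).round(3))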
A crucial difference is that the GRI mandates voluntary public disclosure.
2.1 Where are the reports published?
Currently the reporting of this information is far from standardised, with no defined place where multinationals are required or recommended to publish the data. This translates into companies disclosing CbCRs in a wide variety of documents and formats, making it challenging to find and extract the data. For example, some multinationals use standalone tax payments reports, while others use tax transparency reports or sustainability reports and annual reports. Figure 2 shows a few examples of the collected reports.
FIGURE 2 Examples of reports
Source: Publicly available reports, 2019-2021
3 How many reports have been published so far?
Although public reporting at a country-by-country level remains largely voluntary, a number of firms already disclose this information publicly. As can be seen in Figure 3, the number of reporting firms has been steadily increasing over the past years: 97 reports have been collected relative to the fiscal year 2020, compared with 20 for the fiscal year 2018. Reports are generally published with a one- or two-year lag, hence for the fiscal year 2021 reports are expected to be published between 2022 and 2023.
4 Which multinationals are publishing their Country-by-Country Reports?
Breakdown by company size
Few large multinationals publish CbCRs: as reported in Table 1, only 2.7% of the top 1500 firms 5 publish CbCRs, and this number is only slightly higher when focusing on the top 1000 (3.2%), the top 500 (4%) or the top 100 firms (7%).
It could be expected that publishing is concentrated among large multinationals (with revenues above €750 million) that potentially face a reduced administrative burden and already comply with the requirement to file CbCRs with tax authorities. We find that this is partly the case: out of the 65 multinationals reporting revenues data, only about 14% (9/65) are below the €750 million threshold.
TABLE 1 Percentage of top multinationals publishing CbCRs
Top firms (Forbes Global Index)    % of firms publishing
Top 100     7%
Top 500     4%
Top 1000    3.2%
Top 1500    2.7%
Note: The Forbes Global Index contains the top 1500 firms -excluding banks -based on a composite index weighting sales, profits, assets and market value. We use the Public CbCRs database, EU Tax Observatory (2022), as a proxy for firms publishing.
There is some evidence that smaller multinationals are also publishing: only about 29% (32/111) of firms in our database are in the top 1000 Forbes firms and about 37% (41/111) are in the top 1500
Forbes firms. This means that almost two-thirds of multinationals in our database are not in the top 1500 largest global multinationals, pointing to significant reporting by smaller multinationals.
Breakdown by headquarter country
With minimal regulation and few reporting standards, the extent of public disclosures by firms varies significantly across countries and regions. This could be related to recent changes in disclosure requirements. In Italy, before the adoption of the Non-Financial Reporting Directive 2014/95/EU, non-financial information reporting had been largely voluntary (with the exception of banking). The transposition of the directive into national law with Legislative Decree No. 254/2016 made mandatory the disclosure of non-financial and diversity information in particular related to environmental issues, social and employee-related matters, respect for human rights and anti-corruption and bribery matters.
In Spain, Law 11/2018 6 amended the applicable rules on the disclosure of non-financial and diversity information, introducing a requirement that large Spanish firms 7 disclose tax information as part of their non-financial reporting. Importantly, the disclosure of profits earned, tax paid on profits and public subsidies received is required on a country-by-country basis.
7 Firms with an average of over 500 employees during the year, or meeting two of the three following criteria over the last two years, are required to disclose: total assets amount to more than €20 million, annual net revenues exceed €40 million, or the average number of workers exceeds 250. Starting in 2022, the law will be strengthened and any firm meeting the previous criteria or with an average of over 250 employees during the year will be required to disclose.
Breakdown by sector
There is significant variation in CbCR disclosure across sectors. The high level of disclosures in some sectors has likely been driven by sector-wide initiatives. For example, there have been multiple initiatives in the extractive sector, such as Publish What You Pay and the Extractive Industries Transparency Initiative, that have long focused on transparency around payments to governments. This might explain why the Mining and Extraction and the Petroleum and Utilities sectors account for a large portion of the reported CbCRs and the top 1500 firms. Other sectors might be exposed to specific tax risks driven by the nature of their business models. The financial sector has been under greater scrutiny since the financial crisis, and the technology sector has faced greater tax scrutiny as its reliance on intellectual property assets and exposure to digitization could be exploited for aggressive tax planning.
FIGURE4 Multinationals disclosing CbCRs by sector
Note: This figure shows the number of multinationals in the Public CbCRs database and the number of these firms that are in the top 1500 of the Forbes Global Index for each sector.
As the disclosure of CbCR information is still voluntary, there is considerable variation in the amount of information reported by companies both in terms of variables disclosed and geographical disaggregation.
Concerning variables, in almost all reports firms publish at least profit and one tax variable (either tax paid or accrued) with the exception of a few companies displaying only tax information. It is rare for companies to publish complete reports according to the OECD standard, only 11% of them do so. On the other hand, almost half (45 %) publish all recommended variables according to the GRI 207-4 standard.
It is crucial for companies to publish all variables and not limit them to a subset, including both profit, taxes and variables describing real economic activities such as number of employees or tangible assets.
This allows to fully appreciate where a company performs its economic activities and whether these are aligned with where profits are booked and taxes paid. Including only profit or tax variables limits this kind of analysis.
FIGURE 5 Percentage of multinationals disclosing all variables required.
Concerning geographical disaggregation, both the OECD and GRI require the information on all jurisdictions to be disclosed, but not all multinationals apply this requirement when publishing their CbCR reports.
This practice is widespread in CbCR: about 44% (49/111) of multinationals in our database make use of aggregated geographical reporting categories. They account for a relatively small amount of activities:
across the whole sample, these aggregated geographical categories account for 4.9% of total related revenues, 3.6% of total unrelated revenues, 5.7% of total tax paid, 4% of total employees, 2.1% of total positive profits.
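As an illustration of how such shares can be computed, the following Python sketch assumes a flat export of the database with one row per multinational and reporting line; the file name, column names and the boolean flag marking aggregated categories are assumptions, not the actual schema of the Public CbCRs database.

import pandas as pd

# Assumed flat export: one row per (multinational, reporting line).
df = pd.read_csv("public_cbcr_export.csv")

# Rows such as "Other", "Rest of Europe" flagged as aggregated categories.
agg = df[df["is_aggregated_geo"]]
totals = ["related_revenues", "unrelated_revenues", "tax_paid", "employees"]
shares = {col: agg[col].sum() / df[col].sum() for col in totals}

# Positive profits only, as in the figures quoted above.
pos = df[df["profit_before_tax"] > 0]
shares["positive_profits"] = (
    pos.loc[pos["is_aggregated_geo"], "profit_before_tax"].sum()
    / pos["profit_before_tax"].sum()
)

print({k: f"{v:.1%}" for k, v in shares.items()})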
Overall the geographical disaggregation provided by companies is quite complete, but a minority of companies report only the top countries in which they are present and others present groups of countries or entire regions in aggregated categories. In addition, in a small number of companies the aggregated geographical categories account for a large share of activities (close to 30% of revenues). Only one company reports extremely aggregated activities, disclosing separate figures for the headquarter jurisdiction and aggregating foreign jurisdictions in regional categories such as Europe and Latin America.
The use of these aggregated categories should be limited, as it reduces transparency: aggregated figures may combine information on tax havens with that of other countries, making tax avoidance activities harder to detect.
Conclusion and way forward
This note has underlined several important trends. First, while only a small number of large multinationals currently publish their CbCRs, the number of companies is increasing rapidly for both large and smaller multinational firms. However, these reports are scattered across different sets of documents, making collecting and analysing them challenging. Second, CbCR publishing is driven by European companies, especially companies active in the extractive sector. Finally, published reports are generally not complete in terms of variables included but present a satisfactory geographical disaggregation in most cases.
The EU public CbCR directive 2021/2101 will be a considerable step forward in increasing the public availability of these data: large multinationals present in the EU will be required to publish CbCRs in a machine-readable format from fiscal years starting in July 2024.
However, there will still be some obstacles to exploiting this data and fully enhancing tax transparency.
First, the directive will require limited geographical disaggregation, which might hamper the analysis of profit-shifting activities. Second, it requires a more restricted set of variables as compared to GRI and OECD. Third, there is no single place in which all published reports will be automatically collected. In the future, several improvements could include: requiring a specific publication title that facilitates finding the reports regardless of the language, requiring worldwide country-level data and extending the set of variables included.
FIGURE 1 Country-by-Country Report example
FIGURE 3 Country-by-Country Reports collected for fiscal years between 2016 and 2021
Figure 4 shows the number of firms in the Public CbCRs database by sector and the number of these firms that are in the top 1500 Forbes firms. The most represented sectors in the Public CbCRs database are Mining and Extraction (17 firms), Chemicals, Petroleum, Rubber and Plastic (15 firms) and Banking, Insurance and Financial Services (14 firms). 8 Two of these sectors also concentrate the highest number of top 1500 firms: Mining and Extraction (7 top 1500 firms) and Banking, Insurance and Financial Services (8 top 1500 firms). Utilities and Chemicals, Petroleum, Rubber and Plastic are the next sectors with the most top 1500 firms. 9
8 This does not take into account the mandatory reporting of European banks under the Capital Requirements Directive IV. See [START_REF] Barake | Tax planning by european banks[END_REF] for additional information on reporting by banks.
9 [START_REF] Bourne | Global trends in corporate tax disclosure[END_REF] find similar sectoral results.
Note: The figure shows the percentage of CbCRs included in the sample that discloses all OECD variables and all GRI variables according to the Public CbCRs database, EU Tax Observatory (2022). The two standards require the same set of variables with the exception of "Accumulated earnings" and "Stated Capital" which are not required by the GRI.
Table 2 presents the five countries and regions with the highest number of firms publishing CbCRs according to the Public CbCRs database. The top five countries -Italy, Spain, the United Kingdom, the Netherlands and Norway -account for about 64% of multinationals publishing CbCRs. At the regional level, European countries account for about 80% of multinationals publishing CbCRs, while other regions have a lower number of publishing multinationals, accounting for between 0.9% and 8.1% of the overall sample. Consistent with [START_REF] Bourne | Global trends in corporate tax disclosure[END_REF], these results point to a concentration in CbCR disclosure in a small group of European countries, showing that firms in European countries have the most comprehensive disclosures for tax policy reporting.
TABLE 2 Countries and regions with most multinationals publishing CbCRs
HQ country or region    % of public CbCRs
Top countries:
Italy             20.7%
Spain             16.2%
United Kingdom    13.5%
Netherlands        8.1%
Norway             5.4%
Total             63.9%
Top regions:
Europe            80.1%
Americas           8.1%
Asia               6.3%
Oceania            4.5%
Africa             0.9%
Note: The table shows the percentage of multinationals headquartered in the top five countries and regions in the Public CbCRs database.
1 Such as the Luxembourg Leaks (2014), the Panama Papers (2016) and the Pandora Papers (2021).
2 For fiscal years starting after July 2024; see EU (2021) for the full legal text.
3 The Australian Treasury made a proposal for mandatory CbCR in October 2022 (Australian Treasury Department, 2022).
4 The EU Tax Observatory developed a public exploration tool where this dataset can be freely downloaded.
5 To identify the largest multinationals we rely on the Forbes Index, which ranks top firms using a weighted index including the firms' sales, profits, assets and market value. Results are consistent when using market capitalisation as the size proxy. We exclude banks as their CbCRs are published under directive CRD IV, introduced before the BEPS framework.
This note has received funding from the European Union (GA No. TAXUD/2022/DE/310). The views expressed in this note are those of the authors and do not necessarily reflect the views of the European Commission. |
03660504 | en | [
"math.math-ra"
] | 2024/03/04 16:41:22 | 2023 | https://hal.science/hal-03660504v2/file/El_%20Azhari.pdf | Mohammed El Azhari
A note on n-Jordan homomorphisms
Keywords: Jordan homomorphism, homomorphism, n-Jordan homomorphism, n-homomorphism
By using a variation of a theorem on n-Jordan homomorphisms due to Herstein, we deduce the following result of G. An: Let A and B be two rings where A has a unit and char(B) > n. If every Jordan homomorphism from A into B is a homomorphism (anti-homomorphism), then every n-Jordan homomorphism from A into B is an n-homomorphism (anti-n-homomorphism).
Preliminaries
Let A, B be two rings and n \geq 2 an integer. An additive map h : A \to B is called an n-Jordan homomorphism if h(x^n) = h(x)^n for all x \in A. Also, an additive map h : A \to B is called an n-homomorphism or an anti-n-homomorphism if h(\prod_{i=1}^{n} x_i) = \prod_{i=1}^{n} h(x_i) or h(\prod_{i=1}^{n} x_i) = \prod_{i=0}^{n-1} h(x_{n-i}), respectively, for all x_1, \ldots, x_n \in A. In the usual sense, a 2-Jordan homomorphism is a Jordan homomorphism, a 2-homomorphism is a homomorphism and an anti-2-homomorphism is an anti-homomorphism. It is obvious that n-homomorphisms are n-Jordan homomorphisms. Conversely, under certain conditions, n-Jordan homomorphisms are n-homomorphisms. We say that a ring A is of characteristic greater than n (char(A) > n) if n!x = 0 implies x = 0 for all x \in A.
Results
Lemma 2.1 [4, Lemma 1]. Let A, B be two rings, n \geq 2 be an integer and f : A^n \to B be a multi-additive map such that f(x, x, \ldots, x) = 0 for all x in A. Then \sum_{\sigma \in S_n} f(x_{\sigma(1)}, \ldots, x_{\sigma(n)}) = 0 for all x_1, \ldots, x_n \in A, where S_n is the set of all permutations on \{1, \ldots, n\}.
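To see how this polarization argument works, it may help to spell out the smallest case n = 2 (a standard expansion added here for illustration, not taken from [4]):

% Worked example of Lemma 2.1 for n = 2.
% f is bi-additive and f(x, x) = 0 for every x.
\begin{align*}
0 &= f(x_1 + x_2,\, x_1 + x_2) \\
  &= f(x_1, x_1) + f(x_1, x_2) + f(x_2, x_1) + f(x_2, x_2) \\
  &= f(x_1, x_2) + f(x_2, x_1)
   = \sum_{\sigma \in S_2} f\bigl(x_{\sigma(1)}, x_{\sigma(2)}\bigr),
\end{align*}
% which is exactly the statement of the lemma for n = 2.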
By using Lemma 2.1, we have the following lemma: Lemma 2.2. Let A, B be two rings, n ⩾ 2 be an integer and h : A → B be an n-Jordan homomorphism. Then
\sum_{\sigma \in S_n} \left( h\left(\prod_{i=1}^{n} x_{\sigma(i)}\right) - \prod_{i=1}^{n} h(x_{\sigma(i)}) \right) = 0 for all x_1, \ldots, x_n \in A,
where S_n is the set of all permutations on \{1, \ldots, n\}.
Proof. Consider the map f : A^n \to B, f(x_1, x_2, \ldots, x_n) = h\left(\prod_{i=1}^{n} x_i\right) - \prod_{i=1}^{n} h(x_i). The map f is clearly multi-additive and f(x, x, \ldots, x) = h(x^n) - h(x)^n = 0 for all x \in A. By Lemma 2.1, \sum_{\sigma \in S_n} f(x_{\sigma(1)}, \ldots, x_{\sigma(n)}) = \sum_{\sigma \in S_n} \left( h\left(\prod_{i=1}^{n} x_{\sigma(i)}\right) - \prod_{i=1}^{n} h(x_{\sigma(i)}) \right) = 0 for all x_1, \ldots, x_n \in A.
It was shown in [START_REF] Gselmann | On approximate n-Jordan homomorphisms[END_REF] that if n ⩾ 2, A, B are commutative rings, char(B) > n and h : A → B is an n-Jordan homomorphism, then h is an n-homomorphism. The same was also proved for algebras in [START_REF] Bodaghi | n-Jordan homomorphisms on commutative algebras[END_REF] and [START_REF] Lee | Stability of n-Jordan homomorphisms from a normed algebra to a Banach algebra[END_REF]. Here we obtain this result as a consequence of Lemma 2.2.
Theorem 2.3. Let A, B be two commutative rings and char(B) > n. Then every n-Jordan homomorphism from A into B is an n-homomorphism.
Proof. Let h : A → B be an n-Jordan homomorphism. By Lemma 2.2 and since A, B are commutative,
\sum_{\sigma \in S_n} \left( h\left(\prod_{i=1}^{n} x_{\sigma(i)}\right) - \prod_{i=1}^{n} h(x_{\sigma(i)}) \right) = n! \left( h\left(\prod_{i=1}^{n} x_i\right) - \prod_{i=1}^{n} h(x_i) \right) = 0 for all x_1, \ldots, x_n \in A; hence h is an n-homomorphism since char(B) > n.
Now we give the following variation of a theorem on n-Jordan homomorphisms due to Herstein [4, Theorem K].
Theorem 2.4. Let h be an n-Jordan homomorphism from a ring A into a ring B with char(B) > n. Suppose further that A has a unit e, then h = h(e)τ where h(e) is in the centralizer of h(A) and τ is a Jordan homomorphism.
Proof. Since h is an n-Jordan homomorphism, h(e) = h(e^n) = h(e)^n. By Lemma 2.2 and putting x_1 = x, x_2 = x_3 = \cdots = x_n = e, we get n!h(x) = (n-1)!(h(e)^{n-1}h(x) + h(e)^{n-2}h(x)h(e) + \cdots + h(x)h(e)^{n-1}) and so nh(x) = h(e)^{n-1}h(x) + h(e)^{n-2}h(x)h(e) + \cdots + h(x)h(e)^{n-1}   (1) since char(B) > n. By multiplying both sides of equality (1) on the right by h(e), nh(x)h(e) = h(e)^{n-1}h(x)h(e) + h(e)^{n-2}h(x)h(e)^2 + \cdots + h(x)h(e)   (2). Also, by multiplying both sides of equality (1) on the left by h(e), nh(e)h(x) = h(e)h(x) + h(e)^{n-1}h(x)h(e) + \cdots + h(e)h(x)h(e)^{n-1}   (3). By (2) and (3), (n-1)h(x)h(e) = (n-1)h(e)h(x) and consequently h(x)h(e) = h(e)h(x)   (4) since char(B) > n. Then h(e) is in the centralizer of h(A). By (1) and (4), nh(x) = n h(e)^{n-1}h(x) and so h(x) = h(e)^{n-1}h(x)   (5) since char(B) > n. By Lemma 2.2, (4) and putting x_1 = x_2 = x, x_3 = \cdots = x_n = e, we get n!(h(x^2) - h(e)^{n-2}h(x)^2) = 0 and hence h(x^2) = h(e)^{n-2}h(x)^2   (6) since char(B) > n. Consider the map \tau : A \to B, \tau(x) = h(e)^{n-2}h(x); \tau is clearly additive. By (5), h(x) = h(e)^{n-1}h(x) = h(e)h(e)^{n-2}h(x) = h(e)\tau(x) for all x \in A. By (6), \tau(x^2) = h(e)^{n-2}h(x^2) = h(e)^{2(n-2)}h(x)^2 = (h(e)^{n-2}h(x))^2 = \tau(x)^2 for all x \in A, thus \tau is a Jordan homomorphism.
As a consequence, we obtain the following result of G. An [START_REF] An | Characterizations of n-Jordan homomorphisms[END_REF].
Corollary 2.5 [START_REF] An | Characterizations of n-Jordan homomorphisms[END_REF]Theorem 2.4]. Let A and B be two rings where A has a unit e and char(B) > n. If every Jordan homomorphism from A into B is a homomorphism (anti-homomorphism), then every n-Jordan homomorphism from A into B is an n-homomorphism (anti-n-homomorphism).
Proof. Let h : A \to B be an n-Jordan homomorphism. By Theorem 2.4, h = h(e)\tau where h(e) is in the centralizer of h(A) and \tau is a Jordan homomorphism. If \tau is a homomorphism, h(x_1 x_2 \cdots x_n) = h(e)\tau(x_1 x_2 \cdots x_n) = h(e)\tau(x_1)\tau(x_2) \cdots \tau(x_n) = h(e)^n \tau(x_1)\tau(x_2) \cdots \tau(x_n) (since h(e) = h(e^n) = h(e)^n) = h(e)\tau(x_1) h(e)\tau(x_2) \cdots h(e)\tau(x_n) (since h(e) commutes with each \tau(x)) = h(x_1)h(x_2) \cdots h(x_n) for all x_1, \ldots, x_n \in A; hence h is an n-homomorphism. Similarly, if \tau is an anti-homomorphism, then h is an anti-n-homomorphism. |
04103996 | en | [
"spi.nano"
] | 2024/03/04 16:41:22 | 2023 | https://hal.science/hal-04103996/file/DDECS___Monaco_and_Verilog_Oscilloscope.pdf | Sergio Vinagrero Gutiérrez
email: [email protected]
Pietro Inglese
email: [email protected]
Giorgio Di Natale
email: [email protected]
Elena-Ioana Vatajelu
email: [email protected]
Open Automation Framework for Complex Parametric Electrical Simulations
Keywords: electrical simulation, parametric, Verilog-A, Spice
The need to achieve statistically relevant results in electrical simulations requires a large number of iterations under different operating conditions. Moreover, the nature of parametric simulations makes the collection and filtering of the results non-trivial. To tackle these issues, scripts are normally used to control all the parameters. Still, this approach is usually ad-hoc and platform dependent, making the whole procedure hardly reusable, scalable and versatile. We propose a generic, open-source framework to generate complex stimuli and parameters for electrical simulations, together with a programmable Spice- and Verilog-A-based module capable of observing and logging internal states of the circuit to facilitate further result analysis.
I. INTRODUCTION
Design complexity, ultra-low-power requirements, reliability, robustness and security are becoming increasingly important concerns when designing electronic systems. Moreover, designers are facing novel challenges related to aggressive CMOS technology scaling and the introduction of beyond-CMOS devices (CNT-FET, Memristive, Spintronic). Indeed, several issues must be considered at design time such as fabrication-induced variability, technology-dependent defects, extreme operating/environmental conditions, stochastic behaviours, aging, and possible perturbations (noise, radiations, malicious attacks).
For established technologies and applications, the electrical-level issues are well-understood and easily translatable to higher levels of abstraction (RTL, micro-architectural, system), thus enabling very fast and accurate simulations. However, with the aforementioned issues faced by today's designs, electrical-level simulations are unavoidable since they allow designers to accurately model and understand the behaviour of the target system, and to extrapolate physical-level properties to a higher level of abstraction.
In order to explore the behaviour of an electrical circuit under different conditions and parameters, multiple simulations are performed with the desired setting. In today's circuits based on new technologies, the number of parameters and their interdependencies can be prohibitively large to be exhaustively studied. Monte Carlo [START_REF] Metropolis | The monte carlo method[END_REF] and parametric simulations are the classical approaches to cope with this type of problem. Nevertheless, algorithms implemented in industrial CAD tools do not offer the designer full control and observability of desired parameters. Moreover, the range and the distribution of parameters are defined in technological libraries (provided by silicon foundries). However, these data are not easily available for emerging technologies (or at least not yet embedded in industrial libraries).
We propose a unified framework which works for both statistical and parametric simulations performed independently or concomitantly. This allows the designer to fully control the range of parameters and fully observe the circuit behaviour. Moreover, it allows correlating the circuit behaviour with the parameters used at simulation time. This framework implements the following steps:
1) Define the set of parameters relevant to the study. Here we can target process variability, operating/environmental conditions, stochastic behaviours, aging, perturbations; 2) Identify the relevant signals that fully characterise the circuit behaviour; 3) The value of each parameter is generated by a userdefined function; 4) For each combination of parameters, a netlist is generated and simulated. The relevant signals provide the information required about the circuit behaviour. Another important aspect of electrical simulations is the ability to reproduce results. Indeed, there are situations when it is important to repeat the simulations with the same parameters, for instance when performing design-space exploration, defect analysis, evaluation of countermeasures in secure circuits, etc. Moreover, in the scientific research environment, the repeatability of experiments and results is of paramount importance. However, since several commercial tools exist and their manipulation and operation are not identical, it is difficult to reproduce simulations performed on one tool to another. Most commercial CAD tools provide utilities to generate parameters but in some cases, it is very difficult to replicate the environment. Moreover, there is also the problem of incompatibility between simulators and licenses. Even if the tools and license requirements are met, the aggregation of results after complex parametric simulations is cumbersome and takes a big part of the project time just to prepare the data.
In this paper, we show an open-access and open-source framework to carry out complex parametric electrical simulations and analyses. This framework is a wrapper to commercial tools, which allows researchers to perform and repeat a large 979-8-3503-3277-3/23/$31.00 ©2023 IEEE number of simulations, no matter the underlying simulation environment. The framework is implemented in Python and Verilog-A and it is licensed under the MIT License. The code and documentation can be found online [2], [START_REF] Inglese | Verilog-a state observer[END_REF].
This paper is organised as follows: the current state of the art is summarised in section II, followed by the motivation of this project in section III; In section IV the two parts of the framework are described in detail and some use cases are provided in section V.
II. STATE OF THE ART
The main format used to describe electrical netlists is called Spice [START_REF] Nagel | SPICE (simulation program with integrated circuit emphasis)[END_REF], which stands for Simulation Program with Integrated Circuit Emphasis. An electrical simulator will take the spice netlist, parse it and simulate it. The same netlist can be simulated under different compatible simulators as it is just a description of the circuit. All of the available tools for electrical simulations provide a graphical user interface to generate circuits based on a drag-and-drop mechanism which is very user-friendly. The graphical user interface can be also used to customise every aspect of the simulation to provide the necessary parameters to the simulator. However, not every available tool provides a mechanism to easily save this configuration to be used later, or on a different computer. Therefore, in order to share a project between different teams, the exact same configuration of the project and sometimes even the operating system have to be precisely copied, which is an unreasonable task to do. Moreover, this problem grows larger with the complexity of the circuit.
CAD tools automatically perform the conversion between the graphical schematic and the textual netlist. However, the other way around is not always performed. Moreover, there are some tools available online that allow parsing netlists or converting results from one file format to another, but these tools are limited since parsing netlists is contextual and simulator-dependent. The same is applied for the file formats containing simulation results as most of the time a special tool is needed to view the results, which makes aggregating results challenging. An example of an automated CAD tool is the AWR Design Environment (AWRDE), provided by Cadence, working in a custom language called SAX. This tool works via a simple interface where the user can control any aspect of the environment through code. Users have also access to powerful tools that allow them to modify part of the circuit and generate custom reports programmatically, and even perform analysis in Python. However, this tool requires an expensive licence that not everyone is able to afford and requires some time to learn to properly use its functionalities.
Other tools like PySpice [START_REF] Salvaire | Pyspice[END_REF] allow the generation of netlists and the launch of simulations directly with Python. In this way, both stimuli and configuration live under the same block of logic, which reduces the time between iteration cycles. However, this solution is limited to the NGSpice and Xyce simulators and this can have an impact on the workflow.
III. MOTIVATION
The main issues that motivated the creation of this framework are related to the complexity of generating, manipulating and analysing large parametric and statistical simulations and the repeatability of experiments.
There is a massive trend today towards the sharing of information and the enabling of reproducible science. Unfortunately, when it comes to electrical simulations, the use of proprietary CAD tools (which differ from provider to provider) and foundry-licensed device model libraries makes reproducible science almost impossible. This is mainly due to the fact that even having the full description of an electronic circuit, performing parametric or statistical electrical simulations requires CAD tool-specific configurations, which, if not set exactly, might lead to different results than originally reported. This can hinder the repeatability of experiments as scientists are not able to exactly repeat experiments if they are not provided with all the files and settings of the initial research. This framework was created to solve this issue and allow users to create configurations for projects that can easily be shared between different teams and modified to scale up the number of components by exploiting the facility of manipulating text files (Spice netlists). This also makes our framework easily usable with any CAD tool and underlying electrical simulator.
One of the advantages of our proposed framework is that new users don't need to invest time in learning how to use new simulators or get familiar with the user interface since the syntax is minimal and allows very fast results with minor modifications of the files provided by a commercially available tool.
Fig. 1: Functional diagram of the framework. The combination of templates and parameters is done with the framework outside of the simulator, while the state observer has to be placed by the user inside the netlist.
IV. DESCRIPTION OF THE FRAMEWORK
The framework is divided into two parts. The first part is a spice netlist and simulation configuration builder written in Python. The second one is a Verilog-A module to write the state of an electrical component to a CSV file. Figure 1 shows a small diagram representing how the different parts of the frameworks are combined.
A. Netlist and simulation configuration tool
The main objective of the Python tool is to facilitate the generation of complex electrical parameters as well as inject them inside predefined spice netlists and simulation configuration files, in a similar manner to a text macro system. This tool, which from now on will be referred to as builder, exploits the way electrical components are defined in spice netlists. For example, a MOSFET transistor is defined as shown below. The syntax can slightly change between different spice implementations, but the definition of the parameters is identical. This builder exploits the textual nature of netlists in order to generate the necessary stimuli for the simulations. A set of values can be generated, inserted into the stimuli templates and feed to the simulator. To achieve this, the user needs to define the different stimuli files that will be used as templates and define the functions to generate the values for each iteration. The different input files are treated as templates and processed by the Jinja [6] template engine. An advantage of using a template engine like Jinja is that logic can be embedded directly into the templates, which means that certain parts of a template can be rendered or discarded depending on the values of some variables.
The builder can be configured through a YAML or TOML config file. This file contains metadata, user-defined properties, the command to execute, the list of files to use as templates and the number of iterations to execute. These variables can be accessed directly in the templates.
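As a minimal illustration of this template mechanism (a sketch only: the property and parameter names are invented and the snippet does not reproduce the framework's actual API), a netlist fragment can be rendered from Python with Jinja as follows:

from jinja2 import Template

# Illustrative template fragment for a voltage source and a load resistor.
netlist_tpl = Template(
    "V0 (vdd 0) vsource dc={{ props.vdd }} type=dc\n"
    "R1 (vdd out) resistor r={{ params.r_load }}\n"
)

props = {"vdd": 1.2}        # metadata / user properties from the config file (assumed keys)
params = {"r_load": 10e3}   # values produced by a value factory for this iteration

print(netlist_tpl.render(props=props, params=params))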
In order to generate values, the user needs to define one or multiple ValueFactories in Python. These Factories receive the configuration from the YAML file and generate the values for each generation. The user can define as many Factories as desired and they will all be used for every iteration. If one of the Factories however is exhausted (i.e. it doesn't generate more values) the process is stopped. This allows stopping the iterations if some conditions have been met. The user can also define a callback that is run after the command has been executed. This callback has access to the values that have been generated for that iteration and the results of the simulation and can be used for example to perform a clean-up after the simulation.
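The exact factory interface is documented in the framework repository [2]; the following Python sketch only illustrates the general idea, with class, method and key names that are assumptions rather than the real API: a factory yields one dictionary of values per iteration, signals exhaustion, and a callback is run after each simulation command.

import numpy as np

class GaussianFactory:
    """Illustrative value factory: one dictionary of values per iteration."""

    def __init__(self, config):
        self.mean = config["mean"]
        self.sigma = config["sigma"]
        self.remaining = config["iterations"]
        self.rng = np.random.default_rng(config.get("seed"))

    def next_values(self):
        if self.remaining == 0:
            raise StopIteration  # exhausted: the builder stops iterating
        self.remaining -= 1
        return {"vth": self.rng.normal(self.mean, self.sigma)}

def after_run(values, results):
    """Illustrative callback executed after the simulation command."""
    print("simulated with", values, "->", len(results), "result files")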
B. State observer
The state observer is designed for Spice- and Verilog-A-based simulators. It is a sub-circuit that is placed in the schematic and is connected to the components in need of measurements.
For transient simulations, the state observer will write the state measured every time step to a CSV file, so that it can be easily loaded into any conventional software to analyse and extract information. For this reason, the simulation parameter STEP_SIZE found in the simulation environment is particularly important: it allows running the state analyser with the desired granularity. Moreover, as shown in the examples section, the state observer can be modified to save the wanted data only in certain cases, to allow a lighter and more application-specific log.
The ability to externally save results in a user-defined file is also useful in case we want to modify the parameters for the next simulations according to the results just obtained. As an example, we could increase or decrease the step size of parameters depending on whether or not we are close to the expected working behaviour.
Most of the commercially available software allows exporting simulation results into external files. This process is nevertheless carried out most of the time through a graphical user interface, which slows down the simulation iteration cycle, or the supported file types are not the desired ones.
V. EXAMPLES
A. Ring Oscillator-based Physical Unclonable Function
One example where this framework has been used is the reliability analysis of Ring Oscillator-based Physical Unclonable Functions [START_REF] Vinagrero Gutierrez | On-line reliability estimation of ring oscillator puf[END_REF]. Due to manufacturing-induced variability, the strength of the inverters in the Ring Oscillator shifts the frequency of oscillation from the nominal value. This effect gets aggravated with voltage and temperature variation. The oscillation frequency depends directly on the number of inverters in the Ring Oscillator. Normally, Ring Oscillators tend to have hundreds of inverters. The vast number of stages, as well as the different environmental conditions, make the process of generating these parametric simulations very convoluted.
In order to obtain a distribution of oscillating frequencies, it is necessary to perform multiple simulations while reproducing process variability. Source code 2 shows a small section of the spice netlist that has been converted to a Jinja template. This part generates a chain of inverters where the number of stages is determined by the user variable nstages. In this example, the variability is induced by altering the length and the V th of each PMOS and NMOS. The values are generated using a Factory that draws the values from normal distributions, as seen in the ParamFactory shown in the examples in [2]. Thanks to the Jinja templates, the number of inverters for each Ring Oscillator can be embedded in the logic and configured by the user.
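A factory matching this template could, for instance, draw one value of vthp, vthn, lp and ln per inverter stage from normal distributions centred on nominal values. The following Python sketch is an assumption-laden illustration (the nominal values, spreads and dictionary layout are invented), not the ParamFactory shipped with the framework:

import numpy as np

def ro_params(nstages, seed=None):
    """One set of per-stage device parameters for the Ring Oscillator template."""
    rng = np.random.default_rng(seed)
    return {
        # Nominal values and spreads below are purely illustrative.
        "vthp": rng.normal(-0.45, 0.01, nstages).tolist(),
        "vthn": rng.normal(0.45, 0.01, nstages).tolist(),
        "lp":   rng.normal(60e-9, 1e-9, nstages).tolist(),
        "ln":   rng.normal(60e-9, 1e-9, nstages).tolist(),
    }

params = ro_params(nstages=101, seed=42)  # odd stage count so the ring oscillates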
Once the schematic is created, the parameters to simulate the process variability are defined in the same way they have been described in section IV-A and are inserted into the netlist.
B. State observer with VTEAM
The state observer has been highly useful in the analysis of memristive-based logic operations for Logic In-Memory-Computing (IMC) [START_REF] Inglese | Memristive Logic-in-Memory Implementations: A Comparison[END_REF], [START_REF]On the Limitations of Concatenating Boolean Operations in Memristive-Based Logic-In-Memory Solutions[END_REF]. In fact, we noted that IMC with memristors is very sensitive to slight variations. To have a working operation, the IMC operations have constraints on the voltage(s) enabling the operations. Variations of the control voltage(s) and of the duration of the operations itself, as well as memristive values not reaching the nominal value, can affect the behaviour of the operation and not allow it to reach the correct result. For this, it is very important to examine the internal state of memristors at any given time to know if the operations have been properly carried out and understand what affected them in case the result is not optimum.
The VTEAM memristor model [START_REF] Kvatinsky | VTEAM: A General Model for Voltage-Controlled Memristors[END_REF] provides several parameters to simulate the behaviour of memristors. Being an open Verilog-A description model, it can be easily modified to add new terminals and provide information about interesting internal variables. For this application, it is important to obtain the internal resistance and the distance of the threshold of the doping region. The state observer saves that data, allowing to have a detailed description of the behaviour that can be used to drive in real-time the evolution of the simulation and the proper parametric settings. For all of the memristors that are connected to the state observer, the absolute time of measurement, the internal resistance and the distance of the doping region are written to a parameterised CSV file.
Source code [START_REF] Inglese | Verilog-a state observer[END_REF] shows an example of the state observer used to save the state of two memristors under the execution of Logic-in-Memory (LIM) operations. The goal of the simulation was to sweep over the control voltage(s) and the duration of the LIM operations to find the best working setting [START_REF]On the Limitations of Concatenating Boolean Operations in Memristive-Based Logic-In-Memory Solutions[END_REF]. Hence, for this code, we concatenated the automatic generation of parameters with the state observer. We automatically generated the inputs and the parameters to run the simulation and we saved the simulation times at which each LIM operation is concluded. We also saved the input value(s), the expected output value and selected parameters, including the control voltage(s) and the duration of the operation. During the operation, the resulting file is read by the state observer and it is used to choose the simulation times at which it will save the wanted states. Finally, the resulting CSV generated by the state observer will have all the fields coming from the builder and the read values: This allows the user to have all the information about the executed operation in the same CSV and it supports directly performing data analysis from the collected data to assess the correct behaviour of the studied operations.
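Once such a CSV is produced, checking a batch of operations reduces to a few lines of post-processing. The following Python sketch assumes illustrative column names that mirror the fields above but are not the exact headers written by the observer:

import pandas as pd

log = pd.read_csv("state_observer_log.csv")   # file name is illustrative

# Map the read resistive state to a logic level and compare it with the
# expected output recorded by the builder (threshold value is assumed).
log["read_logic"] = (log["read_resistance"] < 50e3).astype(int)
log["correct"] = log["read_logic"] == log["expected_output"]

# Success rate per (control voltage, operation duration) couple.
summary = log.groupby(["control_voltage", "duration"])["correct"].mean()
print(summary)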
Another example where this framework could have facilitated the parametric simulations and the data analysis is in the work described in [START_REF] Fieback | PVT Analysis for RRAM and STT-MRAM-based Logic Computation-in-Memory[END_REF], which presents an in-depth analysis of the process, voltage and temperature variability for CIM.
Since the state observer is defined in a Verilog-A file, it can also be integrated with the builder in order to automatically generate the correct number of signals depending on the userdefined configuration. This simplifies the exploration of all the possible space, as the schematic itself will be rendered according to the configuration and the values generated for each iteration.
VI. CONCLUSIONS
The framework shown in this article is able to generate complex parametric electrical simulations in a reliable and repeatable manner. This framework is open-access and opensource and works with any available commercial simulator as it doesn't require major modifications to the files used.
This framework could be further expanded by adding an automated system that can generate the state observer for a given amount of different components based on the configuration of the builder.
Source Code 1: Spice definition of a MOSFET transistor. The slashes indicate that the definition continues in the next line.

V0 (vdd 0) vsource dc=${vdd} type=dc
{% for s in range(1, props.nstages) -%}
I{{loop.index}} (0 net{{s}} net{{s-1}} vdd) INV \
  vthp={{ params['vthp'][loop.index0] }} \
  vthn={{ params['vthn'][loop.index0] }} \
  lp={{ params['lp'][loop.index0] }} \
  ln={{ params['ln'][loop.index0] }}
{% endfor -%}
Source Code 2: Template to generate inverters. The Inverter subcircuit definition has been omitted.
• Provided by the builder:
  - Input value(s)
  - Expected output value
  - Duration of the operation
  - Control voltage
  - Timestamp for save action execution
• Added by the state observer:
  - Read value(s)
04104018 | en | [
"shs.droit"
] | 2024/03/04 16:41:22 | 2020 | https://sciencespo.hal.science/hal-04104018/file/IUS-COMPARATUM-VOL-1-1.-SENEGACNIK-1.pdf | Ius Comparatum rassemble chaque année des publications académiques sur diverses questions juridiques ayant fait l'objet d'une analyse de droit comparé.
Toutes les publications sont disponibles sur le site Web de l'Académie et sont publiées avec l'ambition de faire avancer la recherche en droit comparé.
La qualité de la publication est garantie par une sélection en interne suite à un appel à contributions pour le thème choisi chaque année. Le contenu est la responsabilité des auteur(e)s. Les articles peuvent être téléchargés par des particuliers, pour leur propre usage, sous réserve des règles ordinaires du droit d'auteur.
Tous les droits sont réservés.
Aucune partie de cette publication ne peut être reproduite sous quelque forme que ce soit sans l'autorisation des auteur(e)s. Ius Comparatum gathers each year academic publications on diverse legal issues analyzed from a comparative law perspective.
AVANT-PROPOS -FOREWORD
Alexandre Senegacnik* « Tout d'abord sur un plan purement pratique : il est bon de montrer que le droit comparé n'est pas l'affaire de rêveurs, et que le commerce international, bien au contraire, a besoin de comparatistes ».
René David 1
Le recours à la méthodologie du droit comparé en arbitrage international -le choix de ce sujet pour le premier volume de la nouvelle publication en libre accès de l'Académie internationale de droit comparé s'est imposé avec évidence. Un choix évident, mais pas nécessairement pour les raisons qui pourront venir en premier lieu à l'esprit. En guise d'introduction de ce volume, le professeur Diego P. Fernández Arroyo expose les multiples raisons qui font de l'arbitrage un domaine qui intéresse tout naturellement les comparatistes. L'arbitrage international, tous types confondus 2 , offre un terreau fertile pour des comparaisons. Nul doute que l'appel de René David il y a maintenant plus de soixante ans a été entendu : le droit comparé n'est certainement pas l'affaire des rêveurs/rêveuses, si tant est qu'il l'ait réellement été. Force est de constater qu'il existe un véritable réflexe de la comparaison tant dans la pratique que dans la doctrine.
"First of all, from a purely practical perspective: it is good to show that comparative law is not the business of dreamers, and that, quite to the contrary, international commerce needs comparatists".
René David 1
The use of comparative law methodology in international arbitration-the choice of this topic for the first volume of the new open access publication of the International Academy of Comparative Law seemed evident. An evident choice, but not necessarily for the reasons which might first come to mind. In the introduction of this volume, Professor Diego P. Fernández Arroyo recalls the many reasons which explain why arbitration is evidently a domain of particular interest for the comparatists. International arbitration, all types considered, offers a fertile ground for comparisons. There is no doubt that René David's call more than sixty years ago, has now been heard: comparative law is obviously not the business of dreamers, assuming that it ever really was. One will easily accept that there exists now a true comparative reflex in both practice and scholarship.
L'extraordinaire succès pratique de l'arbitrage international et la compétition parfois féroce qui existe entre les différents acteurs peuvent en partie expliquer ce besoin de (se) comparer. Mais l'attrait principal de l'arbitrage international pour le comparatiste réside sans doute ailleurs ; plus particulièrement dans la singulière réunion d'un ensemble divers de règles mais aussi d'acteursétatiques ou non étatiques, à l'échelon national, international et transnational. C'est la délicate articulation entre ces derniers qui peut alors assurer la résolution efficace des différends. Chaque tribunal arbitral éclaire de manière singulière les questions et concepts de droit auxquels il fait face -kaléidoscope, il peut proposer à chaque fois de nouvelles réponses et approches.
Il importe de préciser que le premier volume de Ius Comparatum n'a pas pour objectif d'offrir des recherches sur ce qu'il semble opportun d'appeler le « droit comparé de l'arbitrage ». Chaque étape de la procédure arbitrale gagne assurément à être éclairée par une analyse comparative des diverses approches législatives et jurisprudentielles (judiciaires mais aussi arbitrales) pertinentes en la matière. L'Académie ne manque d'ailleurs certainement pas de participer à cet effort de recherche avec ses Congrès 2 .
The extraordinary practical success of international arbitration and the sometimes fierce competition which exists between the different actors can explain this need to compare (oneself). But the main interest of international arbitration for the comparativist probably lies elsewhere; more particularly in the singular reunion of a diverse set of rules but also actors-state and non-state, at the national, international and transnational levels. It is the subtle articulation between these which can ultimately ensure the efficient resolution of disputes. Each arbitral tribunal sheds a unique light on the legal questions and concepts it faces-like a kaleidoscope, it can each time propose new answers and approaches.
It is important to clarify that the first volume of Ius Comparatum does not aim to propose new insights on what one may call the "comparative law of arbitration". Certainly, each stage of the arbitration procedure benefits from a comparative analysis of the diverse and relevant approaches in legislations and Case Law (judicial but also arbitral). The Academy certainly does not fail to engage in this research effort through its Congresses. 2 Il s'agit néanmoins avant tout dans ce volume de s'intéresser au recours à la méthodologie du droit comparé par l'arbitre international. C'est ainsi l'utilité du droit comparé dans le quotidien de l'arbitre international qui est ainsi en premier lieu questionnée dans les pages qui suivent. Comment, selon quelle méthodologie et à quel moment l'arbitre recourt-il au droit comparé afin d'accomplir sa mission ? Quelle est la place réelle et souhaitable pour le droit comparé dans la mission de l'arbitre ? Voici quelques questions que le présent volume ambitionne de discuter.
Le présent volume rassemble les contributions d'une conférence organisée le 8 octobre 2019 à Paris. Cette dernière a indéniablement été un succès à bien des égards. L'Académie peut en effet se féliciter d'avoir organisé, avec le soutien de l'École de droit de Sciences Po, une conférence sur un sujet qui mérite sans doute une plus grande attention. La participation record à cette conférence semble confirmer cette intuition. Le Bureau et le Secrétariat de l'Académie remercient tout particulièrement le professeur Emmanuel Gaillard pour sa leçon inaugurale qui éclaire sous un nouveau jour ce singulier tandem que forment ensemble le droit comparé et l'arbitrage international. Il ne fait nul doute qu'il était impossible de trouver une meilleure entrée en la matière. The primary aim is however in this volume to focus on the use by international arbitrators of comparative law methodology. It is thus the usefulness of comparative law in the daily life of the international arbitrator which is ultimately questioned in the following pages. How, according to what methodology and when does the arbitrator use comparative law to accomplish her mission? What is the real and recommended place for comparative law in the mission of the arbitrator? Here are some questions which the present volume aims to discuss. This volume brings together the contributions of a conference organized on October 8, 2019 in Paris. The latter was undeniably a success in many ways. The Academy can pride itself on having organized with the support of Sciences Po Law School, a conference on a topic which undoubtedly deserves greater attention. The record attendance at the conference appears to support this intuition. The Executive Committee and the Secretariat of the Academy particularly thank Professor Emmanuel Gaillard for his inaugural lesson which sheds new light on this singular tandem formed by comparative law and international arbitration. There is no doubt that it would have been impossible to find a better entrée en la matière.
Professor Gaillard achieves the tour de force of discussing most, if not all, of the questions raised by the topic of the conference.

The Academy can also certainly pride itself on having enabled researchers from several continents to present and discuss their work for this first volume of Ius Comparatum. The Executive Committee and the Secretariat particularly welcome the dialogue initiated between these different generations of researchers.
Finally, the Executive Committee and the Secretariat particularly wish to thank Professors Sylvain Bollée and Pierre Tercier, who accepted a largely unprecedented theatrical exercise to conclude the conference: a staged discussion on the use of comparative law methodology between two fictional arbitrators. One should not question the usefulness of this little play; far from being improvised, it was light only in tone, as it offered a different way of discussing the essential questions at the heart of the conference. For obvious reasons, it was not possible to include a mere transcript of this theatrical exchange in this volume.
The reader who did not have the chance to attend will nevertheless find consolation in a short concluding illustration, which recalls one of the many points of disagreement between, on the one side, the illustrious Arbitrator Bronson, the one who decidedly gives no room to comparative law, and, on the other side, Arbitrator Wayne, who on the contrary defends the idea that recourse to comparative law is an exercise inherent in the arbitrator's mission.

The choice made for this first volume of Ius Comparatum was to offer an original research perspective in a format that is new for the Academy. It is a great pleasure to finally be able to share it. I warmly thank all the authors and the Executive Committee of the Academy. Special thanks go to the Secretary General for his trust throughout the development and implementation of this project.
'Avant-propos - Foreword', Ius Comparatum 1 (2020) IV-IX [International Academy of Comparative Law: aidc-iacl.org].
1 René David, 'Arbitrage et droit comparé' (1959) RIDC 11(1).
2 The different contributions in this volume deal with international commercial, investment and sports arbitration.
See for instance George A. Bermann (ed.), Recognition and Enforcement of Foreign Arbitral Awards: the Interpretation and Application of the New York Convention by National Courts, Springer, Ius Comparatum - Global Studies in Comparative Law, 2017, ISBN 978-3-319-50915-0.
Thomas Blanchet (email: [email protected]), Emmanuel Saez (email: [email protected]), Gabriel Zucman (email: [email protected])
Working paper, 2022. Source: https://shs.hal.science/halshs-04104047/file/WorldInequalityLab_WP202214.pdf
Real-Time Inequality
This paper constructs high-frequency and timely income distributions for the United States. We develop a methodology to combine the information contained in high-frequency public data sources-including monthly household and employment surveys, quarterly censuses of employment and wages, and monthly and quarterly national accounts statistics-in a unified framework. This allows us to estimate economic growth by income groups, race, and gender consistent with quarterly releases of macroeconomic growth, and to track the distributional impacts of government policies during and in the aftermath of recessions in real time. We test and successfully validate our methodology by implementing it retrospectively back to 1976. Analyzing the Covid-19 pandemic, we find that all income groups recovered their pre-crisis pretax income level within 20 months of the beginning of the recession. Although the recovery was primarily driven by jobs rather than wage growth, wages experienced significant gains at the bottom of the distribution, highlighting the equalizing effects of tight labor markets. After accounting for taxes and cash transfers, real disposable income for the bottom 50% was 20% higher in 2021 than in 2019, but fell in the first half of 2022 as the expansion of the welfare state during the pandemic was rolled back. All estimates are available at https://realtimeinequality.org and are updated with each quarterly release of the national accounts, within a few hours.
Introduction
A major gap in the economic statistics of the United States is the lack of timely information on the distribution of income. Thanks to a sophisticated system of national accounts and labor market statistics, detailed macroeconomic data are published almost in real time. Estimates of quarterly gross domestic product are released less than a month after the end of each quarter; monthly personal income, jobs, and unemployment statistics within a month; unemployment claims data weekly. These figures, scrutinized by the business community, are a vital input for the analysis of the business cycle and the conduct of monetary and fiscal policy. But they are not disaggregated by income level. While we know how GDP evolves quarterly, we do not know which social groups benefit from this growth, or which are most affected by economic crises as they unfold. This gap limits the ability of governments and central banks to design effective policies in crisis situations and in the aftermath of recessions.
Our paper attempts to address this gap by creating high-frequency and timely distributions of income. We propose a methodology to combine the information contained in high-frequency public data sources, including monthly household and employment surveys, quarterly censuses of employment and wages, and monthly and quarterly national accounts series. The result of this combination is a set of harmonized monthly micro-files in which an observation is a synthetic adult (obtained by statistically matching public micro-data) and variables include income and its components. These variables add up to their respective national accounts totals and their distributions are consistent with those observed in the raw input data. Using these files, we can estimate quarterly economic growth by social group as soon as official macroeconomic growth is released. Following a recession, it becomes possible to estimate "distributional output gaps," that is the extent to which income remains below its pre-recession level or trend for the bottom 50% of the distribution, the next 40%, and the top 10%. Since our files incorporate comprehensive tax and government transfer variables, they can be used to monitor how losses for different social groups during a crisis are mitigated by stabilization policies as they are implemented. Looking forward, these files could be used to estimate parameters of macroeconomic models of the business cycle with heterogeneous agents and to calibrate such models. Our files and real-time distributional growth statistics, available at https://realtimeinequality.org, are updated with each release of the national accounts, within a few hours. We automated the code and website with a view to being able to provide high-frequency updates sustainably. This should allow us to analyze future business cycles in real time, maximizing the usefulness of this tool for economists, policymakers, and the broader public.
In the first step of our methodology, we start from the annual distributional national accounts micro-files of [START_REF] Piketty | Distributional National Accounts: Methods and Estimates for the United States[END_REF], which allocate 100% of annual national income, household wealth, and many components of these macroeconomic aggregates using primarily individual tax data. Taking moving averages of current and adjacent-year micro-data, or using the latest file (2019) from 2020 onwards, we create monthly files by rescaling each component of national income to its monthly or quarterly seasonally-adjusted aggregate value. We then statistically match these files to Current Population Survey micro-data using optimal transport matching methods. To our knowledge this is the first time such a "one-to-one" statistical match is conducted. This approach allows us to incorporate high-frequency changes in employment. An added benefit of this match is to bring race and education variables, which are missing in tax data, into the Piketty, Saez and Zucman (2018) distributional national accounts, thus allowing us to produce the first statistics on the distribution of national income by race and educational attainment.
In the second step of our methodology we incorporate high-frequency changes in the wage distribution, using monthly survey micro-data and tabulations of monthly and quarterly surveys and administrative records. Building on the important work of [START_REF] Lee | Business Cycles and Earnings Inequality[END_REF], we show that public tabulations of the Quarterly Census of Employment and Wages by 6-digits NAICS industry × county × ownership sector (public vs. private) can be used to predict changes in wage inequality remarkably well, including at the top of the distribution which is not well covered in existing household surveys. For example, the share of wages earned in the top 1% of industries × counties × ownership sector with the highest average wage (e.g., securities brokerage in Manhattan, which is New York county; Internet publishing and broadcasting in Santa Clara county) is strongly correlated with the share of wages earned by the top 1% workers. This allows us to project high-frequency changes in wages within the top 10% of the distribution reliably. For the bottom 90%, which is well covered by household surveys, we estimate real-time wage levels by averaging predictions from tabulated employment surveys and from the monthly Current Population Survey.
Third, we model changes in other components of pretax and posttax national income. For business and capital income, we account for changes in the aggregate value of each component (rental income, corporate profits, etc.) and assume that within-component distributions are unchanged in the short term. For government transfers, we model the distribution of new programs using program parameters, eligibility rules, and public sources, paying special attention to the new programs created during the Covid-19 pandemic such as the Paycheck Protection Program, for which detailed public data about beneficiaries and studies of incidence exist.
We test and successfully validate our methodology by applying it retrospectively back to 1976. Comparing predicted to observed income changes, we find that we correctly anticipate whether income is growing or falling 86% of the time for the top 1%, 93% for the next 9%, and 82% for the next 40% and for the bottom 50% (which often have negligible income growth over our sample period). We provide extensive quantification of the bias and noise of our projections, which for all income concepts (e.g., pretax vs. posttax) and income groups are found to be limited. Our methodology delivers accurate predictions during and in the immediate aftermath of recessions, when real-time estimates are most valuable from a policy perspective. Even though it does not rely on tax data, our methodology is unbiased for the top 1%.
The intuition for why our methodology delivers reliable results is the following. About 30% of national income is capital income. Because wealth is a stock variable, the concentration of the various components of capital income is relatively slow-moving at high frequency. The impact of capital income on total income inequality is mostly driven by changes in the size of the different components of aggregate capital income-such as corporate profits and housing rents-over the business cycle, changes which are captured by our methodology. For labor income, which accounts for about 70% of national income, short-term changes in the distribution can be large, as unemployment spikes in recessions. But in contrast to capital income, for labor income we do not assume stable distributions within component: we capture high-frequency distributional changes thanks to our combination of household and employment surveys.
Because our methodology only uses public data, it can easily be replicated, tested, and extended. Looking forward, it could be enriched by combining administrative datasets within government agencies or by incorporating additional data sources, such as private sector information (Chetty et al., 2020). We view our paper as constructing a prototype of real-time distributions combining all currently publicly available data sources, a prototype that could be refined using additional data and eventually incorporated into official national accounts statistics.1

Using our monthly micro-files to examine the Covid-19 pandemic yields three main findings.
First, all social groups recovered their real pre-crisis pretax income levels within 20 months of the start of the Covid recession. The recovery was much more equal than the recovery from the Great Recession of 2008-2009, during which it had taken nearly 10 years for the bottom 50% to recover its pre-crisis pretax income level-even though GDP per adult recovered in 4 years. The Covid recovery was also more equal across gender and racial groups. These findings illustrate the fact that a given trajectory of GDP growth is compatible with widely different market income dynamics for the various social groups, highlighting the usefulness of timely distributional growth statistics.
Second, labor earnings experienced significant gains at the bottom of the labor income distribution during the Covid recovery, in a context of loose monetary policy and tight labor market. Between February 2020 (the eve of the recession) and May 2022-two months with nearly equal employment rates-real average labor income for low-wage workers increased by more than 10%, faster than for all other groups of the population. There was strong growth in labor income in the top 1%, but limited gains elsewhere. Thus the Covid recovery was characterized by a reduction in wage inequality among the bottom 99%, a break from the trend prevailing since the early 1980s that highlights the equalizing effects of tight labor markets.
Third, government programs enacted during the pandemic led to an unprecedented, but short-lived, improvement in living standards for the working class. After accounting for taxes and cash and quasi-cash transfers, disposable income for adults in the bottom 50% was 20% higher in 2021 than in 2019. However, disposable income fell in the beginning of 2022, as the expansion of the welfare state enacted during the pandemic (e.g., an expanded child tax credit and earned income tax credit) was rolled back. The only reason why disposable income for the bottom 50% was higher in 2022 than in 2019 (by about 10% in real terms) was the higher market income for this group, driven by wage gains.
The rest of this paper is organized as follows. In Section 2 we relate our work to the literature. Sections 3 and 4 detail our methodology. Section 5 provides validation tests. In Section 6 we study the dynamics of income inequality during the Covid-19 pandemic and in its aftermath, and contrast it with the Great Recession of 2008-2009. We discuss racial inequality in Section 7 and conclude in Section 8.
Related Literature
Previous Attempts at Estimating Inequality at a High Frequency
There have been, and there are, ongoing efforts to provide timely estimates of inequality in the United States.
The Federal Reserve Bank of Atlanta maintains a monthly wage growth tracker, constructed using microdata from the Current Population Survey following a methodology developed in Daly et al. (2011). 2 The tracker reports the median percent change in the hourly wage of employed individuals observed 12 months apart. Breakdowns by, e.g., wage quartiles, gender, occupation, and census divisions are shown. Although a useful tool, this wage tracker has some limitations. First, it does not account for non-workers, hence the statistics do not map onto overall income inequality. During recessions, the median wage of employed workers in the bottom quartile often rises through composition effects as low-wage workers are laid off; even though bottom wages may appear to be growing relatively fast, inequality may in fact be rising. Second, the data are top-coded at $150,000 in annual wage, roughly the 95 th percentile of the wage distribution.
They miss the dynamic of income in the top 5%, a group that earns about a quarter of all wages. In contrast to the Atlanta wage growth tracker, our statistics include non-workers, top earners, and all other forms of income beyond wage income (e.g., capital income and transfers), making it possible to distribute all of national income and to decompose its growth.
Since 2019, the Federal Reserve has published Distributional Financial Accounts (DFA), distributing aggregate household wealth at the quarterly frequency [START_REF] Batty | Introducing the Distributional Financial Accounts of the United States[END_REF]. Following [START_REF] Saez | Wealth Inequality in the United States since 1913: Evidence from Capitalized Income Tax Data[END_REF], the DFA allocate the official Federal Reserve Financial Accounts totals across the population. In contrast to [START_REF] Saez | Wealth Inequality in the United States since 1913: Evidence from Capitalized Income Tax Data[END_REF] who primarily rely on individual income tax data and the capitalization method for this allocation, the Federal Reserve uses the Survey of Consumer Finances, a triennial survey of about 6,000 families. In this paper, although our focus is primarily on income, we also construct real-time estimates of wealth inequality. As detailed in Section 6.4, the evolution of wealth inequality we obtain is consistent with the DFA estimates. Our value-added is to capture the top of the distribution all the way to the top 0.01% (while the top group considered in the DFA is the top 1%), to provide longer time series (back to 1976, while the DFA starts in 1989), to have more distributional information at the annual frequency (due to the annual nature of tax data, as opposed to the triennial nature of the Survey of Consumer Finances), and to provide current-day estimates of wealth inequality (updated daily on https://realtimeinequality.org), based on daily changes in stock market indices.
Recently, [START_REF] Fixler | The Feasibility of a Quarterly Distribution of Personal Income[END_REF] build on the annual distributional personal income statistics created by the Bureau of Economic Analysis [START_REF] Fixler | A Consistent Data Series to Evaluate Growth and Inequality in the National Accounts[END_REF] to explore the feasibility of a quarterly distribution of personal income. The main methodological difference with our work is that [START_REF] Fixler | The Feasibility of a Quarterly Distribution of Personal Income[END_REF] do not attempt to project changes in distributions within components, but simply rescale the annual personal income totals component by component to match the corresponding quarterly totals. As they show (and as we confirm in Section 5.3 below), this methodology produces reasonable results in years of normal growth but significantly underestimates inequality during recessions. A key contribution of our work is to demonstrate that a more sophisticated methodology-projecting changes in the distribution of labor income using high-frequency household and employment surveys-overcomes this issue. There are a number of additional methodological differences between the two projects. In contrast to [START_REF] Fixler | The Feasibility of a Quarterly Distribution of Personal Income[END_REF] who distribute personal income, we distribute national income, the aggregate used to compute macroeconomic growth. 3 We start from annual estimates which are largely based on individual tax return data, while [START_REF] Fixler | The Feasibility of a Quarterly Distribution of Personal Income[END_REF] rely primarily on the Current Population Survey, making it harder to provide estimates within the top 10%. These differences notwithstanding, both projects share the same objective of creating timely inequality statistics consistent with the national accounts. Our work was inspired by discussions with staff of the Bureau of Economic Analysis and the ongoing dialogue between academics and researchers within government agencies is in our view highly valuable.
Impacts of the Covid-19 Pandemic on Inequality
Our work also relates to the literature on the impact of the Covid-19 pandemic on inequality, recently surveyed in [START_REF] Stantcheva | Inequalities in the Times of a Pandemic[END_REF]. The literature emphasizes the equalizing effects of government intervention in high-income countries, while suggesting several channels through which the pandemic may, once these interventions fade out, eventually widen economic disparities.
Relative to this body of work, our main contribution is to provide a general methodology that can be applied to all business cycles and could be implemented throughout the world. The main feature of our methodology is its comprehensive character (capturing 100% of national income), timeliness (estimates are available online within a month), and granularity (with estimates available from the bottom 50% to the top 0.01% for pretax income, posttax income, disposable income, and wealth). Applied to the Covid-19 crisis, our methodology delivers new insights, such as the fast recovery of working class incomes even before government intervention, and the decline in wage inequality in the tight post-Covid labor market.
Rescaling to Match Monthly Income Aggregates
There are two main steps in our methodology. First, we rescale existing annual income distributions to match monthly and quarterly macroeconomic income totals, income component by component. Second and most importantly, we incorporate information on changes in the distribution of income within key components, most notably wage income. In this Section we define and construct our monthly aggregates and explain how we rescale annual distributions to match these aggregates, before turning to changes in distributions within components in Section 4.
Definition of Income
Our goal is to estimate the monthly and quarterly distributions of the income concepts studied in [START_REF] Piketty | Distributional National Accounts: Methods and Estimates for the United States[END_REF] and in the distributional national accounts literature [START_REF] Blanchet | Distributional National Accounts Guidelines: Methods and Concepts used in the World Inequality Database[END_REF]: factor, pretax, posttax, and disposable income. 4 Factor income is the income earned from labor and capital, before any tax and government spending and before the operation of the pension system. Pretax income is factor income after the operation of the pension system (public and private), disability insurance, and unemployment insurance. Contributions to pensions (including Social Security taxes) and to unemployment and disability insurance are removed, while the corresponding benefits are added. Pretax income thus in particular captures the effect of expanded unemployment insurance during the Covid-19 pandemic. Posttax income is pretax income minus all taxes (other than Social Security taxes, already subtracted from pretax income), plus all government transfers (other than Social Security and unemployment benefits, already included in pretax income) and the government deficit.
Factor, pretax, and posttax income all add up to national income. National income is the most comprehensive and harmonized notion of income: it includes all income that accrues to resident individuals, no matter the legal nature of the intermediaries through which this income is earned. In contrast to personal income, national income is not affected by business decisions to operate as corporations vs. non-corporate businesses such as partnerships, a decision influenced by the tax system. This feature of national income maximizes comparability over time. National income is computed following internationally-agreed conventions and methods, maximizing comparability across countries. Last, it is closely related to GDP, the aggregate most often used to compute economic growth: National income is GDP minus capital depreciation plus net income received from abroad. Since capital depreciation and net foreign income account for a relatively small fraction of GDP, the growth of national income is conceptually close to the growth of GDP.5 Our focus on national income is in line with recommendations made by the Commission on the Measurement of Economic Performance and Social Progress [START_REF] Joseph | Report by the Commission on the Measurement of Economic Performance and Social Progress[END_REF].
Factor income, the sum of income from labor and capital, the two factors of production, naturally lends itself to decompositions of economic growth. Pretax income and posttax income include income which is socialized through social insurance and the tax-and-transfer system. At the individual level, the growth of pretax and posttax income thus reflects both output growth and changes in transfers. Comparing the growth of posttax income to the growth of factor income provides a comprehensive view of the extent to which taxes and government spending equalize growth across the distribution.
We also consider a fourth income concept, disposable income. It is equal to pretax income minus all taxes, plus all cash or quasi-cash (e.g., food stamps) transfers. Disposable income captures the income that individuals have at their disposal to consume private goods and to save.
In contrast to posttax income, disposable income excludes in-kind government transfers such as Medicare and Medicaid, collective consumption expenditures, and the government deficit.
Disposable income does not add up to national income and thus cannot be used to decompose growth. It is, however, a useful concept to study the distributional impacts of stabilization policies during economic crises.6
Construction of Monthly Income Aggregates
To construct aggregate monthly factor, pretax, disposable, and posttax income and their components, we use the monthly and quarterly national accounts published by the Bureau of Economic Analysis. We start from the most detailed components of personal income (published monthly) and domestic product and income (published quarterly) available. All the monthly and quarterly aggregates used in this paper are seasonally-adjusted and expressed in real dollars using the national income price deflator.
Four remarks about the construction of our aggregates are in order. First, factor income is estimated using the income approach of the national accounts. As is well known, there is a statistical discrepancy between gross domestic income (GDI) and gross domestic product (GDP) in the US national income and product accounts (e.g., [START_REF] Fixler | The Revisions to Gross Domestic Product, Gross Domestic Income, and Their Major Components[END_REF]. We do not allocate the statistical discrepancy. Our estimates match income growth, not product growth; whenever the statistical discrepancy is not zero, we do not exactly capture GDP growth. 7 Second, corporate profits and GDI are only available one month after the publication of the first estimate of GDP. 8 As a result, national income and our estimates of quarterly growth by social group are published one month after the initial estimate of GDP. Third, in addition to being published quarterly in the context of GDP statistics, most components of factor income-such as compensation of employees, proprietors' income, and rental income, but not corporate profits-as well as government transfers are also published monthly as part of personal income, about four weeks after the end of each month. This allows us to estimate certain monthly statistics within a month, such as factor and disposable income growth for the bottom 50% (where corporate profits are negligible) and the distribution of labor income.
Fourth, for the components of income that are only available quarterly but not monthly (e.g., for factor income, corporate profits; for posttax income, collective government expenditure), we disaggregate the quarterly series when they become available using the [START_REF] Denton | Adjustment of Monthly or Quarterly Series to Annual Totals: An Approach Based on Quadratic Minimization[END_REF] method, following the International Monetary Fund (2017) recommendations to compile high-frequency national accounts.
7. Appendix Figure A1 compares the growth of GDI to the growth of GDP. Both track each other closely but not perfectly. The statistical discrepancy has been significant during the recovery from the Covid-19 recession, during which GDI has recovered faster than GDP. The other reason why we do not exactly match GDP growth is the fact that net foreign income (included in national income but not in GDI) and depreciation (included in GDI but not in national income) can grow at different rates than GDI.

8. The first estimate of quarterly GDP is available near the end of the first month after each quarter. A second estimate is released about a month later, and a third and final estimate about a month after the second, i.e., about three months after the end of the quarter. Relative to GDP, quarterly GDI is produced with a lag of an additional month (2 months for the fourth quarter) as it requires estimating corporate profits, which is done using the detailed (but not quite as timely) Census Bureau Quarterly Financial Report (US Census, 2022).

From Annual to Monthly Micro-Files

In the same way as monthly and quarterly national accounts data are seasonally adjusted and annualized (i.e., presented in levels equivalent to a full year), our monthly and quarterly distributional data are seasonally adjusted and annualized so that they are also directly comparable in level to annual inequality estimates. Seasonal adjustment is important because some forms of income such as executive bonuses are highly seasonal (e.g., most bonuses are paid once a year in January). Annualization means that we estimate the distribution of what annual income would be if seasonally-adjusted monthly income totals and their distributions remained stable over 12 months.9 Concretely, seasonal adjustment means that a January bonus is spread out over the 12 months of the year; annualization means that if bonuses double from one January to the next, this doubling is spread out smoothly over 12 months.
Another way to measure inequality at a high frequency would be to estimate the inequality of actual monthly income. Because of income mobility (e.g., losing or starting a job in the middle of a year), this approach would lead to more inequality at the monthly frequency than at the annual frequency. By contrast, our procedure which annualizes income makes inequality statistics comparable at high vs. low frequency. 10 Our starting point to estimate monthly distributions is the annual distributional national accounts synthetic micro-data of [START_REF] Piketty | Distributional National Accounts: Methods and Estimates for the United States[END_REF]. These files combine IRS tax micro-data, surveys, and national accounts data to construct annual distributions of income and wealth consistent with national accounts totals. Since their first publication, a literature has developed to test assumptions, conduct robustness tests, develop improvements, and maximize comparability with other countries where similar methods are followed [START_REF] Blanchet | Distributional National Accounts Guidelines: Methods and Concepts used in the World Inequality Database[END_REF].
The current files, updated in Saez and Zucman (2020b), incorporate the results of this body of work. A new file is created each year when the most recent tax statistics and annual national accounts become available. The last annual micro-file is for the year 2019, the year preceding the Covid-19 pandemic. 11 This file currently serves as our baseline for 2020 and onwards estimations.
To convert these annual files to the monthly frequency, we normalize the population and the distribution of each income component to one. We then create monthly versions of the annual files mixing samples from two adjacent years with unequal weights. Specifically, to create a file corresponding to month m in year y, we combine the micro-data for year y with its weights multiplied by m/12 and the micro-data for year y - 1 with its weights multiplied by 1 - m/12. Therefore, each monthly file is a moving average of the yearly files over the last twelve months. This procedure smooths out short-run, year-specific, mean-reverting variations, which are not informative of the distribution for a given month and would otherwise introduce discontinuities in the monthly series. Like in the annual micro-files, each observation in the monthly micro-files represents an adult individual, defined as an individual aged 20 or more.

9. Quarterly income is computed as the average of monthly income over the three months of the quarter.

10. There is no micro-data in the United States allowing one to track the longitudinal evolution of household income month after month or quarter after quarter [START_REF] Fixler | The Feasibility of a Quarterly Distribution of Personal Income[END_REF]. Our approach, which focuses on monthly and quarterly distributions of annualized incomes, does not require longitudinal data and usefully bypasses this issue.

11. The Covid pandemic caused a backlog at the Internal Revenue Service, so that paper returns for 2020 incomes were not yet fully processed as of June 2022.
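To make the construction of the monthly files concrete, a minimal sketch of the adjacent-year mixture described above is given below. It assumes pandas data frames with a 'weight' column; the function and column names are illustrative, not the authors' production code.

```python
import pandas as pd

def monthly_microfile(prev_year_file: pd.DataFrame, year_file: pd.DataFrame, month: int) -> pd.DataFrame:
    """Build the synthetic micro-file for a given month as a weighted mixture
    of two adjacent annual files: the current year gets weight month/12 and
    the previous year 1 - month/12, so each monthly file is a moving average
    of the annual files over the preceding twelve months. The 'weight' column
    name is an illustrative assumption."""
    share = month / 12
    prev = prev_year_file.copy()
    curr = year_file.copy()
    prev["weight"] *= 1 - share
    curr["weight"] *= share
    return pd.concat([prev, curr], ignore_index=True)
```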
We then rescale the components of factor, pretax, posttax, and disposable income so that they add up to their seasonally-adjusted monthly total value, component by component and at the most granular level possible. Specifically, for factor income we rescale wages and salaries, supplements to wages and salaries, proprietor's income, rental income, corporate profits, interest income, production taxes, production subsidies, non-mortgage interest payments, and government interest payments to their respective monthly totals.12 For pretax income we additionally rescale private pension contributions, Social Security taxes, contributions to unemployment insurance, private pension benefits, Social Security benefits, and unemployment insurance benefits; for disposable income, Medicare taxes, direct taxes, the estate tax, veteran benefits, and other cash benefits; and for posttax income, Medicare, Medicaid, other in-kind transfers, collective expenditures, and the government deficit. Components of household wealth are similarly rescaled to their end-of-month values, as detailed in Appendix B.
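The component-by-component rescaling can be sketched along the following lines; the column names and the structure of the dictionary of monthly aggregates are assumptions made for the example.

```python
import pandas as pd

def rescale_to_aggregates(micro: pd.DataFrame, monthly_totals: dict) -> pd.DataFrame:
    """Proportionally rescale each income component in the micro-file so that
    its population-weighted total matches the seasonally-adjusted monthly
    aggregate from the national accounts. 'monthly_totals' maps a component
    column name (e.g. 'wages', 'corporate_profits') to its aggregate value;
    the column names are illustrative."""
    out = micro.copy()
    for component, target in monthly_totals.items():
        current_total = (out[component] * out["weight"]).sum()
        if current_total != 0:
            out[component] *= target / current_total
    return out
```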
Because the various components of aggregate income do not grow at the same rate from one month to another, the mere act of rescaling to match monthly totals changes the distribution of income. From the second to the third quarter of 2020, for example, corporate profits grew by close to 25%, much faster than wages. Since corporate profits are more concentrated than wages toward the top of the distribution, this pushes towards a higher top 1% pretax income share.
Rescaling to match aggregates, however, is not sufficient to accurately capture high-frequency changes in inequality, especially during recessions. With publicly available data it is possible to do more, namely to project changes in distributions within key components, a task we now turn to.
Incorporating Changes Within Income Components
The most important step of our methodology involves incorporating information on the month-to-month evolution of the distribution of labor income, which accounts for about 70% of national income. We estimate changes both in the extensive margin (number of employed vs. non-employed individuals, including recipients of unemployment insurance benefits) and in the intensive margin (changes in the wage distribution).
Changes in Employment
Changes in employment status at the individual level are incorporated by using real-time information contained in the monthly Current Population Survey. Ideally, one would use the joint distribution of current-month employment status and past-year annual income. However, the monthly CPS does not contain sufficient information on annual income. To overcome this limitation, we statistically match the updated [START_REF] Piketty | Distributional National Accounts: Methods and Estimates for the United States[END_REF] annual distributional national accounts micro-files to the March CPS and the Survey of Consumer Finances, surveys which contain detailed information on annual income. This allows us to bring race, education, and age variables into our annual, and thus monthly, distributional files. We then compute employment rates by race × education × gender × age × marital status cells in the monthly CPS, and use those tabulations to impute employment rates by cells in the monthly micro-files.
One-to-One Statistical Matching to Survey Data
Method. Consider two datasets: A (sometimes referred to as the base file, the annual distributional national accounts micro-files in our case) and B (sometimes referred to as the supplemental file, e.g., the March CPS). Assume A and B have common variables, denoted by X (e.g., income and its components). The remaining variables are denoted by Y in file A and Z in file B.
The goal is to bring the Z variables (e.g., education) into file A. The optimal way to do so is to implement a constrained statistical match (e.g., [START_REF] Rodgers | An Evaluation of Statistical Matching[END_REF], in which each observation from B is matched "one-to-one" with an observation in A, while minimizing the sum of distances over the X variables between matched observations. This constrained statistical match can be implemented using optimal transport methods.
Formally, assume the two datasets are of size $n$ and $m$ respectively, and that observations in each dataset have weights $u = (u_1, u_2, \ldots, u_n)$ and $v = (v_1, v_2, \ldots, v_m)$. Without loss of generality, assume the weights sum to one. Denote by $D_{ij}$ the distance between observation $i$ in the first dataset and observation $j$ in the second over the $X$ variables (in our application we use the $L_1$ norm, i.e., the sum of the absolute values of the differences). The optimal transport map $\Gamma \in \mathbb{R}^{n \times m}$ which matches observations "one-to-one" while minimizing the sum of distances between matched observations is the solution of the following linear programming problem:
$$\min_{\Gamma \in \mathbb{R}^{n \times m}} \; \sum_{i=1}^{n} \sum_{j=1}^{m} \Gamma_{ij} D_{ij}
\quad \text{such that} \quad
\Gamma \mathbf{1} = u, \qquad \Gamma' \mathbf{1} = v, \qquad \Gamma \geq 0$$
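For small inputs, this linear program can be solved directly with a generic solver. The sketch below uses scipy and is purely illustrative; at the scale of the actual files (about 100,000 observations each), dedicated optimal transport algorithms are needed, as discussed below.

```python
import numpy as np
from scipy.optimize import linprog

def constrained_match(X_a, u, X_b, v):
    """Constrained ('one-to-one') statistical match as a transportation
    problem: minimize sum_ij Gamma_ij * D_ij subject to Gamma 1 = u,
    Gamma' 1 = v, Gamma >= 0, where D is the L1 distance between the
    common X variables of the two files. Only practical for small inputs."""
    X_a, X_b = np.asarray(X_a, float), np.asarray(X_b, float)
    u, v = np.asarray(u, float), np.asarray(v, float)
    n, m = len(u), len(v)

    # Pairwise L1 costs between observations of the two files.
    D = np.abs(X_a[:, None, :] - X_b[None, :, :]).sum(axis=2)

    # Marginal constraints: rows of Gamma sum to u, columns sum to v.
    A_eq = np.zeros((n + m, n * m))
    for i in range(n):
        A_eq[i, i * m:(i + 1) * m] = 1.0
    for j in range(m):
        A_eq[n + j, j::m] = 1.0
    b_eq = np.concatenate([u, v])

    res = linprog(D.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    return res.x.reshape(n, m)  # optimal transport plan Gamma
```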
The main appeal of constrained statistical matching is that the multivariate distribution of all the Z variables is preserved in the matched dataset. In that sense, there is no loss of distributional information, in contrast to other matching procedures.13 The main practical obstacle to implementing constrained statistical matches so far has been computational requirements.
Finding the optimal Γ involves solving a large-scale linear programming problem: if the two datasets to be matched each have 100,000 observations (which is the order of magnitude in our case), then the matrix of pairwise distances has 10 billion entries. Thanks to recently developed optimal transport algorithms, solving this type of problem has become doable in a reasonable amount of time.
Implementation. To statistically match the distributional national accounts micro-files to survey data, we start by supplementing the March CPS with group quarter observations from the American Community Survey: individuals living in correctional facilities, nursing homes, college dormitories, etc., who are not sampled by the March CPS. This allows us to capture the entire population of US residents, as in our distributional micro-files, which is important to capture income dynamics at the bottom of the distribution. We then statistically match the augmented March CPS to our annual micro-files over the following X variables (observed in both datasets): wage income, pension income, business income, interest, dividend and rents, Social Security benefits, welfare benefits, and government transfers other than Social Security and welfare benefits. We bring in the following Z variables: race, education, age. The match is done at the household level, separately for married vs. singles, individuals aged more vs. less than 65, and employed vs. unemployed individuals (i.e., 9 different groups in total).
We similarly match our micro-files to the Survey of Consumer Finances (SCF), a triennial survey of about 6,500 families that over-samples wealthy households. We annualize the SCF by taking moving averages of the two closest waves of the SCF and match it to our micro-files over wage income, pension and Social Security income, business income, interest and dividend income, capital gains, financial and business assets, housing assets, and debts. This allows us to bring in socio-economic characteristics for high-income households, which are not well covered in the March CPS. We use the socio-economic variables transported from the SCF for households in the top 5% of the income or wealth distribution, and those transported from the March CPS for the bottom 95%.
Value and limitation. The key value of the one-to-one statistical match is to join together several micro-databases (including individual tax data, the CPS, and the SCF) in a single file.
Variables that are common across databases exist in versions corresponding to each dataset (e.g., SCF net wealth and net wealth estimated from tax data) and are close to each other record by record thanks to the matching procedure. This makes it straightforward to switch from one database to another as needed.14 For example, researchers who are used to working with the CPS can primarily focus on CPS variables and replace them with tax-data variables whenever the analysis concerns the top of the distribution.
However, we emphasize that this matched database cannot provide reliable information about the joint distribution of variables that are not jointly included in at least one of the original databases. For example, because neither the public-use individual tax data, nor the CPS
or the SCF provides comprehensive information on the joint distribution of income and state,15
we cannot analyze patterns in income growth by state × social group. As another example, because the only information on the joint distribution of wealth and race comes from the SCF and the SCF is noisy above the top 1% (due to small sample sizes), it is not possible to use the matched dataset to obtain reliable estimates of the racial composition of wealth above the 99th percentile. The limitations of the source files carry over to the matched file, and knowledge of these limitations is critical to make an informed use of the matched database. An obviously superior way to construct a unified database would be to perform exact matches across administrative sources and surveys, as can be done for research purposes in, e.g., Scandinavian countries. However, this is not yet fully feasible in the United States even within government agencies (see, e.g., [START_REF] Card | Expanding Access to Administrative Data for Research in the United States[END_REF]) and certainly not using public data. The statistical match we implement leverages the strength of US public data, which are rich but scattered.
Adjustment of Employment Rates and Unemployment Insurance Benefits
Employment rates in our monthly distributional files are adjusted based on the changes in employment rates observed in the monthly CPS by race × education × gender × age × marital status. In addition to reproducing the changes in employment rates observed in those cells, our procedure reproduces the monthly change in the macroeconomic employment rate. Specifically, we estimate the number of workers using the Bureau of Labor Statistics (BLS) monthly release of non-farm employment at the national level. Since the number of employed people in a given year is mechanically higher than in a given month, we adjust monthly employment numbers to make them commensurable with yearly estimates from Social Security tax data, using the strong and consistent linear relationship observed between the BLS and the Social Security numbers over time. We then adjust monthly employment rates in our race × education × gender × age × marital status cells so that they match these adjusted macroeconomic rates. Finally, employment in each cell is updated at the micro level by randomly moving people in or out of employment within cell. This way, our distributional statistics at the monthly level are fully comparable with annual distributional statistics.
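A stylized version of this within-cell adjustment might look as follows; the cell identifier, column names, and the random order in which individuals are moved are assumptions made for the example.

```python
import pandas as pd

def adjust_employment(micro: pd.DataFrame, target_rates: dict, seed: int = 0) -> pd.DataFrame:
    """Move synthetic adults in or out of employment within each demographic
    cell until the weighted employment rate matches the target rate observed
    in the monthly CPS for that cell. Column names ('cell', 'employed',
    'weight') and the dict of target rates are illustrative assumptions."""
    out = micro.copy()
    for cell, target in target_rates.items():
        sub = out[out["cell"] == cell]
        total_w = sub["weight"].sum()
        current = (sub["employed"] * sub["weight"]).sum() / total_w
        gap = (target - current) * total_w        # weighted head-count to move
        new_status = 1 if gap > 0 else 0
        # candidates: non-employed if jobs must be added, employed otherwise
        pool = sub[sub["employed"] != new_status].sample(frac=1, random_state=seed)
        moved = 0.0
        for idx, weight in zip(pool.index, pool["weight"]):
            if moved >= abs(gap):
                break
            out.at[idx, "employed"] = new_status
            moved += weight
    return out
```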
Once we have adjusted employment, we can estimate the number of unemployment insurance (UI) recipients each month. To do so we use the Department of Labor's weekly publication of unemployment claims. We aggregate this data by month and adjust it for seasonal variations using the X11 procedure [START_REF] Shiskin | The X-11 Variant of the Census Method II Seasonal Adjustment Program[END_REF]. Since the number of UI recipients in a given week is mechanically lower than in a given year, we adjust the number of UI claims by a constant coefficient to match the annual levels recorded in the tax data (consistent with our goal of constructing monthly distributions of annualized income). We then use this series to disaggregate yearly unemployment claims using [START_REF] Denton | Adjustment of Monthly or Quarterly Series to Annual Totals: An Approach Based on Quadratic Minimization[END_REF] method. UI benefits are assigned in priority to working-age individuals who are not employed.
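Both this step and the disaggregation of quarterly income components described earlier rely on Denton-type benchmarking. The sketch below implements a simplified additive first-difference variant with numpy; the exact specification used in the paper may differ, and all names are illustrative.

```python
import numpy as np

def denton_disaggregate(low_freq_totals, indicator, months_per_period):
    """Additive first-difference Denton-type benchmarking: find a monthly
    series whose movements stay as close as possible to a monthly indicator
    while every block of months sums to the published low-frequency total
    (3 months per quarterly total, 12 per annual total). Simplified variant
    for illustration only."""
    b = np.asarray(low_freq_totals, dtype=float)
    s = np.asarray(indicator, dtype=float)
    n, q = len(s), len(b)
    assert n == months_per_period * q

    # First-difference operator: penalize changes in (x - indicator).
    D = (np.eye(n) - np.eye(n, k=-1))[1:]
    Q = D.T @ D

    # Aggregation constraints: each block of months sums to its total.
    C = np.kron(np.eye(q), np.ones((1, months_per_period)))

    # Equality-constrained least squares solved through the KKT system.
    kkt = np.block([[Q, C.T], [C, np.zeros((q, q))]])
    rhs = np.concatenate([Q @ s, b])
    return np.linalg.solve(kkt, rhs)[:n]
```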
Changes in the Distribution of Earnings
We estimate the distribution of wages at the monthly frequency by combining all the available evidence on this issue: monthly and quarterly employment surveys, and the monthly CPS. Our monthly wage distributions average predictions made from these two sources.
Predicting Wage Inequality from Tabulated Employment Surveys
We first estimate wage inequality monthly using timely employment censuses and establishment surveys: the Quarterly Census of Employment and Wages (QCEW), supplemented by the Current Employment Statistics.

Building on Lee (2020), we construct quarterly wage income distributions using the QCEW data. The idea is to use the QCEW as if it were a micro-level dataset, treating each 6-digit NAICS industry × county × type-of-ownership cell as an observation whose weight is the employment count and whose value is the average wage. Each wave of the QCEW contains about a million such observations in recent years, much more than a typical wage survey. We remove outliers, defined as cells whose wage is less than half of a full-time minimum wage job. We then estimate the average wage by percentile, after implementing three adjustments described below. As detailed in Section 5 below, our procedure delivers remarkably accurate predictions of trends in wage inequality.
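To illustrate the idea of treating QCEW cells as weighted observations, here is a minimal sketch that computes percentile wages from cell-level average wages and employment counts; it returns the wage at each percentile rank rather than the mean within each percentile bin, which is a simplification.

```python
import numpy as np

def weighted_percentile_wages(cell_wages, cell_employment, percentiles=np.arange(1, 100)):
    """Treat industry x county x ownership cells as weighted observations and
    compute the wage at each percentile of the implied worker-level
    distribution. Inputs and the percentile grid are illustrative."""
    cell_wages = np.asarray(cell_wages, dtype=float)
    cell_employment = np.asarray(cell_employment, dtype=float)
    order = np.argsort(cell_wages)
    wages = cell_wages[order]
    weights = cell_employment[order]
    cum = np.cumsum(weights) / weights.sum()           # cumulative share of workers
    return np.interp(percentiles / 100, cum, wages)    # wage at each percentile rank
```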
The adjustments we apply are the following. We first convert the QCEW wage data from quarterly to monthly. Employment counts are reported monthly in the QCEW, but wage earnings only quarterly. This is not a significant issue since wages are sticky, so changes in the wage distribution in the short run are driven by changes in the relative employment of low-wage and high-wage workers rather than by changes in their respective salaries. We run the wage data in the QCEW through a moving average of the last twelve months to get smooth monthly wages and to get rid of the seasonality in the wage data (due, for example, to end-of-year bonuses).
Whenever this procedure introduces missing values, we impute them back by regressing log wages on county, time, industry, and type-of-ownership fixed effects.16 Second, because the QCEW data is aggregated, it understates the level of inequality between individual workers. We fix this by applying a simple adjustment to the monthly series.
Specifically, we regress the tax data wage against the QCEW wage for each percentile, and use the prediction from these regressions as our monthly estimate for each percentile. This procedure works well because the relationship between the average wage of a given percentile in the QCEW data and the tax data is strongly linear. Importantly, this correction does not vary with time and thus does not weaken the predicting power of the QCEW.
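A minimal sketch of this time-invariant linear correction, applied percentile by percentile, could look as follows; the helper name and inputs are illustrative.

```python
import numpy as np

def calibrate_percentile(qcew_pctile_wage, tax_pctile_wage):
    """For one percentile, regress the tax-data average wage on the QCEW
    average wage across overlapping years, and return a function mapping a
    new QCEW value to a calibrated wage level (a time-invariant linear
    correction, as described in the text)."""
    slope, intercept = np.polyfit(qcew_pctile_wage, tax_pctile_wage, deg=1)
    return lambda qcew_value: intercept + slope * qcew_value
```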
Third, we supplement the QCEW with the Current Employment Statistics survey to get timely estimates. The QCEW is published with a lag of one to two quarters. The Current Employment Statistics are released every month, albeit with a coarser level of aggregation: about 19,000 monthly series covering up to 300 industries and 450 areas, compared to about a million series in the QCEW. We match each QCEW cell to three Current Employment Statistics series. The first matches the location of the QCEW cell as precisely as possible, the second matches it at the state level, and the third at the national level. Because there is a trade-off in the Current Employment Statistics series between the level of geographical and industry disaggregation, using those three series allows us to extract as much information as possible.
We average the trend from these three series in each cell and use this average trend to extend the QCEW data in the most recent quarters.
Combining Predictions from Employment and Household Surveys
We also estimate average wages by percentile using the weekly earnings variable of the monthly CPS. Our final estimate of average wage by percentile in a given month is an average of the CPS and QCEW predictions. For the bottom 80% of the wage distribution, QCEW and CPS predictions are weighted equally. The weight on the CPS prediction then gradually falls to 0 as we move to the 90 th percentile, and is 0 above the 90 th percentile where the CPS is not informative because of top-coding.
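The weighting scheme just described can be encoded as follows; the linear fade of the CPS weight between the 80th and 90th percentiles is an assumption about the unspecified gradual schedule.

```python
def blended_wage(percentile: int, cps_wage: float, qcew_wage: float) -> float:
    """Average the CPS and QCEW predictions for a percentile: equal weights up
    to the 80th percentile, CPS weight declining to zero at the 90th, QCEW
    only above that (the CPS is top-coded at the top). The linear fade is an
    assumption made for this sketch."""
    if percentile <= 80:
        w_cps = 0.5
    elif percentile >= 90:
        w_cps = 0.0
    else:
        w_cps = 0.5 * (90 - percentile) / 10
    return w_cps * cps_wage + (1 - w_cps) * qcew_wage
```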
Once we have our final estimate of wages by percentile, the distribution within percentiles is interpolated using generalized Pareto interpolation methods [START_REF] Blanchet | Generalized Pareto Curves: Theory and Applications[END_REF]. We use the resulting distribution to update our monthly micro-files as follows. In the monthly CPS, we compute the average rank in the wage distribution of each race × education × gender × age × marital status cell. We also compute these average ranks in the 12 preceding months. In our monthly micro-files, we adjust the average rank by cell to replicate the evolution seen in the CPS. Once the ranks are adjusted, we assign each individual observation the wage corresponding to his or her adjusted rank.
Distribution of New Government Transfers
In the last step of our methodology, we model the distribution of new government transfers. We simulate the key components of the government response to the Covid crisis. Three programs warrant special treatment: the Paycheck Protection Program, Covid relief payments, and the expanded refundable tax credits.
The Paycheck Protection Program was a loan program designed to keep small businesses afloat, representing about $1,000 billion, or 5% of national income. The government forgave most of these loans, assuming companies kept their employees and wages stable. Following [START_REF] Autor | The $800 Billion Paycheck Protection Program: Where Did the Money Go and Why Did it Go There?[END_REF], we distribute 70% of the program's expenditures to business owners and the remaining 30% to wage earners. We construct a novel estimate of the program's distributional effect for the incidence on wages. We use the publicly available data on each loan, which we match to the QCEW data based on the date of the loans, the industry, and the location of the business. We manage to match about 5,700,000 loans to 5,500,000 QCEW cells. We estimate both an extensive margin (fraction of the workforce covered) and an intensive margin (fraction of the wage bill covered) for each percentile of the labor income distribution, which we use to simulate the effect of the Paycheck Protection Program on workers.
The three waves of Covid relief payments ("economic impact payments") are allocated based on program rules using taxable income as reported in the updated Piketty, Saez and Zucman (2018) files. Finally, we allocate the expanded refundable tax credits (child tax credit and earned income tax credit) based on income and eligibility using our micro-data. Refundable tax credits corresponding to incomes earned in year t are generally paid out in year t + 1 and counted as transfers in year t + 1 in the national accounts, a convention we follow. In 2021, however, half of the expanded child tax credit was paid monthly (in the last 6 months of 2021) and is assigned to 2021 (not 2022).
Summary of All the Sources Used and Timing of Release
Table 1 summarizes all the sources used to construct our real-time estimates, including frequency of publication and timing of availability. Our approach only relies on public data sources.
Appendix C describes the structure of the programs used to construct our real-time estimates.
The programs are available online at https://realtimeinequality.org, making it possible for researchers to assess-and improve-any aspect of the methodology.
Validation Tests
Since we apply our methodology back to 1976 (the first year of the QCEW), we have a large number of monthly micro-files that can be used to test the accuracy of our approach. This Section presents the results of these tests.
Wage Distribution Prediction
We begin by examining how well our monthly wage inequality series match the actual distribution of wage income-by far the largest component of national income-over the 1976-2019 period. Figure 1 compares the wage distribution in the updated Piketty, Saez and Zucman (2018) micro-files, which are based on public tax micro-files and tabulations of Social Security data, to the distribution constructed in this paper using the QCEW and the CPS. Each panel depicts the share of total wage income earned by a specific group (bottom 50%, middle 40%, next 9%, and top 1%) among adult individuals with positive wage income.
Our monthly estimates track the annual tax-data-based statistics well for all groups, including the top 1%, which is not measured well in traditional household surveys. First, the long-run trend of rising wage concentration is accurately captured by our combination of QCEW and CPS data: as in the annual tax-data-based statistics, the top 1% gains 5.5 points between the late 1970s and 2019 and the next 9% gains 3 points, while the middle 40% loses 7 points and the bottom 50% loses 1.5 points. Second, our methodology accurately predicts short-run variation, the focus of this paper. Over the 44 years t from 1975 to 2018, we correctly capture whether the top 1% share has risen or fallen between t and t + 1 82% of the time. It is most instructive to focus on inequality dynamics during economic downturns, the periods when the real-time methodology is most valuable from a policy perspective. From 1976 to 2019, there are 8 recession years (1980, 1981, 1982, 1990, 1991, 2001, 2008, 2009). Focusing on these years and the year following each recession (1983, 1992, 2002, 2010), we correctly predict whether the top 1% and bottom 50% shares are falling or rising in 19 cases out of 24. These results are consistent with the analysis of [START_REF] Lee | Business Cycles and Earnings Inequality[END_REF], who first used the QCEW to create quarterly wage distributions for the macroeconomic analysis of the business cycle.
Volatility of Capital Income
Our methodology assumes that the distribution of each capital income component is stable in the short run: high-frequency changes in capital income are captured through changes in the macroeconomic aggregates, while within-component concentration is held fixed. To assess the merits of this procedure, Figure 2 compares the size of capital income components (as measured by their share of national income) and the concentration of these income components (as measured by the share going to the top 10% of the pretax income distribution). All series are normalized to 100 in 1976 to contrast volatilities. The top left panel reports results for corporate profits, the largest form of capital income, and shows that aggregate profits are highly volatile. During or just before recessions, it is common for the share of profits in national income to fall by 10%-20% (e.g., 1980, 2000, 2008). The share of profits earned by the top 10% exhibits comparatively little year-to-year variation. The same conclusion holds for other components of capital income, as reported in the other three panels. Because changes in aggregates swamp short-term changes in distributions, movements in macroeconomic aggregates capture the bulk of the contribution of capital income to changes in inequality.
Retrospective Validation
Last and most importantly, we retrospectively check whether our methodology combining prior-year annual micro-files with current-year high-frequency data sources provides accurate estimates of current-year distributions. This test incorporates all forms of income, as opposed to wages or components of capital income only.
Methodology. For each year t from 1975 to 2018 (or 2017), we start from the year t updated annual micro-files of Piketty, Saez and Zucman (2018) and implement our real-time methodology to age these data into a year t+1 (or year t+2) simulated annual micro-dataset-using the same sources (high-frequency national accounts aggregates, household and employment surveys, etc.)
and assumptions (e.g., stable distributions of capital income components) as in our real-time methodology.17 Using these simulated micro-data for year t + 1 or t + 2, we can compute any distributional statistics and compare them with the statistics coming out of the actual annual micro-data. The most directly relevant statistic for our purposes is real income growth. For each group of the population, we compute actual income growth using the annual distributional national account micro-files for both years t and t + 1 (or t + 2), and predicted income growth rates using the actual annual micro-files for year t but the simulated micro-data for year t + 1 (or t + 2). The t + 2 simulations are relevant given that our real-time inequality estimates for year t are typically based on annual micro-files for year t -2, due to the nearly 2-year delay in the availability of tax data.
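In outline, this backtest amounts to the following loop (a sketch in Python; the function names stand in for the corresponding steps of our pipeline and are not part of the released code):

    def backtest(years, load_annual_microdata, age_forward, group_mean_income, horizon=1):
        # Compare actual vs. predicted real income growth by group, aging the
        # year-t micro-file forward with the real-time methodology (sketch).
        results = []
        groups = ["bottom50", "middle40", "next9", "top1"]
        for t in years:
            base = load_annual_microdata(t)
            actual = load_annual_microdata(t + horizon)
            simulated = age_forward(base, horizon=horizon)
            for g in groups:
                g0 = group_mean_income(base, g)
                results.append({
                    "year": t + horizon, "group": g,
                    "actual_growth": group_mean_income(actual, g) / g0 - 1,
                    "predicted_growth": group_mean_income(simulated, g) / g0 - 1,
                })
        return results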
Results. Figure 3 compares actual and predicted real income growth rates for the bottom 50% (top panels) and the middle 40% (bottom panels) of the factor income distribution, with income equally split among married spouses. Left panels show actual vs. predicted growth from year t to t + 1 and right panels from t to t + 2. In all cases the dots are closely scattered around the 45-degree line, showing that our predictions are highly informative of the actual growth in bottom 50% and middle 40% incomes. For example, we correctly predict whether bottom 50% income is rising or falling 82% of the time one year forward and 93% of the time 2 years forward. 18 One can condition on certain actual growth rates to visually ascertain how well our methodology performs in different contexts. For instance, if one conditions on actual growth below -2.5% (or below -5% in the 2-year ahead graph), typically corresponding to recession years, the correlation between actual and predicted growth remains very high. Figure 4 repeats this analysis for top 1% incomes and next 9% incomes. The dots again align well with the 45-degree line. We correctly predict whether top 1% incomes are rising or falling 86% of the time one year forward and 91% of the time 2 years forward.
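Summary statistics of this kind, reported in Table 2 below, can be computed from the actual and predicted growth rates (a sketch; the error-dispersion convention is chosen so that RMSE² = bias² + std. err.², as in the notes to Table 2):

    import numpy as np

    def fit_statistics(actual, predicted):
        actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
        err = predicted - actual
        return {
            "correct_sign": float(np.mean(np.sign(predicted) == np.sign(actual))),
            "bias": float(err.mean()),                  # mean prediction error
            "std_err": float(err.std(ddof=0)),          # dispersion of errors
            "rmse": float(np.sqrt(np.mean(err ** 2))),  # total error
        }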
One caveat when considering the top of the distribution is that predicting short-run growth during tax reforms can be challenging. The largest errors are observed in 1987 and 1988, when predicted growth-though strongly positive-is significantly lower than observed growth. As shown by Figure 1, in those years the QCEW predicts large gains in the top 1% wage share but fails to capture the full magnitude of the rise in this top share. One possible interpretation is that the Tax Reform Act of 1986, which reduced the top marginal income tax rate from 50% in 1986 to 28% in 1988, led to immediate and across-the-board increases in top-end wages and bonuses for executives within industries × counties (in addition to gains in specific high-paying industries × counties, by construction captured by our methodology). More broadly, it can be challenging to predict the growth of top incomes in years of significant tax reforms, which in addition to real responses can generate avoidance responses, such as inter-temporal income shifting. As shown by Figure 4, the prediction errors in top 1% growth are concentrated during tax reform years (although not all tax reform years lead to errors). In non-tax-reform years our methodology delivers accurate predictions for the top 1%.

Rescaling vs. adjustments of distributions. To better understand which aspects of our methodology matter to generate accurate predictions, Appendix Figure A2 reports similar figures of actual vs. predicted growth rates but using a simplified prediction methodology that only rescales macroeconomic aggregates (following Section 3) without incorporating any changes in the distribution of labor income. The simplified methodology performs well in years of normal growth, but delivers significantly worse results than our full methodology for the bottom 50% during recessions, when it significantly over-estimates growth. This finding echoes the results of [START_REF] Fixler | The Feasibility of a Quarterly Distribution of Personal Income[END_REF] discussed in Section 2.1 and shows that adjusting the labor income distribution is critical to project inequality during recessions. Once the distribution of labor income is adjusted, our methodology accurately predicts growth during downturns.
Goodness of fit summary. To summarize the performance of our approach, Table 2 reports detailed statistics for goodness of fit and noise. For each group (bottom 50%, middle 40%, next 9%, and top 1%) and income concept (and wealth), we compute the fraction of years in which we correctly predict whether income (or wealth) is growing or falling, the mean difference between predicted and observed 1-year growth, the standard error of this mean, and the root mean square error. For reference we also report the standard deviation of actual 1-year growth rates. We consider three samples of years: all 44 years from 1976 to 2019, the 34 years which are not tax reform years, and the 12 years corresponding to recessions and their immediate aftermath. A number of results are worth noting.
First, the good fit and limited noise obtained for factor income (Figures 3 and 4) extend to other income concepts-pretax, disposable, and posttax-and to wealth. Across concepts and samples of years, we correctly predict the sign of growth around 90% of the time. Bias in annual growth is limited to a few percentage points, which is reasonable considering the sample sizes involved. Remarkably, our methodology is unbiased for the top 1% even though it does not rely on any tax data. Standard errors for top 1% predictions fall when excluding tax-reform years; for other groups (where avoidance possibilities and incentives to avoid are more limited), including tax reforms does not make a difference.

Second, these results carry over to recessions: our methodology is highly predictive of income dynamics during downturns and the ensuing recoveries. We under-estimate disposable and posttax income growth for the bottom 50% during past recessions, but this is because we do not attempt to incorporate the creation of new government transfers during past recessions (e.g., $600 individual tax credits in 2008) in our simulated micro-files. Post-2019 monthly files, by contrast, carefully incorporate all new government programs (as detailed in Section 4.3), maximizing the accuracy of disposable and posttax income predictions at the bottom during and after the Covid-19 pandemic.

6 Inequality During the Covid-19 Pandemic

This Section uses our real-time estimates to analyze the dynamics of income and wealth during the Covid-19 pandemic and in its aftermath. We start by studying the dynamic of income and wages before government intervention, then move to disposable income, before turning to wealth inequality. Unless otherwise noted, all the statistics we report are for "equal-split adults," defined as individual adults with income and wealth equally split between married spouses. On https://realtimeinequality.org, we also report statistics at the household level, where a household is a tax unit as defined by the tax code, i.e., either a single person aged 20 or above or a married couple, in both cases with children dependents if any. All growth numbers are adjusted for inflation using the official national income deflator. The same deflator is used for all groups of the population.

6.1 The Dynamic of Factor Income During the Covid-19 Recession

Dynamic across the income distribution. The Covid-19 pandemic led to a dramatic collapse in average national income. Between February 2020 (the last month before the recession) and April 2020 (its trough), annualized real national income per adult fell 15%. Average income then rebounded sharply. But as Figure 5a shows, the fall and recovery were uneven. The economic downturn caused by the pandemic led to the strongest factor income decline for the working class (-33% for the bottom 50% between February 2020 and April 2020) and, to a lesser extent, for the top one percent (-19%) due to the collapse of business profits, a key source of income at the top. The crisis affected the middle class and upper-middle class relatively less, because individuals in these groups were more likely to remain employed.

The groups with the largest losses in 2020 experienced the largest gains in 2021. On average, real factor income per adult grew almost 8% in 2021, but by close to 12% for the bottom half of the distribution and by 13% for the top one percent. Growth was lower for the middle 40% (+5%). One caveat, as already noted, is that our methodology distributes national income, not domestic product. Although income and product growth should conceptually be nearly identical, in practice they can diverge because income and output are estimated using largely independent data sources. As shown by Figure A1, during the Covid recovery GDI has grown faster than GDP: +7.8% for GDI per adult in 2021 vs. +5.4% for GDP per adult. This gap may shrink when final 2021 national accounts estimates are published. If national income growth is revised down, then so too will factor income growth for at least some social groups. If GDP growth is revised up, our distributional growth statistics will not change.
According to our estimates, all income groups recovered their pre-crisis factor income level within 20 months, but not at the same pace. The groups least affected by the crisis recovered first; the most affected ones recovered last. For the middle 40% and the next 9%, the recovery took eight months; for the top 1%, ten months; and for the bottom 50%, twenty months. Because the bottom 50% was hit the hardest and recovered last, the pandemic had, by the end of 2021, exacerbated factor income inequality. The share of factor income earned by the top 1% was 19.4% in December 2021, its highest level in the post-World War II era.
Comparing the Covid-19 and Great Recession recessions and recoveries. It is well known that in the aggregate, the recovery from the Covid-19 crisis (13 months) was much faster than the recovery from the Great Recession (4 years and a month). Our real-time estimates allow us to move beyond aggregates and compare recovery patterns for the working class. To do so, Figure 5b focuses on the working-age population (to control for population aging in the 2010s, during the long recovery of the Great Recession) and normalizes income to 100 in the month preceding each recession. Two main results emerge.
First, in the aftermath of the Great Recession it had taken a staggering 9 years and 2 months for the bottom 50% of the working-age population to recover its pre-crisis real factor income level. From 2008 to 2012, a period during which the economy rebounded and crossed its pre-crisis output level, the bottom 50% of working-age adults experienced virtually no growth. Income started growing in 2013 but slowly, so that it only exceeded its December 2007 level in February 2017. The slow recovery of the working class is a robust feature of the Great Recession.20 It is not an artifact of population aging (since we restrict to the working-age population), but rather reflects the stagnation of wages at the bottom of the distribution (detailed in Section 6.2 below).
Second and by contrast, real factor income for the bottom 50% immediately and sharply rebounded after the Covid recession. By the time average income had recovered from the Great Recession, average income for the bottom 50% was still 10% below its pre-crisis income level and still five years away from a full recovery. By the time average income had recovered from the Covid-19 crisis, by contrast, the bottom 50% had almost recovered its pre-crisis income level and was booming. These results vividly illustrate the fact that a given trajectory of GDP growth is compatible with widely different market income dynamics for the working class, highlighting the usefulness of timely and disaggregated growth statistics.
Wage Growth After Recessions: Covid-19 vs. Great Recession
In the aftermath of the pandemic, the unemployment rate reached historically low levels (3.6% in March 2022), in a context of loose monetary policy (with interest rates of 0% until March 2022) and expansionary fiscal policy. Who benefited most from the tight labor market? To shed light on this issue we can use our micro-files to study the month-to-month dynamics of labor income. The main finding is that the Covid recovery was characterized by a reduction in wage inequality-a break from the trend prevailing since the early 1980s-due to strong wage growth at the bottom of the distribution.
Methodology to study labor income inequality. To establish this result, we analyze changes in labor income inequality in the working-age population (including non-workers) and compute growth rates of labor incomes by percentiles of labor income. 21 Our goal is not to characterize wage growth for a given worker or to fix the composition of the workforce; rather, we want to describe the evolution of the distribution of labor income as comprehensively as possible. The growth statistics by percentile we compute capture the effect of changes in both employment and wages. Our measure of labor income includes wages, supplements to wages and salaries (such as health insurance and retirement benefits) and the labor income of self-employed individuals (defined as 70% of self-employment income), before any tax or deduction for pension contributions. Conceptually, it corresponds to the total cost, for employers, of employing a worker. Our analysis, as always in this paper, is cross-sectional in nature: we do not follow individuals over time. 22 To provide the context required to interpret changes in the distribution of labor income, Figure 6a depicts the employment-to-working-age population ratio from January 2019 to May 2022. This rate was 78% before the Covid-19 pandemic, fell to 68% at the trough of the recession, and by May 2022 had returned to its pre-Covid level. 23
21 To better connect to the labor economics literature and because we have good measures of individual wages, for the analysis of labor income inequality we focus on individualized income series, i.e., we do not split income equally between married spouses. 22 We follow a long tradition in labor economics studying changes in cross-sectional wage inequality; see, e.g., [START_REF] Katz | Changes in Relative Wages, 1963-1987: Supply and Demand Factors[END_REF]. The main difference is that our statistics incorporate the entire working-age population including non-workers (as opposed to workers only). This approach allows one to comprehensively capture how government policies affect the labor market, including changes in the extensive margin. With our micro-files, it is also possible to focus on employed individuals only. We view both approaches as complementary.
23 Employment rates in our micro-files are higher by about 5 percentage points throughout because our microfiles match annual employment rates, while Figure 6a reports raw actual monthly employment rates (which are mechanically lower); see Section 4.1 for a discussion.
This means that although the composition of the workforce may not be the same in May 2022 as in January 2019, a comparison of the level of labor income in these two months is not confounded by changes in the level of employment, which facilitates the interpretation of labor income growth.
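These cross-sectional growth statistics can be computed from two monthly cross-sections along the following lines (a sketch; growth is only meaningful for percentiles with positive base-period income):

    import numpy as np

    def growth_by_percentile(income0, weights0, income1, weights1, n=100):
        # Average labor income by percentile of the working-age population
        # (including non-workers) in two cross-sections, and implied growth.
        def percentile_means(income, weights):
            income, weights = np.asarray(income, float), np.asarray(weights, float)
            order = np.argsort(income)
            income, weights = income[order], weights[order]
            cum = np.cumsum(weights) / weights.sum()
            bins = np.minimum((cum * n).astype(int), n - 1)
            tot = np.bincount(bins, weights=income * weights, minlength=n)
            pop = np.bincount(bins, weights=weights, minlength=n)
            return tot / pop
        m0 = percentile_means(income0, weights0)
        m1 = percentile_means(income1, weights1)
        with np.errstate(divide="ignore", invalid="ignore"):
            return m1 / m0 - 1  # undefined where base-period income is zero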
The decline of labor income inequality in 2019-2022. Figure 6b depicts the evolution of average labor income by labor income group from January 2019 to May 2022. Since the bottom quartile of the working-age population is mostly unemployed, we focus on the next three quartiles. We also report income for the top 1%, which earns a sizable fraction of total labor income and cannot be studied with available household surveys. Average labor income in each group is normalized to 100 in January 2019. We can see that from January 2019 to May 2022, it is at the bottom that labor income grew the fastest. Because the same number of adults was employed in both months, this growth does not reflect a jobs (i.e., quantity) effect; it reflects the fact that low-paying jobs paid more in May 2022 than in January 2019, by about 12% in real terms. Real labor income grew by about 5% to 10% for all groups, but growth was stronger at the bottom, highlighting the equalizing effects of tight labor markets.
One caveat is that there is heterogeneity at the top of the labor income distribution. While the top quartile experienced relatively little growth over 2019-2022, the top 1% grew almost as fast as the second quartile. Consistent with the updated [START_REF] Kopczuk | Earnings Inequality and Mobility in the United States: Evidence from Social Security Data since 1937[END_REF] statistics (based on Social Security records) constructed and analyzed by [START_REF] Mishel | Wage inequality continued to increase in 2020[END_REF], we find that the top 1% grew faster than other groups in 2020. Moreover, our real-time estimates suggest that the top 1% grew especially fast in the first half of 2021. Thus, although wage inequality fell from 2019 to mid-2022 within the bottom 95%, the share of labor income earned by the top 1% rose. 24

Analyzing month-to-month changes between January 2019 and May 2022, we can see that these dynamics largely reflect changes in employment. The rise in unemployment during Covid led to a drop in average labor income in all groups, particularly marked at the bottom. The recovery in the second quartile between the trough of the recession and May 2022 was primarily driven by job gains, although wage gains played a significant additional role. By the end of 2021 and spring of 2022 real labor income was stagnating or slightly falling for the third and fourth quartiles of the labor income distribution, while the second quartile kept rising.
Labor income inequality: Covid vs. Great Recession. The dynamics of labor income inequality observed during and after the Covid pandemic contrast sharply with the one observed during and after the Great Recession. Figure 7 depicts real labor income growth rates by vingtiles of the labor income distribution above the 25 th percentile, from the eve of the Covid recession (February 2020) to May 2022 (as of writing the latest available month, which turns out to have an employment rate nearly equal to that of February 2020), and from the eve of the Great Recession (December 2007) to July 2017 (the month when the employment rate recovered its December 2007 level). In both cases we capture a full employment cycle and labor income growth statistics are not confounded by changes in aggregate employment.
The Great Recession and ensuing recovery were characterized by some gains in the middle of the labor income distribution-and a stagnation at the bottom and at the top. The Covid cycle is the mirror image: large earnings gains at the bottom and top of the distribution-and a quasi-stagnation in the middle. More precisely, during the Covid cycle, labor income grew at annual rates of close to 3% at the bottom. By contrast, between December 2007 and July 2017, average labor income for the second quartile of the working-age population grew only 0.2% a year. By the time the employment rate had recovered its pre-Great Recession level (in July 2017), average earnings for low-wage workers were barely higher than a decade before.
Conversely, growth was strong from the 75 th to the 95 th percentile during the Great Recession, while these percentiles experienced almost no gains during the Covid cycle. Finally, the top 1% grew fast during Covid while it stagnated from 2007-a peak year for top labor incomes, which include stock options-to 2017.
The Effects of Government Intervention
Government intervention during recessions affects the level and distribution of disposable income, sometimes massively as during the Covid pandemic. In 2021, average real disposable income per adult in the United States was about 10% higher than in 2019 due to large government deficits.
The bottom 50% benefited the most from the increase in government spending. After accounting for taxes and cash and quasi-cash transfers, average disposable income for the bottom 50% was 20% higher in 2021 than in 2019. Figure 8 shows a step-by-step decomposition of this evolution. To facilitate the interpretation of the results, we focus on the working-age population (aged 20 to 64) and we always rank by factor income so that all figures for a given month refer to the same group. 25 The figure reveals the relative importance of the different government programs enacted during the pandemic.
In the early months of the crisis, the Paycheck Protection Program lifted incomes. But available evidence on the incidence of the program [START_REF] Autor | The $800 Billion Paycheck Protection Program: Where Did the Money Go and Why Did it Go There?[END_REF] implies that the effect is limited: by our estimates, the Paycheck Protection Program increased the average monthly income of the bottom 50% of working-age adults by about $100. It replaced about a fifth of the decline in factor income that occurred in the first months of the crisis for this group (from about $2,000 in February 2020 to about $1,500 in April and May 2020). Unemployment insurance, which was expanded during the crisis, had much larger effects, lifting average bottom 50% monthly income by about $800 in May, June, and July 2020, and by up to $400 a month through to the summer of 2021.
The three waves of Covid-relief payments (April 2020, January 2021, and March 2021) had massive but temporary effects on monthly income. Disposable monthly income for the bottom 50% peaked in March 2021 following the third payment, to reach $4,000-twice as much as before the pandemic ($2,000). For the bottom 50%, disposable income that month was twice as large as factor income (about $2,000). This gap between disposable and factor income was historically high: disposable income is usually close to factor income for the bottom 50% (that is, this group usually pays about as much in taxes as it receives in cash and quasi-cash transfers).
By the fall of 2021, disposable monthly income for the bottom 50% had declined to $2,400. The main reason why disposable income was higher for the working class at the end of 2021 than before the pandemic was the expanded child tax credit and the expanded earned income tax credit for adults with children. 26 At the beginning of 2022, disposable income fell as the expanded tax credits expired. The only reason why it remained higher than pre-Covid (by about 10%) is that factor income was higher-driven by the real wage gains documented above. In sum, government programs enacted during the pandemic led to a dramatic and unprecedented-but short-lived-improvement in living standards for the lower half of the income distribution. 27

25 Appendix Figure A3 shows disposable income in the full adult population (i.e., not restricting to working-age adults) ranking by factor income; the results are similar.
26 As in the national accounts, refundable tax credits-i.e., cash transfers administered through the tax system-are categorized as cash transfers (not negative taxes); thus the child tax credit and the earned income tax credits show up as "regular cash transfers" in Figure 8. 27 Figure A4 shows that the same conclusion holds true when looking at total posttax income, i.e., including Medicaid and Medicare, other in-kind government spending and collective consumption expenditures, and the government deficit.
Changes in Wealth Concentration
Last, we study the effect of the Covid-19 crisis on wealth inequality. We produce monthly and daily estimates of wealth levels across the distribution. Monthly estimates are obtained by rescaling quarterly Financial Accounts aggregates to their monthly value using real estate and equity indices, and daily estimates by updating equity values using daily stock indices. 28 This makes it possible to track changes in wealth inequality during periods of turmoil in asset markets, which is valuable to simulate wealth effects on consumption in real time and to improve the analysis and management of the business cycle.
Figure 9 shows the monthly dynamic of wealth across the distribution from July 2019 to May 2022, with projections to June 20, 2022 using our daily methodology. Two distinct phases can be observed. First, until the end of 2021, wealth grew strongly for all groups and wealth concentration rose. From the end of 2019 to the end of 2021, average real wealth per adult grew 26%, primarily due to the rise in asset prices, both in housing and equity markets. For the top 10% the increase was 27% and for the top 0.1% it reached 34%. The share of wealth owned by the top 0.1% adults increased 1.2 points from the end of 2019 to the end of 2021, to reach 18.8%-the highest level recorded in the post-World War II era. Wealth gains were then partly erased by the decline in stock prices in the first half of 2022. In a context of still rising real estate prices, this decline affected the wealthiest disproportionately, leading to a fall in top wealth shares. More than half of the wealth gains made in 2020 and 2021 by the top 0.1% were erased in the first half of 2022. By June 2022, the top 0.1% wealth share was back to its pre-Covid level.
Our findings on the dynamics of wealth inequality are consistent with other existing evidence.
The Federal Reserve Distributional Financial Accounts (DFA) show similar patterns, both for the Covid-19 crisis and over the long run. If anything, the DFA suggest an even stronger increase in wealth concentration since 1989. From the third quarter of 1989 (the start of the DFA) to the first quarter of 2022, the top 1% wealth share grew 8.3 points according to the Federal Reserve (vs. +6.9 points in our series over the same period of time); the share of the next 9% was roughly stable (-0.9 point in our series); the middle 40% lost 7.4 points (5.2 points in our series); and the bottom 50% lost 0.9 point (0.8 point in our series).
Given that these two series rely on different distributional sources (a triennial survey of about 6,000 families in the case of the DFA, annual tax data in our case), this similarity suggests that the rise of wealth concentration in the United States since the 1980s is highly robust. 29 In both the DFA and our series, the top 1% wealth share was at its highest recorded level at the end of 2021 and fell by a few percentage points in the first quarter of 2022.
Racial Economic Disparities
The statistical match between survey and tax data implemented in this paper allows us to study the real-time dynamics of racial income disparities, in particular whether Black and white households recovered at the same pace after the Covid-19 recession.
A Comprehensive Measure of the Black-white Income Gap
To provide context for the analysis of the Covid-19 pandemic, we start by describing the medium-run dynamics of the Black-white income gap. Although a large literature studies racial income disparities (e.g., [START_REF] Bayer | Divergent Paths: A New Perspective on Earnings Differences Between Black and White Men Since 1940[END_REF]; Chetty et al., 2020; [START_REF] Derenoncourt | Minimum Wages and Racial Inequality[END_REF]), to date there is no estimate of how average national income-the broadest notion of income-differs for Black vs. white Americans. Due to data limitations (the lack of information on race in tax data and the poor coverage of capital income in household surveys, in particular), most existing statistics focus on earnings or some measure of disposable income. Our approach, by contrast, allows us for the first time to provide a comprehensive measure of the Black-white income gap. Concretely, in 2021 average national income per adult in the United States was around $79,000; with our files we can ask: How does this number differ across racial groups?
Figure 10 shows the average pretax national income of Black adults relative to white adults.
On average Black Americans earn half of what white Americans do: $48,000 in 2021 vs. $95,000. This gap is significantly larger than the Black-white earnings gap that is the traditional focus of the literature. As Figure 10 shows, Black Americans (including nonworkers) aged 20 to 64 on average earn 65% of what working-age white Americans do. 30 The Black-white national income gap is even larger because racial disparities in capital income are larger than disparities in labor income: on average Black adults earn only about 20% of the average capital income of white adults. This gap itself primarily reflects the major disparities in wealth reported in Figure 10 and recently studied in, e.g., [START_REF] Derenoncourt | Wealth of Two Nations: The U.S. racial wealth gap, 1860-2020[END_REF]. 31 Property ownership remains much more unequal than labor market incomes and this inequality is a key contributor to the persistent racial income disparities that characterize the United States. 32 Capital income is even more unequally distributed than wealth because of differences in yields, in turn coming from differences in asset composition. Relative to white households, a greater fraction of the wealth of Black households is in relatively low-yield assets, namely housing and pensions. Business assets and corporate equity, which have a higher yield, are more concentrated among white households.

29 As detailed in Saez and Zucman (2020), top wealth shares, although they exhibit the same trend, are lower throughout in level in the DFA than in [START_REF] Saez | Wealth Inequality in the United States since 1913: Evidence from Capitalized Income Tax Data[END_REF]; e.g., the top 1% wealth share rises from 23.5% in 1989Q3 to 31.8% in 2022Q1 in the DFA (vs. 28.6% to 35.5% in our series). This is because in contrast to [START_REF] Saez | Wealth Inequality in the United States since 1913: Evidence from Capitalized Income Tax Data[END_REF] and this paper, the DFA include unfunded defined benefit pensions and vehicles in wealth, both of which are relatively equally distributed. Once the same definition of wealth is used, the level, trend, and composition of top wealth shares are nearly identical in the two projects (see, e.g., Saez and Zucman, 2020, Figure 1). Similarly, the estimates by [START_REF] Smith | Top Wealth in America: New Estimates and Implications for Taxing the Rich[END_REF] show a rise in top wealth shares close to the one we obtain and, once the definition of wealth is harmonized, similar levels (see [START_REF] Saez | Comments on Smith, Zidar and Zwick[END_REF]).

30 Restricting to workers, the Black-white earnings gap is lower (Black workers earn about 75% of what white workers do). As shown by [START_REF] Bayer | Divergent Paths: A New Perspective on Earnings Differences Between Black and White Men Since 1940[END_REF], taking nonworkers into account is critical to analyze the dynamics of the Black-white earnings gap, due to differential trends in employment.
Figure 10 also shows that there has been no reduction in the Black-white income gap since the late 1980s. If anything, racial disparities are slightly higher in 2022 than in 1989. Inequality increased from 1989 to 2013; it then started falling in the mid-2010s, in both cases driven by changes in labor income disparities. The recent decline in inequality was not enough to offset the previous increase, so that the average pretax income of Black adults relative to white adults remains lower in 2022 than in 1989.
Racial Disparities Over the Business Cycle
Turning to the recent dynamic, Figure 11 contrasts income growth at the quarterly frequency during the Great Recession and its aftermath and during the Covid crisis. During the Great Recession, the average income of Black people experienced a prolonged decline of 4 years. In the third quarter of 2011, it was 10% lower than on the eve of the Great Recession, while average income for white working-age adults had already fully recovered. This year corresponds to the peak in Black-white income disparities over the last three decades. Average Black income then started recovering, first at the same pace as for whites, then faster after 2015, leading to a decline in the Black-white income gap that continued until the Covid pandemic hit.

31 The Black-white wealth gap displayed in Figure 10 is close to the one in the Federal Reserve Distributional Financial Accounts, the most comparable statistics. In 2019 the average wealth of Black households is 22% of the average wealth of white households in the DFA vs. 25% in Figure 10. The difference is due to the unit of observation (households in DFA vs. adult individuals in Figure 10). Because of differences in household size, racial wealth disparities are slightly larger at the household level. Using our microfiles one can also study racial wealth disparities at the household level; results are identical to the DFA. In the Survey of Consumer Finances (used, e.g., by [START_REF] Derenoncourt | Wealth of Two Nations: The U.S. racial wealth gap, 1860-2020[END_REF]), racial wealth disparities are higher than in the DFA and our series because of the different wealth totals (in particular the higher total for private business wealth, whose ownership is concentrated among white households). Trends are similar in all series.

32 Similarly, Figure A5 shows that even though Black individuals account for 12% of the entire adult population, they account for less than 8% of the top 10% of the wage distribution and 4.5% of the top 10% of the wealth distribution.
Our estimates could be further refined by incorporating real-time private sector data (Chetty et al., 2020), by leveraging internal administrative data, or by collecting new administrative data.
A number of potential data improvements are worth highlighting. First, high-frequency national accounts totals are-like all important economic statistics-still a work in progress and could be refined. It would be valuable to develop processes to remove the statistical discrepancy between GDI and GDP, i.e., to systematically reconcile the income and expenditure approach. This would allow BEA to produce a single unified estimate of quarterly growth, as many countries do. Progress in that direction could be facilitated by a more systematic exploitation of high-frequency data on corporate profits, such as listed companies' quarterly earnings statements. 33 The national accounts could also be improved by reporting separate profits estimates for public vs. private companies (or C-corporations vs. S-corporations). Because profits of private companies tend to be more concentrated towards the top of the income distribution, this additional breakdown would improve the accuracy of distributional estimates. 34 Last, government agencies could produce additional high-frequency inequality estimates. Most importantly, the Bureau of Labor Statistics could compute a quarterly individual-level wage distribution using the administrative unemployment insurance micro-data that underlie the QCEW. We view our real-time statistics as a prototype which we hope will be refined, enriched, and eventually incorporated into official national account statistics.
33 Public companies must submit quarterly earnings to the Securities and Exchange Commission within 40 days. If this delay was reduced to less than 30 days, these data could be used by BEA as a key input to form an estimate of corporate profits within a month of the end of each quarter. In turn, this would make it possible for BEA to simultaneously estimate growth from both the income and expenditure approach, making it easier to integrate data from these two approaches into a single number.
34 [START_REF] Krakower | Prototype NIPA Estimates of Profits for S Corporations[END_REF] present prototype NIPA estimates of profits for S-corporations, but there are no quarterly estimates to date. The Census Bureau Quarterly Financial Report-which serves as a key input for the estimation of quarterly profits by BEA-could include tabulations for public vs. private firms separately.

Notes to Figure 1: This figure compares the wage distributions in the annual distributional national accounts of Piketty, Saez and Zucman (2018, updated), which are based on public tax micro-data and Social Security tabulations, and those obtained in our monthly micro-files using the QCEW and the CPS as described in Section 4.2. Each panel depicts the share of total wage income earned by a specific group (bottom 50%, middle 40%, next 9%, and top 1%) of individuals with positive wage income. Wages are individualized (they are not equally split between married spouses). The monthly series estimated from the QCEW and CPS track the annual micro-data closely for all groups, including the top 1% which is not measured well in traditional survey data.

Notes to Figure 2: This figure compares the volatility of the size of capital income components (corporate profits, interest, rental income, proprietor's income), as measured by their share of national income, and the volatility of their concentration, as measured by the share of each component going to the top 10% of the pretax income distribution. For example the top left panel shows the share of pretax corporate profits in national income and the share of pretax corporate profits accruing to the 10% of adults at the top of the pretax income distribution, from 1976 to 2019. All series are depicted relative to a base 100 in 1976 to compare volatility. The figure shows that the size of capital income components can be highly volatile at high-frequency while their concentration is slow-moving. This lends support to our methodology which captures high-frequency changes in aggregate capital income and assumes that their concentration is unchanged in the short run.
Notes to Figure 3: This figure compares predicted to actual growth in average real factor income per adult (with income equally split among married spouses) for the bottom 50% (top panels) and the next 40% (bottom panels). Growth is computed from year t to t + 1 (left panels) and from t to t + 2 (right panels) for each year t from 1975 to 2018 (2017 in the right panels). Actual growth is obtained using the annual distributional national account micro-data for both years t and t + 1 (or t + 2). Predicted growth is obtained using the annual micro-data for year t but the projected micro-data using our methodology for t + 1 (or t + 2). Years of significant tax reforms (which can generate income shifting) are shown in red. Years t + 1 (or t + 2) with significant prediction errors are labelled. Overall, the dots align well with the 45-degree line depicted on the graphs: our methodology accurately predicts growth at the bottom and in the middle of the distribution.
Notes to Figure 4: This figure compares predicted to actual growth in average real factor income per adult (with income equally split among married spouses) for the top 1% (top panels) and the next 9% (bottom panels). Growth is computed from year t to t + 1 (left panels) and from t to t + 2 (right panels) for each year t from 1975 to 2018 (2017 in the right panels). Actual growth is obtained using the annual distributional national account micro-data for both years t and t + 1 (or t + 2). Predicted growth is obtained using the annual micro-data for year t but the projected micro-data using our methodology for t + 1 (or t + 2). Years of significant tax reforms (which can generate income shifting) are shown in red. Years t + 1 (or t + 2) with significant prediction errors are labelled. Overall, the dots align well with the 45-degree line depicted on the graphs: our methodology accurately predicts the growth of top incomes.

Notes to Figure 8: This figure decomposes the average real monthly income of the bottom 50% from July 2019 to May 2022. We restrict to the working-age population (aged 20 to 64). Individual adults are ranked by their factor income, and income is equally split between married spouses. The figure reveals the relative importance of the different government programs enacted during the Covid-19 pandemic, most importantly the three waves of Covid-relief payments (April 2020, January 2021, and March 2021), the expansion of unemployment insurance, the expansion of refundable tax credits (EITC and child tax credit), and the Paycheck Protection Program. By the beginning of 2022 all of these programs had expired, and the only reason why average bottom 50% disposable income remained higher than pre-Covid (by about 10%) was the higher level of factor income.

Notes to Table 2: This table reports statistics for goodness of fit and noise of our 1-year ahead real income and real wealth growth predictions. "Std. dev." is the standard deviation of observed 1-year growth. "Correct sign" is the fraction of years in which we correctly predict whether income (or wealth) is growing or falling. "Bias" is the mean difference between predicted and observed 1-year growth, "Std. Err" is the standard error of this mean, and "RMSE" is the root mean square error capturing total error (RMSE² = bias² + std. err.²). "All years" includes 44 observations (growth relative to the preceding year in 1976, 1977, ..., 2019). "All years excluding tax reforms" includes 34 observations (it excludes 1987, 1988, 1991, 1992, 1993, 2001, 2003, 2012, 2013). "Recessions" includes 12 observations, corresponding to recession years (1980, 1981, 1982, 1990, 1991, 2001, 2008, 2009) and their immediate aftermath (1983, 1992, 2002, 2010).
Appendix (for Online Publication)
A Link Between NIPA National Income Components and DINA Concepts
Our monthly micro-files distribute BEA's high-frequency national income accounts, starting from the annual distributional national accounts micro-files of [START_REF] Piketty | Distributional National Accounts: Methods and Estimates for the United States[END_REF]. These files are based on internationally harmonized guidelines [START_REF] Blanchet | Distributional National Accounts Guidelines: Methods and Concepts used in the World Inequality Database[END_REF], which themselves are based on the UN System of National Accounts and definitions of income components that maximize consistency with components of household wealth. The concepts used by the Bureau of Economic Analysis for the US national accounts are largely consistent with the System of National Accounts, but sometimes slightly differ. To clarify how the main variable in our micro-files relate to the headline aggregates of the official US national accounts, this Section provides a mapping of the main components of national income as published by BEA (henceforth NIPA) into the main components of factor national income in our distributional national accounts micro-files (henceforth DINA).
Recall that national income as published by BEA (NIPA Table 1.12) is decomposed into compensation of employees, proprietors' income, rental income of persons, corporate profits, net interest and misc. payments, taxes on production and imports less subsidies, net business transfer payments, and current surplus of government enterprises. The main differences with DINA are the following:
• In DINA, business transfer payments are allocated to corporate profits (for corporate transfers), to proprietors' income (for non-corporate businesses' transfers) and to rental income (for housing transfers).
• In DINA, the small current surplus of government enterprises is treated as a tax on production.
• In DINA, property taxes (business and real estate) are not treated as taxes on production but as direct taxes (like wealth taxes would be), hence allocated to corporate profits (for property taxes paid by corporations), proprietors' income (for property taxes paid by non-corporate businesses), and rental income (for residential property taxes).
• In the NIPAs, there are various imputations of interest income (e.g., dividends received by life-insurance companies; notional interest on underfunded pension plans) that are re-classified in DINA for consistency with household wealth.
As a result of these and other minor reclassifications to improve consistency with household wealth aggregates, NIPA national income concepts map onto DINA factor income concepts. In particular:

NIPA taxes on production and imports less subsidies (Table 1.12 line 19 - line 20) = DINA sales and excise taxes (fkprk + flprl) + DINA residential property taxes (proprestax) + DINA business property taxes (propbustax) - DINA subsidies on production and imports (fksubk + flsubl) - NIPA current surplus of government enterprises (Table 1.12 line 25)
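For instance, this identity can be inverted to recover DINA sales and excise taxes net of subsidies from published NIPA aggregates (a sketch; inputs are the NIPA Table 1.12 items and the DINA property tax series):

    def dina_net_sales_excise_taxes(nipa_taxes_less_subsidies,
                                    nipa_surplus_govt_enterprises,
                                    dina_residential_property_taxes,
                                    dina_business_property_taxes):
        # Rearranging the mapping above:
        #   (fkprk + flprl) - (fksubk + flsubl)
        #     = NIPA taxes on production and imports less subsidies
        #       - residential property taxes - business property taxes
        #       + current surplus of government enterprises
        return (nipa_taxes_less_subsidies
                - dina_residential_property_taxes
                - dina_business_property_taxes
                + nipa_surplus_govt_enterprises)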
B Real-Time Wealth Distributions
To construct quarterly wealth distributions, we start from the Piketty, Saez and Zucman (2018) micro-files, last updated in February 2022,36 and re-scale the main components of household wealth to their end-of-quarter value, using the latest quarterly release of the Financial Accounts.
The wealth components we consider are, on the asset side, tenant-occupied housing, owner-occupied housing, S-corporation equity, C-corporation equity, equity in non-corporate businesses, fixed-income assets, and pension assets; and on the liability side, tenant-occupied mortgages, owner-occupied mortgages, and non-mortgage debt. The aggregate values of all of these components are published quarterly by the Federal Reserve in the Financial Accounts of the United States, around 70 days after the end of each quarter. Following [START_REF] Saez | Wealth Inequality in the United States since 1913: Evidence from Capitalized Income Tax Data[END_REF], our estimates exclude unfunded pensions (such as promises of future Social Security benefits), consumer durables (which are not assets in the System of National Accounts and thus excluded from wealth in other countries; see United Nations, 2009), and the assets and liabilities of non-profit institutions such as private foundations.
We then assume that distributions are stable within each of these components from one quarter to the other and compute the implied distribution of wealth. We use real-time Forbes data to adjust the wealth of the top 400 tax units so that it matches the Forbes estimate at the end of each quarter. Thus our estimates of wealth inequality by construction match Forbes at the very top, like the annual estimates constructed in Saez and Zucman (2020b) and the SCF-year estimates of [START_REF] Batty | Introducing the Distributional Financial Accounts of the United States[END_REF]. The Forbes estimates contain valuable information on high-frequency wealth dynamics at the top-end because they combine public information on ownership of stock in listed companies (from mandatory Securities and Exchange Commission filings) with daily stock price changes for these companies. Close to half of the wealth of the Forbes 400 is in public equity in recent years. The limitations of annual Forbes estimates (lack of public information on diversified portfolios and on debts, imperfect information on the value of private businesses) carry over to the real-time Forbes estimates.
Additionally, we construct monthly and daily wealth distributions as follows. Starting from the quarterly wealth totals by components described above, we use housing and equity price indices to update total housing wealth and total equity wealth at the monthly frequency. We do so using the [START_REF] Denton | Adjustment of Monthly or Quarterly Series to Annual Totals: An Approach Based on Quadratic Minimization[END_REF] method for available quarters, and extrapolating using the indexes' growth rate in the most recent months, before the last quarter becomes available. The stock market index that we use is the Wilshire 5000, a comprehensive index of the stock performance of publicly traded U.S. firms. For housing prices, we use the Case-Shiller index, extrapolated using the Zillow house price index in the most recent months. For daily estimates, we simply update total equity (including S-corporations equity) using the daily Wilshire 5000. For both monthly and daily estimates, we keep distributions constant within components and rescale the wealth of the top 400 tax units to match real-time Forbes numbers.
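A bare-bones version of the daily updating step is sketched below (Python; the component column names are illustrative and the Forbes-400 adjustment is omitted):

    import pandas as pd

    def daily_wealth(components: pd.DataFrame, wilshire_ratio: float) -> pd.Series:
        # components: one row per observation with quarter-end values of the wealth
        # components listed above (illustrative column names).
        # wilshire_ratio: Wilshire 5000 on the target day divided by its quarter-end value.
        equity = ["c_corp_equity", "s_corp_equity"]
        other_assets = ["housing_owner", "housing_tenant", "noncorp_business",
                        "fixed_income", "pensions"]
        liabilities = ["mortgage_owner", "mortgage_tenant", "nonmortgage_debt"]
        wealth = components[equity].sum(axis=1) * wilshire_ratio
        wealth += components[other_assets].sum(axis=1)
        wealth -= components[liabilities].sum(axis=1)
        return wealth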
C Structure of Programs and Files
This section describes the overall structure of the program files used in this paper. The programs (as well as a more detailed and up-to-date version of the following description and instructions) are available online at https://github.com/thomasblanchet/real-time-inequality.
C.1 Description of programs/code
• The folder raw-data contains the raw input data, primarily in cases where direct download/scraping is not possible or not justified, or in cases where data files are heavy (like the QCEW) and therefore downloading them over the internet every time is not desirable.
• The folder work-data contains intermediary data files that are produced by the code. It is divided into subfolders corresponding to each code file, and no intermediary data file may be changed by two distinct code files.
• The folder graphs contains all the figures (and a few tables) generated by the code.
It is divided into subfolders corresponding to each code file.
• The folder programs contains the codes (except those performing the optimal transport).
-The codes named programs/01-* handle the retrieval of the raw data, either directly from the internet or from the folder raw-data.
-The codes named programs/02-* handle preliminary treatments of the data.
-The codes named programs/03-* produce the synthetic microfiles and related outputs.
-The codes named programs/04-* produce the figures and tables used for the analysis.
• The folder transport contains the code and data specifically related to the optimal transport: it is meant to run separately from the main code on a computing cluster.
C.1.1 License for Code
The code is licensed under the Modified BSD License.
C.2 Instructions for Replication
• Edit the $root global in programs/00-setup.do to correspond to the project's directory.
• Run the file programs/00-run.do.
• To also run the transport, run programs until programs/02-export-transport-dina.do and then execute the Python code under transport/transport.py preferably using Slurm and the Shell script transport/transport.sh. Then resume the execution of programs/00-run.do.
C.2.1 Details
• programs/01-* -The codes retrieve the data from the internet directly to the extent that it is possible.
-Unless there have been changes in the structure of the data, they should run without any change for each update.
-In some cases, the data needs to be manually updated in the raw-data folder at each update.
-Instructions for each file are included in 00-run.do.
• programs/02-* -The codes primarily generate data in the work-data folder that is used to generate the synthetic microfiles.
• transport -This folder includes the data and code necessary for the optimal transport.
-These codes are meant to run on the computing cluster.
-They do not need to be updated every time (only when new tax microdata is available).
-The CSV data files included in this folder are produced by the codes before.
• programs/03-* -The codes in that folder produce the synthetic microfiles, including backtesting versions of the microfiles that use older tax data, and rescaling versions that only use information on macro aggregates.
-The globals $date begin and $date end at the beginning of these files can be used to generate only the files for specific months. This can be useful since not all the files need to be constructed for every update.
-Codes in that section also produce the aggregated version of the database by group that is used for the website http://realtimeinequality.org/. These files are stored in the folder website.
• programs/04-* -Use the microfiles and related outputs to create the tables and figures included in the paper (see below).
C.3 Description of Microfiles and Codebook
One output of this paper is a set of harmonized monthly micro-files in which an observation is a synthetic adult (obtained by statistically matching public micro-data) and variables include income, wealth, and their components. These variables add up to their respective national accounts totals and their distributions are consistent with those observed in the raw input data. With these micro-files one can reproduce all the results of the paper and the statistics shown at https://realtimeinequality.org exactly. The files are available here.37 There is one file per month starting in January 1976. The files are at the adult individual (aged 20 and above) level, so the sum of weights (variable weight) adds up to the total adult population, 248.9 million in May 2022. The variable id identifies a household (as defined by the tax code, i.e., either a single person aged 20 or above or a married couple, in both cases with children dependents if any). Income is individualized. To compute our benchmark equal-split adult statistics, compute average income or wealth by id. To compute statistics at the household level, take the sum of income or wealth by id and the average weight by id.
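For example, benchmark statistics can be computed from one monthly file as follows (a sketch; the file name and format are illustrative, and princ denotes factor national income as described in the codebook below):

    import pandas as pd

    df = pd.read_csv("microfile_2022m05.csv")  # one monthly micro-file (name/format illustrative)

    # Equal-split adults: average income within each tax unit (id), weighted by adult
    df["princ_split"] = df.groupby("id")["princ"].transform("mean")
    equal_split_mean = (df["princ_split"] * df["weight"]).sum() / df["weight"].sum()

    # Household level: sum income within id and use the unit's average weight
    hh = df.groupby("id").agg(princ=("princ", "sum"), weight=("weight", "mean"))
    household_mean = (hh["princ"] * hh["weight"]).sum() / hh["weight"].sum()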
The files include socio-demographic information: age, sex, marital status (married), race, educational attainment (educ), and the following income and wealth variables:
• princ: factor national income
Notes to Figure A2: This figure compares the quality of our baseline estimates with the quality of simpler estimates that only rescale macroeconomic aggregates without adjusting within-component distributions. This figure depicts predicted to actual growth in average real factor income per adult (with income equally split among married spouses) from year t to t + 1 for the bottom 50% (left panel) and the top 1% (right panel) for each year from 1976 to 2019. Actual growth is obtained using the annual distributional national account micro-data for both years t and t + 1. Predicted growth is obtained using the annual micro-data for year t but the projected micro-data using our full methodology for t + 1 (blue full dots) or a simplified methodology that only rescales macroeconomic aggregates without adjusting within-component distributions (red empty dots). The simplified methodology performs worse, especially in recession years for the bottom 50%, showing that adjusting distributions is critical to accurately project real-time inequality during recessions.

Notes to Figure A3: This figure shows the monthly evolution of real disposable income per adult from July 2019 to March 2022 in the full adult population (not restricting to working-age adults). Individual adults are ranked by their factor income, and income is equally split between married spouses. The figure shows that for the bottom 50% of the factor income distribution, monthly disposable income was nearly twice as large in March 2021 as in July 2019 (as a result of the third wave of Economic Impact Payments). By the spring of 2022, disposable income had returned to its pre-Covid level for all groups except for the bottom 50% for which it was about 15% higher.
As shown by Figure 1, in those years the QCEW predicts large gains in the top 1% wage share but fails to capture the full magnitude of the rise in this top share. One possible interpretation is that the Tax Reform Act of 1986, which reduced the top marginal income tax rate from 50% in 1986 to 28% in 1988, led to an immediate and across-the-board increase in top-end wages and bonuses for executives within industries × counties (in addition to gains in specific high-paying industries × counties, which are by construction captured by our methodology). More broadly, it can be challenging to predict the growth of top incomes in years of significant tax reforms, which in addition to real responses can generate avoidance responses, such as inter-temporal income shifting. As shown by Figure 4, the prediction errors in top 1% growth are concentrated during tax reform years (although not all tax reform years lead to errors). In non-tax-reform years our methodology delivers accurate predictions for the top 1%.
6.1 The Dynamics of Factor Income During the Covid-19 Recession

Dynamics across the income distribution. The Covid-19 pandemic led to a dramatic collapse in average national income. Between February 2020 (the last month before the recession) and April 2020 (its trough), annualized real national income per adult fell 15%. Average income then rebounded sharply. But as Figure 5a shows, the fall and recovery were uneven. The economic downturn caused by the pandemic led to the strongest factor income decline for the working class (-33% for the bottom 50% between February 2020 and April 2020) and, to a lesser extent, for the top one percent (-19%) due to the collapse of business profits, a key source of income at the top. The crisis affected the middle class and upper-middle class relatively less, because individuals in these groups were more likely to remain employed.
List of figures:
Figure 1: Wage Distributions: Tax Data vs. Estimates Based on QCEW & CPS
Figure 2: Capital Income Volatility
Figure 3: Actual vs. Predicted Growth at the Bottom
Figure 4: Actual vs. Predicted Growth at the Top
Figure 5: Factor Income: Covid-19 vs. Great Recession
Figure 6: Employment and Wage Growth During Covid
Figure 7: Earnings Growth Across the Distribution: Covid-19 vs. Great Recession
Figure 8: Income of the Bottom 50% during the Covid Crisis
Figure 9: Wealth Growth During and After Covid
Figure 10: Black-White Economic Disparities
Figure 11: Income Dynamics by Race: Covid Recession vs. Great Recession
Figure A1: GDP vs. GDI Growth
Figure A2: Predicted Growth: Our Method vs. Simplified Macro Method
Figure A3: Real Disposable Income Around the Covid-19 Pandemic
Figure A5: Share of Black Adults in the Full Population vs. Top 10%
Figure A6: College Premium
Bias in annual growth is limited to a few percentage points, which is reasonable considering the sample sizes involved. Remarkably, our methodology is unbiased for the top 1% even though it does not rely on any tax data. Standard errors for top 1% predictions fall when excluding tax-reform years; for other groups (where avoidance possibilities and incentives to avoid are more limited), including tax reforms does not make a difference. Second, these results carry over to recessions: our methodology is highly predictive of income dynamics during downturns and the ensuing recoveries. We under-estimate disposable and posttax income growth for the bottom 50% during past recessions, but this is because we do not attempt to incorporate the creation of new government transfers during past recessions (e.g., $600 individual tax credits in 2008) in our simulated micro-files. Post-2019 monthly files, by contrast, carefully incorporate all new government programs (as detailed in Section 4.3), maximizing the accuracy of disposable and posttax income predictions at the bottom during and after the Covid-19 pandemic.

6 Inequality During the Covid-19 Pandemic
Notes to Figure 7: This figure shows the annualized growth rate of real labor income by vingtile of the working-age population (with a zoom on the top 1%), from the eve of the Covid recession in February 2020 to May 2022, and from the eve of the Great Recession in December 2007 to July 2017. In both cases, we capture a full employment cycle (i.e., July 2017 is the month when the employment rate had returned to its pre-Great Recession level; and by May 2022 the employment rate had nearly returned to its pre-Covid level). Labor income is individualized (i.e., not equally split between married spouses) and includes wages, supplements to wages and salaries, and 70% of self-employment income. We include all working-age adults (aged 20 to 64), including non-workers. The graph starts at the 25th percentile since the bottom quartile of working-age adults is mostly unemployed. The figure shows that the bottom (and top) of the labor income distribution experienced fast growth from the beginning of 2020 to May 2022, in contrast with the recovery from the Great Recession.
Table 2: Prediction Errors for Growth Rates of Income and Wealth
Columns: Concept | Bracket | Std. Dev. | All years (Correct sign, RMSE, Bias, Std. Err.) | Excl. tax reforms (Correct sign, RMSE, Bias, Std. Err.) | Recessions (Correct sign, RMSE, Bias, Std. Err.)
Bottom 50% 4.0 pp. 82% 2.1 pp. -0.7 pp. 2.0 pp. 85% 2.0 pp. -0.9 pp. 1.8 pp. 92% 1.7 pp. -0.4 pp. 1.7 pp.
Factor Income Middle 40% 1.4 pp. 82% 0.9 pp. -0.3 pp. 0.9 pp. 91% 0.8 pp. -0.2 pp. 0.7 pp. 83% 1.2 pp. -1.0 pp. 0.7 pp.
Factor Income Next 9% 1.8 pp. 93% 1.1 pp. -0.7 pp. 0.9 pp. 100% 1.1 pp. -0.8 pp. 0.8 pp. 83% 1.1 pp. -0.6 pp. 0.9 pp.
Top 1% 6.0 pp. 86% 3.4 pp. -0.2 pp. 3.4 pp. 88% 2.6 pp. -0.2 pp. 2.6 pp. 92% 3.6 pp. 1.4 pp. 3.3 pp.
Bottom 50% 2.9 pp. 75% 2.0 pp. -1.0 pp. 1.7 pp. 76% 2.0 pp. -1.0 pp. 1.7 pp. 83% 1.8 pp. -1.3 pp. 1.3 pp.
Pretax Income Middle 40% 1.5 pp. 82% 1.0 pp. -0.2 pp. 0.9 pp. 91% 0.8 pp. -0.2 pp. 0.7 pp. 83% 1.1 pp. -0.9 pp. 0.7 pp.
Pretax Income Next 9% 2.3 pp. 91% 1.1 pp. -0.7 pp. 0.9 pp. 100% 1.1 pp. -0.8 pp. 0.8 pp. 75% 1.0 pp. -0.3 pp. 0.9 pp.
Top 1% 6.2 pp. 89% 3.4 pp. -0.1 pp. 3.4 pp. 91% 2.6 pp. -0.2 pp. 2.6 pp. 92% 3.7 pp. 1.4 pp. 3.4 pp.
Bottom 50% 2.4 pp. 73% 2.5 pp. -1.6 pp. 1.9 pp. 71% 2.3 pp. -1.4 pp. 1.8 pp. 75% 3.3 pp. -2.8 pp. 1.8 pp.
Disposable Income Middle 40% 1.4 pp. 86% 0.9 pp. -0.2 pp. 0.9 pp. 91% 0.8 pp. -0.2 pp. 0.8 pp. 83% 0.8 pp. -0.5 pp. 0.6 pp.
Disposable Income Next 9% 2.2 pp. 89% 1.4 pp. -0.6 pp. 1.3 pp. 85% 1.4 pp. -0.7 pp. 1.1 pp. 100% 1.1 pp. 0.2 pp. 1.1 pp.
Top 1% 6.4 pp. 89% 4.0 pp. 0.7 pp. 4.0 pp. 91% 3.2 pp. 0.6 pp. 3.1 pp. 83% 4.5 pp. 2.1 pp. 3.9 pp.
Bottom 50% 2.4 pp. 77% 1.9 pp. -1.2 pp. 1.5 pp. 79% 1.8 pp. -1.0 pp. 1.5 pp. 83% 2.5 pp. -2.0 pp. 1.4 pp.
Post-tax Income Middle 40% 1.7 pp. 95% 0.8 pp. -0.2 pp. 0.8 pp. 97% 0.7 pp. -0.2 pp. 0.7 pp. 100% 0.7 pp. -0.5 pp. 0.5 pp.
Post-tax Income Next 9% 2.6 pp. 89% 1.3 pp. -0.6 pp. 1.1 pp. 97% 1.2 pp. -0.7 pp. 1.0 pp. 75% 0.9 pp. 0.0 pp. 0.9 pp.
Top 1% 6.7 pp. 84% 3.9 pp. 0.3 pp. 3.9 pp. 88% 2.9 pp. 0.2 pp. 2.9 pp. 67% 4.1 pp. 1.4 pp. 3.8 pp.
Middle 40% 4.9 pp. 82% 1.8 pp. 0.0 pp. 1.8 pp. 88% 1.7 pp. -0.2 pp. 1.7 pp. 75% 1.4 pp. 0.1 pp. 1.4 pp.
Wealth Next 9% 3.9 pp. 95% 1.2 pp. -0.2 pp. 1.2 pp. 97% 1.1 pp. -0.1 pp. 1.0 pp. 100% 1.4 pp. -0.3 pp. 1.4 pp.
Top 1% 5.8 pp. 82% 2.8 pp. -1.5 pp. 2.4 pp. 82% 2.5 pp. -1.4 pp. 2.1 pp. 75% 2.8 pp. -1.4 pp. 2.5 pp.
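As a rough illustration of how the goodness-of-fit statistics in Table 2 can be computed, the following sketch derives the share of correctly signed predictions, the RMSE, the bias, and the standard error from two aligned series of actual and predicted annual growth rates. The numbers are made up and the variable names are ours, not the paper's.

```python
# Minimal sketch, assuming two aligned arrays of annual growth rates (in percentage
# points) for a given income concept and bracket; the example numbers are illustrative.
import numpy as np

actual = np.array([2.1, -1.5, 3.0, 0.8, -0.2, 4.1])     # actual growth, pp.
predicted = np.array([1.8, -1.1, 2.6, 1.0, 0.3, 3.7])    # projected growth, pp.

errors = predicted - actual
correct_sign = np.mean(np.sign(predicted) == np.sign(actual))   # share of years with correct sign
rmse = np.sqrt(np.mean(errors ** 2))                             # root mean squared error
bias = errors.mean()                                             # mean error
std_err = np.sqrt(np.mean((errors - bias) ** 2))                 # dispersion of errors around the bias

print(f"Correct sign: {correct_sign:.0%}, RMSE: {rmse:.1f} pp., "
      f"bias: {bias:.1f} pp., std. err.: {std_err:.1f} pp.")
```

Note that by construction RMSE² = bias² + std. err.², which is consistent with the figures reported in the table.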
• peinc: pretax national income
• poinc: posttax national income
• dispo: disposable income
• flemp: compensation of employees
• proprietors: proprietors' income
• rental: rental income
• profits: after-tax corporate profits
• corptax: corporate income tax
• fkfix: interest income
• govin: government interest income
• fknmo: non-mortgage interest payments
• prodtax: production taxes
• prodsub: production subsidies
• contrib: contributions to pensions, disability insurance, & unemployment
Table A1: Prediction Errors for Growth Rates of Income & Wealth (2 Years)
Notes: This table reports statistics for goodness of fit and noise of our 2-year-ahead real income and real wealth growth predictions. See notes to Table 2.
Columns: Concept | Bracket | Std. Dev. | All years (Correct sign, RMSE, Bias, Std. Err.) | Excl. tax reforms (Correct sign, RMSE, Bias, Std. Err.) | Recessions (Correct sign, RMSE, Bias, Std. Err.)
Bottom 50% 6.5 pp. 93% 2.4 pp. -0.6 pp. 2.3 pp. 94% 2.4 pp. -0.9 pp. 2.2 pp. 92% 1.4 pp. -0.4 pp. 1.4 pp.
Factor Income Middle 40% 2.3 pp. 93% 1.3 pp. -0.2 pp. 1.3 pp. 97% 1.1 pp. -0.1 pp. 1.1 pp. 83% 1.6 pp. -1.2 pp. 1.0 pp.
Factor Income Next 9% 3.0 pp. 98% 1.2 pp. -0.6 pp. 1.0 pp. 100% 1.1 pp. -0.6 pp. 0.9 pp. 92% 0.9 pp. -0.5 pp. 0.8 pp.
Top 1% 9.1 pp. 91% 5.1 pp. -1.0 pp. 5.0 pp. 94% 3.8 pp. -1.1 pp. 3.6 pp. 83% 4.8 pp. 1.5 pp. 4.5 pp.
Bottom 50% 4.9 pp. 91% 2.5 pp. -1.2 pp. 2.2 pp. 91% 2.5 pp. -1.1 pp. 2.2 pp. 92% 2.2 pp. -1.8 pp. 1.4 pp.
Pretax Income Middle 40% 2.4 pp. 95% 1.3 pp. -0.1 pp. 1.3 pp. 100% 1.1 pp. -0.1 pp. 1.1 pp. 83% 1.4 pp. -1.1 pp. 0.9 pp.
Pretax Income Next 9% 3.7 pp. 98% 1.2 pp. -0.4 pp. 1.1 pp. 97% 1.1 pp. -0.5 pp. 1.0 pp. 92% 0.7 pp. 0.1 pp. 0.7 pp.
Top 1% 9.5 pp. 88% 5.2 pp. -0.9 pp. 5.1 pp. 94% 3.8 pp. -1.1 pp. 3.7 pp. 75% 4.9 pp. 1.5 pp. 4.6 pp.
Bottom 50% 3.8 pp. 88% 3.5 pp. -2.2 pp. 2.7 pp. 91% 3.1 pp. -1.9 pp. 2.5 pp. 75% 4.9 pp. -4.2 pp. 2.5 pp.
Disposable Income Middle 40% 2.2 pp. 95% 1.3 pp. -0.2 pp. 1.2 pp. 94% 1.1 pp. -0.2 pp. 1.1 pp. 92% 1.1 pp. -0.5 pp. 1.0 pp.
Disposable Income Next 9% 3.3 pp. 93% 1.6 pp. -0.0 pp. 1.6 pp. 91% 1.4 pp. -0.2 pp. 1.4 pp. 83% 1.5 pp. 1.2 pp. 0.9 pp.
Top 1% 9.4 pp. 93% 6.4 pp. 0.4 pp. 6.4 pp. 97% 4.5 pp. 0.0 pp. 4.5 pp. 83% 6.8 pp. 1.9 pp. 6.5 pp.
Bottom 50% 3.8 pp. 88% 2.7 pp. -1.5 pp. 2.2 pp. 97% 2.3 pp. -1.2 pp. 2.0 pp. 92% 3.5 pp. -2.9 pp. 1.9 pp.
Post-tax Income Middle 40% 2.9 pp. 95% 1.2 pp. -0.2 pp. 1.2 pp. 100% 1.0 pp. -0.2 pp. 1.0 pp. 83% 0.9 pp. -0.5 pp. 0.8 pp.
Post-tax Income Next 9% 4.1 pp. 95% 1.4 pp. -0.2 pp. 1.3 pp. 94% 1.2 pp. -0.4 pp. 1.1 pp. 83% 1.0 pp. 0.7 pp. 0.7 pp.
Top 1% 10.0 pp. 91% 6.2 pp. -0.4 pp. 6.2 pp. 94% 4.3 pp. -0.7 pp. 4.3 pp. 83% 6.0 pp. 1.2 pp. 5.9 pp.
Middle 40% 8.2 pp. 88% 2.1 pp. 0.6 pp. 2.0 pp. 97% 1.8 pp. 0.1 pp. 1.8 pp. 92% 1.8 pp. 0.5 pp. 1.8 pp.
Wealth Next 9% 6.6 pp. 93% 1.9 pp. 0.2 pp. 1.9 pp. 97% 1.7 pp. 0.2 pp. 1.7 pp. 83% 2.3 pp. 0.1 pp. 2.3 pp.
Top 1% 9.9 pp. 93% 4.7 pp. -3.1 pp. 3.5 pp. 94% 4.1 pp. -2.7 pp. 3.1 pp. 83% 4.5 pp. -2.8 pp. 3.5 pp.
The Federal Reserve has published Distributional Financial Accounts since 2019, distributing aggregate household wealth quarterly (see Batty et al., "Introducing the Distributional Financial Accounts of the United States"). For income, the Bureau of Economic Analysis (BEA) distributes annual personal income and has explored the feasibility of higher-frequency statistics (see Fixler et al., "The Feasibility of a Quarterly Distribution of Personal Income"). We have greatly benefited from discussions with the Federal Reserve and BEA teams.
See https://www.atlantafed.org/chcs/wage-growth-tracker.
See Saez and Zucman (2020) for a discussion of the differences between these two concepts and their implications.
Detailed definitions are presented in Piketty, Saez, and Zucman (2018) and Saez and Zucman (2020) in the US context, and in the Distributional National Accounts guidelines (Alvaredo et al., 2020; Blanchet et al., 2021) in the international context.
Conceptually, GDP and gross domestic income (GDI), from which national income is derived by subtracting depreciation and adding net foreign income, are identical, but in practice they are estimated using largely independent sources in the United States and hence their growth can diverge; see Section 3.2 below.
In periods of crisis, posttax income-which includes government spending other than cash transfers but adds back the government deficit-can be lower than disposable income. This was the case in the second quarter of 2020, due to the massive federal deficits induced by the economic response to the Covid pandemic. Disposable income has two advantages relative to posttax income in this context. First, it does not require one to make (necessarily debatable) assumptions about who bears the burden of the government deficit. Second, it is more directly informative of the consumption possibilities of households and of the extent to which government policies manage to smooth them over the business cycle.
Appendix A provides a detailed mapping of these National Income and Product Accounts concepts to the variables used in our micro-files.
One-sided matching would not respect the distribution of the second dataset. One-sided matching without replacement is inefficient as match quality becomes very poor for the last observations matched.
In contrast, the traditional approach of starting from a core database (such as individual tax data in Piketty, Saez, and Zucman, 2018, or the CPS in Fixler et al., "A Consistent Data Series to Evaluate Growth and Inequality in the National Accounts") makes it cumbersome to add information from another database.
The state variable is not available for top earners in the public-use tax files.
Remaining seasonal variation (introduced by seasonality in employment numbers) is corrected by running the average wage for each percentile through the X11 procedure.
The only exception is that for simplicity, we do not attempt to specifically model the distribution of new government transfers created between t and t+1 (or t+2), while in our real-time methodology we carefully model the distribution of post-2019 new transfers (see Section 4.3). This means that for disposable and posttax income, our real-time methodology (post-2019) is likely to be more highly predictive than implied by our retrospective tests.
The 2-year forward prediction performs better by that metric because the bottom 50% often has little annual income growth during our sample period, so that in a few cases predicted growth is slightly negative when actual growth is barely positive (or vice-versa), a problem that attenuates when one considers growth over 2 years.
As already noted in footnote 18 for factor income, predictions are slightly worse for the bottom 50% because growth is often close to zero for that group in those decades. If one considers growth over 2 years, we correctly predict the sign of growth for the bottom 50% about 90% of the time, just like for other groups (Appendix Table A1).
The top fiscal income shares of Piketty and Saez (2003), which have been used to study the fraction of growth accruing to top income groups (see Saez, 2008, and subsequent updates), also revealed it.
Another caveat is that, as reported in Appendix Figure A6, we do not find a significant decline of the college labor income premium: in the working-age population, individuals with at least some college education have earned on average twice as much labor income as individuals without a college education since the late 2010s, with no trend. This number captures differences in both wages and employment by educational attainment. The total income college premium appears to be slightly falling since the late 2010s, driven by a reduction in the premium for capital income.
For housing wealth we use the quarterly Case-Shiller index, projected to the most recent month with the Zillow Home Value index. For equities, we use the Wilshire 5000 Total Market Index, the most extensive representative index of US public companies. We also adjust the wealth of the top 400 daily to match the real-time estimates published by Forbes. See Appendix B for complete details.
The mapping for compensation of employees, corporate profits, net interest, and taxes on production and imports less subsidies is exact. The mapping for proprietors' income and rental income is almost exact (with a discrepancy of less than 0.1% of national income).
The micro-files are updated annually. A description of each update is available at http://gabriel-zucman. eu/usdina, as are current micro-files, computer code, and tabulations of key findings. All vintage releases and corresponding code are also published at this address.
Full link in case hyperlinks break: https://www.dropbox.com/home/SaezZucman2014/RealTime/ repository/real-time-inequality/work-data/03-build-monthly-microfiles/microfiles.
Notes: This figure decomposes the average real monthly posttax national income of the bottom 50% of the working-age population ranked by factor income from July 2019 to May 2022. This is the same figure as Figure 8 but adding Medicaid and Medicare, other government spending, and the government deficit, so as to go all the way to posttax national income. See notes to Figure 8.
The pattern for Hispanics mirrors the one seen for Blacks, except that the recovery started earlier.
During Covid, racial disparities were less pronounced than during the Great Recession. The collapse in income in the second and third quarter of 2020 was similar for Black and white people; it was less marked for Hispanics. The different groups then recovered at roughly the same pace: by the first quarter of 2021 all had recovered their pre-crisis pretax income level.
We thus again see an illustration of the fact that the Covid-19 recovery was much more equal than the Great Recession recovery.
Appendix Figure A7 reports similar comparisons of average income for men vs. women in the two downturns and recoveries. In both cases income dropped more for men than for women. During the Great Recession, women recovered faster than men (3 years vs. more than 4 years), while during Covid both groups recovered simultaneously, in less than a year. We do not find evidence that the Covid recession had, by the end of 2021, exacerbated gender income inequality.
Conclusion
Macroeconomic growth statistics are not necessarily informative of how income grows for most social groups. Yet government statistics currently available in the United States do not make it possible to know who benefits from economic growth in a timely manner. Our paper attempts to address this gap by creating monthly income distributions, available within a few hours of the publication of official high-frequency national accounts aggregates. Our methodology, which we retrospectively test and successfully validate back to 1976, combines all publicly available high-frequency data in a unified framework.
Real-time distributional growth statistics could play a critical role in guiding stabilization policies during and in the aftermath of recessions. For example, following a recession, they could be used to estimate "distributional output gaps," that is the extent to which income remains below its pre-recession level or trend for the bottom 50% of the distribution, the next 40%, and the top 10%. Since our files incorporate all taxes and government transfers, they could be used to study whether fiscal policy enacted during a crisis mitigates income losses for the working class on a month-to-month basis. This project only uses publicly available datasets and our programs are available online at https://realtimeinequality.org, making it possible for interested users to examine all the aspects of our approach and refine it. Although our estimation procedure appears to deliver reliable results, our estimates could be improved by complementing the data we use with additional real-time data sources.

Data sources (appendix table):

Public-use tax microfiles (IRS). Notes: the last public-use microfile dates from 2014; for later years, we update the microdata using IRS tabulations of income.
Social Security Wage Statistics (Social Security Administration). Use: complementary data on the wage income distribution. Frequency: yearly; publication lag: 6 months. Notes: we use the wage income distribution from the SSA because it is better at capturing low wages.

Current Population Survey, ASEC (Census Bureau, via IPUMS). Use: integration of socio-demographic information into the tax data (bottom 95%). Frequency: yearly; publication lag: 6 months. Notes: we match the CPS to the tax data using optimal transport on detailed income variables.

Survey of Consumer Finances (Federal Reserve). Use: integration of socio-demographic information into the tax data (top 5%). Frequency: triennial; publication lag: 1 year. Notes: we match the SCF to the tax data using optimal transport on detailed income and wealth variables.

Intra-annual distribution adjustments

Quarterly Census of Employment and Wages, QCEW (Bureau of Labor Statistics, BLS). Use: estimation of the wage income distribution. Frequency: quarterly; publication lag: 5 months.
Notes: the wage data in the QCEW is quarterly but the employment data is monthly; we therefore treat the QCEW as a monthly dataset.

NIPA compensation of employees (Table 1.12 line 2) = DINA compensation of employees (flemp)
NIPA proprietors' income (Table 1.12 line 9) = DINA business asset income (fkbus) + DINA labor component of mixed income (flmil) -DINA business property taxes allocated to non-corporate businesses (non-corporate business share of propbustax) + NIPA rental income included in proprietors' income (Table 7.4.5 line 20) -NIPA net non-corporate business transfers paid (Table 1.12 line 21 -Table 1.14 line 10 -Table 7.4.5 line 19) -NIPA royalties (Table 7.9 line 7)
NIPA rental income of persons (Table 1.12 line 12) = DINA housing asset income (fkhou) -NIPA residential property taxes (Table 7.4.5 line 15 = proprestax) -NIPA mortgage interest payments (Table 7.4.5 line 18) -NIPA housing net current transfer payments (Table 7.4.5 line 19) -NIPA rental income included in proprietors' income (Table 7.4.5 line 20) + NIPA tenant-occupied rental income of nonprofits (Table 7.9 line 14) + NIPA royalties (Table 7.9 line 7)
NIPA corporate profits (Table 1.12 line 13) = DINA equity asset income (fkequ) + DINA equity income earned through pension plans (equity share of fkpen) -DINA business property taxes allocated to corporations (corporate share of propbustax) + NIPA dividends received by government (Table 3.1 line 14) + NIPA dividends received by nonprofits (Table 2.9 line 51) -NIPA net corporate business transfers paid (Table 1.14 line 10) -NIPA imputed interest paid by corporations on underfunded pension plans (Table 7.12 line 192) -NIPA dividend receipts of life-insurance companies included under "imputed interest received from life-insurance carriers" (part of Table 7.11 line 68, not separately reported).
NIPA net interest and misc. payments (Table 1.12 line 18) = DINA currency, deposits, and bond income (fkfix) + DINA interest income earned through pension plans (interest share of fkpen) -DINA non-mortgage interest paid (fknmo) + NIPA misc. corporate payments (Table 1.14 line 9 -Table 7.11 line 101) + NIPA imputed interest paid by corporations on underfunded pension plans (Table 7.12 line 192) + NIPA dividend receipts of life-insurance companies included under "imputed interest received from life-insurance carriers" (part of Table 7.11 line 68, not separately reported).
Distributional National Accounts for Australia, 1991-2018
Keywords: income inequality, national accounts. JEL Codes: D31, D33, E01
We produce estimates of the full distribution of all national income in Australia for the period 1991 to 2018, by combining household survey with administrative tax microdata and adjusting to match National Accounts aggregates. From these estimates, we are able to rigorously document the shifts in income shares over the period, contrasting changes in the distribution of pre-tax and post-tax national income. Comparing Australia to the US and to France, we also compare our new results to traditional household survey-based estimates of inequality. Moreover, we exploit the richness of our unique microdata to shed light on the distribution of national income across and within various population groups not usually identifiable in the tax datasets that underpin reliable top-income estimates. Among our most surprising findings, inequality of post-tax national income is less than inequality of survey-based (post-transfer, disposable) income for Australia. The gender gap in income has stubbornly remained over the past three decades. Finally, we find that Australian inequality of national income is much lower than that of the United States, while it is similar to that of France, although those at the bottom of the income distribution fare better in France than in Australia.
Introduction
A recent literature led by researchers affiliated with the World Inequality Database (Atkinson and Morelli 2018; Bozio et al. 2018; Garbinti, Goupille-Lebret and Piketty 2018; Piketty, Saez and Zucman 2018; Alvaredo et al. 2020; Piketty, Yang and Zucman 2019) has attempted to provide a more complete picture of the distribution of income through allocating all of the income as measured in National Accounts to individual members of society. The guiding principle for these 'Distributional National Accounts' is to allocate the entirety of national income to individuals in line with their 'beneficial receipt' of the income, that is, according to how much of the income effectively accrues to them.
By doing so, a more accurate picture of the distribution of income is possible compared with traditional inequality studies using household survey or tax records data, which typically only capture cash incomes, thereby missing important components such as in-kind benefits from government-provided goods and services, imputed rents on owner-occupied housing, and retained earnings of companies. By accounting for these additional income components, the Distributional National Accounts approach therefore generates estimates of individuals' incomes that are on average larger than obtained from household surveys or income tax data and which should more accurately reflect the distribution of all (cash and in-kind) income.
In this paper we attempt to produce statistics on the distribution of income in Australia as measured by the National Accounts. Our approach is guided by Alvaredo et al. (2020), which details the income concepts and methods of implementation adopted by the World Inequality Database (WID).
The guidelines are, however, not completely prescriptive because of the substantial variation across countries in institutional features and data availability. Our approach is therefore considerably influenced by the particular institutional features of Australia and the nature of the available data, including the relative strengths and weaknesses of alternative data sources.
Four main national income concepts are identified in Alvaredo et al. (2020) as being of interest: pre-tax factor income; pre-tax post-replacement income; post-tax disposable income; and post-tax national income. Pre-tax factor income approximately corresponds to total income accruing to capital and labour, where all of national income is attributed to capital and labour. Pre-tax post-replacement income is the same as pre-tax factor income, but with an adjustment made to account for the public pension system by allocating pension payments to recipients and deducting the contributions used to fund them (such that it still sums to national income). Post-tax disposable income deducts all taxes attributable to individuals and adds cash transfers. Consistent with the principle of distributed income aggregating to National Accounts totals, the total value of taxes deducted equals the total value of taxes collected by government (not just income taxes). However, government expenditure is not allocated to individuals and thus the sum of post-tax disposable income is less than national income. Post-tax national income addresses this deficiency by distributing all of government expenditure, inclusive of items not readily attributable to individuals, such as national defence.
We construct measures of all four income concepts, but the results we present are primarily for pretax post-replacement income and post-tax national income on the basis that these are the main pretax and post-tax income concepts of interest, respectively corresponding to measures of the distributions of market income and 'post-government' income (the latter corresponding to 'beneficial receipt' of income).
We are not the first to attempt to describe the distribution of income in Australia adopting a National Accounts income concept. In line with broader efforts by national statistical agencies that produce National Accounts, the Australian Bureau of Statistics (ABS) has, on four occasions since 2014, released distributional information by combining information from its biannual household income survey with the household income account of the National Accounts data (most recently in 2021; see ABS 2021a). The methods have been refined over time. In the most recent release, for each of nine years between 2003-04 and 2019-20, statistics are presented on the distribution of various components of the national household income account across households.
While complementary to the analysis we undertake, the ABS approach is somewhat different to that advocated by Alvaredo et al. (2020). Most important is that the income concept differs. Under the ABS approach, only income captured in the household income account is distributed to households, and thus components of national income not captured in the household income account, including retained earnings of corporations and government expenditure, are excluded. Additionally, the distributional information produced by the ABS is limited, presenting only the total, mean and share of each income component of the household income account for broad groupings of households: by main source of income (five groups), by equivalised income quintile, by household type (seven groups), by age group of the household 'reference' person (six groups) and by wealth quintile.
Compared with the ABS outputs, we therefore present distributional information that is based on income concepts more in line with the WID guidelines, which are concerned with the total of national income, and not the total of income as measured in the household income account.
Furthermore, we present more detailed distributional information, most notably at the top of the distribution, and information for a larger array of demographic groups than is produced by the ABS.
Distributing national income to individuals
In building the Distributional National Accounts (DINA) for Australia, we follow approaches taken to produce DINA estimates for, inter alia, France, the US and China (Garbinti, Goupille-Lebret and Piketty 2018; Piketty, Saez and Zucman 2018; Piketty, Yang and Zucman 2019), as well as the Distributional National Accounts Guidelines (Alvaredo et al. 2020).
The goal is to distribute to individuals all of the National Accounts measure of income, defined as GDP plus net foreign income minus consumption of fixed capital. Following the DINA Guidelines, we construct four measures of income that are distributed to individuals, although only three of these sum to a National Accounts aggregate. In the following we describe the methods and data used to produce each income distribution. For pre-tax cash incomes of individuals, based on exploratory work with both ALife and the SIH, we determined that the best approach was to primarily base cash income estimates on the SIH, but with ALife tax data used to adjust incomes for the top 1%. This is because the tax data appear inferior in income capture for most of the distribution (see Figure 2.1). Although non-labour income is higher in ALife than in the SIH for people with above-median incomes (see Figure 2.3), it is not enough to compensate for the undercoverage of labour income evident in Figure 2.2.

2 The ABS has also conducted surveys that collected household income data (for which unit record data is still available) in 1975, 1982, 1986 and 1990. Unfortunately, unit-record tax data is not available prior to 1991, but further extension back to the mid 1970s may be possible using tabulations of tax data and the household survey data that is available. Prior to the mid 1970s, the only broad-based distributional information comes from income tax tables. At this stage, we leave DINA estimation prior to the 1990s as a task for future research.
Up until 2015-16, the SIH unit record data contain measures of both annual income (for the preceding financial year, 1 July to 30 June) and 'current weekly' income. We use the annual income estimates for these surveys. However, in the 2017-18 SIH, only current weekly income is available.
We therefore use an annualised measure of current weekly income for that survey.
Our approach is something of a departure from existing studies, which have given greater weight to tax records data. However, DINA need to be flexible to national circumstances, and in Australia's case, survey data is preferable to tax records data for all but the top 1%.
Australia is by no means unique in the finding that income survey data is at least as good as tax data for incomes below the top 1%. Burkhauser et al. (2012) found the US CPS matched income tax data up to the 99th percentile, and Burkhauser et al. (2018) similarly found the UK HBAI matched income tax data up to the 98th percentile. Perhaps requiring some explanation is why the survey data actually captures more income below the 99th percentile than the tax data. Two main explanations exist: some forms of income are nontaxable and are received even by high income earners; and there are incentives to minimise income reported to tax authorities that do not apply to statistical agencies. Regardless of the explanation, the fact remains that macroeconomic aggregates are better captured when income survey data is used for the bottom 99% and tax data is only used for the top 1%.
Aside from better capture of the incomes of the bottom 99%, additional reasons to use the SIH include better flexibility to look at different income concepts (including equivalised disposable cash incomes), income units (including the household unit) as well as information on wealth. That said, we focus on the four income concepts described in the DINA Guidelines.
We distribute incomes of households on an 'equal-split adults' basis, meaning each adult household member is assigned an equal share of the total household income, as per the 'broad equal-split series' in the DINA Guidelines (p23). Although our baseline estimates are based on these broad equal-split series, we also consider two alternatives. First, we build 'individualistic series', which assume no sharing within households and distribute income to each person individually according to individual earnings and ownership. This is a useful comparison point with the 'broad equal-split series' when we further breakdown income shares by individual characteristics. Second, we build and use the 'narrow-split series' to ensure consistency in the comparison with the US and France.
The 'narrow-split series' distributes income to all adult individuals by splitting income equally within a couple, but not within the extended household.
While the SIH is our preferred 'core' data source, it nonetheless has important limitations which need to be addressed. It is only available from 1994-95, and for much of the period it has only been conducted every second year.

To produce estimates in non-SIH years, we interpolate distributions and adjust according to changes in the components of the National Accounts in those years. We use the national income price index to either inflate the distribution from the closest earlier year or to deflate it from the closest later year. If both an earlier and a later year are available, we apply both methods separately and compute the final DINA estimates by taking the average of the two series thus obtained.
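A minimal sketch of this interpolation step is below. It assumes dist maps survey years to arrays of g-percentile average incomes and price_index maps years to the national income price index (both names are illustrative), and it covers only the price-index projection and averaging; the component-level National Accounts adjustments described above are omitted.

```python
import numpy as np

def project(dist, price_index, from_year, to_year):
    """Carry a survey-year distribution to another year using the price index."""
    return dist[from_year] * (price_index[to_year] / price_index[from_year])

def interpolate_year(dist, price_index, year, earlier=None, later=None):
    """Average the forward- and backward-projected distributions when both exist."""
    candidates = []
    if earlier is not None:
        candidates.append(project(dist, price_index, earlier, year))  # inflate from earlier year
    if later is not None:
        candidates.append(project(dist, price_index, later, year))    # deflate from later year
    return np.mean(candidates, axis=0)

# Example (illustrative years): estimate a non-survey year from its two neighbours.
# dist_1998 = interpolate_year(dist, price_index, 1998, earlier=1997, later=1999)
```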
Top 1%: combining survey and tax data
As a growing literature has shown, survey data tend to undercover top incomes. Comparison of survey and tax data has revealed that this is the case in Australia too (Burkhauser et al. 2016) and that it mostly affects the top 1%. We follow the cell-mean imputation method we developed for the UK in Burkhauser et al. (2018), using tax data (ALife) to impute incomes of the top 1% in the survey data. To implement this method, we first rank individuals in the ALife unit record data by their 'tax gross income', which is total income subject to taxation prior to any allowable deductions or rebates. This is the closest variable to 'pre-tax income' available in the tax records data. Second, we select individuals in the top 1%, using the ABS estimate of the total adult population for the relevant year. Next, we allocate top 1% individuals to income groups, with the size of each group equal to 1/100,000th of the total adult population, meaning we split the top 1% into 1,000 income groups.
Third, we calculate the average income for each income group. Next, we repeat the first and second steps with the SIH data for the same year using our derived measure of individual gross income. We then duplicate each record according to its sample weight. Finally, for each of the 1,000 SIH income groups within the top 1%, we replace the individual-level SIH incomes with the mean income of the corresponding group in ALife.
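A simplified sketch of the cell-mean replacement just described is given below. It assumes sih and alife are DataFrames of individual records already expanded to population level (one row per represented adult) with a comparable gross income column; the column names and the expansion step are assumptions for illustration, and the income-composition adjustment discussed next is not implemented.

```python
import numpy as np

def cell_mean_impute_top1(sih, alife, n_adults, income="gross_income", n_groups=1000):
    """Replace SIH incomes in the top 1% with ALife cell means (1,000 groups)."""
    group_size = n_adults / 100_000          # each group is 1/100,000th of all adults
    top1_size = int(round(n_adults * 0.01))  # number of adults in the top 1%

    def top1_with_groups(df):
        top = df.sort_values(income, ascending=False).head(top1_size).copy()
        groups = np.arange(len(top)) // group_size
        top["group"] = np.minimum(groups, n_groups - 1).astype(int)
        return top

    alife_top = top1_with_groups(alife)
    sih_top = top1_with_groups(sih)

    # Cell means by group in the tax data, mapped onto the survey records.
    group_means = alife_top.groupby("group")[income].mean()
    sih_top[income] = sih_top["group"].map(group_means)
    return sih_top
```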
In addition to imputing gross income from tax data for the top 1%, we also use the labour/capital income-source composition as is obtained from the tax data. An alternative assumption would be to use the income composition as determined by the survey data, but this tends to underestimate the importance of capital income for the top 1%. However, the tax data offer less detail and thus less flexibility in then adjusting incomes to match National Accounts totals (e.g., mixed-income is not directly observable in ALife). We address this issue by maintaining the assumption that the incomesource compositions of capital and labour incomes are as obtained from the survey data.
Our procedure ensures that total 'tax gross income' for the top 1% -and for each of the 1,000 groups within the top 1% -is the same in the (adjusted) SIH and ALife data.
Labour income
Grossing up of labour incomes is required because of potential under-reporting in SIH as well as the failure of the SIH to capture (all of) salary sacrificed employment income, fringe benefits and fringe benefits tax, and 'employer social contributions' (i.e., employers' superannuation contributions and workers' compensation premiums). Employee incomes are grossed up by a constant factor so that total employee income in the SIH equals total employment income in the National Accounts.
Mixed income is grossed up separately, also by a constant factor. ABS National Accounts data do not report net mixed income. We therefore estimate net mixed income based on gross mixed income, which is reported in the National Accounts data, by applying net-to-gross ratios for mixed income sourced from the WID for Australia. 4 All grossing-up factors are provided in Appendix Table A.2. Total employee incomes have to be increased by between 10% and 26% to ensure consistency with National Accounts. The required increase is much larger and more volatile from year to year for mixed income, ranging from 25% to 226%, depending on the year.
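A minimal sketch of the constant-factor grossing-up is shown below, with toy numbers standing in for the weighted SIH records and the National Accounts aggregates.

```python
import pandas as pd

df = pd.DataFrame({
    "employee_income": [40_000.0, 80_000.0, 0.0],
    "mixed_income":    [0.0, 5_000.0, 30_000.0],
    "weight":          [1_200.0, 900.0, 400.0],
})

def gross_up(frame, income_col, weight_col, national_accounts_total):
    """Scale an income component by a constant factor so that its weighted total
    matches the corresponding National Accounts aggregate."""
    survey_total = (frame[income_col] * frame[weight_col]).sum()
    factor = national_accounts_total / survey_total
    frame[income_col + "_dina"] = frame[income_col] * factor
    return factor

# Illustrative aggregates; in practice these come from the National Accounts.
print(gross_up(df, "employee_income", "weight", national_accounts_total=150e6))
print(gross_up(df, "mixed_income", "weight", national_accounts_total=40e6))
```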
Capital income
Capital income is estimated based on reported business and investment income and imputed rent. A 'grossing up' adjustment is done separately for each of superannuation, imputed rent and other capital income. The principle is that superannuation income is imputed based on observed or estimated superannuation balances. Net operating surplus of households and non-profit institutions serving households (NOSHN) is distributed based on imputed rent. The remaining (i.e., non-pension non-imputed-rent) capital incomes not captured by the SIH are distributed according to reported non-pension non-imputed-rent capital incomes (hereafter called 'other capital income').
From the total capital stock ("National net wealth") as measured in the National Accounts, we compute the share of the capital stock in superannuation funds ("Pension funds & life insurance") and then use that share to allocate the appropriate proportion of total private capital income (other than NOSHN) accruing to superannuation funds. The implicit assumption is that returns on superannuation are the same as the overall return on the national private capital stock. Total private capital income is obtained here from the National Accounts by adding "total net property income of households and non-profit institutions serving households" and "total net primary income of corporations". Superannuation income, NOSHN and other capital incomes are thus allocated to each individual separately.
Superannuation income
We impute superannuation income proportionally to each individual's superannuation balance. We use superannuation balances from the SIH for all years for which they are available (2003/04, 2005/06, 2009/10, 2011/12, 2013/14, 2015/16 and 2017/18). For the years not covered by the SIH, we estimate superannuation balances separately for those aged 60 and over and those aged under 60.

4 Alvaredo et al. (2020) discuss the disaggregation of total depreciation (consumption of fixed capital, CFC) where its components are missing in the official National Accounts statistics: a share of the total CFC corresponds to each component of gross operating surplus in the gross domestic product (viz. gross corporate operating surplus, gross mixed income, gross household operating surplus, and gross government operating surplus). Refer to https://wid.world/ for data and methodology.
For those aged under 60, we estimate a regression model of superannuation balances on age, labour income and sex (as well as interactions). For those aged 60 and over, the model is enriched by including superannuation income. The coefficient estimates (see Appendix A.3) are then used to impute superannuation balances in the SIH data for years with no information, by using the set of estimated coefficients from the closest year available. This means that superannuation balances from 1991 to 2002 are all estimated based on the 2003 model. This approach is likely to generate some prediction errors. However, we note that superannuation wealth was limited in the 1990s, since compulsory contributions only commenced in 1992, initially at only 3% of gross earnings and gradually increasing to 9% as of 1 July 2002. Moreover, it is the relative distribution of superannuation balances that matters for imputation and not the absolute values, and relativities by labour income, age and sex are likely to have remained relatively stable between 1991 and 2003.
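The following sketch illustrates the regression-based imputation with synthetic data in place of the SIH. The right-hand-side specification (age, labour income, sex and interactions) follows the text, while the column names, exact functional form and data-generating numbers are illustrative only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2_000
donor = pd.DataFrame({                     # stands in for a SIH year with observed balances
    "age": rng.integers(20, 60, n),
    "sex": rng.integers(0, 2, n),
    "labour_income": rng.gamma(2.0, 30_000, n),
})
donor["super_balance"] = (
    1_500 * donor["age"] + 0.8 * donor["labour_income"]
    + 5_000 * donor["sex"] + rng.normal(0, 20_000, n)
).clip(0)

# Under-60 specification: balances on age, labour income, sex and their interactions.
model = smf.ols(
    "super_balance ~ (age + labour_income) * C(sex) + age:labour_income",
    data=donor,
).fit()

# Impute balances in a year without observed balances, using the closest fitted model.
recipient = donor.drop(columns="super_balance").copy()
recipient["super_balance_imputed"] = model.predict(recipient).clip(lower=0)
```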
Net operating surplus of households and non-profit institutions serving households (NOSHN)
ABS National Accounts data report only gross and not net operating surplus of households and nonprofit institutions serving households. We use the share of the consumption of fixed capital attributable to operating surplus in NOSHN from the WID Australian National Accounts data (see footnote 5 above) to derive net operating surplus from the ABS National Accounts data on gross operating surplus.
We then impute NOSHN proportionally to each household's net imputed rent. Where a household comprises more than one adult, the income is equally split. Gross and net imputed rents are directly provided in the SIH from 2005 onwards. 6 For earlier years, we predict gross and net imputed rents.
Using 2005 values, we estimate a model to predict gross imputed rents based on reported tenure type, state of residence, area of residence, number of bedrooms, household gross income decile and landlord type. The approach draws heavily on the approach developed by the ABS (ABS 2008a). For net imputed rent, all covariates listed above are interacted with (predicted) gross imputed rent and we add mortgage repayments and predicted gross imputed rent to the list of covariates. Coefficient estimates are reported in Appendix A.4. All models are estimated with and without tenure type as this variable was not available before 1995 in the SIH and thus cannot be used for imputation before that year. These models fit the data well, with an adjusted R-squared of 0.97 for gross imputed rent and 0.69 for net imputed rent.
Other capital income
Other capital income has two components: that captured by the SIH and that not captured by the SIH, the latter of which is a residual equal to total capital income from the National Accounts minus superannuation income from the National Accounts minus non-pension capital income as measured in the SIH. This non-captured capital income will primarily comprise corporate retained earnings. We distribute it assuming it has the same distribution as observed other (non-superannuation non-imputed-rent) capital income. We take the same approach for adding foreign income received from tax havens and reinvested earnings on foreign portfolio investment. The latter captures retained earnings in foreign firms accruing to Australians whose shares comprise less than the 10% foreign direct investment threshold required to appear in the National Accounts. We use WID estimates of foreign income received from tax havens and reinvested earnings on foreign portfolio investment (see Zucman 2013). Grossing-up factors reported in Appendix Table A.2 indicate that this captured capital income has to be multiplied by a factor of between 2 and 4.3 to match National Accounts totals.
Taxes on production
As Alvaredo et al. (2020) show, a pre-tax income measure not only requires that income taxes are not deducted from capital and labour income, but also that taxes on production (and taxes on wealth, if they exist) need to be added to incomes to ensure all of national income is distributed to individuals.
As per the guidelines, taxes on production are assumed to have the same distribution as total factor income. This is somewhat arbitrary, but means pre-tax income distributions among those with factor incomes are unaffected by these taxes other than via a scaling up factor applied to all incomes.
Inequality measured over the total population increases, however, because people with zero factor incomes become relatively poorer.
Pre-tax national income
Following the approach adopted for the French DINA by Garbinti, Goupille-Lebret and Piketty (2018) and the US DINA by Piketty, Saez and Zucman (2018), as well as the DINA Guidelines, we include the Age Pension as income to produce pre-tax national income. This presents no major difficulty as the Age Pension is reported in the SIH. We distribute the total cost of Age Pension payments as a flat percentage of income tax liabilities. That is, we assume each individual's contribution to the funding of the Age Pension is in proportion to their income tax liabilities.
Post-tax disposable income
To move from pre-tax national income to post-tax disposable income requires deducting all taxes and adding all government cash transfers to individuals' pre-tax incomes. Deducting income taxes and adding cash transfers is straightforward since both are recorded in the SIH and ALife data.
However, both income taxes and cash transfers need to be scaled up to match National Accounts totals.
As noted in the DINA guidelines (Alvaredo et al., 2020, p. 53), the aim is to "to describe post-tax, post transfer inequality for the population's actual perceived budget constraints, while excluding inkind transfers such as health and education and other public spending (as these may impact purchasing power and disposable income only indirectly). For this reason, aggregate post-tax disposable income can be substantially less than aggregate national income."
Government pensions and allowances, as well as income taxes, are distributed according to the survey (and tax) data. For taxes on production (indirect taxes), which were distributed proportionally to factor income in pre-tax series, the DINA Guidelines advocate they are removed in proportion to consumption, proxied by disposable income (before the deduction of taxes on production) minus saving (where savings rates are based on external sources). In the absence of data on savings rates by level of income, we simply remove production taxes proportionally to household disposable income (as defined in the SIH). 9 Corporate taxes are imputed proportionally to capital incomes after excluding imputed rent.
9 A potential refinement for future work is to estimate expenditure regression models using the ABS Household Expenditure Survey data (collected in 1993, 1998, 2003, 2009 and 2015) to impute household expenditure as a function of income (and perhaps other factors) and use this to distribute taxes on production.
Post-tax national income
Moving from post-tax disposable income to post-tax national income requires distributing government expenditure to individuals. This corresponds to total expenditure of the government adjusted for the surplus or deficit of the government (Alvaredo et al., 2020, p. 64). The DINA Guidelines' definition of government surplus or deficit differs from the usual definition "due to the exclusion of other current transfers and capital transfers" (Alvaredo et al., 2020, p. 51). Thus the government surplus or deficit is defined as net saving plus net other current transfers.
Three alternative approaches to distributing government expenditure are recommended by the DINA Guidelines: (1) assume health expenditures benefit all adults equally but that the benefits of other expenditures are proportional to disposable income; (2) assume everyone benefits equally from all government expenditure; and (3) assume the benefits of government spending are distributed in the same way as disposable income. The third approach means government spending can effectively be ignored since it doesn't affect the distribution other than to scale up everyone's income by the same fraction. Interestingly, the Guidelines do not allow for a scenario where government spending is redistributive.
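A minimal sketch of the three allocation rules just listed is given below, taking adult-level disposable incomes and survey weights as NumPy arrays. The function name and signature are ours, and only approach (2) is the one adopted for Australia, as discussed next.

```python
import numpy as np

def add_government_expenditure(disposable, weights, g_total, health_total=0.0, approach=2):
    """Allocate total government expenditure g_total across adults under the three DINA rules."""
    adults = weights.sum()
    weighted_disposable = (weights * disposable).sum()
    if approach == 1:   # health spending lump-sum per adult, remainder proportional to income
        return disposable + health_total / adults + (g_total - health_total) * disposable / weighted_disposable
    if approach == 2:   # everyone benefits equally from all expenditure
        return disposable + g_total / adults
    if approach == 3:   # proportional to disposable income (leaves relative shares unchanged)
        return disposable * (1.0 + g_total / weighted_disposable)
    raise ValueError("approach must be 1, 2 or 3")

# Illustrative usage with made-up incomes and unit weights.
incomes = np.array([20_000.0, 45_000.0, 120_000.0])
weights = np.array([1.0, 1.0, 1.0])
post_tax_national = add_government_expenditure(incomes, weights, g_total=30_000.0)
```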
In Australia, the biggest expenditure items, health and education, are somewhat redistributive to lower-income individuals (ABS, 2018, Table 1.1). Consequently, of the approaches the guidelines recommend, the most appropriate approach for the Australian context is Approach (2). This means average government expenditure per adult is added to disposable income. This acts to lower measured inequality compared with post-tax disposable income, but nonetheless is likely to overstate benefits to high-income earners and understate benefits to low-income earners, and thus not reduce measured inequality as much as it should.

Inequality in Australia, 1991-2018

3.1. Pre-tax national income

Figure 3.1 presents estimated shares of pre-tax national income over the 1991 to 2018 period of the bottom 50%, top 50% excluding the top 10% (referred to as the 'middle 40%'), top 10% excluding the top 1%, and the top 1%. As noted, this provides information on how a 'market income' concept of income is distributed across individuals. The share of the bottom 50% remained relatively steady, at approximately 20%, but the middle 40% group experienced a decline from over 50% to 47.5%, with the decline occurring between 1991 and 2008, since when there has been no net change. The income share of the top 10% to 1% rose from 22% to 23.4%, while the top 1% income share rose from 7% to 9.4%, with all the increase occurring between 1995 and 2008 (and indeed there is a small decline evident after 2008).

Notes to Figure 3.1: Distribution of pre-tax national income (before all taxes and transfers, except age pensions) among adults. Broad equal-split adults series (household income equally split among adults).
Figure 3.2 compares the changes in mean income per adult of each of the four income groups examined in Figure 3.1. Since 1991, the mean income of the top 1% has increased by a factor of more than four. This compares with nearly 3.5 for the top 10% to 1% and approximately 2.9 to 3 for the two groups comprising the bottom 90%.
10 Data Appendix F (to be made available online) contains series (in Stata files) that describe thresholds, averages and shares for each of the 127 'generalised percentiles' (or g-percentiles) for each income concept, as recommended by the DINA Guidelines.

Notes to Figure 3.2: Distribution of pre-tax national income (before all taxes and transfers, except age pensions) among adults. Broad equal-split adults series (household income equally split among adults). Index based on mean incomes in current dollars.
Figure 3.5 shows real average annual income growth by percentile income group from 1991 to 2018. The figure reveals that, for pre-tax income, both the bottom 20% and the top 5% have done better than the average adult, who saw income grow at an average of 1.7% per annum. However, differences between the bottom, middle and top of the distribution mostly disappear when moving from pre-tax to post-tax national income, with the notable exception that growth was still higher for the top 5%. Moreover, within the top 5%, the top 1% and the top 0.1% in particular have clearly experienced growth rates that are larger than the average.
Post-tax national income
Figure 3.5: Real average annual growth per adult 1991-2018
Notes: Distribution of pre-tax national income (before all taxes and transfers, except age pensions) and post-tax national income (after all taxes and transfers) among adults. Broad equal-split adults series (household income equally split among adults). Index based on mean incomes in constant dollars. The red line shows the overall average per-adult real annual national income growth rate over the period, which is (by construction) the same for pre- and post-tax income series.

For the bottom 50%, France and Australia are again relatively similar and somewhat different to the US. However, there is a slight but steady rise in the income share of the bottom 50% in France from the mid 1990s, compared with a slight decline in Australia. Across the entire period, the 'middle 40%' (top 50% to 10%) has had the highest income share in Australia and lowest income share in the US. In all three countries, this income group has experienced a decline in income share, with the drop greatest in the US and smallest in France.
In Figure 4.3, we abstract from yearly changes and examine differences in income levels by percentile income group across the three countries. We use purchasing power parity (PPP) exchange rates to convert French and Australian income levels to US dollars. For each percentile of the income distribution, we plot the ratios of French and Australian incomes to US incomes. Thus, when the curve lies above one (i.e., the red line), incomes at those percentiles are higher than in the US. This exercise, with all its limitations, reveals that the Australian (and French) pre-tax income levels are lower than their US counterparts for all adults above the median. For the bottom 50%, there has been a tremendous catch-up by Australia with respect to the US between 1991 and 2017. Worth noting is that French income levels are substantially higher than in the US for the bottom 50% in 2017 (and the bottom 35% in 1991).
11 Results for Australia differ slightly from those presented in the previous section because we use the 'narrow equal-split' series in the comparison with the US and France to ensure comparability with these countries' estimates (see Section 2.1).

The income share of the bottom 50% is highest in Australia and lowest in the US. There is little net change evident over the full period for France and Australia, but a considerable decline for the US.
At the end of the period, the income share of the bottom 50% was 33% in Australia, 29% in France and 19% in the US. For the middle 40%, income shares are very similar across the three countries, although across the entire period, France has the highest income share and the US the lowest, and the gap widened slightly between 1991 and 2018. Recent work shows that if Europe is less unequal than the US, it has more to do with lower levels of pre-tax income inequality than with more equalizing tax-and-transfer systems (Blanchet et al.). We can draw the same conclusion for Australia.

Notes: Distribution of post-tax national income (after all taxes and transfers) among adults. Narrow equal-split adults series (income of married couples divided by two). Comparisons are based on purchasing power parity (PPP) exchange rates (source: World Inequality Database). 2017 is the latest year for which estimates are available for all three countries.
In Figure 4.6, we examine differences in PPP income levels by percentile income group across the three countries. Only those below the 15th percentile did better in Australia than in the US in 1991. By 2017, however, Australians below the 30th percentile have higher PPP-adjusted incomes than their US counterparts. There is a remarkable convergence of the French and Australian distributions of post-tax national income, which by 2017 look very similar. The declining slope indicates that, as we go from the bottom to the top of the distribution, the differential initially in favour of Australia (and France) over the US reverses around the 30th percentile and keeps growing, such that incomes at the top are markedly higher in the US.
Comparisons of DINA estimates with household survey estimates of inequality
Of considerable interest is how inferences on levels and trends in inequality are affected by moving from traditional household-survey based estimates for household equivalised disposable income to DINA estimates of inequality.

Notes: Broad equal-split adults DINA series (household income equally split among adults).
As would be expected, comparing across the DINA income concepts, moving from pre-tax national income to post-tax disposable income and then to post-tax national income is associated with decreases in the Gini coefficient. Notably, the Gini coefficient for post-tax national income is consistently below the Gini coefficient for equivalised disposable (cash) income. Between 1994 and 2018, Gini coefficients for both post-tax national income and equivalised disposable income increased, although more so for equivalised disposable income. Comparing SIH equivalised disposable income for the full population with SIH equal-split disposable income for adults only reveals very small differences in inequality as measured by the Gini coefficient. This suggests that going from equal-split income among adults, as per the DINA series, to equivalised income among the full population, as per the standard SIH series, cannot explain much of the difference between the two series.
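For reference, the Gini coefficients compared here can be computed directly from individual (equal-split) incomes and population weights. The following is a small illustrative implementation, not the code used for the SIH or DINA series.

```python
import numpy as np

def gini(income, weight=None):
    """Weighted Gini coefficient of an income distribution."""
    income = np.asarray(income, dtype=float)
    weight = np.ones_like(income) if weight is None else np.asarray(weight, dtype=float)
    order = np.argsort(income)
    income, weight = income[order], weight[order]
    cum_pop = np.cumsum(weight) / weight.sum()
    cum_inc = np.cumsum(income * weight) / np.sum(income * weight)
    # Trapezoidal approximation of the area under the Lorenz curve.
    lorenz_area = np.trapz(np.concatenate(([0.0], cum_inc)),
                           np.concatenate(([0.0], cum_pop)))
    return 1.0 - 2.0 * lorenz_area

# Example: a toy population of five adults.
print(gini([10_000, 20_000, 30_000, 40_000, 100_000]))
```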
Income levels and inequality disaggregated by demographic characteristics
A valuable feature of the DINA series for Australia is that it is primarily based on survey data, which contains demographic information not typically available in administrative data sources such as tax records.
Here, we exploit this extra richness of the survey data to shed further light on the distribution of national income across and within various population groups.
In what follows we consider sex, age, education, immigrant status and area of residence (i.e., cities versus regional areas). We focus on the post-tax national income series based on the equal-splitting of household income between all adult members (i.e., the 'broad equal-split series'). However, we also bring in insights from the 'individualistic series' where it is most relevant, that is for sex and education, because equal-split series can mute differences across these groups.
For each demographic characteristic we consider, we first present and discuss differences in mean incomes before turning to the income shares of national income within each subgroup. Mean incomes are useful to show differences in levels across groups, regardless of the size of each group.
Mean income is preferred over income shares because the latter are a reflection of both mean incomes and population shares. The income shares within each subgroup (e.g., among men or among university graduates) are then presented to shed light on the levels and trends of inequality within each subgroup.
DINA by sex
Figure 6.1 shows that mean post-tax national income does not differ much by sex. While men have slightly higher incomes, the difference is limited and has remained stable. However, gender differences are muted in these DINA series by the use of equal-split incomes, meaning that all incomes are equally split among adult household members. The implication is that any remaining gender difference is driven by differences between single men and single women. Appendix Figure E.1 shows exacerbated gender differences if we use 'individualistic' income series, with women's mean incomes falling below men's mean incomes by about $25,000 throughout the period.

Figure 6.2 shows that the income shares among men and among women are largely similar. The income share of the top 10% to 1% of men is slightly larger than the share of this group among women, whereas the bottom 50% of men have a smaller income share. Again, relaxing the equal-split assumption to consider individualistic series leads to larger differences (Appendix Figure E.2).
Notably, the individualistic series reveal that top income shares (for the top 10% to 1% and the top 1%) were higher among women than among men in the early 1990s, and the bottom 50% income shares were smaller. This thus shows more inequality among women than among men. This is likely to reflect the lower labour market participation of partnered women, with a significant proportion not employed and therefore having low or no personal income. However, consistent with the rise in female employment participation over the period since 1991, the following decades saw a reversal.
By 2018 the income share of the top 10% to 1% was larger among men than among women, while the bottom 50% of women had a larger income share than the bottom 50% of men. Hence, there appears to be more income disparities among men than among women in recent years.
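The distinction between the equal-split and individualistic series comes down to how household income is assigned to adults before individuals are ranked. A schematic illustration (with hypothetical column names and toy values) is:

```python
import pandas as pd

# Hypothetical unit-record data: one row per adult.
adults = pd.DataFrame({
    "household_id": [1, 1, 2, 3, 3, 3],
    "personal_income": [90_000, 10_000, 50_000, 30_000, 30_000, 0],
})

# Individualistic series: each adult keeps his or her own income.
adults["individualistic"] = adults["personal_income"]

# Broad equal-split series: total household income divided equally
# among all adult members of the household.
household_total = adults.groupby("household_id")["personal_income"].transform("sum")
household_size = adults.groupby("household_id")["personal_income"].transform("size")
adults["equal_split"] = household_total / household_size

print(adults)
```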
DINA by age group
Mean post-tax national income is the highest among prime working-age adults-that is, those aged 25 to 39-and those aged 40 to 54, followed by those aged 55 and above. It is lowest for those under 25. Although these differences in levels have been true since 1991, Figure 6.3 shows that the mean incomes of both the youngest and oldest age groups have been falling further behind those of the two prime working-age groups, particularly in the 2010s.
Figure 6.4 shows income shares within each age group. The share of the bottom 50% is the highest among those under 25 and the lowest among those 40 to 54 years of age. The share of the top 10% to 1% is higher among the two older age groups-that is, those above 40-and the lowest in the youngest two age groups. The same is largely true for the top 1% income shares, although there are more fluctuations, and the differences across age groups tend to be smaller.

Notes: Broad equal-split adults series (household income equally split among adults).
Figure 6.6 looks inside each of these three educational groups to reveal that inequality is the highest among university graduates, as measured by their higher top 10% to 1% and top 1% income shares, and their lower shares for the bottom 50%. Income shares among those with no post-school qualifications and those with vocational qualifications are comparable. Over the period as a whole, there is no clear trend in these income shares. As was the case with sex, inequality within each education group is exacerbated by the use of individualistic income series (Appendix Figure E.4).
DINA by immigrant status
The ABS SIH data allow us to distinguish foreign-born and native-born individuals. 12 Figure 6.7 shows that mean post-tax national income is greater for native-born Australians than for immigrants throughout the 1991-2018 period. The gap increased after the 1990s to reach almost 8% (or about $6,000) in 2018, compared to 6% ($3,000) in 1991. Figure 6.8 shows that income shares are, perhaps surprisingly given the mean income differences, similarly distributed among immigrants and among the native-born.
12 More detailed information on country of birth is available in some survey years, but not consistently across our period of analysis.
DINA by area of residence
We distinguish major cities and the rest of Australia. Figure 6.9 shows that mean post-tax national income is higher in major cities by about 10% and that this gap has remained stable in relative terms over the 1991-2018 period.
Figure 6.10 reveals some shifts in the distribution of post-tax national income in major cities and in the rest of Australia. Top 1% income shares are larger in major cities throughout the period but, while other income shares were similar across both types of region in the early 1990s, they have since diverged somewhat, driven by an increase in inequality in major cities. The result is that the income shares of the bottom 50% are now lower, and the income shares of the top 10% to 1% are larger, in major cities than in the rest of Australia.

Notes: Broad equal-split adults series (household income equally split among adults).
Conclusion
We have produced the first DINA estimates for Australia consistent with the DINA Guidelines described in Alvaredo et al., spanning the period 1991 to 2018. Our estimates suggest Australia has a somewhat similar distribution to France, with both countries having considerably more equitable distributions than the US. Australia has, however, had greater growth in inequality than France.
Significantly, our DINA estimates for Australia indicate that income inequality is somewhat lower when all income as measured in the National Accounts is distributed to individuals compared with a focus on cash incomes as is conventional in household survey based studies of income inequality.
In contrast to other DINA studies internationally, our reliance on household survey data to anchor our distributional analysis has allowed us to consider income differences between and within demographic groups. The analysis presented in this paper has only investigated these differences in a cursory fashion, but clearly there is considerable potential to exploit this feature of our series in future research.
While in the long run it would be ideal to publish synthetic microfiles for public research consumption, the confidentiality requirements of ABS and ALife data access currently preclude this.
However, detailed distributional information will be made available through the World Inequality Database website, https://wid.world/.
A further important future research direction is to attempt to extend the DINA estimates back to earlier years. Unit record income survey data is sparser prior to the 1990s, and indeed non-existent prior to 1975. Similarly, unit record tax data only extends back to 1991. Methods for producing DINA estimates will therefore need to rely on more aggregated forms of data, such as the tax tables used to produce the original (cash income) top income shares for WID.
Further refinement of Australian DINA estimates is also possible and should be a priority for further research. For instance, our assumption that in-kind income from government expenditure is equally distributed across the population is consequential but almost certainly not accurate. On balance, government expenditure is likely to be progressive in its effects, as evidenced by the ABS in its periodic 'fiscal incidence' studies (ABS, 2018). However, while it is easy to come up with alternative choices and assumptions, implementation is often impeded by the lack of data. In addition, further refinements should ideally occur through refinements and extensions to the DINA guidelines in order to facilitate comparability of DINA estimates across countries.

As ALife is a 10% random sample of tax filers, it is subject to sampling error. We address this issue by reconciling top income outliers (defined here as the top 100 individuals in terms of taxable gross income in Australia each year) with the ATO full population, for which the ATO has provided us with mean income values (separately for the top 100 to 50 and the top 50 individuals). We average income for top 100 to 50 and top 50 individuals in Australia to adjust incomes of the top 10 and top 5 individuals in ALife. All income components are scaled up by a constant factor. This approach fixes the top 0.001%, but sampling error may still affect income groups below the top 0.001%, with the issue likely to be more important the smaller the income group. In practice, top income shares for groups smaller than 0.1% of the population and above the top 0.1% may not be reliable.
Appendix A.2: Grossing-up factors
Figure C.2: Mean adult income by income group 1991-2018

Notes: Distribution of post-tax disposable income (after all taxes and transfers) among adults. Broad equal-split adults series (household income equally split among adults). Index based on mean incomes in current dollars.
2.1. Pre-tax factor income 2.1.1. Pre-tax cash incomes Our approach draws on both unit record tax data and income survey data. The tax data set, known as ALife, comprises a 10% random sample of tax returns covering the period 1991 to 2018. The income survey data come from the Australian Bureau of Statistics' Survey of Income and Housing (SIH), covering the period 1994 to 2018, but with some gaps. The SIH provides the longest time span of coverage for income survey data in Australia, the main other survey source being the Household, Income and Labour Dynamics in Australia (HILDA) Survey, a panel study that commenced in 2001. 2
Figure 2.1: SIH survey data relative to ALife tax data income by percentile - Pre-tax income

Figure 2.3: SIH survey data relative to ALife tax data by percentile - Non-labour income (with individuals ranked based on pre-tax income)

second year from 1997-98 to 2002-03 and from 2003-04 onwards. It also only has wealth data (and hence information on superannuation (private retirement account) balances and home equity required to distribute capital income; see below) in 2003-04, 2005-06 and 2009-10 onwards.

Figure 3.1: Pre-tax national income shares 1991-2018

Figure 3.2: Mean adult pre-tax national income by income group 1991-2018

Figures 3.3 and 3.4 present the same information as Figures 3.1 and 3.2, but for post-tax national income. This provides information on the distribution across individuals of 'beneficial receipt' of total income in the National Accounts. The relative rise in top income shares is less pronounced for this income measure, but notable is that the income share of the bottom 50%, after rising slightly between 1991 and 2007, subsequently fell to 2010, and has largely not recovered.

Figure 4.6: Average adult post-tax income by percentile income group: Australia, US and France 1991 & 2017

Figure 5.1 compares the Gini coefficient for three of the DINA income concepts with the Gini coefficient for equivalised disposable income captured in the SIH (where the modified OECD scale is used to equivalise income; see Hagenaars et al. 1994).

Figure 5.1: Gini coefficients for DINA series and household equivalised disposable income, 1991-2018

Figure 6.1: Mean real post-tax national income per adult by sex (1991-2018)

Figure 6.3: Mean post-tax national income per adult by age group (1991-2018)

Figure 6.5, comparing across three levels of educational attainment, shows that mean post-tax national incomes are, unsurprisingly, ordered by educational attainment. Perhaps more interesting is that all three education groups seemed to benefit equally from the trend increase in post-tax national income per adult up until the GFC in 2008. However, since 2008 there has been no net growth in mean incomes for any of the three education groups.

Figure 6.5: Mean post-tax national income per adult by educational attainment (1991-2018)

Figure 6.9: Mean post-tax national income per adult by area of residence (1991-2018)

Figure 6.10: Post-tax national income shares in major cities and other areas (1991-2018)

Figure C.3: Top 10% and bottom 50% income shares: Australia, US and France 1991-2018

Figure D.3: Adult population shares for natives and immigrants (1991-2018)
4. International comparisons: Australia, US and France

4.1. Pre-tax national income

In this section we compare US, French and Australian income shares of four income groups: top 1%, top 10%, top 50% to 10% ('middle 40%') and bottom 50%. 11 Figures 4.1 and 4.2 examine pre-tax national income, the first figure examining the top 10% and bottom 50% and the second figure the top 1% and middle 40%. The top 10% income share is considerably higher in the US than in Australia and France, which have similar top 10% income shares. The income share of the top 10% has also risen considerably in the US. It has also risen in Australia, albeit to a smaller degree, while it has remained relatively stable in France, such that the top 10% income share has gone from being somewhat higher in France than in Australia in the early 1990s to slightly lower in the late 2010s. Similar patterns are evident for the top 1% in Figure 4.2, although the income share of the top 1% in France remains slightly above that of the top 1% in Australia throughout the 1991 to 2018 period.
Figure 4.1: Top 10% and bottom 50% income shares: Australia, US and France 1991-2018
Figure 4.2: Top 1% and middle 40% income shares: Australia, US and France 1991-2018

Notes: Distribution of pre-tax national income (before all taxes and transfers, except age pensions) among adults. Narrow equal-split adults series (income of married couples divided by two).
Figure 4.3: Average adult pre-tax income by percentile income group: Australia and France relative to the US, 1991 & 2017

Notes: Distribution of pre-tax national income (before all taxes and transfers, except age pensions) among adults. Narrow equal-split adults series (income of married couples divided by two).
4.2. Post-tax national income
Comparisons across the US, France and Australia in the distribution of post-tax national income are
presented in Figures 4.4 and 4.5. Differences across the three countries are stark. The top 10% in the
US received nearly 34% of income in 1991, and this had risen to nearly 39% in 2018. In France, the
top 10% received approximately 27% of income in 1991 and this share fell slightly to approximately
26% in 2018. In Australia, the top 10% income share was approximately 23% between 1991 and
2001, but then increased to nearly 26% in 2010 and subsequently declined only slightly. For the top
1% (Figure 4.5), the US again has a much higher income share and greater growth in the income
share than France and Australia. The top 1% share is higher in France than in Australia, with the gap
being approximately 2 percentage-points in 1991 as well as in the most recent years.
Figure 4.4: Top 10% and bottom 50% income shares: Australia, US and France 1991-2018

Notes: Distribution of post-tax national income (after all taxes and transfers) among adults. Narrow equal-split adults series (income of married couples divided by two).

Figure 4.5: Top 1% and middle 40% income shares: Australia, US and France 1991-2018
Notes: Distribution of post-tax national income (after all taxes and transfers) among adults. Narrow equal-split adults series (income of married couples divided by two).
Figure 6.7: Mean post-tax national income per adult for immigrants and natives (1991-2018)

Notes: Broad equal-split adults series (household income equally split among adults).
Figure 6.8: Post-tax national income shares among immigrants and natives (1991-2018)

Notes: Broad equal-split adults series (household income equally split among adults).
Table A.1: ATO adjustment of Employment Termination Payments (1991-2017, in current dollars)

Tax year   Total adjustment   Mean adjustment
1991       -8,241,221         -343,384
1992       -14,613,357        -608,890
1993       -11,291,962        -470,498
1994       -15,332,505        -638,854
1995       -15,336,185        -639,008
1996       -10,649,353        -443,723
1997       -16,235,498        -676,479
1998       -16,477,313        -686,555
1999       -27,519,147        -1,146,631
2000       -42,319,718        -1,763,322
2001       -15,676,018        -653,167
2002       -27,836,179        -1,159,841
2003       -24,939,858        -1,039,161
2004       -18,548,200        -772,842
2005       -20,565,614        -856,901
2006       -21,123,878        -880,162
2007       -32,612,964        -1,358,874
2008       -47,913,907        -1,996,413
2009       -51,932,913
Table A.2.1: Survey to National Account grossing-up factors (1991-2018)

Notes: Constant factors by which each income component has to be multiplied in the survey data (complemented by tax data for the top 1%) to restore consistency with the National Accounts. For instance, a factor of 2 means that incomes have to be doubled.

Year   Employee income   Mixed income   Non-pension capital income   Personal income tax   Cash benefits
1991   1.11   1.74   2.93   1.14   0.98
1992   1.10   1.40   3.18   1.05   1.12
1993   1.14   1.58   3.51   1.06   1.20
1994   1.17   1.54   3.76   1.12   1.28
1995   1.18   1.24   3.87   1.14   1.25
1996   1.18   1.40   3.24   1.18   1.25
1997   1.19   1.32   3.51   1.19   1.26
1998   1.17   1.56   3.24   1.16   1.25
1999   1.18   1.41   2.69   1.17   1.28
2000   1.16   1.56   3.15   1.16   1.29
2001   1.16   1.02   3.34   1.02   1.38
2002   1.16   1.83   3.43   1.12   1.39
2003   1.17   1.49   3.40   1.12   1.33
2004   1.17   1.64   3.25   1.12   1.45
2005   1.18   1.86   2.70   1.12   1.44
2006   1.19   1.85   2.68   1.19   1.52
2007   1.23   1.47   2.60   1.15   1.53
2008   1.27   2.00   2.43   1.23   1.65
2009   1.16   2.02   3.36   1.22   1.46
2010   1.16   2.17   3.14   1.16   1.36
2011   1.17   2.24   3.35   1.16   1.44
2012   1.19   2.12   3.07   1.17   1.46
2013   1.19   2.09   2.40   1.14   1.47
2014   1.19   2.09   2.48   1.11   1.46
2015   1.17   2.80   2.34   1.12   1.39
2016   1.14   2.56   1.88   1.09   1.39
2017   1.14   2.90   2.19   1.11   1.34
2018   1.12   2.50   2.01   1.10   1.30
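Applying the grossing-up factors in Table A.2.1 amounts to rescaling each survey income component, year by year, so that its aggregate matches the corresponding National Accounts total. A minimal sketch of that rescaling step is given below; the component names and the example record are hypothetical, while the 2018 factors are taken from the table above.

```python
# Grossing-up factors for 2018 from Table A.2.1.
factors_2018 = {
    "employee_income": 1.12,
    "mixed_income": 2.50,
    "non_pension_capital_income": 2.01,
    "personal_income_tax": 1.10,
    "cash_benefits": 1.30,
}

def gross_up(record, factors):
    """Rescale each income component of a survey record by its factor."""
    return {component: value * factors.get(component, 1.0)
            for component, value in record.items()}

# Hypothetical survey record (annual amounts in dollars).
record = {"employee_income": 60_000, "mixed_income": 5_000,
          "non_pension_capital_income": 2_000,
          "personal_income_tax": 12_000, "cash_benefits": 1_500}
print(gross_up(record, factors_2018))
```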
Appendix A.4: Imputed rent regression estimates

Table A.4.1: Weekly imputed rent regression estimates (2005-06)

                         Gross imputed rent         Net imputed rent
                         (1)          (2)           (1)          (2)
Coef. S.E. Coef. S.E. Coef. S.E. Coef. S.E.
Tenure type (ref. is owner without a mortgage)
Owner with a mortgage -0.003 0.010 -0.198*** 0.009
Renter -1.361*** 0.095 0.450*** 0.096
Other -0.048* 0.028 0.218*** 0.023
Has a mortgage 0.000 0.009 0.010 -0.274*** 0.010
State of residence (ref. is NSW)
VIC -0.121*** 0.012 -0.119*** 0.012 -0.001 0.010 -0.003 0.010
QLD -0.093*** 0.012 -0.092*** 0.012 0.004 0.010 0.006 0.010
SA -0.205*** 0.013 -0.204*** 0.013 -0.046*** 0.012 -0.049*** 0.012
WA -0.244*** 0.013 -0.242*** 0.013 -0.013 0.012 -0.017 0.012
Tas -0.184*** 0.016 -0.179*** 0.016 -0.016 0.016 -0.021 0.016
ACT & NT 0.008 0.018 0.008 0.018 0.056*** 0.013 0.057*** 0.013
Area of residence (ref. is Capital city)
Balance of State -0.198*** 0.009 -0.198*** 0.009 -0.033*** 0.008 -0.035*** 0.008
Number of bedrooms 0.128*** 0.005 0.131*** 0.005 0.028*** 0.004 0.025*** 0.004
Household gross income decile (ref. is 1)
2 -0.005 0.017 -0.001 0.017 -0.001 0.016 -0.004 0.016
3 -0.043** 0.017 -0.036** 0.017 -0.017 0.016 -0.026 0.016
4 -0.018 0.017 -0.019 0.017 -0.037** 0.016 -0.042*** 0.016
5 0.001 0.018 0.000 0.018 -0.044*** 0.016 -0.052*** 0.016
6 0.004 0.018 0.005 0.018 -0.054*** 0.016 -0.057*** 0.016
7 -0.007 0.018 -0.011 0.018 -0.073*** 0.016 -0.080*** 0.016
8 0.015 0.018 0.012 0.019 -0.050*** 0.016 -0.055*** 0.016
9 0.051*** 0.019 0.048** 0.019 -0.058*** 0.016 -0.072*** 0.016
10 0.109*** 0.019 0.109*** 0.019 -0.022 0.016 -0.042*** 0.016
Landlord type (ref. is real estate agent)
No landlord 3.962*** 0.095 5.313*** 0.013 2.311*** 0.381 1.862*** 0.379
State or territory housing
authority 5.140*** 0.020 5.167*** 0.020 1.596*** 0.391 1.590*** 0.379
Parent 5.300*** 0.034 5.307*** 0.035 1.423*** 0.392 1.420*** 0.380
Other person 0.053*** 0.019 0.058*** 0.019 1.682*** 0.419 1.621*** 0.418
Other 3.941*** 0.029 3.946*** 0.029 1.492*** 0.392 1.492*** 0.380
Mortgage weekly
repayments -0.439*** 0.005 -0.396*** 0.006
Gross imputed rent -1.636*** 0.381 -1.170*** 0.379
Sample size 9,857 9,857 9,857 9,857
Adjusted R2 0.969 0.968 0.690 0.692
Notes: Ordinary Least Squares estimates. Model (1) is with tenure type, model (2) is without tenure type. In the net imputed rent models all variables are interacted with gross imputed rent, with the exception of mortgage weekly repayments and gross imputed rent. * p < 0.10, ** p < 0.05, *** p < 0.01.
Figure C.4: Top 1% and middle 40% income shares: Australia, US and France 1991-2018

Notes: Distribution of post-tax disposable income (after all taxes and transfers) among adults. Narrow equal-split adults series (income of married couples divided by two).
This paper builds on earlier work by Fisher-Post (2020).
A few preliminary adjustments to ALife data are required: see Appendix A.1.
The minimum contribution rate is now 10% and is scheduled to gradually increase up to 12% by 1 July 2025.
According to the ABS (2008b, p. 3), 'Gross imputed rent is the market value of the rental equivalent, and has been estimated using hedonic regression. Net imputed rent for owner occupiers has been derived by subtracting the housing costs normally paid by landlords (i.e., council rates, mortgage interest, building insurance premiums, repairs and maintenance) from gross imputed rent.'
Total capital income is defined here as 'total net property income of households and non-profit institutions serving households' plus 'total net primary income of corporations'.
Age Pension income is not directly reported in ALife, but we can combine information on receipt of government pensions and age to infer it. In addition, we use ALife only for the top 1%, a group almost certain not to receive the Age Pension given that it is subject to both an income test and an asset test.
Acknowledgements
For helpful comments and discussions, we thank Rafael Carranza and participants of the IARIW 37 th General Conference, the 2 nd Australian Workshop on Public Finance, the 2022 Australian Conference of Economists, the 2022 Melbourne Institute Brownbag seminar series and the 2021 World Inequality Conference.
Appendix A: Data appendix

Appendix A.1: Preliminary adjustments to top 1% in ALife

A few adjustments are performed in ALife before it is combined with survey data for the top 1%.
Incomes and income components are not top coded in ALife, with one exception: in each year, the 24 largest 'employment termination' (redundancy) payments in the entire tax filer population are reduced to the level of the 25 th -largest payment value. Between 1991 and 2017, this represented a total adjustment of between $8 million and $57 million in total and (noting that ALife is a 10% sample) affected between 0 and 7 individuals in ALife each year (see Table A.1). We distribute the portion of Employment Termination Payment that was cut due to top-coding: we take 10% of the total shortfall and divide it between all top-coded observations in ALife.
Appendix A.3: Superannuation balance regression estimates
Mohamed Saidane
email: [email protected]
A New Viterbi-Based Decoding Strategy for Market Risk Tracking: An application to the Tunisian Foreign Debt Portfolio during 2010-2012
JEL classification: C38, C53, G17, G32
Keywords: Factor analysis, Volatility Clustering, Hidden Markov Models, Viterbi-EM Algorithm, Portfolio's Value-at-Risk
In this paper, a novel market risk tracking and prediction strategy is introduced. Our approach takes volatility clustering into account and allows for the possibility of regime shifts in the intra-portfolio latent correlation structure. The proposed specification combines hidden Markov models (HMM) with latent factor models that take into account the presence of both conditional skewness and leverage effects in stock returns.
A computationally efficient expectation-maximization (EM) algorithm based on the Viterbi decoder is developed to estimate the model parameters. Using daily exchange rate data of the Tunisian dinar versus the currencies of the main Tunisian government's creditors, during the 2011 revolution period, the model parameters are estimated. Then, the suitable model is used in conjunction with a Monte Carlo simulation strategy to predict the Value-at-Risk (VaR) of the Tunisian government's foreign debt portfolio. The backtesting results indicate that the new approach appears to give a good fit to the data and can improve the VaR predictions, particularly during financial instability periods.
INTRODUCTION
According to Saidane and Mosbahi et al., the understanding of co-movements among asset returns is a central element of the portfolio risk management process. These authors advocate the use of a mixture of probabilistic factor analyzers and the conditionally heteroskedastic latent factor model to handle co-movements, heterogeneity and time-varying volatility embedded in financial data.

They demonstrate how their proposed strategies can be applied to the estimation of the portfolio's Value-at-Risk (VaR). However, an assumption of these models is that the correlation structure of the portfolio is constant over time, but recent empirical works (e.g. Saidane, Tsang et al., Hamilton, and Ang) have shown that this assumption of structural stability is invalid for financial returns, especially during crisis periods. For example, when the economy is hit by a permanent or temporary exogenous unpredictable shock, the cross-correlation behavior among several financial assets and the inter-relationship between volatilities can be expected to shift simultaneously. In light of this, we propose a novel market risk prediction strategy considering the possibility of regime switching in the interrelationships among several asset classes.
The new approach presented in this paper allows for the possibility of regime shifts in the intra-portfolio latent correlation structure and takes volatility clustering into account. The proposed specification combines latent factor models that take into account the presence of both conditional skewness and leverage effects with hidden Markov models (HMM). To capture the volatility clustering and leverage effect patterns of the return series, we assume that the common variances are modeled separately using quadratic generalized autoregressive conditionally heteroskedastic (GQARCH) processes. This provides a more tractable way to handle the time-varying volatility, co-movements and latent heterogeneity in financial data.
For the maximum likelihood estimation we proceed in two steps. In the first step, we use the Viterbi decoding algorithm to find the most probable path through the HMM, given the observed data, which we take as an estimate of the true path. In the second step, we implement the Expectation-Maximization (EM) algorithm introduced by Dempster et al. (1977) to estimate the model parameters. Our proposed estimation strategy overcomes the complexity and limitations of the exact learning algorithm, especially when the number of hidden states and the length of the time sequence become larger.
The remainder of this paper is organized as follows. In the next section, we provide further background on the factorial hidden Markov volatility model. In section 2, we discuss the inference procedure for the latent factors structure. We then present our iterative maximum-likelihood expectation-maximization (EM) algorithm in section 3. We describe the portfolio's VaR simulationbased Viterbi tracking strategy in section 4 and report on the backtesting results in section 5. In this paper, the currency risk of the Tunisian government's foreign debt portfolio during the revolution period of 14 January 2011 is considered as the basis for an application to our novel prediction strategy. Our portfolio includes the main debt currencies against the Tunisian dinar, such as the European euro, the American dollar, the Japanese yen, the Swiss franc and the British pound. Finally, we conclude the paper by summarizing our contributions and discussing the future research directions.
THE FACTORIAL HIDDEN MARKOV VOLATILITY MODEL
Throughout this paper, we consider a multivariate discrete-time model. The closing price of the k-th asset in the portfolio at the t-th trading day is denoted by 𝑝 𝑘,𝑡 , and the opening price at the first trading day by 𝑝 𝑘,0 .
For each 𝑡 ≥ 1, let 𝑟 𝑘,𝑡 = log(𝑝 𝑘,𝑡 /𝑝 𝑘,𝑡-1 ) be the log-return of the k-th asset. Our model assumes a Markov switching relationship between the observed variables (the log-returns) and a set of q latent factors, which depend on the market regime. This new framework, called factorial hidden Markov volatility model (FHMV), is defined by:
𝒓 𝑡 = 𝚽 𝑗 𝒛 𝑡 + 𝝐 𝑡 , (1)
where, ∀ 𝑡 = 1, … , 𝑇, 𝒓 𝑡 is a (𝑝 × 1) vector of log-returns. The transition probabilities of the first order homogenous hidden Markov process from state 𝑖 to state 𝑗 (∀ 𝑖, 𝑗 = 1, … , 𝑛) are represented by 𝑝(𝑆 𝑡 = 𝑗|𝑆 𝑡-1 = 𝑖), where 𝑗 is the actual market regime at time 𝑡, given the previous regime 𝑖 at time 𝑡 -1. In a specified regime 𝑆 𝑡 = 𝑗, 𝚽 𝑗 is the (𝑝 × 𝑞) factor loadings matrix.
The common latent factors 𝒛 𝑡 are generated from the multivariate normal distributions:
𝒛 𝑡 ~𝒩(𝟎, 𝛀 𝑗 ), (2)
where 0 and 𝛀_j denote, respectively, the (q × 1) mean vector and the (q × q) diagonal covariance matrix of the latent vectors 𝒛_t. The diagonal elements of 𝛀_j (the common variances) are described by switching univariate quadratic GARCH (GQARCH) processes. Under a particular regime S_t = j, reached from S_{t-1} = i, the l-th common factor variance is given by:
$$\omega_{l,t}^{j} = \beta_{0l}^{j} + \beta_{1l}^{j}\, z_{l,t-1}^{i} + \beta_{2l}^{j} \big(z_{l,t-1}^{i}\big)^{2} + \beta_{3l}^{j}\, \omega_{l,t-1}^{i}. \qquad (3)$$
Assuming that 𝛽 1𝑙 𝑗 , 𝛽 2𝑙 𝑗 > 0, if 𝑧 𝑙,𝑡-1 < 0, its impact on the variance 𝜔 𝑙,𝑡 is lower than in the case where 𝑧 𝑙,𝑡-1 > 0.
Finally, the (𝑝 × 1) vector of specific factors can be written as follows:
𝝐 𝑡 ~𝒩(𝝁 𝑗 , 𝚲 𝑗 ), (4)
where 𝝁 𝑗 and 𝚲 𝑗 are, respectively, the (𝑝 × 1) mean vectors and (𝑝 × 𝑝) diagonal covariance matrices of the specific factors.
Assumption 1:
In order to ensure the positivity of the common variances and the stationarity of the covariance structure of the studied series, we introduce the following constraints on the parameters of the quadratic GARCH specification:

$$\beta_{2l}^{j} + \beta_{3l}^{j} < 1, \qquad \beta_{0l}^{j}, \beta_{2l}^{j}, \beta_{3l}^{j} > 0 \quad \text{and} \quad \big(\beta_{1l}^{j}\big)^{2} \le 4\,\beta_{0l}^{j}\,\beta_{2l}^{j}, \qquad \forall\, j = 1, \dots, n, \; l = 1, \dots, q.$$
Assumption 2: To guarantee the identification of model (1), we assume that, for all j, rank(𝚽_j) = q and p ≥ q. The factors 𝒛_t and 𝝐_t are also assumed to be uncorrelated and mutually independent. For more detailed discussions of the identification problem, the reader can refer to Saidane and Lavergne (2011) and to Carnero et al.
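To make the data-generating process in equations (1)-(4) concrete, the sketch below simulates returns from a two-state specification with one common factor and three assets. It is an illustration only: the parameter values are invented (chosen to satisfy Assumption 1), and the recursion for the common variance follows equation (3).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-state, one-factor, three-asset parameterisation.
P = np.array([[0.95, 0.05], [0.20, 0.80]])          # transition matrix
Phi = [np.array([[0.8], [0.5], [0.3]]),             # loadings, regime 1
       np.array([[1.2], [0.9], [0.6]])]             # loadings, regime 2
mu = [np.zeros(3), np.zeros(3)]                     # specific means
Lam = [0.05 * np.ones(3), 0.10 * np.ones(3)]        # specific variances
beta = [np.array([0.02, 0.01, 0.10, 0.80]),         # (b0, b1, b2, b3), regime 1
        np.array([0.05, 0.02, 0.15, 0.80])]         # regime 2 (both satisfy Assumption 1)

T, s, z_prev, w_prev = 500, 0, 0.0, 0.1
returns = np.empty((T, 3))
for t in range(T):
    s = rng.choice(2, p=P[s])                                  # next hidden regime
    b0, b1, b2, b3 = beta[s]
    w = b0 + b1 * z_prev + b2 * z_prev**2 + b3 * w_prev        # GQARCH variance, eq. (3)
    z = rng.normal(0.0, np.sqrt(w))                            # common factor, eq. (2)
    eps = mu[s] + rng.normal(0.0, np.sqrt(Lam[s]))             # specific factors, eq. (4)
    returns[t] = Phi[s] @ np.array([z]) + eps                  # observed log-returns, eq. (1)
    z_prev, w_prev = z, w
```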
INFERENCE OF THE LATENT FACTORS STRUCTURES
Our model can be expressed as a switching state-space system with a measurement equation:
𝒓 𝑡 = 𝝁 𝑗 + 𝚽 𝑗 𝒛 𝑡 + 𝝐 𝑡 , (5)
and a transition equation:
𝒛 𝑡 = 𝟎 • 𝒛 𝑡-1 + 𝒛 𝑡 , (6)
In order to find the optimal sequences of hidden states 𝑆 𝑡 and latent factors 𝒛 𝑡 , we can use the Viterbi decoding algorithm based on the minimization of the Hamiltonian cost function given by the following equation: 𝛿 𝑡-1,𝑖 is the "optimal" HMM sequence up to time 𝑡 -1 when the market state is in regime 𝑖 at time 𝑡 -1.
$$\mathcal{H}(r_{1:T}, Z_{1:T}, S_{1:T}) \simeq c + \mathcal{S}_1'(-\log \boldsymbol{\pi}) + \sum_{t=2}^{T} \mathcal{S}_t'(-\log \mathbf{P})\,\mathcal{S}_{t-1} + \frac{1}{2}\sum_{t=1}^{T}\sum_{j=1}^{n}\Big[\log|\boldsymbol{\Lambda}_j| + (\mathbf{r}_t - \boldsymbol{\Phi}_j \mathbf{z}_t - \boldsymbol{\mu}_j)'\boldsymbol{\Lambda}_j^{-1}(\mathbf{r}_t - \boldsymbol{\Phi}_j \mathbf{z}_t - \boldsymbol{\mu}_j)\Big]\mathcal{S}_t(j) + \frac{1}{2}\sum_{t=1}^{T}\sum_{j=1}^{n}\sum_{l=1}^{q}\Big[\log \omega_{l,t}^{j} + \frac{z_{l,t}^{2}}{\omega_{l,t}^{j}}\Big]\mathcal{S}_t(j).$$

In what follows, $\delta_{t-1,i}$ denotes the "optimal" partial Hamiltonian cost of the HMM sequence up to time $t-1$ when the market state is in regime $i$ at time $t-1$.
Firstly we define the "optimal" partial Hamiltonian cost up to time 𝑡 of the observed log-return sequence 𝑟 1:𝑡 when the market state is in regime 𝑗 at time 𝑡:
$$\delta_{t,j} = \min_{S_{1:t-1},\, Z_{1:t}} \mathcal{H}\big(Z_{1:t}, \{S_{1:t-1}, S_t = j\}, r_{1:t}\big). \qquad (10)$$
To calculate this cost correctly, we need the optimal filtered estimates of the common latent factors. We need also the predicted and filtered common variances:

$$\boldsymbol{\Omega}_{t|t-1}^{i(j)} = \mathbb{E}\Big[\big(\mathbf{z}_t - \mathbf{z}_{t|t-1}^{i(j)}\big)\big(\mathbf{z}_t - \mathbf{z}_{t|t-1}^{i(j)}\big)' \,\Big|\, r_{1:t-1}, S_t = j, S_{t-1} = i\Big], \qquad (11)$$

whose diagonal elements are the predicted common variances $\omega_{l,t|t-1}^{i(j)}$, for $l = 1, \dots, q$. Given the information set $\mathcal{D}_{1:t-1} = \{r_{1:t-1}, Z_{1:t-1}, S_{1:t-1}\}$, the predicted variances are calculated as the conditional expectations of the predicted volatilities, $\mathbb{E}(\omega_{l,t} \mid \mathcal{D}_{1:t-1})$, and from the total variance formula $\mathbb{E}\big(z_{l,t-1}^{i\,2} \mid \mathcal{D}_{1:t-1}\big) = \mathrm{Var}\big(z_{l,t-1}^{i} \mid \mathcal{D}_{1:t-1}\big) + \mathbb{E}\big(z_{l,t-1}^{i} \mid \mathcal{D}_{1:t-1}\big)^{2} = \omega_{l,t-1|t-1}^{i} + \big(z_{l,t-1|t-1}^{i}\big)^{2}$ we obtain the filtered variances $\omega_{l,t-1|t-1}^{i}$. When a novel observation $\mathbf{r}_t$ becomes available, all the prediction estimates can be updated recursively via the Kalman filtering equations:
$$\mathbf{z}_{t|t}^{i(j)} = \mathbf{z}_{t|t-1}^{i(j)} + \mathbf{K}_t(i,j)\big[\mathbf{r}_t - \boldsymbol{\mu}_j - \boldsymbol{\Phi}_j \mathbf{z}_{t|t-1}^{i(j)}\big], \qquad (15)$$

$$\boldsymbol{\Omega}_{t|t}^{i(j)} = \big[\mathbf{I}_k - \mathbf{K}_t(i,j)\boldsymbol{\Phi}_j\big]\boldsymbol{\Omega}_{t|t-1}^{i(j)} = \boldsymbol{\Omega}_{t|t-1}^{i(j)} - \mathbf{K}_t(i,j)\,\boldsymbol{\Gamma}_{t|t-1}^{i(j)}\,\mathbf{K}_t(i,j)', \qquad (16)$$
with 𝚪 𝑡|𝑡-1 𝑖(𝑗) = 𝚲 𝑗 + 𝚽 𝑗 𝛀 𝑡|𝑡-1 𝑖(𝑗) 𝚽 𝑗 ′ and 𝑲 𝑡 (𝑖, 𝑗) = 𝛀 𝑡|𝑡-1 𝑖(𝑗) 𝚽 𝑗 ′ 𝚪 𝑡|𝑡-1 𝑖(𝑗)-1 . The innovation cost 𝛿 𝑡,𝑡-1,𝑖,𝑗 related to each transition from state 𝑖 to state 𝑗, is given by:
𝛿 𝑡,𝑡-1,𝑖,𝑗 = 1 2 log |𝚪 𝑡|𝑡-1 𝑖(𝑗) | + 1 2 [𝒓 𝑡 -𝝁 𝑗 -𝚽 𝑗 𝒛 𝑡|𝑡-1 𝑖(𝑗) ] ′ 𝚪 𝑡|𝑡-1 𝑖(𝑗)-1 [𝒓 𝑡 -𝝁 𝑗 -𝚽 𝑗 𝒛 𝑡|𝑡-1 𝑖(𝑗) ] -log 𝑝 𝑖𝑗 . (17)
A substantial part of this cost is exclusively due to the transition of the latent factors, as illustrated by the innovation component in equation (17). The optimal partial costs are then obtained recursively as $\delta_{t,j} = \min_i \{\delta_{t-1,i} + \delta_{t,t-1,i,j}\}$, with $\lambda_{t-1,j} = \arg\min_i \{\delta_{t-1,i} + \delta_{t,t-1,i,j}\}$ recording the best previous regime. As a result, we obtain for each time $t$ the "optimal" filtered latent factors $\mathbf{z}_{t|t}^{j} = \mathbf{z}_{t|t}^{\lambda_{t-1,j}(j)}$ and their corresponding variances $\boldsymbol{\Omega}_{t|t}^{j} = \boldsymbol{\Omega}_{t|t}^{\lambda_{t-1,j}(j)} = \mathrm{diag}\big[\omega_{l,t|t}^{\lambda_{t-1,j}(j)}\big]$.
When all the log-returns 𝑟 1:𝑇 become available, we obtain the optimal global cost 𝛿 𝑇 * = min 𝑗 {𝛿 𝑇,𝑗 }.
Then, we use the index of the optimal final state in order to decode the optimal sequence of HMM states: 𝑗 𝑇 * = arg min 𝑗 {𝛿 𝑇,𝑗 }. To get the best regime for all time steps, we trace back through the market regime switching record: 𝑗 𝑡 * = 𝜆 𝑡,𝑗 𝑡+1 * .
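The recursion and backtracking just described can be summarised in a few lines of code. The sketch below is schematic: `transition_cost(i, j, t)` stands for the innovation cost of equation (17), which in practice is computed from the regime-specific Kalman prediction and update in equations (15)-(16), and is assumed to be supplied by the caller.

```python
import numpy as np

def viterbi_decode(T, n, initial_cost, transition_cost):
    """Minimum-cost regime path.

    initial_cost[j]          : cost of starting in regime j at t = 0
    transition_cost(i, j, t) : innovation cost of moving from regime i to j at time t,
                               as in equation (17) (assumed supplied by the caller)
    """
    delta = np.full((T, n), np.inf)      # optimal partial costs
    back = np.zeros((T, n), dtype=int)   # backtracking pointers (lambda)
    delta[0] = initial_cost
    for t in range(1, T):
        for j in range(n):
            costs = [delta[t - 1, i] + transition_cost(i, j, t) for i in range(n)]
            back[t, j] = int(np.argmin(costs))
            delta[t, j] = costs[back[t, j]]
    # Backtrack from the best final regime.
    path = np.zeros(T, dtype=int)
    path[-1] = int(np.argmin(delta[-1]))
    for t in range(T - 2, -1, -1):
        path[t] = back[t + 1, path[t + 1]]
    return path, delta[-1].min()
```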
We note here that the smoothing gain matrix 𝑳 𝑡 (𝑗)𝑘 = 𝛀 𝑡|𝑡 𝑗 𝟎 𝑘 ′𝛀 𝑡+1|𝑡 (𝑗)𝑘-1 = 𝟎 and the smoothing equations are simply given by:
$$\mathbf{z}_{t|T}^{(j)k} = \mathbf{z}_{t|t}^{j} + \mathbf{L}_t^{(j)k}\big[\mathbf{z}_{t+1|T}^{k} - \mathbf{z}_{t+1|t}^{j(k)}\big] = \mathbf{z}_{t|t}^{j}, \qquad (18)$$

$$\boldsymbol{\Omega}_{t|T}^{(j)k} = \boldsymbol{\Omega}_{t|t}^{j} + \mathbf{L}_t^{(j)k}\big[\boldsymbol{\Omega}_{t+1|T}^{k} - \boldsymbol{\Omega}_{t+1|t}^{(j)k}\big]\mathbf{L}_t^{(j)k\,\prime} = \boldsymbol{\Omega}_{t|t}^{j}. \qquad (19)$$
Following the smoothing procedure developed by Saidane, the sufficient statistics for our estimation problem will be given by $\mathbb{E}(\mathcal{S}_t \mid \cdot) = \mathcal{S}_t(j^*)$, $\mathbb{E}(\mathcal{S}_t \mathcal{S}_{t-1}' \mid \cdot) = \mathcal{S}_t(j^*)\mathcal{S}_{t-1}(j^*)'$ and $\mathbb{E}(\mathbf{z}_t \mathcal{S}_t(j) \mid \cdot) = \mathbf{z}_{t|T}^{j_t^*}$ if $j = j_t^*$ and $0$ otherwise. In this case, the operator $\mathbb{E}(\cdot \mid \cdot)$ denotes the expectation with respect to the distribution $p(Z, S \mid r)$.
MAXIMUM LIKELIHOOD ESTIMATION
We propose a two-step learning algorithm combining the expectation-maximization (EM) algorithm (Dempster et al., 1977) and the Viterbi decoding algorithm in order to estimate the parameters Θ of our model. The E-step consists in calculating the expected value of the complete-data log-likelihood function with respect to the conditional distribution of the unobserved variables (Z, S) given the observed returns r and Θ^(e), the value of the parameter at the current iteration (e). The conditional expectation is then maximized with respect to Θ at the M-step. In this case, the auxiliary function that will be maximized can be approximated as follows:
$$\mathcal{Q}(\Theta, \Theta^{(e)}) \simeq \sum_{j=1}^{n} \mathcal{S}_1(j)\log p(S_1 = j) + \sum_{t=2}^{T}\sum_{i=1}^{n}\sum_{j=1}^{n} \mathcal{S}_t(j)\mathcal{S}_{t-1}(i)\log p_{ij} - \frac{1}{2}\sum_{t=1}^{T}\sum_{j=1}^{n} \mathcal{S}_t(j)\Big[\log|\boldsymbol{\Lambda}_j| + \mathbb{E}\big\{(\mathbf{r}_t - \boldsymbol{\mu}_j - \boldsymbol{\Phi}_j \mathbf{z}_t^{j})'\boldsymbol{\Lambda}_j^{-1}(\mathbf{r}_t - \boldsymbol{\mu}_j - \boldsymbol{\Phi}_j \mathbf{z}_t^{j}) \mid r_{1:T}, \Theta^{(e)}\big\}\Big] - \frac{1}{2}\sum_{t=1}^{T}\sum_{j=1}^{n}\sum_{l=1}^{q} \mathcal{S}_t(j)\,\mathbb{E}\Big[\log\big(\omega_{l,t}^{j}\big) + \frac{z_{l,t}^{2}}{\omega_{l,t}^{j}} \,\Big|\, r_{1:T}, \Theta^{(e)}\Big], \qquad (20)$$
and the conditional expectations can be derived using the sufficient statistics obtained by the Viterbi algorithm in section 3.
The basic idea behind our algorithm is summarized as follows: At the end of each iteration (𝑒) we find Θ (𝑒+1) , the optimal value of the parameter Θ that maximizes the function in equation ( 20) over all possible values of Θ. Then Θ (𝑒+1) replaces Θ (𝑒) in the E-step and Θ (𝑒+2) is chosen to maximize 𝒬(Θ, Θ (𝑒+1) ), and so on until convergence. However, given the nonlinear dependency of the common variance parameters in the last summation of equation ( 20), we can maximize in a first time this function with respect to the probabilities of the initial state 𝜋 𝑗 , the transition probabilities 𝑝 𝑖𝑗 , the specific means 𝝁 𝑗 , the factor loadings 𝚽 𝑗 and the specific variances 𝚲 𝑗 . In a second time, the parameters of the common variances can be determined numerically.
For the initial state probabilities $\pi_j$, we use the Lagrange multipliers approach subject to the condition that $\sum_{j=1}^{n} \pi_j = 1$, and we obtain the updated estimate:

$$\hat{\pi}_j = \frac{\mathcal{S}_1(j)}{\sum_{i=1}^{n} \mathcal{S}_1(i)}. \qquad (21)$$
We use also the Lagrange formalism, subject to the unity constraint $\sum_{j=1}^{n} p_{ij} = 1$, to obtain the updated transition probabilities:

$$\hat{p}_{ij} = \frac{\sum_{t=2}^{T} \mathcal{S}_t(j)\mathcal{S}_{t-1}(i)}{\sum_{t=2}^{T} \mathcal{S}_{t-1}(i)}. \qquad (22)$$
The maximization of the auxiliary function with respect to the specific means yields the updated estimates:
$$\hat{\boldsymbol{\mu}}_j = \frac{1}{\sum_{t=1}^{T} \mathcal{S}_t(j)} \sum_{t=1}^{T} \mathcal{S}_t(j)\big(\mathbf{r}_t - \boldsymbol{\Phi}_j \mathbf{z}_{t|T}^{j}\big). \qquad (23)$$
The updated l-th row of the factor loadings matrix $\hat{\boldsymbol{\Phi}}_j$ can be expressed as follows:

$$\hat{\boldsymbol{\phi}}_{l,j} = \left[\sum_{t=1}^{T} \mathcal{S}_t(j)\big(r_{l,t} - \mu_{l,j}\big)\mathbf{z}_{t|T}^{j}\right]' \left[\sum_{t=1}^{T} \mathcal{S}_t(j)\Big(\boldsymbol{\Omega}_{t|T}^{j} + \mathbf{z}_{t|T}^{j}\mathbf{z}_{t|T}^{j\,\prime}\Big)\right]^{-1}, \qquad (24)$$
where 𝜇 𝑙,𝑗 is the specific mean of the l-th asset return 𝑟 𝑙,𝑡 under the market regime 𝑗. Then, given these updated parameters, we can update the specific variances according to the following rule:
$$\hat{\boldsymbol{\Lambda}}_j = \frac{1}{\sum_{t=1}^{T} \mathcal{S}_t(j)} \sum_{t=1}^{T} \mathcal{S}_t(j)\,\mathrm{diag}\Big[\boldsymbol{\Phi}_j \boldsymbol{\Omega}_{t|T}^{j} \boldsymbol{\Phi}_j' + \big(\mathbf{r}_t - \boldsymbol{\mu}_j - \boldsymbol{\Phi}_j \mathbf{z}_{t|T}^{j}\big)\big(\mathbf{r}_t - \boldsymbol{\mu}_j - \boldsymbol{\Phi}_j \mathbf{z}_{t|T}^{j}\big)'\Big]. \qquad (25)$$
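The closed-form updates (23)-(25) translate directly into matrix operations once the smoothed factors, their covariances and the decoded state indicators are available. The following sketch shows a first-stage M-step for a single regime; all inputs, and the function itself, are illustrative rather than part of the original algorithm's implementation.

```python
import numpy as np

def m_step_regime(r, z, Omega, s, Phi):
    """First-stage M-step for one regime (equations 23-25), given current loadings Phi.

    r     : (T, p) observed returns
    z     : (T, q) smoothed common factors under this regime
    Omega : (T, q, q) smoothed factor covariances under this regime
    s     : (T,) 0/1 indicators that the decoded state equals this regime
    Phi   : (p, q) current factor loadings for this regime
    """
    w = s.astype(float)
    W = w.sum()
    # Specific means, eq. (23).
    mu = (w[:, None] * (r - z @ Phi.T)).sum(axis=0) / W
    # Factor loadings, eq. (24): sum of S_t * (Omega_t + z_t z_t') on the right.
    M = np.einsum('t,tkl->kl', w, Omega) + (w[:, None] * z).T @ z
    A = (w[:, None] * (r - mu)).T @ z                 # sum of S_t * (r_t - mu) z_t'
    Phi_new = np.linalg.solve(M.T, A.T).T             # equivalent to A @ inv(M)
    # Specific variances, eq. (25).
    resid = r - mu - z @ Phi_new.T
    Lam = (np.einsum('t,tkl->kl', w, Phi_new @ Omega @ Phi_new.T).diagonal()
           + (w[:, None] * resid**2).sum(axis=0)) / W
    return mu, Phi_new, Lam
```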
In a second time, given the new values of $\pi_j$, $p_{ij}$, $\boldsymbol{\mu}_j$, $\boldsymbol{\Phi}_j$ and $\boldsymbol{\Lambda}_j$, we can approximate the conditional distribution of the log-returns by the normal distribution $\mathbf{r}_t \mid r_{1:t-1}, S_t = j, S_{1:t-1} \sim \mathcal{N}\big[\boldsymbol{\mu}_j, \boldsymbol{\Gamma}_{t|t-1}^{j}\big]$ (e.g. Harvey et al.). In this case, $\boldsymbol{\Gamma}_{t|t-1}^{j} = \boldsymbol{\Lambda}_j + \boldsymbol{\Phi}_j \boldsymbol{\Omega}_{t|t-1}^{j} \boldsymbol{\Phi}_j'$, where $\boldsymbol{\Omega}_{t|t-1}^{j} = \mathrm{diag}\big[\omega_{l,t|t-1}^{\lambda_{t-1,j}(j)}\big]$ is the conditional expectation of $\boldsymbol{\Omega}_t$, given the sequences $r_{1:t-1}$ and $S_{1:t-1}$, obtained via the modified Kalman filter approach based on the Viterbi decoder developed in section 3.
Using these approximations and ignoring the initial conditions, we obtain the following pseudo log-likelihood function:

$$\mathcal{L}^{*} = c - \frac{1}{2}\sum_{t=1}^{T}\sum_{j=1}^{n} \mathcal{S}_t(j)\Big[\log\big|\boldsymbol{\Gamma}_{t|t-1}^{j}\big| + (\mathbf{r}_t - \boldsymbol{\mu}_j)'\boldsymbol{\Gamma}_{t|t-1}^{j\,-1}(\mathbf{r}_t - \boldsymbol{\mu}_j)\Big]. \qquad (26)$$
In a first stage, we ignore the elements in the last summation of equation (20) and maximize the remaining terms with respect to ($\pi_j$, $p_{ij}$, $\boldsymbol{\mu}_j$, $\boldsymbol{\Phi}_j$ and $\boldsymbol{\Lambda}_j$) using the EM algorithm. During this step, the parameters of the quadratic GARCH processes $\beta = \{\beta_0, \beta_1, \beta_2, \beta_3\}$ are kept fixed at their values obtained in the previous iteration. In a second stage, we optimize the pseudo log-likelihood in equation (26) with respect to $\beta$, using the values of $\pi_j$, $p_{ij}$, $\boldsymbol{\mu}_j$, $\boldsymbol{\Phi}_j$ and $\boldsymbol{\Lambda}_j$ found in the first step. The R package NlcOptim, developed by Chen, can be used in this step to find the parameters of the conditionally heteroskedastic component $\beta$ quickly and accurately.
THE FHMV APPROACH FOR VALUE-AT-RISK PREDICTION
Formally put, Value-at-Risk is a financial metric that measures the worst expected loss that could happen in an investment portfolio over a given horizon for a given confidence level. In this section a general Monte Carlo simulation FHMV-based framework for value-at-risk prediction, under regime switching dynamics, will be proposed. This approach will then be used, in section 5, for the evaluation of the currency risk associated with the Tunisian government's foreign debt portfolio during the revolution period of 14 January 2011.
Forecasting future market regime changes
Given the information set available at time 𝑡, 𝒟 1:𝑡 , and the actual market regime 𝑖, the conditional mean of the multivariate predictive distribution given by our FHMV model is as follows:
$$\mathbb{E}(\mathbf{r}_{t+1} \mid \mathcal{D}_{1:t}) = \boldsymbol{\mu}_j, \qquad (27)$$
and the conditional variance-covariance matrix is given by:
$$\boldsymbol{\Gamma}_{t+1|t}^{j} = \boldsymbol{\Lambda}_j + \boldsymbol{\Phi}_j \boldsymbol{\Omega}_{t+1|t}^{j} \boldsymbol{\Phi}_j'. \qquad (28)$$
Within this framework, the forecasts of the future market regime jumps and the model parameters updating process are implemented simultaneously. Thus, by the end of each transaction day the closing prices will be included in the database. Thereafter, the parameters of our model will be updated using the newer information set available at this point in time, and the updated one-step-ahead forecasts of the common latent factor variances will be derived, in line with equation (3), via the relation $\tilde{\omega}_{l,t+1|t}^{j} = \beta_{0l}^{j} + \beta_{1l}^{j}\, z_{l,t|t} + \beta_{2l}^{j}\big(z_{l,t|t}^{2} + \omega_{l,t|t}\big) + \beta_{3l}^{j}\, \omega_{l,t|t}$.
The simulation strategy
Our simulation strategy consists of the following steps:

1. Firstly, we define the coverage rate α of the VaR.

5. Finally, to compute the portfolio's VaR for the period t + 1, we sort the simulated values in ascending order and we exclude the α% lowest returns ℛ_{s,t+1|t}. In this case, the predicted VaR is the minimum of the remaining returns.
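The intermediate steps of the strategy amount to drawing scenarios from the one-step-ahead predictive distribution implied by equations (27)-(28) and aggregating them with the portfolio weights. The following sketch illustrates one possible implementation of the whole loop; the regime-specific means, predicted covariance matrices and portfolio weights are assumed to come from the fitted model.

```python
import numpy as np

def simulate_portfolio_var(alpha, m, current_regime, P, mu, Gamma, weights, rng=None):
    """One-step-ahead portfolio VaR by Monte Carlo simulation.

    alpha   : coverage rate of the VaR (e.g. 0.01)
    m       : number of simulated scenarios
    P       : (n, n) regime transition matrix
    mu      : list of (p,) regime-specific mean vectors
    Gamma   : list of (p, p) predicted regime-specific covariances, eq. (28)
    weights : (p,) portfolio weights (currency shares)
    """
    rng = rng or np.random.default_rng()
    n = P.shape[0]
    portfolio_returns = np.empty(m)
    for s in range(m):
        j = rng.choice(n, p=P[current_regime])          # simulate the next regime
        r = rng.multivariate_normal(mu[j], Gamma[j])    # simulate the log-returns
        portfolio_returns[s] = weights @ r              # portfolio log-return
    # Sort ascending, discard the alpha% lowest, take the minimum of the rest.
    sorted_returns = np.sort(portfolio_returns)
    cutoff = int(np.floor(alpha * m))
    return sorted_returns[cutoff]
```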
NUMERICAL EXPERIMENTS USING EXCHANGE RATE DATA
In this empirical experiment, we use the FHMV model to analyze the dynamic latent correlation structure of the five dominant currencies in the Tunisian government's foreign debt portfolio. The optimal specification obtained with the Viterbi algorithm will then be used to evaluate, through a backtesting exercise, the performance of the new methodology in detecting the foreign exchange risk associated with this portfolio. All the numerical results and the graphs in this section are obtained using the R statistical freeware, version 4.1.
Data presentation and summary
We focus in this section on the main five currencies forming the basis of the Tunisian government's foreign debt portfolio, namely the European euro (EUR), the American dollar (USD), the Japanese yen (JPY), the Swiss franc (CHF) and the British pound (GBP). Our dataset, downloaded from the Yahoo Finance website and spanning the period from 02/01/2007 to 30/12/2012, consists of 1500 daily exchange rates for the different currencies expressed in terms of Tunisian dinar (TND). This dataset includes the period of social mobilization and political change in Tunisia (the revolution of 14 January 2011). Taking this period of social instability into account allows us to investigate the efficiency of our Jump-VaR methodology during crisis times.
In Table 1, we give a variety of descriptive statistics to study the distributional characteristics of the data and to test the empirical skewness and kurtosis against the values of normal distributions (e.g. D'Agostino, 1970; Anscombe and Glynn, 1983). We also implemented the Jarque-Bera normality test (Jarque and Bera). From these results, we note that all the log-returns are non-normally distributed and skewed (positively for EUR, USD, JPY and GBP and negatively for CHF). We note also a positive excess kurtosis for all the currencies. The results of the Ljung-Box statistic (Ljung and Box, 1978) show the presence of volatility clustering. This implies that we have a non-constant conditional volatility, and the use of a Markov-switching specification with a time-varying co-movement structure for the log-return series is more realistic in this situation.
insert Table 1 about here.
A preliminary latent structure analysis of the data
In order to select the model that best fits our dataset, we used the Akaike (AIC) and the Bayesian (BIC) information criteria. To this end, we trained standard and conditionally heteroskedastic models using one or two common factors and a number of hidden states varying between one and three, on the period from 02/01/2010 to 30/12/2012. Then, we used the selection criteria to identify the best model, i.e. the one with the minimum AIC and BIC values.
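As a reminder, the two criteria differ only in how they penalise the number of free parameters; the small helper below (with purely hypothetical inputs) shows the calculation.

```python
import numpy as np

def aic_bic(log_likelihood, n_params, n_obs):
    """Akaike and Bayesian information criteria for a fitted model."""
    aic = -2.0 * log_likelihood + 2.0 * n_params
    bic = -2.0 * log_likelihood + n_params * np.log(n_obs)
    return aic, bic

# Hypothetical comparison of two candidate specifications.
print(aic_bic(log_likelihood=10250.0, n_params=38, n_obs=750))
print(aic_bic(log_likelihood=10310.0, n_params=61, n_obs=750))
```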
insert Table 2 about here.
The results reported in Table 2 show that the FHMV model with two common factors and two HMM states is the best one fitting our dataset. For this optimal specification, the initial state probability vector and the transition probability matrix are as follows:
$$\boldsymbol{\pi} = \begin{bmatrix} 1 \\ 0 \end{bmatrix} \quad \text{and} \quad \mathbf{P} = \begin{bmatrix} 0.9491 & 0.0509 \\ 0.2311 & 0.7689 \end{bmatrix}.$$
In Figure 1, we depict the percentage of the variability of the different currencies expressed in terms of specific and common factors. During the crisis period, we can see that, on average, 90% and 95% of the variances of the EUR and the CHF are explained by the first common factor. During the normal period (before and after the 2011-2012), we can see that the first common factor explains on average 50% and 90% of the variability of the EUR and CHF.
The contribution of the second common factor to the variability of the EUR and CHF, during the instability period, is almost insignificant. During the revolution period, this factor explains more than 85% and 65% of the variability of the USD and JPY. However, the contribution of the first factor to the JPY variability is around 45% during the normal period. For the GBP, the contribution of this factor is around 35% over the whole period.
insert Figure 1 about here.
From these results, we can conclude that the first common factor is associated with the volatility dynamics of the European currencies. During the social mobilization period, the second factor is associated with the volatility dynamics of the American and Japanese currencies. We can conclude also that the first common latent factor expresses the relative value of the TND against the major trading partner's currencies (the European community countries). The second factor reproduces the relative value of the TND against a basket of global currencies in which the American and Japanese currencies are dominant.
insert Table 3 about here.
From the estimation results presented in Table 3, we note that the first common factor can be regarded as a European factor: it represents a basket of currencies, where the EUR dominates with relatively high loadings (50% in the first regime and 76% in the second regime). The weight of the GBP is relatively reduced in this basket. We note also that the second common factor represents a basket of currencies, where the USD dominates with relatively high loadings (52% in the first regime and nearly 60% in the second regime). In order to satisfy the identification constraints (e.g. Saidane and Lavergne 2011), we have taken 𝜙 1,2,𝑗 = 0, ∀𝑗 = 1,2, which imply that the European currency EUR is entirely absent from the second factor. The relative weight of the CHF is also reduced in this factor. Hence, we can consider the second common factor as an American factor.
insert Table 4 about here.
From Table 4, it appears that the excess volatility during the political instability period (the second hidden regime) is largely due to the significant increase in volatility persistence (e.g. Klaassen, 2002). We observe from this table that the sum of the volatility parameters, β_2 and β_3, of the two common factors in the second regime is close to 1.
All the previous conclusions are strongly confirmed by the estimated values of the specific variances given in Table 3. During the social mobilization period, the specific variance of the British pound is relatively high, which indicates that it deviates from its latent factorial class. On the other hand, the specific variances of the euro and the American dollar are the smallest ones, which indicates their dominant role within their latent factorial class.
Finally, in Figure 2, we depict the correlation structure of the different log-returns during the period 02/01/2010 to 30/12/2012. The graph shows an increase in co-movement between all the log-returns from the beginning of 2011 until near the end of the study period. This result confirms the financial contagion that affected the Tunisian economy during the revolution period.
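For reference, a sketch of how a model-implied correlation matrix can be obtained from the factor structure at a given date; the argument names are illustrative placeholders.

```python
import numpy as np

def implied_correlation(Phi, factor_var, specific_var):
    """Model-implied correlation matrix of the returns for one date and regime."""
    cov = Phi @ np.diag(factor_var) @ Phi.T + np.diag(specific_var)
    d = np.sqrt(np.diag(cov))
    return cov / np.outer(d, d)
```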
insert Figure 2 about here.
Selection of the most appropriate VaR model
In order to assess the currency risk associated with the Tunisian government's foreign debt portfolio during the social mobilization period of 14 January 2011, we first divided our dataset into a calibration set and a test set. The calibration (or training) set contains the log-returns of the different exchange rates during the period 02/01/2007-30/12/2009 (750 observations). The test (or backtesting) set contains the remaining 750 observations covering the period 02/01/2010-30/12/2012. Then, we used the Monte Carlo simulation strategy (section 4.2.) to evaluate the VaR of our portfolio. For each coverage rate 𝛼, we used the portfolio weights given in Table 5. Here, the weight of each exchange rate 𝛾 𝑘 is determined by the relative share of currency 𝑘 in the payment of the total foreign debt. For example, in 2010 Tunisia settled 61.3% of its foreign loans in euros, 14.3% in American dollars, 16.1% in Japanese yen, 2.4% in Swiss francs and 5.9% in British pounds. Hence, in 2010, 𝛾 1 = 0.613, 𝛾 2 = 0.143, 𝛾 3 = 0.161, 𝛾 4 = 0.024 and 𝛾 5 = 0.059. For 2011 and 2012, the weights are determined in the same way.
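A minimal sketch of the Monte Carlo portfolio VaR step under these weights; the simulated return matrix is assumed to come from whichever data-generating model is being tested, and the helper name is our own.

```python
import numpy as np

def mc_portfolio_var(sim_returns, weights, alpha):
    """
    One-day portfolio VaR by Monte Carlo simulation.
    sim_returns : (m, p) simulated one-step-ahead returns of the p exchange rates
    weights     : (p,)   portfolio weights (shares of each settlement currency)
    alpha       : coverage rate, e.g. 0.01 or 0.05
    """
    portfolio = sim_returns @ weights          # m simulated portfolio returns
    return -np.quantile(portfolio, alpha)      # loss quantile, reported as a positive number

# weights for 2010, ordered as (EUR, USD, JPY, CHF, GBP), taken from Table 5
w_2010 = np.array([0.613, 0.143, 0.161, 0.024, 0.059])
```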
insert Table 5 about here.
Next, the effectiveness of our methodology is assessed through backtesting experiments, using unconditional [START_REF] Kupiec | Techniques for Verifying the Accuracy of Risk Measurement Models[END_REF] and conditional [START_REF] Christoffersen | Elements of Financial Risk Management[END_REF] tests and the rolling sample method based on a one-day moving window scheme, with coverage rates ranging from 0.005 to 0.1 in steps of 0.005. All these calculations have been carried out by simulations from our FHMV model, the mixed factorial hidden Markov model (MFHMM) by [START_REF] Saidane | Forecasting Portfolio-Value-at-Risk with Mixed Factorial Hidden Markov Models[END_REF], the latent factor model with time varying volatility (FM) by [START_REF] Saidane | A Monte-Carlo-based Latent Factor Modeling Approach with Time-Varying Volatility for Value-at-Risk Estimation: Case of the Tunisian Foreign Exchange Market[END_REF] and the classical Monte Carlo simulation method (CMC) by [START_REF] Mosbahi | Mixture of Probabilistic Factor Analyzers for Market Risk Measurement: Empirical Evidence from the Tunisian Foreign Exchange Market[END_REF].
In order to balance precision and efficiency, we generated 𝑚 = 25,000 scenarios from each competing model (e.g. [START_REF] Saidane | Switching latent factor value-at-risk models for conditionally heteroskedastic portfolios: A comparative approach[END_REF][START_REF] Lu | Portfolio Value-at-Risk Estimation in Energy Futures Markets with Time-Varying Copula-GARCH Model[END_REF][START_REF] Bastianin | Modelling Asymmetric Dependence Using Copula Functions: An application to Value-at-Risk in the Energy Sector[END_REF][START_REF] Fantazzini | Dynamic Copula Modelling for Value at Risk[END_REF]).
Then, we calculated the VaR, the exception rates and the likelihood ratios for the proportion of failure test (LR-pof), the independence test (LR-ind) and the conditional coverage test (LR-cc).
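A possible implementation of the Kupiec proportion-of-failures statistic used here; the Christoffersen independence and conditional-coverage statistics follow the same pattern. This is our own sketch, not the authors' code.

```python
import numpy as np
from scipy.stats import chi2

def kupiec_pof(exceptions, n_obs, alpha):
    """
    Kupiec (1995) proportion-of-failures likelihood ratio and its p-value.
    exceptions : number of days the realized loss exceeded the VaR forecast
    n_obs      : number of backtesting days
    alpha      : nominal coverage rate of the VaR
    """
    pi_hat = exceptions / n_obs
    log_lik_null = (n_obs - exceptions) * np.log(1 - alpha) + exceptions * np.log(alpha)
    # guard against log(0) when there are no exceptions (or only exceptions)
    log_lik_alt = ((n_obs - exceptions) * np.log(1 - pi_hat) +
                   exceptions * np.log(pi_hat)) if 0 < exceptions < n_obs else 0.0
    lr_pof = -2.0 * (log_lik_null - log_lik_alt)
    return lr_pof, chi2.sf(lr_pof, df=1)
```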
All the results of the backtesting experiments are given in Table 6 and Figures 3-4. The Kupiec and Christoffersen backtesting results show that the optimal FHMV model, with 2 latent factors and 2 hidden states, provides good results and gives exception rates very close to the target (the true coverage rates 𝛼). The likelihood ratios associated with the unconditional and independence tests, for our proposed model, are always lower than the critical values, which implies that the hypothesis of correct conditional coverage is not rejected at any of the confidence levels.
insert Table 6 about here.
For the coverage rates from 0.5% to 2%, Figure 3 shows promising results for the optimal FHMV model compared to those given by the best MFHMM (with 2 mixture components and 2 latent factors). For the significance level of 2%, our FHMV model gives, for example, an exception rate equal to 2.17%, versus 2.45% obtained by the MFHMM. From this figure, we can also see that the optimal MFHMM looks better than the FM and CMC, especially at low confidence levels. Hence, we can argue that the FHMV model is the most precise and yields higher-quality predictions than the other competing models.
insert Figure 3 about here.
In order to compare the results given by the different models, we used the squared relative prediction error criterion 𝑆 = ∑_{𝑖=1}^{20} [(𝐸_𝑖 − 𝛼_𝑖)/𝛼_𝑖]², where 𝐸_𝑖 are the estimated exception rates obtained with the different specifications, and 𝛼_𝑖 the coverage rates. It appears from the results that the most adequate model to evaluate the VaR of our portfolio, during this period, is the FHMV framework. This specification gives the estimated exception rates closest to all the true significance levels, with 𝑆 = 0.4561. The second ranked model is the MFHMM (𝑆 = 2.5824), the third is the FM (𝑆 = 4.9783), and the CMC is the worst one with 𝑆 = 6.0673.
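The criterion S is straightforward to reproduce; a sketch, with the 20 coverage rates built as described in the text.

```python
import numpy as np

def squared_relative_prediction_error(exception_rates, coverage_rates):
    """S = sum over the 20 coverage rates of ((E_i - alpha_i) / alpha_i)^2."""
    e = np.asarray(exception_rates)
    a = np.asarray(coverage_rates)
    return float(np.sum(((e - a) / a) ** 2))

coverage = np.arange(0.005, 0.1001, 0.005)   # the 20 levels 0.5%, 1.0%, ..., 10%
# S is then computed for each competing model's vector of estimated exception rates
```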
insert Figure 4 about here.
Finally, we can see from Figure 4 the significant effect of the volatility shocks on the predicted VaR given by the optimal FHMV model. Hence, we can argue that the major reason for the poor results given by the CMC, the FM, and to a lesser degree the MFHMM, is that they do not take into account the abnormal switching behaviors that can affect the volatility and the co-movement dynamics in financial markets during crisis periods.
CONCLUSION
We conclude that our Viterbi-based decoding strategy using the factorial hidden Markov volatility model seems to be a useful tool for portfolio risk management and control, especially during periods of financial market stress. These results support our argument for integrating time-varying volatility and regime jumps into the risk measurement framework. In future work, we intend to capture the interaction between the common latent factors together with a dynamic structure for the idiosyncratic variances. We will also address nonlinear behaviors, non-homogeneous transition probabilities and other areas of application, such as options or credit derivatives.
the filtered estimates 𝔼[𝒛_t | r_{1:t}, S_t = j], the one-step-ahead predictions of the common latent factors 𝒛_{t|t-1}^{i(j)} = 𝔼[𝒛_t | r_{1:t-1}, S_t = j, S_{t-1} = i] and their optimal filtered estimates 𝒛_{t|t}^{i(j)} = 𝔼[𝒛_t | r_{1:t}, S_t = j, S_{t-1} = i]. The remaining part (-log p_{ij}) reflects the transition of the market state from regime i to regime j. In this case, the minimization of the global cost at time t requires the selection of the optimal previous market state i: 𝛿_{t,j} = min_i {𝛿_{t,t-1,i,j} + 𝛿_{t-1,i}}. The resulting index is then recorded in the regime-switching record 𝜆_{t-1,j} = arg min_i {𝛿_{t,t-1,i,j} + 𝛿_{t-1,i}}.
2. Taking into account the presence of leverage effects and conditional skewness in financial time series, we simulate return scenarios from the conditional distribution of the common latent factors 𝒛_{t+1|t}^{s}, using the optimal specification obtained by the Viterbi algorithm at time t (Section 5.1): (a) we first use the normal distribution 𝒩(𝟎, 𝑰_q) to generate the standardized factors.
3. We then simulate return scenarios from the conditional distribution of the specific factors 𝝐_{t+1|t}^{s}, also using the optimal specification obtained by the Viterbi algorithm at time t: (a) we first generate the standardized specificities 𝝐*_{t+1} from the normal distribution 𝒩(𝟎, 𝑰_p); (b) we then compute the lower triangular Cholesky factor 𝚲*_j of the variance-covariance matrix 𝚲_j.
4. In the fourth step, we compute m different portfolio returns for the period t+1 as ℛ_s = ∑_{k=1}^{p} 𝛾_k r_{k,t+1|t}^{s}, where 𝛾_1, 𝛾_2, ..., 𝛾_p denote the portfolio weights of the p risk factors and 𝒓_{t+1|t}^{s} = 𝝁_j + 𝚽_j 𝒛_{t+1|t}^{s} + 𝝐_{t+1|t}^{s} (∀ s = 1, ..., m).
ABSTRACT
This paper develops a new multivariate approach for Value-at-Risk (VaR) prediction. Our strategy considers the possibility of regime jumps in the intra-portfolio latent correlation structure and allows for time-varying volatility in the factor variances. The proposed framework combines factor analysis models with GQARCH processes and hidden Markov models. During financial crisis periods, this specification provides a more tractable way to capture simultaneously the switching interrelations between assets and the time-varying volatility of each individual asset. The accuracy of the new prediction approach in comparison with other existing models (such as the mixed factorial hidden Markov model, the latent factor model with time varying volatility and the classical Monte Carlo method) is evaluated through a real dataset example from the Tunisian foreign exchange market for the period 02/01/2010 to 30/12/2012. Our strategy aims to select the best model for predicting the VaR of the Tunisian government's foreign debt portfolio during the social mobilization period of 14 January 2011. In that period the Tunisian economy experienced the longest, deepest and most broad-based recession in its history since 1978. The main results of the empirical example and the backtesting experiments, based on the rolling sample method, show that the new approach gives a good fit to the data, yields forecasts that track market changes more closely, and can improve the VaR predictions, offering more accurate VaR estimates than the other competing models for all coverage rates from 0.5% to 10%.
Figure 1: The percentage of variability of the different log-returns associated with the common and specific factors over the period from 02/01/2010 to 30/12/2012. Source: Own construction
Figure 2: Plot of the estimated correlations from the best FHMV model during the period 02/01/2010 - 30/12/2012. Source: Own construction
... (𝒓_t - 𝚽_j 𝒛_t - 𝝁_j)] 𝒮_t(j) + (1/2) ∑_{j=1}^{n} ∑_{t=1}^{T} [log|𝛀_j| + 𝒛_t′ 𝛀_j^{-1} 𝒛_t] 𝒮_t(j),  (7)
where r_{1:τ} = {𝒓_1, 𝒓_2, ..., 𝒓_τ}, Z_{1:τ} = {𝒛_1, 𝒛_2, ..., 𝒛_τ} and S_{1:τ} = {𝑺_1, 𝑺_2, ..., 𝑺_τ} are, respectively, the sequences of observed returns, latent common factors and HMM states up to time τ; 𝝅 is the vector of initial state probabilities, of length n and summing to 1; 𝑷 is the matrix of transition probabilities of the hidden Markov chain, whose i-th row [p_{i1} ... p_{in}] sums to 1 for all i = 1, ..., n; and 𝓢_t = [𝒮_t(1), ..., 𝒮_t(n)]′, where 𝒮_t(j) = 1 if S_t = j and 0 otherwise.
If we denote by S*_{1:T} the optimal sequence of HMM states, the posterior distribution p(Z_{1:T}, S_{1:T} | r_{1:T}) can be approximated as
p(Z_{1:T}, S_{1:T} | r_{1:T}) ≃ η(S_{1:T} - S*_{1:T}) p(Z_{1:T} | S_{1:T}, r_{1:T}),  (8)
i.e. the posterior distribution of the HMM state sequence p(S_{1:T} | r_{1:T}) is approached by its mode, where η(y) = 1 for y = ϕ and zero otherwise. The optimal sequence of HMM states can formally be obtained by solving the dynamic optimization program S*_{1:T} = arg max_{S_{1:T}} p(S_{1:T} | r_{1:T}). In this case, an almost optimal solution can be reached by maximizing recursively the probability of the best HMM sequence up to time t:
𝛿_{t,j} = max_{S_{1:t-1}} p(S_{1:t-1}, S_t = j, r_{1:t})
       ≃ max_i { p(𝒓_t | S_t = j, S_{t-1} = i, S*_{1:t-2}(i), r_{1:t-1}) p(S_t = j | S_{t-1} = i) × max_{S_{1:t-2}} p(S_{1:t-2}, S_{t-1} = i, r_{1:t-1}) },  (9)
where S*_{1:t-2}(i) = arg max_{S_{1:t-2}} p(S_{1:t-2}, S_{t-1} = i, r_{1:t-1}), obtained by the Viterbi algorithm through the state transition record 𝜆_{t-1,i} at each time step. The FHMV model with the optimal future hidden state Ŝ_{t+1|t} will be used in the simulation procedure as the data-generating process to calculate the VaR of our portfolio. The conditional variance of the l-th common factor in regime j is updated as 𝜔_{l,t+1|t}^{j} = 𝛽_{0l}^{j} + 𝛽_{1l}^{j} z_{l,t|t}^{i} + 𝛽_{2l}^{j} (z_{l,t|t}^{i})² + 𝛽_{3l}^{j} 𝜔_{l,t|t}^{i}. Then, the future market regime S_{t+1} = j can be obtained as a solution of the optimization problem Ŝ_{t+1|t} = arg max_j p(S_{t+1} = j | S_t = i*_t), where i*_t is the optimal market regime at time t.
Table 1: Basic descriptive characteristics of the daily exchange rate returns from 02/01/2010 to 30/12/2012
Statistic EUR/TND USD/TND JPY/TND CHF/TND GBP/TND
Mean 0.000327 0.000381 0.000179 0.000274 0.000363
Max 0.0439 0.0423 0.0514 0.0424 0.1468
Median 0.000198 0.000255 0.000163 0.000308 0.000212
Min -0.0538 -0.0619 -0.0718 -0.0653 -0.0083
Std. Dev. 0.00425 0.00510 0.00674 0.00583 0.00620
D'Agost. test 7.49623 2.72961 4.14286 -6.21753 9.57321
(0.0000) (0.0047) (0.0000) (0.0081) (0.0000)
A-Glyn. test 18.751 16.173 15.927 16.369 21.916
(0.0000) (0.0000) (0.0000) (0.0000) (0.0000)
LB. test 117.67 108.21 41.962 62.114 123.813
(0.0000) (0.0000) (0.0000) (0.0000) (0.0000)
J-Bera. test 65.321 29.655 14.533 26.123 34.259
(0.0000) (0.0000) (0.0074) (0.0000) (0.0000)
Note:
The values in brackets represent the p-values of the corresponding tests.
Source: Own construction
Table 2: AIC and BIC values for the different specifications over the period from 02/01/2010 to 30/12/2012 (750 observations)
Table 3: Estimation results of the optimal FHMV model during the period 02/01/2010 - 30/12/2012 (model parameters, ×1e-04)
Note: The selection criteria values for the standard models are given in brackets.
Source: Own construction
Table 4: Estimated parameters of the conditionally heteroskedastic components of the optimal FHMV model during the period 02/01/2010 - 30/12/2012

Common variance parameters    𝛽 0      𝛽 1      𝛽 2      𝛽 3
Factor 1 - Regime 1          0.1224   0.0009   0.2411   0.3648
Factor 1 - Regime 2          0.1521   0.1382   0.2363   0.7601
Factor 2 - Regime 1          0.0843   0.1774   0.1892   0.4667
Factor 2 - Regime 2          0.1102   0.1430   0.2711   0.7092
Source: Own construction
Table 5: Structure of the Tunisian foreign debt by settlement currency (%) for the period 2010-2012
Date          EUR    USD    JPY    GBP    CHF
31/12/2010    61.3   14.3   16.1   5.9    2.4
31/12/2011    56.8   20.1   15.3   5.6    2.2
31/12/2012    59.6   18.9   13.8   5.1    2.6
Source: Monetary and financial statistics of the Tunisian central bank
Table 6: Backtesting results for the FHMV model with two hidden states and two common factors
Confidence level   Exception rate   1st exception   LR-pof   LR-ind   LR-cc
0.950 0.050 18 1.7934 1.1152 2.9086
0.960 0.040 22 1.8372 1.1256 2.9628
0.970 0.029 38 2.0839 1.1398 3.2237
0.980 0.022 38 2.1856 1.1458 3.3314
0.990 0.013 56 2.3122 1.1593 3.4715
Source: Own construction
Figure 3: Exception rates versus confidence levels for the FHMV, MFHMM, FM and CMC models
Figure 4: The VaR exceptions obtained by the optimal FHMV model for different coverage rates (𝛼 = 1%, 2%, 5%) and different portfolio weights over the period 02/01/2010 - 30/12/2012. Source: Own construction
The critical values for the Kupiec and Christoffersen tests are, respectively, 𝜒²(1) = 3.8414 and 𝜒²(2) = 5.9915 for 95% VaR.
ACKNOWLEDGMENT
We would like to thank the editor and the reviewers for their constructive, careful and helpful feedback. |
04104230 | en | [
"spi"
] | 2024/03/04 16:41:22 | 2022 | https://hal.science/hal-04104230/file/13-20.pdf | Somto K Onukwuli
Charles Chikwendu Okpala
Fred N Okeagu
Review of Benefits and Limitations of Coir Fiber Filler Material in Composites
Keywords: fiber, composite, coir fiber, coir fiber reinforced composite, lignin, cellulose, history of coir, limitations of coir fiber, benefits of coir fiber
Coir fiber reinforced composites have witnessed rapid growth in industrial applications as well as in fundamental research, due to their improved physical and mechanical characteristics. The paper started with the definitions of fiber and composites, before it provided a brief history of coir fiber, traced back to the Ramayana era around the third century B.C. The nature of coir fiber, its processing and extraction, as well as its chemical properties and physical and mechanical compositions, were discussed in detail. Apart from being abundant in nature, cost effective, non-toxic, renewable, and having the lowest thermal conductivity and bulk density, the research pointed out that coir fiber is more durable than other natural fibers because of its high lignin content. Despite the remarkable properties of coir fiber, it was noted that untreated coir fiber composites have unwanted features such as dimensional instability and flammability, which make them unsuitable for high temperature applications. After explaining that the extremely hydrophilic character of coir, along with the hydrophobic nature of the polymers employed for matrix production, makes the manufacture of coir polymer composites difficult, resulting in mechanical property loss following moisture uptake, the paper concluded that in order to reduce or eliminate the rate of moisture absorption, the coir fiber surface must be chemically modified.
I. INTRODUCTION
A fiber is defined as a unit of matter distinguished by fineness, flexibility, and a high length-to-thickness ratio [START_REF] Farnfield | Textile terms and definitions[END_REF]. The first fibers used by humans were natural fibers such as wool, sisal, cotton, silk, flax, and hemp, while the first synthetic fiber was most likely glass [START_REF] Grosberg | Fibre failure and wear of materials-an atlas of fracture, fatigue and durability[END_REF]. In a nutshell, natural fibers are fibers that are not synthetic or man-made. They can be obtained from either plants or animals [START_REF] Ray | Thermoset biocomposites. Natural Fibers[END_REF]. Natural fibers are multicellular in nature, comprised of a number of continuous, primarily cylindrical honeycomb cells with variable sizes, forms, and structures [START_REF] Pothan | The role of fibre/matrix interactions on the dynamic mechanical properties of chemically modified banana fibre/polyester composites[END_REF].
The basic idea behind fiber composites is the use of fibers as reinforcement in a resin matrix. Fibers typically give the majority of the required strength, while resin provides binding to the fibers. This is because fibers cannot withstand actual loads on their own, and as a result, resin is utilized to bond and protect them.
Okpala, Chinwuko, and Ezeliora (2021) defined a composite as a material that comprises two or more dissimilar constituents which, when joined together, are considerably stronger than the individual components. They pointed out that a stronger material, known as the reinforcement, is embedded in a weaker one, known as the matrix, to form a new material with enhanced properties like rigidity, long term durability, and improved strength. A composite material combines the most desired qualities of its parts while suppressing their least desirable properties, which in return gives better and unique mechanical and physical properties. Composites, plastics, and ceramics have been the dominating engineering materials during the past few decades.
Modern composite materials are made up of many different materials that are used in everyday life, as well as in sophisticated applications. While composites have already proven their worth as weight-saving materials, the current challenge is to make them durable enough to replace other materials while also remaining cost effective. This has led to the creation of several innovative procedures that are now in use in the composite industry. At the moment, composite materials play an important part in aerospace, automotive, and other technical applications due to their exceptional strength-to-weight and modulus-to-weight ratios [START_REF] Haruna | Prospects and challenges of composites in a developing country[END_REF]. Coir fiber composites are composite materials that are reinforced with fibers, particles, or platelets derived from coconut.
Composite material uses have expanded fast and have even found new markets, as numerous industries have utilized composites for diverse applications. Composite materials are divided into three types based on their matrix materials: polymer matrix composites, metal matrix composites, and ceramic matrix composites. Each form of these composites is appropriate for a variety of purposes. A matrix or binder material is the foundation material that binds or retains the filler material in structures, whereas filler material is present in the form of sheets, pieces, particles, fibers, or whiskers of natural or synthetic materials [START_REF] Rajak | Fiber-reinforced polymer composites: Manufacturing, properties, and applications[END_REF]. Coconut fiber ropes and cordage have been used since prehistoric times, as coconut was also referred to as "Indian Nut" by explorer Marco Polo (Deccan Herald, 2020). Polynesian coir, also known as sennit, was once again utilized as a building material to weave together boats, houses, swords, and tools. Coir was used as a ship's cable by Indian navigators who crossed the seas to Malaya, Java, China, and the Gulf of Arabia centuries ago. Coir as a reinforcing fiber is gaining prominence in the composite research and industry due to its versatile character, as well as its high biodegradability and availability. They have a number of potential advantages, particularly in terms of environmental performance; this is because when natural fiber composite trash is burned, no net carbon dioxide emissions are released into the environment.
The history of natural fiber reinforced composites began in the last 20 years, when there was a lot of interest in using natural fibers to make polymer bound composites. They are environmentally friendly materials that can compete with glass/polyester in terms of strength, performance, and cost.
The combination of natural fiber reinforced polyester composites has been shown to be an effective approach for designing materials that meet a variety of criteria [START_REF] Idicula | Thermophysical properties of natural fibre reinforced polyester composites[END_REF]. [START_REF] Brahmakumar | Coconut fibre reinforced polyethylene composites: effect of natural waxy surface layer of the fibre on fibre/matrix interfacial bonding and strength of composites[END_REF], shown that coir fibers may be effectively reinforced and bonded in a polyester matrix.
Coconut is widely cultivated in several tropical Asian nations. Indonesia, the Philippines, and India account for 73 percent of global coconut output, with Indonesia being the greatest producer, the Philippines second, and India third. Brazil is the fourth biggest producer of coconut, which is used to make a broad range of floor furnishing materials including mats, yarn, and ropes. Despite all of these applications for coir fiber, just a small portion of the fiber produced is used. As a result, numerous studies have been conducted to investigate the use of coir as reinforcement in composites, [START_REF] Geethamma | Dynamic mechanical behavior of short coir fiber reinforced natural rubber composites[END_REF]; S. [START_REF] Mahzan | Mechanical Properties of Medium Density Fibreboard Composites Material Using Recycled Rubber and Coconut Coir[END_REF].
III. NATURE OF COIR FIBER
Coir, often known as coconut fiber, is a hard structural commercial product derived from the coconut husk. Coir is an excellent replacement for cypress mulch or peat moss, since unlike peat and cypress, it is a renewable resource. Its harvest does not harm the environment in the same way as peat mining does, and it does not contain disease organisms that may be transmitted to plants.
Individual coir fiber measures 0.3-1.0 mm in length and 0.01-0.02 mm in diameter, and has an aspect ratio of 35. It features a medium- to large-sized lumen that is polygonally rounded or elliptically formed. The vascular bundle is collateral and is encased in a thick sclerenchymatous sheath. With increasing fiber age, lignin and hemicelluloses, which comprise the cementing elements of fiber cells, increase, whereas pectin declines.
The husk includes coir fiber as well as a corky tissue known as pith, according to [START_REF] Shandilya | Banana fiber reinforcement and application in composites: A review[END_REF]. Due to its high lignin concentration, the husk is composed of water, fibers, and small quantities of soluble solids, and it has a higher biodegradability than most natural fibers. The lignin makes coir fiber excellent for applications in cars, biomedical, railway coaches, maritime, and other long-lasting uses [START_REF] Kakou | Hybrid composites based on polyethylene and coir/oil palm fibers: Http://Dx[END_REF]. Coir is relatively resistant to salinity and microbial degradation, and its current study as a polymer matrix reinforcement has shown promising results [START_REF] Mohanty | Biocomposites : An Introduction. Natural Fibers[END_REF]. Coir fiber polymer composite material is a viable alternative to other natural fiber polymer composites due to their exceptional performance in a wide variety of applications. Differences in coir fiber characteristics might be related to the source of the coconut plant from which the fibers were extracted or to the technique of extraction utilized.
Processing
The term coir is derived from the Malayalam word kayar, which means cord or rope (traditionally a kind of rope is made from the coconut fiber). Coir has been hand-processed throughout its history, initially by climbing the palms to get the coconuts and then by gathering them from the ground. Palm climbers discovered that utilizing a knife tied to a bamboo pole was the most effective method, allowing them to collect fruit from up to 250 trees each day.
The optimal period to gather coir was believed to be 10 months, during which harvesters husked the coconuts and soaked them for a year to yield the coir, because extracting the coir from the shell needs soaking (known as retting).
Depending on the availability of water source, harvesters soaked coconut husks in either salt or fresh water. Soaking the coir in salt water resulted in stronger coir. The results varied, but once the coir was further processed by rolling the fibers from three to ten nuts, depending on their size, the material was claimed to create roughly one pound of yarn. Mechanical processing was introduced by industrialization in 1950, it has led to modern-day dehusking machines that process around 200 coconuts per hour.
Extraction
Coir fiber is divided into two types: brown fiber from mature coconuts and finer white fiber from immature coconuts. Green coconuts, which are collected after six to twelve months from the palm, have flexible white fibers, while brown fiber is acquired by collecting completely ripe coconuts at the point when the nutritive layer surrounding the seed is ready to be processed into copra and dried coconut. During the coir fiber extraction process, a large amount of coir residue is generated.
To extract white fiber, retted husks are taken from the retting enclosure, washed to remove any clinging slime, dirt, or sand, and the exocarp is easily peeled off. The husks are pounded with wooden mallets on tree trunks or stones to separate the fibers. The fibers are washed, dried, and stacked together. The wet husks are put through spiked breaker drums to remove the fragile fiber, while the drums are pushed at high speeds in opposing directions to separate the long and short fibers. The brittle fibers that pass between the drums are gathered and cleaned by running them through another pair of drums with closer nails, followed by washing and drying. Cellulose has a strong crystalline structure and is resistant to hydrolysis, nevertheless, acid hydrolyzes it rapidly. Hemicellulose has a random, amorphous, and hydrophilic structure that serves as a cellulose support matrix. It has a low specific gravity, is equally soluble in alkali, and may be hydrolyzed in acid [START_REF] John | Biofibres and biocomposites[END_REF]. Lignin is an amorphous hydrophobic complex polymer (a thermoplastic) that gives plants stiffness and is not hydrolysable in acid, but is soluble in hot alkali, whereas pectin is a structured heteropolysaccharide that offers plants flexibility [START_REF] John | Biofibres and biocomposites[END_REF].
Physical and Mechanical Composition of Coir Fiber
The diameter, density, and weight gain by water absorption of a material are referred to as its physical characteristics, whereas its mechanical properties include its tensile, flexural, and impact strengths. It also relies on the cellulose type and the shape of the basic cell. The cellulosic chains are arranged parallel to one another, forming bundles that each contain cellulosic macromolecules linked by hydrogen bonds, and the cellulosic chains confer stiffness to fibers called micro fibrils via links with amorphous hemicelluloses and lignin [START_REF] Omrani | State of the art on tribological behavior of polymer matrix composites reinforced with natural fibers in the green materials world[END_REF].
Table 2 summarizes the physical and mechanical characteristics of coir fiber. There is a great possibility to fabricate coir-based composites for a wide range of applications in building and construction, such as boards and blocks.
Other benefits of coir fiber include:
• Tough and durable
• Provides excellent insulation against sound
• Easy to clean
• Not easily combustible
• Flame retardant
• Totally static free
The impact of lignin as a compatibilizer on the physical characteristics of coconut fiber-polypropylene composites was investigated by [START_REF] Bledzki | Barley husk and coconut shell reinforced polypropylene composites: The effect of fibre physical, chemical and surface properties[END_REF]. The coconut fiber polypropylene composites using lignin as a compatibilizer were shown to have greater flexural characteristics than the control composites. It is used to make a broad range of floor furnishing materials, yarn, rope, and other products due to its hard-wearing qualities, durability, and other benefits. [START_REF] Satyanarayana | Structure property studies of fibres from various parts of the coconut tree[END_REF].
The damping characteristics of randomly oriented coir fiber-reinforced polypropylene composites are superior to those of synthetic fiber-reinforced composites. This is because a greater resin content gives better damping characteristics; therefore, a lower fiber loading results in more energy absorption. The highest damping ratio of 0.4736 was found in the coir composite at 10% fiber content, while increasing the fiber content to 30% increased the natural frequency of the material to 20.92 Hz [START_REF] Munde | Investigation to appraise the vibration and damping characteristics of coir fibre reinforced polypropylene composites[END_REF].
[START_REF] Natsa | Development of a military helmet using coconut fiber[END_REF] created military helmets out of coir fiber reinforced polymer composite by utilizing 70% fiber content and 28% epoxy in the casting. The study used several fiber compositions in epoxy resin to generate a variety of specimens. The manufactured helmet's physical characteristics were measured and compared to those of other helmets. The manufactured helmet weighs less than both the Chinese and British helmets, and only a little more than a US ballistic helmet.
V. LIMITATIONS OF COIR FIBER AS FILLER MATERIAL IN COMPOSITES
Despite its favorable characteristics, untreated coir fiber composites have certain unwanted features, such as dimensional instability and flammability, which make them unsuitable for high temperature applications, as well as susceptibility to UV radiation, acids, and bases (Izzuddin [START_REF] Zaman | Influence of Fiber Volume Fraction on the Tensile Properties and Dynamic Characteristics of Coconut Fiber Reinforced Composite[END_REF]). The majority of research on coir fiber composites shows that a variety of factors such as fiber volume fraction, fiber length, fiber aspect ratio, fiber-matrix adhesion, fiber orientation, and stress transmission at the interface have a substantial impact on their mechanical properties. A variety of studies on coir fibers have been conducted to investigate the influence of various fiber characteristics on the mechanical properties of composite materials.
The effect of fiber content on the mechanical characteristics of coir fiber reinforced composites was studied by (Izzuddin [START_REF] Zaman | Influence of Fiber Volume Fraction on the Tensile Properties and Dynamic Characteristics of Coconut Fiber Reinforced Composite[END_REF]). The coir fiber volume in the composite was varied from 5% to 15% in the research. The results demonstrate that when the fiber volume percentage increases, the tensile strength and Young's modulus drop. The decrease is caused by inadequate interfacial adhesion between the fibers and the matrix. Brittleness of the fibers also led to low mechanical strength, since stronger fibers have more potential to withstand larger loads. When the fiber volume fraction is raised, the failure strain increases (Izzuddin [START_REF] Zaman | Influence of Fiber Volume Fraction on the Tensile Properties and Dynamic Characteristics of Coconut Fiber Reinforced Composite[END_REF]).
Coir fiber reinforced composites, like other polymer-based composites, absorb moisture in humid environments and when submerged in water. In general, moisture diffusion in composites is affected by variables such as fiber volume fraction, void volume, additives, humidity, and temperature [START_REF] Errajhi | Water absorption characteristics of aluminised E-glass fibre reinforced unsaturated polyester composites[END_REF][START_REF] Weitsman | Anomalous fluid sorption in polymeric composites and its relation to fluid-induced damage[END_REF]. Moisture diffusion in polymer composites has been demonstrated to be controlled by three distinct processes. The first involves water molecules diffusing into the tiny gaps between the polymer strands.
The second involves capillary transport through the gaps and faults at the fiber-matrix interfaces. The third involves transport through tiny fractures in the matrix caused by fiber swelling [START_REF] Dhakal | Effect of water absorption on the mechanical properties of hemp fibre reinforced unsaturated polyester composites[END_REF][START_REF] Thwe | Effects of environmental aging on the mechanical properties of bamboo-glass fiber reinforced polymer matrix hybrid composites[END_REF]. Harish et al. investigated the mechanical performance of coir fiber/polyester composites; the mechanical behavior of coir fiber/polyester composites with ineffective coir fiber reinforcement is related to their low modulus of elasticity when compared to bare polyester resin. Because of their ability to absorb moisture from the environment and their poor interfacial interaction with the hydrophobic polymer matrix, untreated coir fibers are limited in their usage in industrial applications [START_REF] Sudhakara | Fabrication of Borassus fruit lignocellulose fiber/PP composites and comparison with jute, sisal and coir fibers[END_REF].
[START_REF] Bhaskar | Water Absorption and Compressive Properties of Coconut Shell Particle Reinforced-Epoxy Composite[END_REF] investigated the water absorption and compressive characteristics of a coconut shell particle reinforced epoxy composite. It was determined that the water absorption capacity of coconut shell particles was limited to a maximum of 30%. Coir fiber is made from lignocellulose and has highly polarized hydroxyl groups. As a result, it tends to be hydrophilic [START_REF] Harish | Mechanical property evaluation of natural fiber coir composite[END_REF]. The extremely hydrophilic character of coir, along with the hydrophobic nature of the polymers employed for matrix production, makes the manufacture of coir polymer composites difficult, resulting in mechanical property loss following moisture uptake. Because of the poor compatibility, the coir fiber surface must be treated chemically to make it less hydrophilic and enhance the interface contact between the fiber and the matrix [START_REF] Khan | Swelling and Thermal Conductivity of Wood and Wood-Plastic Composite[END_REF][START_REF] Tajvidi | Water uptake and mechanical characteristics of natural filler-polypropylene composites[END_REF].
VI. CONCLUSION
This paper demonstrates that coir fiber can be utilized as reinforcement in composites, as it has the benefits of being lightweight, inexpensive, and widely available, and of having a high strength-to-weight ratio. However, the primary drawbacks are its sensitivity to moisture content and fiber volume content, as well as other limits related to fiber length and orientation, which are common to natural fibers. To minimize or eliminate the rate of moisture absorption, the coir fiber surface must be chemically modified.
More study of the chemical modification of coir is required in order to eliminate or reduce the limitations of coir fiber. This is important for broadening the application of coir fiber composites.
II. HISTORY OF COIR FIBER
According to Coir Team (2021), coconuts are thought to have originally appeared in the Ramayana era about the third century B.C. They explained that a Greek sailor wrote in the first century A.D. about observing coco fibers from an East African hamlet being utilized in the construction of a boat, which was done by stitching together planks with twine spun from coir fibers. Mentions of coir characterization have also been discovered dating back to the 11th and 13th centuries A.D. In Sri Lanka, India, and the Persian Gulf, these fibers were utilized as building materials in place of nails. The coir strands, sometimes referred to as lashings, were used by early Hawaiian settlers to bind their boats together. Maritime explorers have traditionally used coir rope as cables and rigging on their ships. Coir was also commercially sold in the United Kingdom in the mid-1800s, notably for use in carpets and fabrics used in flooring.
Figure 1: Coconut tree and fruits
Figure 2: Extracted and Un-Extracted Coir Fiber. Source: Okpala, Onukwuli, and Ezeanyim (2021)

Chemical Properties of Coir Fiber
Many studies have investigated the chemical composition of coir fiber; Table 1 summarizes their findings. According to (Carlos et al., 2000), water is the most important chemical component of a living tree, but on a dry weight basis, all plant cell walls are primarily composed of sugar-based polymers (carbohydrates) combined with lignin, with minor amounts of extractives, protein, starch, and inorganics.
Table 1: Chemical Composition of Coir
Cellulose (%) Hemicellulose (%) Lignin (%) Pectin (%) Source
36-43 0.2 41-45 1.8 (Yan et al., 2016)
45-50 - 30 - (Roy et al., 2012)
37 - 42 - (Verma et al., 2013)
43.4 0.25 45.8 3 (Verma and Gope, 2015)
36-43 0.15-0.25 41-45 3-4 (Zhang and Hu, 2014)
42.14 15-17 35.25 - (Siakeng et al., 2018)
32-43 0.15-0.25 40-45 - (Yusoff et al., 2016)
38-46 10-15 37-41 - (Pérez-Fonseca et al., 2016)
42.44 0.25 45.4 3 (A. Khan and Mehra, 2014)
32-43 0.15-0.25 40-45 - (Zainudin et al., 2014)
45.67 0.12-0.25 41-45 - (Sudhakara et al., 2013)
39.3 - 29.8 - (Chollakup et al., 2013)
43.44 0.25 45.84 3 (Haque et al., 2009)
Table 2: Physical and Mechanical Composition of Coir Fiber
Average diameter (mm)   Water absorption (%)   Density (g/cm3)   Young's modulus (GPa)   Tensile strength (MPa)   Elongation at break (%)   Source
0.025 - 1.2 2.74 286 20.8 (Yan et al., 2015)
0.4 130-180 1.2 4-6 175 30 (Anupama et al., 2014)
0.01-0.46 - 1.15-1.46 2.2-6 95-230 15-51.4 (Yan et al., 2016)
0.25 - 1.2 2.74 286 20.8 (Yan et al., 2015)
0.1-0.45 10 1.3-1.5 4-6 105-175 17-47 (Sanjay et al., 2018)
0.38 1.2 2 144 4.5 (Yusoff et al., 2016)
- - - 3.23 165.2 39.45 (Andiç-Çakir et al., 2014)
- - - 4-6 144 15-40 (Zainudin et al., 2014)
0.1-0.4 - 1.15 4-6 108-252 15-40 (Sanjay et al., 2018)
- - 1.37 3.19-3.23 158-165 39-41 (Yan et al., 2015)
- - - 4-5 250 20-40 (Tran et al., 2013)
0.1-0.45 - - 3-6 106-175 47 (max) (Yan et al., 2015)
- - - 4-6 131-175 47.2 (Sudhakara et al., 2013)
0.2 - 1.3 3.11 144.6 32.3 (Saw et al., 2012)
- 93 1.17 8 95-118 - (Sen and Reddy, 2011)
- - 1.2 4-6 593 30 (Ku et al., 2011)
IV. BENEFITS OF COIR FIBER AS FILLER MATERIAL IN COMPOSITES
Many researchers have reported the benefits of coir fibers, including the fact that they are abundant in nature, non-toxic, renewable, and cost-effective, as well as providing the necessary bonding with the cement-based matrix for significant improvements in material properties such as ductility, toughness, flexural capacity, and impact resistance (Ardanuy et al., 2011).
With a growing emphasis on fuel efficiency, natural fibers such as coir-based composites are finding more usage in vehicles, railway coaches, and buses for public transportation. Coir fiber reinforced composites are also used to create seat cushions for Mercedes vehicles (Naveen and Yasaswi, 2013).
Coir fiber is more durable than other natural fibers because of its lignin content; it also has the lowest thermal conductivity and bulk density. The inclusion of coconut coir decreased the heat conductivity of the composite specimens, resulting in a lighter product. The development of composite materials for buildings based on natural fibers such as coconut coir with low heat conductivity is an intriguing option that would address environmental and energy concerns. (Asasutjarit et
al., 2005; Khedari et al., 2001) |
04104262 | en | [
"shs.hist"
] | 2024/03/04 16:41:22 | 2003 | https://upf.hal.science/hal-04104262/file/Le%20M%C3%A9morial%20de%20Me%20Bacalerie.pdf | Véronique Dorbe-Larcade
Claudette Gilard
) d'Anthoine Bacalerie, notaire de Sarrant et tabellion de l'Histoire
Le Mémorial (1619-1628) d'Anthoine Bacalerie, notaire de Sarrant et tabellion de l'Histoire par Claudette Gilard et Véronique Larcade Bulletin de la Société archéologique du Gers, 3 e trim. 2003, pp. 292-309.
Of Anthoine Bacalerie's very particular historiographical legacy, apparently only scraps remain. Catalogued together with those of Beaumont-de-Lomagne, the minutes of this notary of Sarrant (Gers, arr. Lectoure, c. Mauvezin) are kept at the departmental archives of Tarn-et-Garonne in Montauban. They are incomplete: only four registers survive, for 1619-1621, 1622-1623, 1625-1627 and 1628-1629. And even these remnants are truncated: everything concerning the end of the years 1622 and 1623 has disappeared, as well as the last months of 1629 1. Yet Me Bacalerie had bought his office in 1613 2 and he remained in practice until 1640 at least 3. It is therefore very likely that over these some thirty years of his career, Me Bacalerie on other occasions, if not always, wrote, following the repertory of the instruments of each year, what he himself calls his Mémorial. Depending on the year, it runs to only a few lines or extends over two sheets recto-verso. In the margin, literally as well as figuratively, of the writings dictated by notarial protocol and intended for professional use, Anthoine Bacalerie, modestly, as one can see, given the mass of the registers, ventured a text on his own initiative. It is therefore worth asking: why does he leave the beaten track in this way? To what extent does he thereby show boldness? And, in doing so, how far does he reveal himself?
A text outside the norm
Certain features bring the Mémorial close to the livres de raison, which were in fairly widespread use in the seventeenth century 4. Like them, it does indeed contain notes concerning the weather and the agricultural situation. A phase of climatic cooling was then under way, traditionally known as the "Little Ice Age" 5. Yields were compromised and, very concretely, bad harvests multiplied. Now the place and role of the products of the land were of prime importance for each and every one in old France. Anthoine Bacalerie thus notes the mediocrity of the harvest of 1619 and, especially, the catastrophic rains of 1627, because of their « longueur et continuation… la recolte de graines a été fort petite…Les vers engendrés à leur racine et autres accidentz les perdirent. Les mestures se firent difficilment (sic) à cause des pluyes dont plusieurs semences se gasterent aulx aires ». There is also the « sterilitté » of 1628: « …la terre n'ayant produit presque aulcuns fruitz exepté mediocrement du blé » 6. In the same way, thereby bringing out the exceptional and almost miraculous character of the thing, Me Bacalerie remarks on the abundance of the harvest of 1626; and even then the fruit trees yielded little that year 7
The latest news from Bohemia
The notary has other horizons. A good part of his notes thus concerns the war of religion waged by Louis XIII 12 against the Huguenots between 1620 and 1629. He notes the holding of the assemblies of La Rochelle and Montauban, in 1620 and 1621, high points in the mobilization of the Protestant party 13, just as he mentions the most sensational episodes of the royal offensive: the siege of Monheurt, marked by the disappearance of two important figures, the disquieting and shadowy Boisse-Pardaillan, who was assassinated, and the constable de Luynes, Louis XIII's favourite, a victim of scarlet fever (on 15 December) 14 Mémorial (or rather of what has been preserved of it) in 1628, and which therefore has the status of a great event of the year (on a par with the siege of La Rochelle), concerns the investiture of a new bishop of Lombez: « Messire Jean d'Affis, fils de M. le president de Bordeaux » 37. 31 A.D Tarn-et-Garonne, 5 E 17458, f°422r° et f°420r°. 32 A.D Tarn-et-Garonne, 5 E 17458, f°140 v°. 33 id. 34 A.D Tarn-et-Garonne, 5 E 17458, f°140 v°. 35 A.D Tarn-et-Garonne, 5 E 17458, f°419 r°. 36 A.D Tarn-et-Garonne, 5 E 17461 f° 182 r°. 37 A.D Tarn-et-Garonne, 5 E 17461, f°183r°. In the registers of the Reformed church of Mauvezin deposited in the communal archives, baptisms of Bacalerie children from Sarrant are mentioned in the years 1580 and 1583: these are Pierre and Daniel, sons of Me Bartelemye Bacalerie and Peyronne Guissot. One also finds there, in April 1588, the marriage of Jehan Bacalerie, of Sarrant, with Clairette Depont, of Bivès. It is highly probable that these were first cousins of Me Anthoine Bacalerie, for there was only one Bacalerie family in Sarrant. It should be noted that the first name Daniel, borne, as we have seen, by the notary's youngest son, is very rarely encountered in the Catholic registers of Lomagne. Was this "Daniel", baptized in 1580, godfather to the son of Me A. Bacalerie? It is possible, but in that case had he converted to Catholicism, and when? It is also interesting to note that Anthoine Bacalerie's eldest son is called 42 Today the Lycée Pierre-de-Fermat. 43 A.D de Haute-Garonne, 3 E 7153 f° 95 et 3 E 7155 f° 142 (Etude de Me Anthoine Serres, notaire de Toulouse).
Sixte: no doubt this is a further mark of attachment to the Roman Church.
Can one imagine a more "papist" first name at the beginning of the seventeenth century? It had indeed been borne, notably, by Sixtus V, whose pontificate (1585 to 1590) was marked, certainly, by great building works in Rome and by an intensification of the Catholic Reformation, but also and above all by the Holy See's declared support for the League in France, just as the eighth war of religion was beginning 44.
If Anthoine Bacalerie shows himself so careful to forestall any challenge to his office on grounds of religious non-conformity, it is because his situation is relatively fragile. Like many notaries of his time, Me Bacalerie is in the midst of a social ascent: his office is an asset, certainly, but not yet powerful enough to secure his rank definitively among the « gens de bien » 45. His family is, it is true, already established in Sarrant: it had apparently broken with the peasantry two generations earlier. According to the consular registers of Sarrant, a certain Pierre Bacalerie is a « pratricien ou bazochier » in 1565. One also finds, in 1568, a Pierre Bacalerie as first consul. This is very probably the notary's grandfather. His father, in any case, was a merchant by trade, and Anthoine himself, who was born around 1580-1585, married in 1609 the daughter of a merchant of Brignemont. She evidently brought him a far from negligible dowry of 700 livres 46. One may note that four years later Anthoine bought his notarial office for 400 livres. If one can speak of comfortable means, they were probably modest. The profit that Me Bacalerie could draw from his practice was certainly compromised by the competition of his colleague, and best enemy it seems, Philippe Larigaudère, the other notary of Sarrant 47. Thus it appears that for the Bacalerie family the notarial office is not 44 Jouanna (A.), Boucher (J.), Biloghi (D.), Le Thiec (G.), Histoire et dictionnaire des guerres de religion, Robert Laffont, 1998, notice « Sixte-Quint », pp. 1304-1307. 45 Moreau (A.), id., p. 34 and p. 99: in principle Charles VI, in May 1387, had established that the function of notary did not figure among the « arts vils », i.e. there was no incompatibility with nobility. There were ennobling notarial offices, such as those of the notaries of the Châtelet in Paris; in fact the ennoblement of notaries derived above all from the exercise of municipal (échevinal) offices. There is, in fact, a certain ambiguity in the status of the notary at the beginning of the modern era: the notariat stands at the frontier, except in Paris, between the « petite robe » (auxiliaries of justice: the basoche) and the « moyenne robe » (magistrates other than parlementaires, the latter forming the « grande robe »). It should perhaps also be recalled that the Ordinance of Villers-Cotterêts, by imposing the abandonment not only of the regional languages but above all of Latin, the learned language par excellence, was perceived as a debasement of the notaries, confined to a rather disparaged "modern" culture. 46 The marriage contract with Marie Dufaur, daughter of Anthoine, merchant of Brignemont, was passed on 22 August 1609. The wife's dowry was 700 livres, cf. A.D du Gers, 3E 10511, f°435 (étude Saunyer). 47
Certainement s'exprime là une impression de gâchis que le spectacle des misères de la guerre, auquel -on l'a constatéil assiste de ses propres yeux a fait naître et que Me Bacalerie n'éprouvait pas forcément, 9 ans plus tôt, dans la première « livraison » du Mémorial qui reste. Dans cette perspective, il est intéressant de remarquer que le récit des horreurs de la famine lors du siège de La Rochelle est suivi, pratiquement sans transition de la description des ravages de la peste en pays toulousain. Et Me Bacalerie ajoute : « Ceste année le pauvre puble françois a esté battu de peste, guerre et sterilitté » [START_REF] Tarn-Et-Garonne | [END_REF] . Le notaire, bien sûr, peut penser que l'armée de Louis XIII n'est que l'instrument de Dieu pour châtier les hérétiques. Leur malheur ne serait donc qu'une exemplaire expiation. Mais il y a lieu de croire pareillement qu'il perçoit peut-être une sorte de communion dans la douleur, toutes confessions confondues, et qu'il indique subtilement une certaine perte de sens de ce déploiement de violence et, au fond, de la guerre de religion.
Il est sans doute bien exagéré de soupçonner en A. Bacalerie, qui proclame si au haut et si fort son catholicisme, un esprit fort. Certes, par ailleurs, en Gascogne, on a pu, vérifier sinon le dépassement du moins l'atténuation des heurts religieux au début du XVIIe siècle par la force efficace des liens de mariage et de sociabilité, d'affaires et de parenté, de même que l'infiltration de courants plutôt rationalistes [START_REF] Hanlon | L'univers des gens de bien. Culture et comportements des élites urbaines en Agenais-Condomois au XVIIe siècle[END_REF] . En fait, fidèle à lui-même, le notaire de Sarrant laisse entendre, tout conformisme catholique mis à part, qu'il pense par lui-même. Ce qu'il dit à propos notamment de la disparition brutale du duc de Mayenne le révèle. Elle se produisit devant Montauban en 1621 où il fut « tué inopinément dans les tranchees dun coup de pistolet quy luy feust lasché par les assiégés par loeil ». De toute évidence, le trépas de ce grand personnage, neveu du fameux duc Henri de Guise dit « le Balafré », assassiné à Blois à la veille de Noël, en 1588, causa une vive émotion. Ce décès advint un jeudi, or le dimanche précédent à la nuit tombée étaient apparus « de grandz signes au ciel quy durerent toute cette
. Dans la même perspective, le Mémorial indique systématiquement le prix du sac de blé et de la barrique de vin, avant l'été et après l'été, c'est-à-dire avant ou après la moisson ou la vendange, au moment où le cours de ces denrées est au plus haut ou au plus bas. De telles notes s'inscrivent dans la logique comptable des livres de raison, comme elles reflètent les préoccupations pratiques d'un notaire rural, dont le frère -les actes évoqués précédemment le prouvent-est laboureur. Me Bacalerie est ainsi à plusieurs titres intéressé à la terre et a fortiori en raison du petit patrimoine foncier détenu par les siens 8 . Or la céréaliculture et la viticulture constituent, au début du XVIIe siècle, des productions majeures et des valeurs agricoles de référence. Mais le pain et le vin ne sont pas uniquement alors des denrées vivrières ou commerciales de base : ils sont au coeur de la controverse théologique qui oppose les catholiques et les protestants autour de la question de la Transsubstantiation du corps et du sang du Christ. Or, au temps d'A. Bacalerie et en Gascogne tout particulièrement la guerre de religion est encore d'actualité. Peut-être est-il aussi pour cette raison particulièrement scrupuleux et attentif à l'endroit du grain et du raisin. En tout cas, de telles indications, classiques dans un livre de raison, n'interviennent qu'en incise dans le Mémorial d'Anthoine Bacalerie. Or celui-ci se démarque également de l'horizon familial et de la chronique domestique qui sont le propre des mémoriaux-journaux ordinaires. Pour Me Bacalerie, ce qui fait événement et mérite d'être retenu diffère de ce qui est habituellement consigné. On ne s'étonne pas qu'il parle des affaires communales mais encore le fait-il de façon très limitée. A. Bacalerie signale ainsi systématiquement le nom et la qualité éventuellement des consuls en exercice 9 , il semble que ce soit plus une manière de fixer la date -à l'imitation de l'antique peut-êtreque de rapporter les débats locaux. Il évoque aussi l'arpentement de 1625 10 dont les conséquences fiscales n'étaient certainement pas négligeables pour les habitants, mais c'est en une phrase et, rajoutée de plus, à la suite de son propos. Seule exception, les « inquisitions » de 1627 font l'objet d'un véritable développement mais le Mémorial reste tout de même allusif et surtout il occulte l'exacte implication d'Anthoine Bacalerie dans le litige 11 .
20
Il s'agit d'Henri de Mayenne (1578-1621), fils de Charles de Lorraine, duc de Mayenne et chef de la ligue : A.D Tarn-et-Garonne, 5 E 17458 f° 419v°21 A.D Tarn-et-Garonne, 5 E 17458 f°419v°-420r°. Anthoine Bacalerie mentionne Bourg, gouverneur de l'Isle-Jourdain et il évoque Rapin, gouverneur de Mas-Grenier et Maravat , gouverneur de Mauvezin : ces derniers sont des rebelles huguenots notoires et indomptables : un arrêt du Parlement de Toulouse est rendu contre eux à ce propos, le 6 février 1625 cf A.D Haute-Garonne, B449, f°120.22 A.D Tarn-et-Garonne, 5 E 17460, f° 179r°. cf Fabre (Jean-Claude), « Le premier duc d'Epernon et les troubles religieux dans le Montalbanais », dans Bulletin de la Société Archéologique de Tarn-et-Garonne, t. CIV, 1979, pp.97-113 et Amalric (Jean-Pierre), « L'épreuve de force entre Montauban et le pouvoir royal vue par la diplomatie espagnole », dans Bulletin de la Société Archéologique de Tarn-et-Garonne, t.CVIII, 1983, pp. 25-40. serait pas réellement, pour Anthoine Bacalerie, une affaire extérieure à ses préoccupations et à son cadre quotidiens. Il ne quitterait guère, en somme, les perspectives courtes du livre de raison.Mais ce serait oublier que le mémorial rapporte par ailleurs et en leur donnant une large place, des faits ne touchant pas effectivement et directement à Sarrant. Il se place à l'échelle du royaume de France et même sur un plan plus large encore, à celui de l'Europe. Anthoine Bacalerie consigne ainsi en 1620 la fin de la guerre civile connue sous le nom de « seconde guerre de la Mère et du fils » -Louis XIII et Marie de Médicis-apaisée par le traité d'Angers (10 août 1620)23 . Il s'agit d'un nouvel épisode de l'agitation qui a marqué le début du règne de Louis XIII, commencé par la régence contestée de la reine-mère : la révolte des Grands est pratiquement permanente depuis 10 ans. Me Bacalerie ne parle que de celui qui est, dans ce milieu, le plus en vue sans doute : le prince de Condé dont il mentionne la libération en 161924 . Plus extraordinairement, le mémorial traite de la guerre de Trente ans ou plus exactement de ce qui fit événement lors de sa première phase : la bataille de la Montagne Blanche livrée le 8 novembre 1620. S'il ne rapporte pas le détail de la bataille, A. Bacalerie est bien renseigné sur les protagonistes, du côté de l'Empereur, Jean Tserclaes, comte de Tilly (1559-1632) à qui revient toute la gloire de cette victoire mais aussi « le duc de Bavière », Maximilien, surnommé « l'âme damnée de Ferdinand II » et enfin le comte de Buquoy (Bacalerie écrit « Bucquoy ») originaire des Pays-Bas. Dans le camp des vaincus, Bacalerie est capable de citer « L'Electeur palatin », c'est-à-dire Frédéric V (1596-1632) mais aussi « Bethlengaboy » c'est-à-dire, de toute évidence, le prince de Transylvanie, associé à la révolte contre l'autorité impériale Gabriel ou Gabor Bethlen (1580-1629) : il est d'usage d'inverser et d'accoler son prénom et son nom, on parle ainsi habituellement de Bethlengabor. Anthoine Bacalerie fait donc écho, dans son coin reculé de Gascogne, à ce qui a été considéré comme un grand moment d'histoire. Dans les pays catholiques du moins, on a vu dans la bataille de la Montagne Blanche gagnée contre l'armée des protestants l'équivalent de la victoire navale de Lépante contre les infidèles Turcs 25 . La présence d'une telle information dans le Mémorial fait naître autant d'interrogations qu'elle apporte de précisions sur son auteur. 
La relation assez exacte d'un fait aussi récent et aussi lointain que la bataille de la Montagne Blanche révèle de la part d'Anthoine Bacalerie de la curiosité et certainement une réelle exigence intellectuelle autant qu'elle démontre sa culture et son ambition. Un notaire contestataire ? Matériellement, l'excellente tenue des registres d'Anthoine Bacalerie est à remarquer : ce goût du bon ordre dans le travail comme le souci d'économiser le papier peuvent passer pour un gage de conscience professionnelle sinon de compétence. On a affaire effectivement à un personnage placé à l'échelon supérieur de sa corporation. Il a le titre et l'office de notaire royal qui le différencie de la catégorie inférieure des tabellions. La loi et l'usage l'avaient établie. Selon les dispositions de l'ordonnance d'Angoulême de 1542 ces derniers étaient cantonné aux « grosses » qui rendent les actes exécutoires alors que la rédaction plus valorisante des originaux, les « minutes », revenait aux notaires. Surtout on appelait de façon assez méprisante « tabellions », les notaires seigneuriaux, notaires de bas de gamme comparés à ceux ayant la qualité royale pour lesquels le terme de « notaire » était presque uniquement usité. Peu compétents, peu respectés il n'était pas rare que ces gagne-petits exercent d'autres besognes celle de sabotier, de chaudronnier, d'organiste ou encore de bedeau et assez souvent celle, en fait, de tavernier. Tout ce qui contribue encore à leur déclassement. A. Bacalerie peut effectivement d'enorgueillir d'appartenir à une corporation dans laquelle ne sont « institués » depuis Philippe le Bel que « ceux qui seraient trouvés suffisamment intelligents et capables ». En tout cas pour prétendre à ce qui était désigné comme un « office à pratique », il fallait satisfaire à un certain nombre de conditions : être en mesure, certes, d'acquitter la valeur d'achat de la charge, mais aussi faire preuve de capacités. Parmi elles : avoir au moins 25 ans, avoir fait l'objet d'une enquête de moralité et tout particulièrement, selon une carme, le Père Dominique de Jésus Maria prêcha la guerre sainte et s'inspira de l'Evangile du jour : « Reddite Caesari quae sunt Caesaris et Deo quae sunt Dei ». Les chefs chantaient le « Salve Regina » et parmi eux Maximilien, duc de Bavière avait donné comme mot d'ordre « Sancta Maria ». La tradition veut que Descartes ait combattu dans le camp de Ferdinand. A la suite de la « bataille », un temple luthérien de Prague, dédié à la Sainte Trinité, fut rendu au culte catholique et placé, en mémoire de la journée sous le vocable de la Vierge des Victoires « Panna Maria Viteznà ». Surtout à Rome une église fut alors consacrée à « Santa Maria della Vittoria ». L'architecte Soria la dota d'un portail en forme d'arc de triomphe. Elle est célèbre pour avoir par la suite été décorée de l'oeuvre du Bernin « la Transverbération de Ste Thérèse » cf Chaline (Olivier), La bataille de la Montagne Blanche ( 8 novembre 1620) : un mystique chez les guerriers, Paris, éd. Noesis, 2000.ordonnance de François 1 er d'octobre 1535, les candidats notaires devaient passer un examen, devant quatre conseillers au Parlement dans le ressort duquel ils souhaitaient exercer, en principe sinon en pratique. 
Surtout le notaire tirait son prestige de sa qualité d'« homme du prince et de la loi vis-à-vis des particuliers » : il apparaissait ainsi comme un véritable juge, ses actes avaient tous les caractères d'un jugement, ils étaient exécutoires sans appel, ils emportaient hypothèque, et foi leur était due jusqu'à l'inscription en faux[START_REF] Moreau | Les métamorphoses du scribe, histoire du notariat français[END_REF] .Si Anthoine Bacalerie possède donc une certaine instruction, celle-ci passe d'abord par la maîtrise de la langue française, devenue obligatoire pour tous les actes officiels, depuis l'ordonnance de Villers-Cotterêts de 1539[START_REF]L'ordonnance de Villers-Cotterêts rappelait aussi que les notaires avaient la responsabilité de la conservation des minutes et qu'ils devaient également confectionner des répertoires de leurs instruments[END_REF] . La francophonie d'Anthoine Bacalerie est de fort bon aloi, mais on relève tout de même quelques gasconismes dans ce qu'il écrit : par exemple « Bentadour » au lieu de « Ventadour » ou encore « puble » pour « peuple » ou « public ». Ce qui traduit à tout le moins un accent, et probablement la pratique du Gascon, avec ses clients sûrement, voire avec ses voisins et ses proches.Ces atouts professionnels ne suffisent pas cependant à expliquer comment Anthoine Bacalerie se trouve si bien au fait de l'actualité de son temps. Des facteurs strictement personnels sont assurément à considérer pour ne pas hasarder de trop rapides conclusions. Ainsi il faut se garder de penser qu'au début du XVIIe sècle, les nouvelles du monde arrivent à Sarrant ; il y a tout lieu de penser plutôt que Me Bacalerie, à titre individuel, va à elles. De par ses affaires, apparemment, il est amené à se rendre régulièrement à Toulouse[START_REF]les notaires royaux dans certaines villes petites et moyennes sont souvent simultanément procureurs (avoués), greffiers ou titulaires autres offices de judicature[END_REF] . C'est là vraisemblablement qu'il a pu avoir connaissance des deux imprimés qu'il mentionne comme ses sources d'information à propos de la bataille de la Montagne Blanche ou celui (ou ceux) qui l'ont renseigné sur le siège de La Rochelle en 1628. Les occasionnels peuvent ne pas avoir été ses seules lectures, peut-être faudrait y ajouter la Gazette ou le Mercure François ; ce qui lui permet de savoir la promotion à l'ordre du Saint-Esprit de Condé en 1619 ou la convocation des l'Assemblée des Notables de 1626[START_REF] Tarn-Et-Garonne | [END_REF] . Sans doute, une certaine prudence s'impose-t-elle à l'égard de ces « lectures ». En effet, la manière dont il écorche le patronyme de Tilly ou celui de Toiras qu'il écrit « Toras »[START_REF] Tarn-Et-Garonne | 'appui d'interrogatoires en présence de témoins et devant Me Bacalerie qui rédige de nombreux actes. Les syndiqués demandent entre autre que soit mis « à l'encan public » la charge de greffier des consuls : il y a fort à penser que Me Bacalerie aurait voulu cette charge pour son fils ou pour lui[END_REF] indique certainement qu'il n'avait pas ce nom sous les yeux et qu'il l'écrivait de mémoire. Anthoine Bacalerie avait-il acheté ces libelles ? C'est peu probable. Où et dans quelles circonstances alors les avait-il lu ou entendu lire ? Mais on ne saurait réduire l'originalité du propos de Me Bacalerie à cet incertain usage de l'imprimé. Quelque soit la manière dont il lise, le notaire écoute et observe aussi volontiers. 
Il note plus d'une fois d'où lui vient l'événement qu'il rapporte : « Voilla le somaire de ce que jay pu aprendre par aucun bruit ou autrement s'estre passé » écrit-il en 1621. Et on l'imagine bien cette année là, comme le duc de Mayenne, après une étape à Mauvezin, « passa proche la presente ville », au premier rang des Sarrantais qui se pressèrent pour avoir « l'honneur de le veoir, avec toute son armée et ecquipage quy estoit très magnifique » 31 . Cette ardeur et cette vitalité contrastent singulièrement avec la teneur d'ensemble très conformiste du propos. Me Bacalerie affiche sans ambages son catholicisme et son royalisme. Il termine religieusement chaque chapitre du Mémorial par une invocation à Dieu, bien sûr, mais il n'omet pas d'y ajouter de manière nettement papiste une invocation à la Vierge toujours et aux saints éventuellement. Et pour affirmer encore son adhésion à l'Eglise romaine il la désigne pas autrement que « notre religion » 32 . De même, il recourt, semble-t-il, à tous les poncifs anti-huguenots. Il use cela va sans dire de l'habituelle formule de « religion prethendue refformée », expression consacrée du côté catholique pour parler du protestantisme. Dans la même logique, Bethlengabor reçoit l'épithète d' « heresiarche et ennemy de la foi de l'Eglise » 33 . Bacalerie stigmatise aussi la duplicité des protestants qui obtiennent la révocation de Fontrailles du gouvernement de Lectoure sous le fallacieux prétexte, insinue Me Bacalerie, qu'il s'est converti au catholicisme 34 . Pareillement, il note que les huguenots ont repris « par trahison » de Navarrenx en 1620, il s'agit donc de « corriger et chastier ces temerites et rebellions ». Ce qui justifie l'expédition punitive du duc d' Epernon en Béarn 35 . Ce qu'il dit à propos du duc Henri de Rohan (1579-1638), le chef protestant, est évidemment très négatif : « le sieur Rohan a continue d'exercer ses rebelions et cruaultes sur le puple mais il n'y a rien gaigné… » 36 . De façon significative, la dernière notation du
Au fil des années, l'expression du loyalisme monarchique semble concurrencer et même dépasser ces manifestations d'allégeance au catholicisme. A. Bacalerie vante les victoires de l'armée du roi sur les rebelles huguenots : ainsi note-t-il, en 1625, que « Monsieur l'admiral fist merveilles sur mer contre monseigneur de Soubize lui ayant prins plusieurs navires » 38 .En 1627, le notaire, à la suite du récit de l'échec de Soubise à l'Ile de Ré, écrit à propos du sort de La Rochelle : « On attend sa prise ce printemps. Dieu en fasse la grace au Roy et a son armee » 39 . Surtout il rapporte l'enthousiasme unanime de la population à l'occasion des réquisitions pour les opérations de « dégât », en 1621, menées écrit-il « avec grande joyes et alegresse, cryant d'un commun accord VIVE LE ROY. Il feust porté telle dilligence que la dite execution feust parfaite dans 24 jours, chacun desquels vaquoient de sept à huit mille hommes de travail » 40 . Tout cela est trop parfait et trop grandiose pour ne pas éveiller un soupçon de gasconnade ou d'abusive complaisance au moins.Certes, il faut certainement envisager le fait qu'Anthoine Bacalerie est de par la définition même de ses fonctions, un « officier public ». Sans doute a-t-il conscience d'être, sans doute modestement mais réellement, un pilier de l'ordre établi et dès lors il estime devoir se placer forcément du côté du roi, de la loi et de l'obéissance 41 . Mais il convient aussi de souligner le caractère officiel des minutes notariales. Le Mémorial, même s'il relève de la propre initiative d'Anthoine Bacalerie ne saurait être un document privé et confidentiel où pourraient s'épancher sans retenue les secrets d'un coeur ou d'une conscience. On ne saurait trop remarquer que si A. Bacalerie parle de lui à travers ses propos, il ne le fait que très exceptionnellement à la première personne du singulier. Bien sûr il s'agit de se garder du point de vue anachronique, qui conduirait à oublier qu'au début du XVIIe siècle, contrairement à des époques ultérieures, le moi est vraiment haïssable et haï. Il faut rappeler que la réserve ou l'extrême pudeur en matière de sentiments caractérisent le livre de raison.De la même façon, il serait erroné d'être étonné, en fonction de comportements qui se sont imposés par la suite, de l'absence de neutralité et de retenue en matière confessionnelle. Par-38 A.D Tarn-et-Garonne, 5 E 17460, f°179 r°. Il s'agit du duc Henri II de Montmorency (né en 1595), fils du maréchal et connétable Damville qui détint l'amirauté de 1612 à 1626, date à laquelle Richelieu l'accula à démissionner pour réaliser sa réforme de la marine. Il périt, on le sait, décapité à Toulouse, le 30 octobre 163239 A.D Tarn-et-Garonne, 5 E 17460, f°418 r°.40 A.D Tarn-et-Garonne, 5 E 17458, f° 419 r°.41 Moreau (A.), op.cit., p. 97 : Une décision de François 1 er interdisait aux titulaires d'offices notariaux d'avoir « la barbe sale, le pourpoint et les chausses déchiquetés et autres habits dissoluts ». Si les tabellions ne revêtaient que ce que l'on dénommerait actuellement le costume civil, les notaires royaux, ceux des grandes villes surtout portaient comme les magistrats et les avocats la robe de palais et la toque. Elles étaient obligatoires à toutes les solennités auxquelles les communautés assistaient en corps constitué. Dans la vie courante, les notaires utilisaient comme d'autres professionnels du Droit, la robe courte et noire. 
delà le ton et le verbe dictés par les usages de son temps, Anthoine Bacalerie écrit en sachant qu'il peut être (sinon qu'il sera) lu et jugé par d'autres et sans aménité, c'est à prévoir. Certes l'heure n'est pas encore à la France « toute catholique » qu'institue une soixantaine d'années plus tard la révocation de l'Edit de Nantes. Néanmoins la ferveur contreréformatrice de Louis XIII et sa détermination à établir un ordre royal incontesté où le civil et le religieux se confondent sont assez concrètement et même violemment manifestées au début des années 1620 dans le Sud-Ouest pour ne pas rendre A. Bacalerie prudemment orthodoxe. Il en va non seulement de la sécurité de sa personne, mais encore de son emploi. Statutairement, en effet, il faut être de religion catholique pour exercer le métier de notaire. Dans le cas d'Anthoine Bacalerie s'ajoute le fait qu'il tient son office du syndic du notariat de la compagnie de Jésus. Comme il va assez régulièrement à Toulouse peut-être continue-t-il à entretenir des liens avec les pères jésuites qui pourraient constituer pour lui une pratique lucrative ? On peut envisager l'hypothèse qu'il ait étudié dans leur collège, installé depuis 1567 dans les locaux de l'Hôtel de Bernuy 42 . En tout cas son benjamin, Daniel, en 1654 était novice au couvent de la Compagnie de Jésus de Toulouse. Il fait alors son testament, comme il est d'usage avant de prononcer ses voeux. Il lègue 1200 livres à la maison du noviciat, 100 livres à Pierre Bacalerie son cousin et fait de sa soeur Jehanne épouse de Raymond Tiar, peigneur de laine de Toulouse, son héritière universelle. Jehanne, deux ans plus tard donne tous les biens de l'hérédité de son frère à la Compagnie de Jésus 43 . Quoiqu'il en soit, Anthoine Bacalerie est très probablement d'autant plus soucieux de claironner son catholicisme qu'il n'a pas un passé (et peut-être un présent) familial -voire même personnelconfessionnellement irréprochable.
qu'un investissement parmi d'autres. Le frère unique d'Anthoine, Jean, est laboureur et il faut remarquer que lui, visiblement, n'a pas fait d'études : il ne sait pas signer. Cela ne l'empêche pas d'accéder à la fonction de consul de la cité en 1612, puis en 1618. C'est sur l'exercice de ces charges municipales que les Bacalerie, de toute évidence, misent pour défendre et augmenter, sûrement, leur importance à Sarrant. Le père d'Anthoine a été consul en 1594 et premier consul en 1604. Sur ces traces, et à l'instar de son frère, Anthoine lui-même est second consul en 1614 et premier consul en 1620. Et son neveu, Pierre, perpétue la tradition en 1654 et en 1675. Une forte tête Le conseil de ville semble bien en tout cas être le théâtre privilégié d'un conflit à répétition entre les deux notaires de Sarrant. Me Bacalerie ne parle qu'une fois des affaires de la cité, en 1627, à la suite d'une élection consulaire qui avait, une fois de plus, fait l'objet de contestation. Il se trouve qu'elle impliquait Me Philippe Larigaudère et c'est peut-être, pour cette raison, qu'elle a les honneurs du Mémorial. Les délibérations consulaires étaient, en effet, habituellement soigneusement notées par le greffier des consuls, mais il se peut que cette année-là, Me Bacalerie ait craint que tout cela soit passé sous silence. Il y avait des précédents. Ainsi, pour l'élection consulaire de Toussaint 1626, Pierre Capéran, maître chirurgien élu premier consul avait refusé de prendre sa charge. On ignore les motifs de ce refus. Il n'en demeure pas moins qu'il avait fait appel auprès de la cour du Sénéchal de Toulouse pour casser officiellement cette élection. Ce qui semble ne pas avoir fait l'unanimité : les trois autres consuls dont George Capmartin, élu en second, ont pris, eux, leur charge et ont prêté serment. Or sur les livres des délibérations consulaires, rien n'est mentionné concernant une nouvelle élection : elle a effectivement eu lieu en secret, comme l'indique dans son Mémorial Me Bacalerie. Bien sûr, on ne saurait trop souligner que Philippe Larigaudère qui a été nommé premier consul est aussi notaire de la ville et greffier des consuls. Ce qui n'est pas vraiment un gage d'impartialité et peut bel et bien justifier la défiance d'Anthoine Bacalerie 48 . Précisément son minutier de 1627 49 mentionne en moins dans la lignée masculine et c'est son neveu, Pierre Bacalerie -marchand de Sarrant et bourgeois consul en 1654 et 1675-qui est présent à la revente de l'office d'Anthoine Bacalerie en 1665.
procceder à la demolition des maisons, depublement et couppe de toutz boys, vergers et autres plaisances de ceulx qui s'estoient retires dans Montauban comme se veit aulx vestiges mesmes a Mauvesin » 22 .
, le siège du Mas d'Azil de 1625 conduit par le maréchal de Thémines 15 et en cette même année les menées du chef protestant Soubise 16 . Il en est encore question en 1627 à l'occasion de ses manoeuvres pour s'emparer de l'île de Ré, finalement contrées par Toiras 17 : tous ces personnages étant nommément cités par Anthoine Bacalerie qui mentionne encore évidemment le mémorable siège de La Rochelle de 1628 18 . Certes, Sarrant est à proximité du théâtre des opérations et A. Bacalerie ne pouvait être indifférent aux mouvements de troupes et à ce qui se passait autour des places huguenotes, plus ou moins proches. Aussi est-il question, dans le Mémorial, de Lectoure. Me Bacalerie note ainsi la « démission » du gouverneur Fontrailles dont se méfient les autorités protestantes et son remplacement par M. de Blaineville 19 ; Montauban, de même et surtout : parce qu'elle est l'une des citadelles du parti protestant, elle ne peut manquer de royales du duc de Mayenne, alors gouverneur de Guyenne 20 . Les Sarrantais furent mis à contribution cette année-là quoiqu' A. Bacalerie reste évasif à ce propos. En effet, le duc de
Mayenne fit procéder en août à la démilitarisation des places de Mas-Grenier, l'Isle-Jourdain
et Mauvezin : non seulement les habitants devaient rendre leurs armes, mais encore avait été
promulguée une « ordonnance d'abattement et demolission des murailles et fortiffications et
comblement des fossés ». De Bertrand, un conseiller au parlement de Toulouse fut chargé de
superviser les opérations « aulx despens et dilligence des consulz et communaultés des villes
et lieulx de quatre ou cinq lieues des environs, lesquelz y envoyerent les habitants attrouppés
en ordre par jours alternatifz les tambours battanz et aulcuns ayant enseignes
desployees… » 21 . Sarrant n'est pas explicitement nommé, mais les détails fournis par Me
Bacalerie sur cette véritable mobilisation sont trop précis pour n'avoir pas été constatés de
visu.
Or, par ailleurs, les délibérations consulaires confirment que Sarrant a bel et bien participé en
1621 à la démolition des fortifications de Mauvezin, en main d'oeuvre mais aussi en impôts
prélevés, et en hébergements obligés de troupes royales. Mais elles apprennent également que
les habitants de Sarrant ont été aussi très souvent « cottisés » pour l'entretien des armées du
roi pendant le siège de Montauban. Même si A. Bacalerie ne fait aucunement état de
réquisitions d'aucune sorte, des « manoeuvriers » et des maçons participèrent aux frais de la
communauté sarrantaise à des opérations préventives autant que punitives de « dégâts »
autour de Montauban en 1625 et en 1628, au moins : il s'agissait comme l'expose Me
Bacalerie : de « Il faut ajouter à ce voisinage avec les troubles, le fait que, de longue date, il y a en
Fezensaguet des communautés réformées. Ne parle-t-on pas de Mauvezin comme de la
« petite Genève de Gascogne » ? Dans ces conditions, la guerre de Religion de Louis XIII ne
Me Bacalerie, encore jeune mais étant malade, avait fait son testament le 19 mai 1619 cf A.D Haute-Garonne, 3 E 10386, f°135 (étude Larigaudère). Il avait alors trois enfants. Deux filles Jehanne et Biaise et un fils Sixte qu'il nomme pour son héritier. On ne retrouver pas son fils Sixte parmi les notables de Sarrant, il est probablement décédé au début de l'âge adulte. Me Bacalerie a peut-être eu, après 1619, un autre fils, Pierre marchand de la ville qui sera consul en 1654 et 1675. Il n'est pas impossible aussi qu'il soit son neveu. Me Bacalerie a eu, après 1619, un autre fils, Louis. En 1638 nous trouvons Louis Bacalerie chez Me Larigaudère, « faisant pour son père Anthoine notaire ». Mais on n'en trouve plus trace par la suite, or comme on l'a vu, le benjamin d'A. Bacalerie, Daniel est entré dans les ordres, en 1654. Me Bacalerie n'a donc pas de descendance au
A. D. Tarn-et-Garonne, 5 E 17458 (1619-1620-1621) , 5 E 17459 (1622-23), 5 E 17460 (1625-1626-1627), 5 E 17461 (1628-29)
A. D. Gers,
E 10512, f°309 (étude Saunyer) : A. Bacalerie acquiert son étude le 9 avril 1613 à Guilhaume Guilhamède, notaire royal de la ville de Sarrant, qui l'avait acquise de Jean de Nolhac. Il fait cet achat pour la somme de
livres tournois et son prédecesseur lui remet les minutiers depuis mai 1606 jusqu'au jour de lachat.3 Des achats et des ventes enregistrées chez Me Larigaudière, confrère d'Anthoine Bacalerie, l'indiquent cf A.D. Tarn-et-Garonne, 3 E 10394 , f°87 notamment. La revente de la charge de Me Bacalerie eut lieu, en tout état de cause, en 1665.
Le livre de raison, selon la définition donnée par Madeleine Foisil dans le Dictionnaire du Grand Siècle (Fayard, 1989) est l'une des expressions essentielles de l'écriture privée de la fin du XVIe au XVIIIe siècle. Il est le fait presque exclusif d'hommes, chefs de famille et il constitue grosso modo un moyen terme entre le registre de comptabilité et le journal intime, plutôt féminin, du XIXe et du XXe siècle. On ne saurait trop rappeler que le grand historien gascon Ph. Tamizey de Larroque (1828-1898) contribua grandement à la mise en valeur historiographique de ce type de documents. Il procura l'admirable édition de 8 d'entre d'eux au total. On ne citera pour mémoire que les Deux livres de raison de l'Agenais, Auch, Cocharaux, 1893 et les livres de raison respectivement des familles Fontainemarie, Dudrot de Capdebosc et du chevalier d'Escage, disponibles en version numérisée sur le site « Gallica » de la B.N.F.
Voir notice « Climat » dans Bluche (Fr.), s.d., Dictionnaire du Grand Siècle, Fayard, 1989.
A.D Tarn-et-Garonne, 5 E 17458, f°130r°, 5 E 17460, f° 418r°, 5E 17461, f°183r°.
A.D Tarn-et-Garonne, 5 E 17460, f°139v°.
Jean, le laboureur, seul frère d'Anthoine Bacalerie, avait, à la suite du règlement de la succession de leur père, accumulé beaucoup de terres délaissées par le notaire à son profit. Jean Bacalerie a eu deux fils, Anthoine et Pierre.
A.D Tarn-et-Garonne, 5 E 5417458, f°130r°, 5 E 17461, f°182r° notamment.
A.D Tarn-et-Garonne, 5 E 17460 f°179v° : fin octobre 1620, alors qu'il est premier consul et juste avant de quitter sa charge, A. Bacalerie passe, avec ses collègues consuls, un « bail à arpenter la juidiction » qui devra se faire après la Toussaint 1620 -date de fin de charge-cf A.D Gers, 3 E 10515, f°89 (Me Saunyer). Cet arpentement cependant ne sera effectué qu'en 1625 quand Bernard Tressens sera premier consul.
A.D Tarn-et-Garonne, 5 E 17460 f°417v°-f°418r°
Traditionnellement les guerres de religion commencent en 1562 et s'achèvent en 1598. Mais les opérations militaires conduites par Louis XIII contre le parti protestant (entre 1620 : expédition en Béarn et 1629 : Edit de grâce d'Alès) sont aussi parfois qualifiées de « guerre de religion ». L'expression vient d'un contemporain des événements : l'historiographe officiel du roi Charles Bernard cf Tapié (V.-L.), La France de Louis XIII et de Richelieu, Flammarion, 1967, p. 124.
A.D Tarn-et-Garonne, 5 E 17458 f°140v° ; 5 E 17458 f°418v°.
A.D Tarn-et-Garonne, 5E 17458 f°419v° cf Tamizey de Larroque (Ph.), Récit de l'assassinat de Boisse-Pardaillan et de la prise de Monheurt, Plaquettes gontaudaises n°6, Paris, 1880.
15 Il s'agit de Pons de Lauzières de Cardaillac, marquis de Thémines, gouverneur du Quercy et maréchal de France (v. 1553-1627) : A.D Tarn-et-Garonne, 5 E 17460 f°179r° cf Barrière-Flavy (Casimir), Journal du siège du Mas d'Azil
en 1625, écrit par J. de Saint-Blancard, défenseur de la place contre le maréchal de Thémines, Foix, Vve Pomiès, 1894 (reprod. en fac-sim., coll. Rediviva, C. Lacour, Nîmes, 2000) : mené au mois de septembre 1625, contre un village qui ne comptait pas plus de 200 maisons, ce siège fut aussi remarquable par la farouche résistance des habitants, femmes et enfants compris que par la férocité des représailles qui suivirent. Le duc de Ventadour se vanta, paraît-il, de conduire à la messe à coups de bâton et d'étrivières ceux et celles que ses soldats épargnaient.16 Il s'agit de Benjamin de Rohan, duc de Frontenay et seigneur de Soubise, né en 1583, mort en 1642. C'est le frère cadet du duc de Rohan cf Deyon (P. et S.), Henri de Rohan, huguenot de plume et d'épée (1579-1638), Perrin, 2000, pp. 67-119.17 A.D Tarn-et-Garonne, 5 E 17460 f°179r° (1625) et 5 E 17460 f°418r°. Jean du Caylar de Saint-Bonnet, seigneur de Toiras (1585-1636) défendit trois mois durant St-Martin de Ré, en particulier alors qu'arrivaient des renforts anglais pour soutenir les huguenots rochellais cf Deyon, op. cit, p. 101.18 A.D Tarn-et-Garonne, 5 E 17461f°182.19 Il s'agit de Benjamin de Fontrailles, sénéchal depuis 1611, fils de Michel de Fontrailles dont parle Monluc dans ses Commentaires. Le registre BB5 des Archives Communales de Lectoure contient une lettre, datée de Bordeaux, le 4 avril 1620, du duc de Mayenne, gouverneur de Guyenne aux consuls qui nomme officiellement M. de Blaineville capitaine et gouverneur du château de Lectoure. Sur les places protestantes : Souriac (P.J.), Les places de sûreté protestantes, reconnaissance et déclin de la puissance politique et militaire du parti protestant (1570-1629), Mémoire de maîtrise, Université Toulouse II-Le Mirail, 1997.
Mousnier (Roland), L'homme rouge ou la vie du cardinal de Richelieu (1585-1642), Bouquins, R. Laffont, 1992, p.186-188.
Il s'agit d'Henri II , prince de Condé (1588-1646) : il est le petit-fils du chef protestant occis, le 13 mars 1569, sur le champ de bataille de Jarnac, par le pistolet de Montesquiou. ; ce personnage est le père du « Grand Condé » cf A.D Tarn-et-Garonne, 5 E 5417458, f°130r°.
A.D Tarn-et-Garonne, 5 E 17458, f°141r°. Lors de la bataille de la Montagne Blanche, du côté impérial : 25 000 hommes, du côté de l'armée de Bohême : 20 000 hommes. Il faut remarquer qu'il s'agit de mercenaires plutôt que d'armées nationales. La bataille ou plutôt l'échauffourée ne dura guère plus d'une heure : panique dans les rangs bohémiens. Fuite jusqu'à Prague dans la ville basse déjà encombrée de paysans réfugiés avec leurs charrettes et leur bétail. Fuite surtout de l'Electeur Palatin Frédéric V (1596-1632). Dans les rangs impériaux, le matin du 8 novembre sur le front des troupes des prêtres avaient distribué la communion. Un
A.D Tarn-et-Garonne, 5 E 17460, f°417 r°. |
00410427 | en | [
"info.info-dc"
] | 2024/03/04 16:41:22 | 2006 | https://hal.science/hal-00410427/file/scotch_parallelordering_pmaa.pdf | Cédric Chevalier
François Pellegrini
PT-Scotch: A tool for efficient parallel graph ordering Cédric Chevalier and François Pellegrini
I. Introduction
Graph partitioning is a ubiquitous technique which has applications in many fields of computer science and engineering. It is mostly used to help solve domain-dependent optimization problems modeled in terms of weighted or unweighted graphs, where finding good solutions amounts to computing, possibly recursively in a divide-and-conquer framework, small vertex or edge cuts that balance evenly the weights of the graph parts.
Because there always exist large problem graphs which cannot fit in the memory of sequential computers and cost too much to partition, parallel graph partitioning tools have been developed [START_REF]MeTiS: Family of multilevel partitioning algorithms[END_REF], [START_REF]Jostle: Graph partitioning software[END_REF], but their outcome is mixed. In particular, in the context of parallel graph ordering, which is the focus of this paper, they do not scale well, as partitioning quality tends to decrease, and thus fill-in tends to increase markedly, when the number of processors which run the program increases. Consequently, graph ordering is the first target application of the PT-Scotch ("Parallel Threaded Scotch") software, a parallel extension of the sequential Scotch graph partitioning and ordering tool that we are currently developing within the ScAlApplix project. Graph ordering is a critical problem for the efficient factorization of symmetric sparse matrices, not only to reduce fill-in and factorization cost, but also to increase concurrency in the elimination tree, which is essential in order to achieve high performance when solving these linear systems on parallel architectures. We outline in this extended abstract the algorithms which we have implemented in PT-Scotch to parallelize the Nested Dissection ordering method [START_REF] George | Computer solution of large sparse positive definite systems[END_REF].
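To make the link between ordering and fill-in concrete, the following small, self-contained sketch (illustrative only, not PT-Scotch code; the grid size and the two orderings are arbitrary choices of ours) runs the symbolic "elimination game" on a 2D grid graph and compares the fill-in produced by a natural row-by-row ordering with that of a simple one-level dissection-style ordering.

```python
# Illustrative only (not PT-Scotch code): count the fill-in produced by symbolic
# Cholesky elimination ("the elimination game") on a small 2D grid graph, for a
# natural row-by-row ordering and for a one-level nested-dissection ordering.
from itertools import combinations

def grid_graph(n):
    """Adjacency sets of an n x n 5-point grid; vertices are (i, j) pairs."""
    adj = {(i, j): set() for i in range(n) for j in range(n)}
    for i in range(n):
        for j in range(n):
            for di, dj in ((0, 1), (1, 0)):
                if i + di < n and j + dj < n:
                    adj[(i, j)].add((i + di, j + dj))
                    adj[(i + di, j + dj)].add((i, j))
    return adj

def fill_in(adj, order):
    """Eliminate vertices in 'order'; pairwise connect each vertex's not-yet-
    eliminated neighbours and return the number of fill edges created."""
    adj = {v: set(ns) for v, ns in adj.items()}          # work on a copy
    rank = {v: k for k, v in enumerate(order)}
    fill = 0
    for v in order:
        later = [u for u in adj[v] if rank[u] > rank[v]]
        for a, b in combinations(later, 2):
            if b not in adj[a]:
                adj[a].add(b)
                adj[b].add(a)
                fill += 1
    return fill

n = 8
g = grid_graph(n)
natural = [(i, j) for i in range(n) for j in range(n)]
# One-level dissection: order both halves first, middle-column separator last.
left  = [(i, j) for i in range(n) for j in range(n) if j < n // 2]
right = [(i, j) for i in range(n) for j in range(n) if j > n // 2]
sep   = [(i, n // 2) for i in range(n)]
print("natural ordering fill-in:    ", fill_in(g, natural))
print("one-level dissection fill-in:", fill_in(g, left + right + sep))
```

On small examples of this kind the dissection-style ordering typically yields noticeably less fill, which is precisely the effect that the nested dissection method exploits at scale.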
II. Algorithms for efficient parallel reordering
The parallel computation of orderings in PT-Scotch uses three levels of concurrency. The first level is the implementation of the nested dissection method itself, which is straightforward thanks to the intrinsically concurrent nature of the algorithm. Starting from the initial graph, arbitrarily distributed across p processors, the algorithm proceeds as follows: once a separator has been computed in parallel, by means of a method described below, each of the p processors participates in the building of the distributed induced subgraph corresponding to the first separated part. This subgraph is then folded on the first ⌈p/2⌉ processors, such that the average number of vertices per processor remains constant, which guarantees efficiency as it allows the shadowing of communications by a subsequent amount of computation. The same procedure is used to build, on the ⌊p/2⌋ remaining processors, the folded induced subgraph corresponding to the second part. These two constructions being completely independent, each of the computations of an induced subgraph and of its folding can be done in parallel, thanks to the temporary creation of an extra thread per processor. At the end of the folding process, the nested dissection process can recursively proceed independently on each subgroup of p/2 processors, until each of the subgroups is reduced to a single processor. From then on, the nested dissection process will go on sequentially, using the nested dissection routines of the Scotch library.
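The recursive structure just described can be sketched as follows (a sequential toy, purely illustrative: the separator routine is a dummy placeholder, and "folding" onto half of the processor group is modelled simply by halving a counter; a real implementation computes a true vertex separator and redistributes the subgraph).

```python
# Schematic, sequential sketch of the nested-dissection driver described above
# (illustrative only).  compute_separator() is a dummy placeholder standing in
# for the parallel multi-level separator computation.
def compute_separator(vertices, adj):
    ordered = sorted(vertices)
    sep = {ordered[len(ordered) // 2]}            # dummy one-vertex "separator"
    rest = [v for v in ordered if v not in sep]
    return set(rest[:len(rest) // 2]), set(rest[len(rest) // 2:]), sep

def nested_dissection(vertices, adj, procs, ordering):
    """Append 'vertices' to 'ordering' in nested-dissection order; 'procs'
    mimics the size of the processor group holding this (folded) subgraph."""
    if len(vertices) <= 2 or procs <= 1:
        ordering.extend(sorted(vertices))         # sequential ordering (Scotch)
        return
    part0, part1, sep = compute_separator(vertices, adj)
    # Each part would be folded onto half of the processor group and handled
    # independently; the separator vertices are ordered last.
    nested_dissection(part0, adj, procs // 2, ordering)
    nested_dissection(part1, adj, procs - procs // 2, ordering)
    ordering.extend(sorted(sep))

path = {i: {j for j in (i - 1, i + 1) if 0 <= j < 8} for i in range(8)}
order = []
nested_dissection(set(path), path, procs=4, ordering=order)
print(order)
```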
The second level of concurrency regards the computation of separators. The approach we have chosen is the now classical multi-level one [START_REF] Barnard | A fast multilevel implementation of recursive spectral bisection for partitioning unstructured problems[END_REF]. It consists in repeatedly computing a set of increasingly coarser versions of the graph to separate, by finding vertex matchings which collapse vertices and edges, until the coarsest graph obtained is no larger than a few hundreds of vertices, then computing a separator on this coarsest graph, and projecting back this separator, from coarser to finer graphs, up to the original graph. Most often, a local optimization algorithm, such as Fiduccia-Mattheyses (FM) [START_REF] Fiduccia | A linear-time heuristic for improving network partitions[END_REF], is used in the uncoarsening phase to refine the partition that is projected back at every level, such that the granularity of the solution is the one of the original graph and not the one of the coarsest graph. The matching of vertices is performed in parallel by means of an asynchronous probabilistic multi-threaded algorithm. At the end of each coarsening step, the coarser graph is folded onto half of the processors that held the finer graph, in order to keep a constant number of vertices per processors, but it is also duplicated on the other half of the processors too. Therefore, the coarsening process can recursively proceed independently on each of the two halves, which results in an improvement of the quality of the separators, as only the best separator produced by the two halves is kept at the upper level.
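To fix ideas, one coarsening step can be sketched as follows (sequential and illustrative only; PT-Scotch's actual matching is asynchronous, probabilistic and multi-threaded, and the data layout is distributed rather than a Python dictionary).

```python
# Sequential, illustrative sketch of one multi-level coarsening step: vertices
# are matched pairwise with a randomized heavy-edge heuristic and each matched
# pair is collapsed into a single coarse vertex.
import random

def coarsen(adj, vweight):
    """adj: {v: {u: edge_weight}} (symmetric); vweight: {v: vertex_weight}.
    Returns (coarse_adj, coarse_vweight, coarse_of), where coarse_of maps each
    fine vertex to its coarse representative."""
    mate = {}
    for v in random.sample(list(adj), len(adj)):          # random visit order
        if v in mate:
            continue
        free = [u for u in adj[v] if u not in mate]
        if free:
            u = max(free, key=lambda w: adj[v][w])         # heaviest free edge
            mate[v], mate[u] = u, v
        else:
            mate[v] = v                                    # remains unmatched
    coarse_of = {v: min(v, u) for v, u in mate.items()}
    coarse_vweight = {}
    for v, u in mate.items():
        rep = coarse_of[v]
        coarse_vweight[rep] = vweight[v] + (vweight[u] if u != v else 0)
    coarse_adj = {rep: {} for rep in set(coarse_of.values())}
    for v, nbrs in adj.items():
        for u, w in nbrs.items():
            cv, cu = coarse_of[v], coarse_of[u]
            if cv != cu:
                coarse_adj[cv][cu] = coarse_adj[cv].get(cu, 0) + w
    return coarse_adj, coarse_vweight, coarse_of

adj = {1: {2: 3, 3: 1}, 2: {1: 3, 4: 2}, 3: {1: 1, 4: 5}, 4: {2: 2, 3: 5}}
print(coarsen(adj, {v: 1 for v in adj}))
```

Repeating such a step until the coarsest graph has only a few hundred vertices, then projecting and refining the separator back up the hierarchy, is the essence of the multi-level scheme described above.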
The third level of concurrency regards the refinement heuristic which is used to improve the separators. FM-like algorithms do not parallelize well, as they are intrinsically sequential, and attempts to relax this strong sequential constraint can lead to severe loss of partition quality when the number of processors increases [START_REF]MeTiS: Family of multilevel partitioning algorithms[END_REF]. We have proposed and successfully tested in [START_REF] Chevalier | Improvement of the efficiency of genetic algorithms for scalable parallel graph partitioning in a multi-level framework[END_REF] a solution to this problem: since every refinement is performed by means of a local algorithm, which perturbs only in a limited way the position of the projected separator, the local refinement algorithm needs only to be passed a subgraph that contains the vertices that are very close to the projected separator. We have observed experimentally that, when performing the refinement algorithm on band graphs that contain only the vertices that are at distance at most 3 from the projected separators, the quality of the finest separator is not significantly altered, and can even be improved in some cases. The advantage of constrained band FM is that band graphs are of a much smaller size than their parent graphs, and can therefore be used to run algorithms that would otherwise be too costly to consider, such as evolutionary algorithms. What we have implemented is a multi-sequential approach: at every distributed uncoarsening step, a distributed band graph is created by using the very same algorithms as the ones used to build each of the two separated subgraphs in the nested dissection process. Centralized copies of this band graph are then created on every participating processor. These copies can be used collectively to run a scalable parallel multi-deme genetic optimization algorithm [START_REF] Chevalier | Improvement of the efficiency of genetic algorithms for scalable parallel graph partitioning in a multi-level framework[END_REF], or fully independent runs of a full-featured sequential FM algorithm. The best refined band separator is projected back to the distributed graph, and the uncoarsening process goes on. Centralizing band graphs is an acceptable solution because, for most graphs, the size of the separators is of several orders of magnitude smaller than the size of the separated graphs: it is for instance in O(n^{1/2}) for 2D meshes, and in O(n^{2/3}) for 3D meshes [START_REF] Lipton | Generalized nested dissection[END_REF].
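The band-graph construction itself amounts to a multi-source breadth-first search from the separator, as in this illustrative sketch (in PT-Scotch the remaining vertices of each part are additionally represented by "anchor" vertices, omitted here for brevity).

```python
# Illustrative sketch of band-graph extraction: keep only the vertices at
# distance at most 'width' from the projected separator (width = 3 in the text).
from collections import deque

def band_vertices(adj, separator, width=3):
    """adj: {v: iterable of neighbours}; separator: set of separator vertices.
    Returns the set of vertices within 'width' hops of the separator."""
    dist = {v: 0 for v in separator}
    queue = deque(separator)
    while queue:
        v = queue.popleft()
        if dist[v] == width:
            continue
        for u in adj[v]:
            if u not in dist:
                dist[u] = dist[v] + 1
                queue.append(u)
    return set(dist)

path = {i: {j for j in (i - 1, i + 1) if 0 <= j <= 10} for i in range(11)}
print(sorted(band_vertices(path, separator={5})))   # -> [2, 3, 4, 5, 6, 7, 8]
```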
III. Experimental results
PT-Scotch is written in ANSI C, with calls to the POSIX thread and MPI APIs. Some of the graphs that we used to run our tests are shown in Table I. Table II presents the operation count of Cholesky factorization (OPC) yielded by the orderings computed with PT-Scotch and ParMeTiS. The improvement in quality yielded by PT-Scotch is clearly evidenced, and increases along with the number of processes, as our local optimization scheme is not sensitive to the number of processes.
IV. Conclusion
We have outlined in this paper the parallel algorithms that we have implemented in PT-Scotch to compute in parallel efficient orderings of large graphs. The first results are encouraging, as they meet the expected performance requirements in terms of quality, but have to be improved in terms of scalability, because the current version of our asynchronous matching algorithm performs too many communications that are not shadowed by computations. A multibuffer version of the matching algorithm is therefore being developed.
Although it corresponds to a current need within the ScAlApplix project, namely obtaining as quickly as possible high-quality orderings of graphs with a size of a few tens of millions of vertices, sparse matrix ordering is not the application field in which we expect to find the largest problem graphs, as existing parallel direct sparse linear system solvers cannot currently handle full 3D meshes of a size larger than about fifty million unknowns. Therefore, building on the software building blocks that we have already written, we plan to extend the capabilities of PT-Scotch to compute k-ary edge partitions of large meshes for subdomain-based iterative methods, as well as static mappings of process graphs, as the Scotch library does sequentially.
Table I. Some of our test graphs.

Graph          |V| (×10^3)   |E| (×10^3)   Average degree
audikw1             944          38354            81.28
conesphere1m       1055           8023            15.21
coupole8000        1768          41657            47.12
thread                29           2220           149.32
04104410 | en | [
"shs.eco"
] | 2024/03/04 16:41:22 | 2022 | https://shs.hal.science/halshs-04104410/file/WorldInequalityLab_WP202218.pdf | Thomas Piketty
email: [email protected]
Emmanuel Saez
email: [email protected]
Gabriel Zucman
email: [email protected].
Rethinking Capital and Wealth Taxation
Keywords: optimal capital taxation, wealth taxation, inheritance taxation JEL classification: H21
This paper reviews recent developments in the theory and practice of optimal capital taxation. We emphasize three main rationales for capital taxation. First, the frontier between capital and labor income flows is often fuzzy, thereby lending support to a broad-based, comprehensive income tax. Next, the very notions of income and consumption flows are difficult to define and measure for top wealth holders, for whom capital gains due to asset price effects dwarf ordinary income and consumption flows. Therefore the proper way to tax billionaires is a progressive wealth tax. Finally, as individuals cannot choose their parents, there are strong meritocratic reasons why we should tax inherited wealth more than earned income or self-made wealth, for which individuals can be held responsible, at least in part. This implies that the ideal fiscal system should also include a progressive inheritance tax, in addition to progressive income and wealth taxes. We then confront our prescriptions with historical experience. Although there are significant differences, we argue that observed fiscal systems in modern democracies bear important similarities with this ideal triptych.
Introduction
This paper reviews a number of recent developments in the theory of optimal capital taxation and confronts them with tax practice. The equity-efficiency tradeoff is at the heart of optimal tax theory. For capital taxation, this tradeoff is especially marked as capital ownership is much more concentrated than labor income. While the top 1% labor income share is generally below 10%, the top 1% wealth share is typically several times higher, ranging from 25% to 40% in advanced modern economies. While the top 10% labor income share generally ranges from 25% to 50%, the top 10% wealth share is usually around 60-80%. Even more strikingly, while the bottom 50% labor income share ranges around 20-25% in some of the most advanced countries in the world, the bottom 50% wealth share is below 5% in pretty much every country on the planet (see [START_REF] Chancel | The World Inequality Report 2022[END_REF], for a recent presentation of such statistics across the world). Capital taxation is also more complex than labor taxation as capital not only generates income flows but is also a stock variable.
There are a variety of taxes on capital: property and wealth taxes assessed on asset stocks, individual income taxes on many forms of capital income received by individuals, corporate income taxes on corporate profits, and inheritance (or estate) taxes on transfers at death. It is useful at the outset to show what tax progressivity looks like when including all taxes collected at all levels of government, using the recently developed Distributional National Accounts methodology, which is being applied to a growing number of countries (e.g., Piketty, Saez and Zucman 2018, Blanchet et al., 2021). Figure 1 depicts the average tax rate by income groups in the United States, France, and the Netherlands. Income is defined to match total national income as recorded in the national accounts, following internationally agreed standards and methods, and thus maximizing comparability across countries. National income includes all forms of labor income (salary and benefits) and capital income (including profits retained in corporations).
Labor taxes are assigned to corresponding workers, capital taxes to owners, and consumption taxes to consumers. In all countries, tax progressivity, if it exists at all, is modest: all income groups, including those at the bottom of the distribution, pay about the same fraction of their income in taxes. At the very top, the tax system becomes regressive, particularly so in the Netherlands, where the data are of highest quality for the very top (see below). Existing taxes on capital income, income which is highly concentrated at the top of the distribution, are not sufficient in practice to maintain progressivity at the upper end of the scale. This pattern of regressivity at the top seems to be true no matter whether countries have large governments (like France), medium-size governments (like the Netherlands) or relatively small governments (like the United States). 1 Naturally, progressivity touches only upon the equity aspect of the optimal tax problem.
Figure 1 is silent on the efficiency effects of taxation. From a purely logical perspective, it is conceivable that lower tax rates on the rich (due to a more favorable treatment of capital income than labor income) are actually beneficial for the economy and in the interest of people with lower incomes. The literature on optimal capital taxation has developed models to capture the efficiency costs of capital taxation and analyze this tradeoff.
From this rich literature, we emphasize three main rationales for capital taxation that strike us as most important and hence most relevant in practice. First, the frontier between capital and labor income flows is often fuzzy, thereby lending support to a broad-based, comprehensive income tax to reduce tax avoidance opportunities. Next, the very notions of income and consumption flows are difficult to define and measure for top wealth holders. For the ultra-wealthy, capital gains due to asset price effects dwarf ordinary income and consumption flows. Furthermore, wealth at the very top is primarily in the form of business stock, which is by definition divisible and hence taxable. Therefore the proper way to tax billionaires is a progressive wealth tax. Finally, because individuals do not choose their parents, there are strong meritocratic reasons why we should tax inherited wealth more than earned income or self-made wealth, for which individuals are at least in part responsible. This implies that the ideal fiscal system should also include a progressive inheritance tax, in addition to progressive income and wealth taxes.
We confront our prescriptions with historical experience. Although there are significant differences, in particular regarding the wealth tax, we argue that observed fiscal systems in modern democracies bear important similarities with this ideal triptych. There is still a long way to go toward a socially optimal system, however. We argue that this triptych should be reinforced and made more systematic and consistent, both at the domestic and global level. This would require much more intensive international coordination than what has been achieved until now, as well as a more active role by major international organizations and a modernization of existing fiscal doctrine (a task to which the present paper attempts to contribute). In the longer run, the wealth tax and inheritance tax components could and should be extended substantially if we want to reach a more equal distribution of wealth and economic power. In particular, progressive wealth and inheritance taxes could be used to finance a minimum inheritance to all, arguably one of the most promising avenues in order to equalize wealth and opportunities. This will certainly take time, but we feel that it is important and useful to take a long-run perspective on these important issues and to set broad targets for the future.
We should make clear from the outset that we do not attempt in the present paper to cover all possible rationales for capital taxation. In particular, we do not cover time-inconsistency arguments. 2 Nor do we cover rationales that are based upon redistribution between different age groups in the presence of inter-temporal market failures. 3 More generally, capital market imperfections offer a large variety of motives and implications for capital taxation and wealth redistribution, which we cover only partially in the present paper. 4 At a more modest level, our objective in this paper is to show that the theory of optimal capital taxation has made some progress, in the sense that we now have a number of simple, tractable economic models that allow us to think about the pros and cons of existing systems of capital taxation. Needless to say, more research is needed in order to reach a more complete understanding of this important issue.
We are not the first ones to review the recent literature on optimal capital taxation and confront it with practice. [START_REF] Boadway | From Optimal Tax Theory to Tax Policy: Retrospective and Prospective Views[END_REF] is a comprehensive survey connecting optimal taxation to practice. [START_REF] Diamond | The Case for a Progressive Tax: From Basic Research to Policy Recommendations[END_REF] is a shorter take on the same issue. [START_REF] Scheuer | Taxation and the Superrich[END_REF] provide a recent survey on the literature on taxing the rich covering both theory and empirics. [START_REF] Kaplow | Optimal Income Taxation[END_REF] provides a theoretically oriented survey on optimal income taxation with a significant focus on the rich and issues of market power. Stantcheva (2020) presents a survey on optimal dynamic taxation from a theoretical angle. Saez and Zucman (2019) discuss prospects for progressive wealth taxation mostly from a practical angle. [START_REF] Scheuer | Taxing Our Wealth[END_REF] provide a recent overview on wealth taxation. This contribution provides a very selective review focusing on the theoretical aspects that we think are the most relevant in practice, some of which have not yet been modeled and analyzed fully.
2 Once capital is created, it is tempting to tax it, even if it would have been preferable to commit ex-ante not to do so (see e.g. [START_REF] Farhi | Nonlinear Capital Taxation Without Commitment[END_REF] for a model along these lines). In our view, this is a relatively weak rationale for capital taxation. If this was the main reason why capital is taxed, then the right policy response would be to create an independent tax authority with a zero-capital-tax mandate (or a low-capital-tax mandate), similar to the low-inflation mandate of independent central banks.
3 With uninsurable income risk and borrowing constraints, taxing capital income can be a way to shift the tax burden onto older cohorts and to alleviate the liquidity constraints faced by younger cohorts. For a model along these lines, see [START_REF] Conesa | Taxing Capital? Not a Bad Idea after All![END_REF]. In principle, this could also be achieved by using age-dependent taxation (which to some extent public pension systems do).
4 We refer below to a particular form of capital market imperfection, namely uninsurable idiosyncratic shocks to rates of return. Other imperfections, e.g., borrowing constraints, also matter a great deal for optimal capital taxation and redistribution. See, e.g., [START_REF] Chamley | Capital Income Taxation, Wealth Distribution and Borrowing Constraints[END_REF].
The Rationale for a Comprehensive Income Tax
In the real world, the frontier between capital and labor income flows is often fuzzy-or at least more difficult to draw than what is generally assumed in theoretical models. Typically, self-employed individuals and business owners can (at least partly) decide how much they get paid in wages vs. dividends. This also applies to corporate executives, whose compensation packages often involve a complex and diverse set of income flows. Sometimes it is not at all obvious to decompose these flows into a pure labor component (payment for labor services) and a pure capital component (compensation for capital ownership). For example, if individual wage bargaining power is influenced by one's equity position, or if there is collusion between employees and owners so as to minimize tax burden, the frontier might be fuzzy.
In our view, the fuzziness of the capital vs. labor frontier is the simplest-and the most compelling-rationale for a comprehensive income tax (i.e., an income tax treating labor and capital income flows alike) or, at least, for taxing capital and labor income flows at rates that are not too different.
Take the extreme case where the frontier is entirely fuzzy, i.e., each individual can costlessly convert labor income into capital income and vice versa. That is, each individual i receives total income y_i = y_li + y_ki, where y_li is labor income and y_ki is capital income, but the government can only observe total income y_i (the division between the two components can be manipulated at no cost). Then the only possible tax policy is a comprehensive income tax, i.e., a tax τ(y) on total income. Consider now the case where it is costly to shift income flows between tax bases. The government can now try to impose a dual income tax system, with different tax schedules τ_l(y_l) and τ_k(y_k) applying to labor and capital income flows. However, if the tax rates differ widely, then individual taxpayers may choose to pay the cost and shift their income to the most favorable tax base. If we denote by e_s the relevant income shifting elasticity, one can easily show that the optimal tax differential τ_l − τ_k is a declining function of e_s. That is, the higher the income shifting elasticity, the closer the tax rates on labor and capital should be. 5 Empirically, a large body of work has shown that tax avoidance responses can be large when there are tax avoidance opportunities. These avoidance responses often dwarf real responses (see Saez, Slemrod, and Giertz 2012 for a survey), as posited by the influential hierarchical model of behavioral responses to taxation of [START_REF] Slemrod | The Economic Impact of the Tax Reform Act of 1986[END_REF][START_REF] Slemrod | Income Creation or Income Shifting? Behavioral Responses to the Tax Reform Act of 1986[END_REF].
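To see why the optimal differential shrinks with the shifting elasticity, consider a deliberately minimal illustrative model (our sketch, not the authors' derivation; the quadratic shifting-cost specification is an assumption made here purely for tractability):

```latex
% Illustrative shifting model (a sketch under an assumed quadratic shifting cost).
% An individual can relabel an amount s of labor income as capital income at
% resource cost s^2/(2 e_s); facing rates \tau_l > \tau_k, she chooses
\[
  s^{*} \;=\; \arg\max_{s}\,\Bigl[(\tau_l-\tau_k)\,s-\frac{s^{2}}{2e_s}\Bigr]
        \;=\; e_s\,(\tau_l-\tau_k),
\]
% so shifted income is proportional both to the tax gap and to e_s, and the
% resource cost it generates equals e_s(\tau_l-\tau_k)^{2}/2.  The efficiency
% cost of any given differential thus rises linearly with e_s, pushing the
% optimal gap \tau_l-\tau_k towards zero as shifting becomes easier.
```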
It is worth stressing that the fuzziness rationale also applies in economic environments where there is otherwise no reason at all to tax capital income. Consider the benchmark [START_REF] Atkinson | The Design of Tax Structure: Direct Versus Indirect Taxation[END_REF] model where individuals live during two periods t = 1, 2 and are born with zero inherited wealth. In period t = 1, nobody owns any wealth, so that income is simply equal to labor income, which individual i allocates to consumption and saving: y_1i = y_1li = c_1i + k_2i. In period t = 2, income is equal to the sum of labor and capital income: y_2i = y_2li + y_2ki, with y_2ki = R · k_2i, where R = 1 + r is the exogenous rate of return on savings. Under standard separability assumptions on preferences for consumption vs. leisure, a well-known result in this class of models is that taxing capital income is useless: it creates a pure intertemporal distortion between period 1 and period 2 consumption decisions (just like differential commodity taxation) and brings no welfare gain. The efficient tax policy in this setting is to tax solely labor income flows (i.e., τ = τ(y_l)). 6
But if the government can only observe total income (or if individuals can easily convert labor into capital income and vice versa, so that the income shifting elasticity is large), then there is no choice but to use a comprehensive income tax (τ = τ(y), with y = y_l + y_k), or a dual income tax with limited tax differentials between income categories. Conservatives often advocate for a consumption tax on the grounds that it is equivalent to a labor income tax and hence exempts the return on capital from taxation. This is true in the basic two-period model just described, but the progressive tax on consumption would have to be defined on the present discounted value of lifetime consumption c_1i + c_2i/(1 + r). Even assuming that c_1i and c_2i are measurable, this would require measuring r, which again leaves the door open for manipulation, i.e., claiming that labor income is small and capital income is large, so that r is larger and c_1i + c_2i/(1 + r) is smaller. The most common form of progressive consumption tax proposed in practice is a progressive tax on annual consumption, which cannot replicate the optimal labor income tax in the standard Atkinson-Stiglitz model.
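For completeness, the two-period budget constraints behind this discussion can be written out as follows (our notation, matching the text; capital income is assumed untaxed and the labor tax linear purely for illustration):

```latex
% Two-period budget constraints with a linear labor income tax \tau_l and no
% capital income tax (illustrative rendering of the benchmark described above).
\[
  c_{1i} + k_{2i} \;=\; (1-\tau_l)\,y_{1li},
  \qquad
  c_{2i} \;=\; (1-\tau_l)\,y_{2li} + (1+r)\,k_{2i},
\]
\[
  \Longrightarrow\qquad
  c_{1i} + \frac{c_{2i}}{1+r} \;=\; (1-\tau_l)\Bigl(y_{1li}+\frac{y_{2li}}{1+r}\Bigr).
\]
% The labor tax leaves the intertemporal price of consumption at 1+r undistorted
% and acts like a proportional tax on the present value of lifetime consumption;
% but implementing the consumption-side version requires observing r, which is
% precisely what becomes manipulable when the labor/capital split is fuzzy.
```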
As is common in optimal tax theory, a lot hinges on the relative magnitude of different elasticities. If the cross-sectional income shifting elasticity e s is large compared to the intertemporal substitution elasticity (as suggested by available empirical estimates), then comprehensive income taxes are desirable and create little intertemporal distortions. Conversely, if the shifting elasticity is small compared to the intertemporal elasticity, then the intertemporal distortion induced by capital taxation generates significant welfare costs, so that it is better to have a dual system with low tax rates on capital income. In economic environments where there are other reasons to tax capital (e.g., the existence of inheritance, as discussed below), then other parameters play a role. In any case, the income shifting elasticity e s plays an important role for the determination of the optimal tax system.
At the upper end of the distribution, the most important area where income shifting considerations are relevant involves shifting between the individual vs. corporate income tax bases.
The logic of comprehensive income taxation laid out here implies that corporate profits should be included in the individual income tax base, just like the profits of unincorporated businesses.
The US experience since the Tax Reform Act of 1986 shows that taxing profits at the individual level is possible even for large and complex businesses. Currently, S-corporations and partnerships (many of which are large and some of which are even publicly traded) are taxed directly and solely on the individual tax returns of their owners. Applying this regime to all corporations would be a key step towards a comprehensive income tax system. The corporate income tax would become a mere withholding tax at source that could be credited back to owners for individual tax purposes, just like the taxes withheld on wage earnings are. This comprehensive income tax would make it impossible for business owners to avoid taxation by shifting income from the individual to the corporate base and retaining profits in corporations; see [START_REF] Saez | The Triumph of Injustice: How the Rich Dodge Taxes and How to Make Them Pay[END_REF] and our discussion below.
3 The Rationale for a Progressive Wealth Tax

One important limitation of income taxes is that income flows are often difficult to define and measure for top wealth holders. In particular, owners of very large fortunes typically receive personal, taxable income flows that are much smaller than their full economic income. Their wealth portfolio is generally a large stake in a business and/or managed through a holding company, a private foundation or other bodies, and most of the return is being accumulated within this vehicle. The individual owners then choose to receive an annual personal income flow that is sufficient to pay for their private consumption-which can be a very small fraction of their wealth if they are very wealthy. Although we do not have systematic data on this issue, there is much anecdotal evidence suggesting that the personal income reported by top billionaires can indeed be a tiny fraction of their true economic income.7
In other words, income flows themselves-and not only their division into capital and labor income components-are often non-observable for top wealth holders. Assume for simplicity that there is a tiny group of billionaires-making up a fixed fraction λ of the population-for whom the government can only observe the evolution of their net wealth k_{ti}, k_{t+1,i}, etc. In principle, one could try to recover the full economic income y_{ti} (in the Haig-Simons sense) by using the following accounting equation:
k_{t+1,i} = k_{ti} + y_{ti} − c_{ti},  i.e.  y_{ti} = ∆k_{ti} + c_{ti},  with  ∆k_{ti} = k_{t+1,i} − k_{ti}
The problem is that the consumption flow c ti of top wealth holders might be as difficult to define and estimate as the income flow itself y ti . Should we include the private jet used by Bill Gates or his collaborators as part of his private consumption, or as part of the income flow that is being re-invested by his foundation in order to promote new projects? It can be quite difficult-and cumbersome-to decide.
The net wealth sequence k ti , k t+1i , etc., is generally easier to observe than y ti and c ti . Generally, billionaires' wealth is tied to large business stakes (e.g., Tesla stock for Elon Musk, currently the richest person in the world). If businesses are publicly traded, wealth is easy to measure.
In the United States for example, large equity stakes in publicly traded businesses have to be reported to the Securities and Exchange Commission. These data form the backbone of the rich lists compiled by Forbes and Bloomberg (see [START_REF] Saez | Top Wealth in America: A Reexamination[END_REF]). Using the global billionaires list compiled by Forbes over the 1987-2013 period, we can see that the wealth of top global wealth holders has risen at a very fast pace over the past three decades. The average real yearly growth rate ∆k_{ti}/k_{ti} appears to be of the order of 6-7% over the 1987-2013 period (or even higher at the very top of the billionaires' list). 8 These high wealth growth rates primarily reflect asset price effects rather than ordinary capital income earned by the asset (such as profits in the case of corporate stock). Indeed, corporate stock values surge when a business is expected to eventually become profitable, well before actual profits materialize (e.g., Amazon or Google). Therefore, for billionaires, wealth is the most relevant economic variable, not income flows derived from wealth.
For most billionaires the consumption flow c_{ti} is likely quite small compared to ∆k_{ti}. For instance, with net wealth k_{ti} equal to $3 billion (roughly the average wealth of billionaires), average ∆k_{ti} is of the order of $180-210 million (6-7% of k_{ti}), so that a consumption flow c_{ti} of (say) $10 million would correspond to only about 5% of ∆k_{ti}.
One possibility would then be to ignore the consumption flow and to tax billionaires by applying the regular income tax τ (y) to their implicit income y ti = ∆k ti (a lower bound for their Haig-Simons income), or maybe to y ti = max(∆k ti , y pti ), where y pti is their conventionally measured personal income, generally much smaller than ∆k ti . Indeed, some economists have advocated for the taxation of unrealized capital gains in addition to regular income (see Saez, Yagan, Zucman 2021 for a recent discussion in the US context). For the ultra rich, this corresponds approximately to taxing wealth gains ∆k ti . Such proposals have gained traction in the US tax policy debate in recent years and the Biden administration proposed a version of this idea in its 2023 budget, in the form of a minimum tax on unrealized gains for taxpayers with wealth in excess of $100 million (US Office of Management and Budget, 2022).
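As a minimal illustration of these base definitions, the sketch below computes the Haig-Simons income y = ∆k + c and the lower-bound base max(∆k, y_p) for a hypothetical billionaire; the function names and all figures are made up for the example and carry no empirical content.

```python
# Haig-Simons income recovered from wealth changes and consumption,
#   y = delta_k + c,
# and the proposed lower-bound base max(delta_k, y_p), where y_p is
# conventionally reported personal income.

def haig_simons_income(k_t, k_t1, consumption):
    return (k_t1 - k_t) + consumption

def proposed_base(k_t, k_t1, reported_personal_income):
    return max(k_t1 - k_t, reported_personal_income)

# Hypothetical billionaire: wealth grows from $3.0bn to $3.2bn, consumes $10m,
# and reports $25m of personal income (all figures in millions, purely illustrative).
k_t, k_t1, c, y_p = 3000.0, 3200.0, 10.0, 25.0
print(haig_simons_income(k_t, k_t1, c))   # 210.0 -> full economic income
print(proposed_base(k_t, k_t1, y_p))      # 200.0 -> wealth-gain lower bound
```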
Taxing increments in wealth is relatively arbitrary, however. Most billionaires seem to derive direct utility from the wealth they own (and the power, prestige and influence conferred by their wealth or by the control of a large business), at least as much as from their private consumption (probably because of consumption satiation). 9 Indeed, popular rich lists such as the Forbes billionaires list focus on wealth rather than wealth gains. 10 Moreover, ∆k_{ti} can be highly volatile (strongly negative in stock market downturns, and strongly positive in booms), which raises practical implementation difficulties. 11
8 See Piketty, 2014, chapter 12, table 12.1.
9 Saez and Stantcheva (2018) show that introducing wealth in the utility function in the standard dynamic model overturns the classic result by [START_REF] Chamley | Optimal Taxation of Capital Income in General Equilibrium with Infinite Lives[END_REF] and [START_REF] Judd | Redistributive Taxation in a Simple Perfect Foresight Model[END_REF] that capital taxes should be zero in the long run.
10 To be sure, these lists do mention sometimes the largest gains or the largest losses in wealth but the primary focus is on wealth itself.
11 The recent US proposal to tax the unrealized gains of the ultra-wealthy (US Office of Management and Budget, 2022) is akin to a pre-paid minimum tax on the stock of unrealized gains that builds up over time, rather than an attempt to tax annual unrealized gains in full year after year, precisely to smooth volatility.
Standard optimal capital tax theory-such as the one coming out of the [START_REF] Atkinson | The Design of Tax Structure: Direct Versus Indirect Taxation[END_REF] life-cycle model or the [START_REF] Chamley | Optimal Taxation of Capital Income in General Equilibrium with Infinite Lives[END_REF] and [START_REF] Judd | Redistributive Taxation in a Simple Perfect Foresight Model[END_REF] infinite horizon model-abstracts from price effects on wealth and falls short for understanding the value of wealth taxes, particularly at the very top of the distribution. To take a simple illustration, consider an annual tax on wealth τ(k) and assume that the population includes a fixed fraction 1 − λ of workers with fixed labor income y_{lti} = y_l (who do not save), and a fixed fraction λ of billionaires with the following stochastic wealth process:
k_{t+1,i} = R(e_{ti}) · (k_{ti} − τ(k_{ti})),   (1)
where R(e_{ti}) is a stochastic rate of return (reflecting both price effects on assets and income earned on assets), which in general might depend on an individual effort decision e_{ti}, such as entrepreneurial effort.
It is important to note that a tax on capital income cannot replicate the wealth tax, for two reasons. First, and theoretically, suppose that income is defined according to Haig-Simons as the full return on wealth including price effects, i.e., y_{ti} = (R(e_{ti}) − 1) · k_{ti}. With a tax on Haig-Simons income at rate τ_Y, the transition equation is k_{t+1,i} = R(e_{ti}) k_{ti} − τ_Y (R(e_{ti}) − 1) k_{ti},
which is no longer equivalent to (1) if returns are heterogeneous across individuals. The Haig-Simons income tax hits harder those with particularly high returns than the wealth tax. Hence, the wealth tax favors the wealthy with high returns (typically new self-made billionaires) and disfavors the wealthy with low returns (typically older or inheriting billionaires). [START_REF] Allais | L'impôt sur le Capital et la Réforme Monétaire[END_REF] famously advocated for a wealth tax on such efficiency grounds and [START_REF] Guvenen | Use It or Lose It: Efficiency Gains from Wealth Taxation[END_REF][START_REF] Guvenen | Taxing Wealth and Capital Income when Returns are Heterogeneous[END_REF] provide a modern modeling. Second and practically, no income tax system to date has been able to tax the full return on wealth which includes unrealized capital gains. Actual income tax systems have a much narrower base limited to income produced by the asset (such as rent, interest, or dividends) and realized capital gains (i.e., gains measured solely when the assets are sold). Hence, annual wealth taxation has been so far the only-and still quite imperfect-tool to go after unrealized returns on a continuous basis (see our discussion in Section 5).
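A small numerical sketch can make the heterogeneous-returns point concrete. It compares one year of the wealth-tax transition (1) with a tax on the full Haig-Simons return for two hypothetical owners with identical wealth but different realized returns; the rates and returns are illustrative assumptions and are not calibrated to raise the same revenue.

```python
# Two owners with the same wealth k but different full returns (price effects
# included). A wealth tax at rate tau_W taxes them identically; a Haig-Simons
# income tax at rate tau_Y hits the high-return owner harder.

def after_tax_wealth_wealth_tax(k, R, tau_W):
    return R * (k - tau_W * k)                # transition (1) with a linear wealth tax

def after_tax_wealth_income_tax(k, R, tau_Y):
    return R * k - tau_Y * (R - 1.0) * k      # tax on the full return (R - 1) * k

k = 1_000.0                 # same wealth for both owners (in $m), illustrative
for R in [1.02, 1.20]:      # low-return vs high-return year (assumed values)
    wt = after_tax_wealth_wealth_tax(k, R, tau_W=0.02)
    it = after_tax_wealth_income_tax(k, R, tau_Y=0.30)
    print(f"R={R:.2f}: wealth tax -> {wt:.1f}, Haig-Simons income tax -> {it:.1f}")
# R=1.02: wealth tax -> 999.6,  income tax -> 1014.0
# R=1.20: wealth tax -> 1176.0, income tax -> 1140.0
# Relative to taxing the full return, the wealth tax favors the high-return owner
# and disfavors the low-return owner, as discussed in the text.
```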
Coming back to our model, the long-run ergodic distribution of top wealth will depend on the wealth tax schedule τ(k) both through mechanical effects-the fact that the wealth tax directly reduces next-period wealth in equation (1)-and through behavioral effects-the fact that the
wealth tax may reduce the incentives to put effort e ti toward building or maintaining wealth.
Take a simple example with a linear wealth tax at rate τ. Assume that billionaires choose effort e_{ti} so as to maximize U = E k_{t+1,i} − V(e_{ti}) · k_{ti} = (1 − τ) E R(e_{ti}) · k_{ti} − V(e_{ti}) · k_{ti} (effort costs are assumed to be proportional to portfolio size). This produces a first-order condition for effort (1 − τ) E R′(e_{ti}) = V′(e_{ti}), so that effort is increasing in the net-of-tax rate 1 − τ under standard assumptions of concavity of R(e) and convexity of V(e). As a result, the long-run wealth distribution depends on the net-of-tax rate 1 − τ through this behavioral effort channel.
[START_REF] Saez | Progressive Wealth Taxation[END_REF] consider a simple model of billionaires' wealth that includes only mechanical effects and no behavioral responses. Mechanical effects still produce a long-term elasticity of top wealth with respect to the net-of-tax wealth tax rate that can be large. To see this, if consumption is negligible for billionaires, after T years of taxation at average rate τ each year, the wealth of a billionaire is mechanically reduced by a factor (1 − τ)^T. The elasticity created by pure mechanical effects is equal to the average number of years billionaires remain billionaires and hence are subject to the billionaire wealth tax. The wealth tax has less impact on top wealth when there is a lot of mobility among billionaires and when new fortunes are created and destroyed quickly. Conversely, in a dynastic environment with little wealth mobility, a progressive wealth tax will have much larger long-run effects as it hits fortunes for a long time, possibly several generations. Using a simple calibration based on the Forbes 400 rich list for the United States since 1982, [START_REF] Saez | Progressive Wealth Taxation[END_REF] find that billionaires have been on the list for fifteen years on average, implying a long-run elasticity of around 15. The corresponding average wealth tax rate that maximizes wealth tax revenue is then given by the standard inverse elasticity rule τ* = 1/(1 + e) = 1/(1 + 15) = 6.25%. [START_REF] Blanchet | Uncovering the Dynamics of the Wealth Distribution[END_REF] considers a much more general model of the wealth distribution using a continuous-time stochastic model calibrated with US wealth distribution data, allowing one to assess the short- and long-run effects of wealth taxation. The model can incorporate both mechanical effects and behavioral effects. He derives simple formulas for how the tax base reacts to the net-of-tax rate on wealth in the long run, which nests insights from several existing models, and can be calibrated using estimable elasticities. In his benchmark calibration, the revenue-maximizing wealth tax rate at the top is high (slightly above 10%), but the revenue collected from the tax is lower than in the static case as the wealth tax has a large long-run impact on top wealth. 12
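The mechanical channel is easy to check numerically. The sketch below reproduces the (1 − τ)^T reduction factor and the inverse-elasticity rule τ* = 1/(1 + e); the fifteen-year average duration is the Forbes-based calibration cited above, while the tax rates in the loop are arbitrary illustrative values.

```python
# Mechanical effect of an annual wealth tax on a billionaire's fortune, and the
# revenue-maximizing rate implied by the inverse-elasticity rule tau* = 1 / (1 + e).

def mechanical_reduction(tau, years):
    """Fraction of wealth remaining after `years` of annual taxation at rate tau,
    relative to the no-tax path (ignoring consumption and behavioral responses)."""
    return (1.0 - tau) ** years

for tau in [0.02, 0.0625, 0.10]:
    print(f"tau={tau:.4f}: wealth remaining after 15 years = "
          f"{mechanical_reduction(tau, 15):.2f}")
# tau=0.0200: ~0.74   tau=0.0625: ~0.38   tau=0.1000: ~0.21

e = 15.0                     # average number of years on the billionaire list (calibration cited in text)
tau_star = 1.0 / (1.0 + e)   # revenue-maximizing average wealth tax rate
print(f"revenue-maximizing rate: {tau_star:.4f}")   # 0.0625, i.e. 6.25%
```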
These recent contributions depart from the standard micro-founded model of optimal intertemporal consumption smoothing and focus instead on understanding the forces shaping the wealth distribution and the fiscal consequences of wealth taxation, which are significant given the high and rising wealth-to-income ratios and concentration of wealth (see, e.g., [START_REF] Saez | The Desirability of Commodity Taxation under Nonlinear Income Taxation and Heterogeneous Tastes[END_REF]Zucman, 2016, 2020 in the case of the United States, and Zucman, 2019, for a review of the global evidence). In advanced economies, wealth to national income ratios have reached levels of about five or more today (up from 2-3 in the mid-20th century). If the top 1% owns 30% of wealth, a 1% average wealth tax on the top 1% raises about 5 · 0.3 · 1% = 1.5% of national income, which is quite large.13 To capture the relevant trade-offs involved in taxing top-end wealth, this new approach is in our view more relevant and robust, because it seems unlikely that consumption smoothing considerations-the heart of standard dynamic models-play a large role in the creation and destruction of great fortunes.
The Rationale for a Progressive Inheritance Tax
Inherited wealth is usually perceived-and taxed-differently than earned income or self-made wealth. Most normative theories of distributive justice put a strong emphasis on individual responsibility and merit, and share the view that life opportunities should be equalized as much as possible (in particular between individuals with different levels of inherited wealth). From an equal-opportunity viewpoint, it seems to make sense to tax earned income or self-made wealth-for which individuals can be held responsible, at least in part-less heavily than inherited wealth-for which individuals can hardly be held responsible. 14 This merit-based argument implies that the ideal fiscal system should also include a progressive inheritance tax, in addition to progressive income and wealth taxes.
There is substantial controversy, however, about the proper level of taxation of inherited wealth.
12 Jakobsen et al. (2020) estimate wealth tax elasticities leveraging the high-quality administrative wealth data of Denmark and also find the wealth tax has significant effects on long-run wealth levels at the top.
13 A 1% marginal wealth tax above the top 1% wealth threshold raises 1% (instead of 1.5%) as it exempts wealth below the threshold which is 1/3 of the base (as average wealth above a given threshold is approximately 3 times the threshold).
14 For instance, according to the compensation principle of fair taxation, individuals should be compensated for inequality they are not responsible for-such as bequests received-but not for inequality they are responsible for-such as labor income [START_REF] Fleurbaey | Fairness, Responsability and Welfare[END_REF].
The public debate centers around the equity vs. efficiency trade-off. In the economic debate, there is a disparate set of models and results on optimal inheritance taxation. These models differ primarily in terms of preferences for savings/bequests and the structure of economic shocks. One central conceptual difficulty is that each individual is at the same time-at least potentially-a bequest receiver and a bequest leaver. Even individuals who received zero bequest might prefer not to tax inheritance too heavily, because they themselves value the possibility of leaving a bequest to their own children. At the same time, if the tax burden falls entirely on labor income and inheritance is not taxed at all, then it might be more difficult for zero bequest receivers to accumulate wealth out of their labor income. The challenge is to be able to take into account these different effects in a tractable manner. Piketty and Saez (2013) consider such a model and show that optimal inheritance tax formulas can be expressed in terms of estimable "sufficient statistics" including behavioral elasticities, distributional parameters, and social preferences for redistribution. 15 Those formulas are robust to the underlying primitives of the model and capture the key equity-efficiency trade-off in a transparent way. They apply to a large class of models where inequality is two-dimensional: individuals differ both in terms of earnings (e.g., due to productivity shocks and labor taste shocks) and in terms of inherited wealth (e.g., due to their ancestors' productivity shocks and bequest taste shocks). 16
Consider a simplified version of the Piketty and Saez (2013) model with a measure one of individuals indexed by i, who are both bequest receivers and bequest leavers. A linear tax on bequests at rate τ_B funds a lumpsum grant E. The life-time budget constraint of individual i is
c_i + b_i = R · (1 − τ_B) · b^r_i + y_{Li} + E, where c_i is consumption, b_i is the bequest left, y_{Li} is inelastic labor income, and b^r_i is the pre-tax bequest received, with R = 1 + r the generational rate of return on bequests received. Individual i has utility V^i(c, b̄) over consumption and the net-of-tax capitalized bequest left b̄ = R(1 − τ_B) b, and solves max_{b_i} V^i(y_{Li} + E + R(1 − τ_B) b^r_i − b_i, R b_i (1 − τ_B)), which yields the first-order condition V^i_c = R(1 − τ_B) V^i_b.
The government budget constraint is E = τ_B b, with b aggregate (= average) bequests. The government chooses τ_B to maximize a social welfare function of the form:
SWF = Σ_i ω_i V^i(y_{Li} + τ_B b + R(1 − τ_B) b^r_i − b_i, R b_i (1 − τ_B))
with ω i ≥ 0 Pareto weights.
Let us consider a meritocratic Rawlsian criterion where the government maximizes welfare of those receiving no inheritances with uniform social marginal welfare weight ω i V i c among zero-receivers. In this case the optimum bequest tax rate takes the simple form:
τ_B = (1 − b̄) / (1 + e_B),   (2)
with e_B = [(1 − τ_B)/b] · db/d(1 − τ_B) the elasticity of aggregate bequests with respect to the net-of-tax rate 1 − τ_B, and b̄ = E[b_i | b^r_i = 0]/b the average bequest left by zero-receivers relative to the population average b. The proof is easy to understand intuitively. Suppose the government increases τ_B by dτ_B. This mechanically raises b dτ_B extra in taxes, but it loses τ_B db = −τ_B e_B b dτ_B/(1 − τ_B) through behavioral responses, so that the net new government revenue funding the lumpsum grant is dE = b dτ_B [1 − e_B τ_B/(1 − τ_B)]. There are two welfare effects on zero-receivers' utility V^i(y_{Li} + E − b_i, R b_i (1 − τ_B)). First, there is the mechanical effect of dE, equal to V^i_c dE. Second, there is the effect of dτ_B, as the change in τ_B hurts those who leave bequests by −V^i_b R b_i dτ_B = −b_i dτ_B V^i_c/(1 − τ_B) (using the envelope theorem, as b_i maximizes utility). Therefore, the net effect on social utility is ω_i V^i_c b dτ_B · [1 − e_B τ_B/(1 − τ_B) − (b_i/b)/(1 − τ_B)]. Summing across all zero-receivers (using the fact that the welfare weights ω_i V^i_c are assumed to be uniform in this group) and setting this equal to zero at the optimum leads to 1 − e_B τ_B/(1 − τ_B) − b̄/(1 − τ_B) = 0, which can be rewritten as formula (2).
The optimal linear bequest tax rate τ_B in this formula refers to what we label the "zero-bequest receivers" optimum, or "Meritocratic Rawlsian" optimum. This is the tax rate maximizing the welfare of individuals who received zero bequests. Note that about half the population in France or the US-or in any country for which data is available-receives negligible bequests. 17
Hence, this " Meritocratic Rawlsian" optimum has relatively broad appeal. 18
17 The bottom 50% of the distribution of received bequests typically receives less than 5% of the aggregate inheritance flow, while the top 10% generally receives at least 60-70% (see Piketty 2011 for a comprehensive study for France).
The elasticity e_B in the formula is the long-run elasticity of the aggregate bequest flow with respect to the net-of-tax rate 1 − τ_B. This parameter reflects how much individuals respond to bequest taxation by accumulating less wealth. Available estimates using tax changes suggest that the elasticity e_B is moderately positive (say e_B ≈ 0.1-0.2; see [START_REF] Kopczuk | The Impact of the Estate Tax on Wealth Accumulation and Avoidance Behavior[END_REF]). However this is really an empirical issue, and one certainly cannot exclude the possibility of higher elasticities. 19 Note that if b̄ = 0, i.e. zero-receivers also leave themselves negligible bequests (relative to the average), then the formula boils down to the classic inverse elasticity rule τ_B = 1/(1 + e_B) that maximizes bequest tax revenue. Conversely, if zero-receivers expect to leave very large bequests, say above average with b̄ > 1, they would favor bequest subsidies (i.e., τ_B < 0). One can see the crucial role of wealth mobility-and beliefs about wealth mobility-for the determination of optimal inheritance tax rates.
Empirically, in France or the United States, b̄ ≈ 0.5, i.e. zero-receivers leave themselves bequests that are perhaps around half of the average. In such a case, it is in their interest to tax bequests at a rate of "only" τ_B ≈ 50% even if the elasticity of bequests is zero. This is because zero-receivers want to leave bequests, so that they face a true trade-off between raising inheritance tax revenue (in order to receive a larger lumpsum grant) vs. not taxing their own children.
The model just presented assumes that wealth accumulation is motivated solely by bequest motives. However, according to available estimates, there is a wide variety of motives for wealth accumulation in the population: some accumulate wealth primarily due to bequest motives, others accumulate for precautionary reasons, or for the prestige, power or social status that sometimes go with wealth. If only a fraction ν of bequests is due to bequest motives, then b̄ should simply be replaced by ν · b̄ in formula (2). In principle, one can estimate ν using wealth surveys. An average value around ν = 0.5 might be realistic (see e.g. Kopczuk and Lupton 2007).
Unsurprisingly, the optimal bequest tax rate τ B decreases sharply with ν. In the extreme case ν = 0, then the formula boils down to the standard inverse-elasticity formula τ B = 1/(1 + e B ).
18 For more general formulas applying to the case with any level of received bequest (and with positive elasticities of labor supply, which are assumed to be zero in the formula presented here), see Piketty and Saez (2013).
In the paper we also provide simulations of optimal inheritance tax rates using wealth survey data from France and the U.S.
19 Goupille-Lebret and Infante (2018) propose a particularly well identified study in the case of France where there was an abrupt change in inheritance taxation for a specific savings vehicle ("Assurance Vie"). They obtain an elasticity of .4 but this elasticity includes both avoidance and real effects so that the real elasticity e B could be substantially lower.
That is, if zero-bequest receivers do not care at all about leaving a bequest, then the only force limiting the taxation of bequests (from their viewpoint) is the elasticity effect. In case e B = 0, then they want to tax bequests at confiscatory rates: τ B = 100%. Consistent with this discussion, many countries, such as France or Germany, have lower inheritance taxes for bequests to children than for less related or unrelated heirs, presumably because bequest motives are stronger when bequests are left to children rather than less related individuals.
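Formula (2) and its ν-adjusted variant are straightforward to evaluate numerically. The sketch below plugs in the orders of magnitude cited above (b̄ around 0.5, ν around 0.5, e_B around 0.1-0.2); the rest of the parameter grid is arbitrary and purely illustrative.

```python
# Optimal linear bequest tax rate at the "Meritocratic Rawlsian" optimum:
#   tau_B = (1 - nu * b_bar) / (1 + e_B)
# where b_bar is the average bequest left by zero-receivers relative to the
# population average, e_B the elasticity of aggregate bequests, and nu the share
# of bequests driven by bequest motives (nu = 1 recovers formula (2) as written).

def optimal_bequest_tax(b_bar, e_B, nu=1.0):
    return (1.0 - nu * b_bar) / (1.0 + e_B)

print(optimal_bequest_tax(b_bar=0.5, e_B=0.0))            # 0.50: the 50% case in the text
print(optimal_bequest_tax(b_bar=0.0, e_B=0.2))            # ~0.83: inverse-elasticity rule
print(optimal_bequest_tax(b_bar=0.5, e_B=0.2, nu=0.5))    # 0.625 with nu = 0.5

for e_B in [0.0, 0.1, 0.2]:
    tau = optimal_bequest_tax(b_bar=0.5, e_B=e_B, nu=0.5)
    print(f"e_B={e_B:.1f}: tau_B = {tau:.3f}")
# e_B=0.0: 0.750   e_B=0.1: 0.682   e_B=0.2: 0.625
```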
Finally, note that these results about optimal inheritance taxation also have implications about lifetime capital taxes. That is, if one introduces capital market imperfections, then it might be optimal to split the inheritance tax burden between a tax paid at the time of inheritance and a tax paid during the inheritor's lifetime (either in the form of a tax on the flow income from capital or a property or wealth tax levied on the stock). For instance, with uninsurable idiosyncratic risk about the future returns to capital, one does not know at the time of inheritance what the capitalized bequest value will be, so it is more efficient to spread the tax burden. As a consequence, depending on the specific parameters (e.g., the effort elasticity of future rates of return, the share of inheritance in total wealth, etc.), the optimal tax rate on capital income flows might be either higher or smaller than the optimal tax rate on labor income flows (see [START_REF] Piketty | A Theory of Optimal Capital Taxation[END_REF].
The recent model of stochastic wealth by [START_REF] Blanchet | Uncovering the Dynamics of the Wealth Distribution[END_REF] shows that, given the substantial mobility in wealth over a lifetime, bequests taxes are much less powerful than annual wealth taxes to reduce wealth concentration. To see this, note that even if bequest taxes were 100%, there would still be substantial wealth inequality due to self-made wealth. For example, a significant fraction of the Forbes 400 richest Americans are self-made and hence would still be billionaires even in presence of complete bequest confiscation. In contrast, wealth taxes at sufficiently high rates can quickly compress the distribution of wealth (as discussed and proposed by Piketty 2020). Given the level of wealth mobility estimated in [START_REF] Blanchet | Uncovering the Dynamics of the Wealth Distribution[END_REF] for the United States, a 100% bequest tax has approximately the same impact as a 3% annual wealth tax on reducing wealth concentration. This point connects with our previous section on the need for wealth taxation even on top of optimal inheritance taxes.
Comparing Actual Tax Systems with the Ideal Triptych
Our analysis so far suggests that the ideal fiscal system should include a comprehensive income tax, together with an annual progressive wealth tax, and a progressive inheritance tax. We now briefly confront our prescriptions with historical experience. Although there are significant differences, in particular regarding the wealth tax, we argue that observed fiscal systems in modern democracies bear important similarities with this ideal triptych.
The comprehensive income tax cum inheritance tax consensus
We start with the comprehensive income tax. When the modern income tax was created, in the late 19th and early 20th centuries, all developed countries decided to institute a comprehensive income tax. In every country, the progressive tax schedule-and in particular the top marginal rate (depicted on Figure 2 for the United States, the United Kingdom and France)-applied to the sum of labor and capital income. The tax base was defined in a comprehensive manner, particularly for capital income that included all business profits, rents (sometimes even including imputed rent on homeowners), as well as all dividends and interest paid by corporate stocks and fixed-income claims. A separate corporate income tax was often added as a way to tax corporate profits at source. Some countries such as the United States effectively imposed a double taxation of corporate profits (with first the corporate income tax, and then the individual income tax on dividends). Most other countries-notably in Western Europe-developed ways to alleviate this double taxation by providing a credit for corporate taxes paid for dividend income reported on individual tax returns, effectively making the corporate tax like a withholding tax at source and respecting the logic of the comprehensive income tax we discussed in Section 2.
It is unclear to what extent the choice to opt for comprehensive income taxes was due to concerns about income shifting. In the standard Haig-Simons writings about the comprehensive income tax, one finds for the most part rationales expressed in terms of ability to pay (all forms of income should be treated alike, because they reflect similar ability to pay taxes). 20 There was probably some concern about income shifting, but the main concern seems to have been about
horizontal equity (across income sources) and vertical inequality (across income levels). Given the huge concentration of wealth prevailing at the time-the highest incomes were mostly made 20 See e.g. [START_REF] Seligman | The Income Tax: A Study of the History, Theory and Practice of Income Taxation at Home and Abroad[END_REF], [START_REF] Haig | The Concept of Income -Economic and Legal Aspects[END_REF], [START_REF] Simons | Personal Income Taxation: the Definition of Income as a Problem of Fiscal Policy[END_REF].
of capital incomes (see [START_REF] Atkinson | Top Incomes in the Long Run of History[END_REF], it was obvious to everybody that the income tax should tax capital income at least as much as labor income. In practice, effective tax rates on capital (taking into account all capital taxes) typically exceeded effective tax rates on labor in developed countries in the 1960s and 1970s [START_REF] Pierre | Globalization and Factor Income Taxation[END_REF].
An illustration of this reality is that a number of countries applied tax surcharges for capital income flows. During the interwar period, capital income flows were taxed more heavily than labor income flows pretty much everywhere. In the United States and in the United Kingdom, the top rate applying to so-called "earned income"-i.e. labor income-was at times somewhat lower than the top rate applying to so-called "unearned income"-i.e. capital income. In particular, in the 1960s-1970s, the top rates reported on Figure 2 were those applying to capital income (the rates applying to earned income were often about 10 points lower). This is also confirmed by the fact that the rise of comprehensive income tax during the 20th century came together with the development of steeply progressive inheritance taxes, particularly in the United States and in the United Kingdom (see Figure 3). Inheritance taxes had long been advocated by a number of economists and philosophers as one of the most desirable forms of taxation (at least since Thomas Paine and John Stuart Mill). In the 1910s-1920s, when modern progressive inheritance taxes were created, the chief concern was to limit the perpetuation of large wealth disparities across generations. In his famous 1919 presidential address to the American Economic Association, Irving Fisher expressed strong concerns about the rising concentration of wealth in America (which in his view was becoming as unequal and "undemocratic" as in Old Europe), and called for steeply progressive taxes on inheritance and capital incomes as the proper way to restore equality of opportunities.21
The treatment of capital gains generated a conundrum for these progressive tax systems.
Countries typically taxed capital gains only upon realization, and generally at lower rates than ordinary income. For example, when individual tax rates on ordinary income were at or above 90% in the United States in the 1950s, the tax rate on capital gains was only 25%. Realized capital gains are often lumpy, making it challenging to apply the ordinary progressive tax schedule to gains that may represent a lifetime of wealth accumulation (e.g., a founder selling his or her business). The United States recognized this problem as early as 1922, when it first gave preferential treatment to capital gains by introducing an alternative tax rate of 12.5 percent for gains on assets held at least two years. Taxing unrealized capital gains is the Haig-Simons theoretical solution, sometimes proposed (most recently by the Biden administration in 2022, US Office of Management and Budget 2022), but never implemented in practice. Instead, some countries, mostly in continental Europe, adopted annual progressive wealth taxes to indirectly tax unrealized gains. Other countries, notably the United States, never implemented direct progressive wealth taxes, perhaps because of constitutional constraints. 22 However, the estate tax was seen as a backstop to go after unrealized gains accumulated over a lifetime at the time of bequest.
Similarly, the corporate tax was also a backstop to go after business profits even if these profits were retained within the corporation and not distributed to shareholders in the form of dividends. Furthermore, regulations were put in place to prevent corporations from excessive retained earnings and from distributing these earnings through share repurchases instead of regular dividends. 23 The estimates by [START_REF] Saez | The Triumph of Injustice: How the Rich Dodge Taxes and How to Make Them Pay[END_REF] show that the high effective tax rates on high earners in the post-war decades were achieved thanks to a large (as high as 50%) corporate tax on profits at source.
Obviously, such progressive tax systems combining highly progressive individual income and inheritance taxes along with heavy corporate taxation had to rely on strong enforcement to work as designed. This enforcement encompassed both direct enforcement from tax authorities but also more generally strong social norms against tax avoidance and evasion [START_REF] Saez | The Triumph of Injustice: How the Rich Dodge Taxes and How to Make Them Pay[END_REF].
The decline of tax progressivity and the vanishing capital tax base (since 1980)
Starting around 1980, one can observe in most developed countries a sharp decline in tax progressivity, with the United States under Reagan and the United Kingdom under Thatcher leading the way. Top tax rates on large income flows and large bequests were reduced, especially in English-speaking countries (see Figures 2 and 3). In many countries, a growing fraction of capital income was gradually left out of the progressive income tax base and either taxed at flatter
22 US states have a long tradition of proportional but comprehensive wealth taxation (on both real estate and financial assets); see [START_REF] Saez | The Desirability of Commodity Taxation under Nonlinear Income Taxation and Heterogeneous Tastes[END_REF]Zucman, 2019b, chapter 2, and[START_REF] Dray | Wealth and Property Taxation in the United States[END_REF]Stantcheva (2022).
23 For example, in the United States, share repurchases were largely banned through regulation until 1982 when they were almost completely deregulated. Since then, share repurchases have become the major form of distribution of profits to shareholders.
preferential rates or sometimes fully exempt for some asset classes (such as life insurance in France). As a result, in many OECD countries, the progressive income tax has almost morphed into a progressive labor income tax, sometimes with an explicit dual income tax system as in Scandinavian countries or the Netherlands. [START_REF] Saez | The Triumph of Injustice: How the Rich Dodge Taxes and How to Make Them Pay[END_REF] show that this reduced nominal progressivity translated into real reduced progressivity in the case of the United States. When taking into account all taxes at all levels of government, the US had a very strongly progressive tax system in the decades just after World War II with high-income groups paying a much larger fraction of their income in taxes than lower income groups. Today, by contrast, the US tax system looks like a giant flat tax with all income groups paying fairly similar tax rates, with some regressivity at the very top.
As shown on Figure 1, recent studies show that this is also the pattern in European countries such as France [START_REF] Bozio | Predistribution vs. Redistribution: Evidence from France and the U.S[END_REF] and the Netherlands [START_REF] Bruil | Inequality and Redistribution in the Netherlands[END_REF]. Regressivity is more pronounced in European countries than in the United States: it starts around the 95th percentile there, while it is confined to the top 0.01% in the United States. The study for the Netherlands is the most sophisticated, as it links individual business owners to their businesses and hence can assign business-level taxes to individual owners with great precision (something that cannot be done systematically in the United States, due to the lack of administrative data on the owners of businesses subject to the corporate income tax).
In all countries, regressivity at the top is driven by three main factors. First, the large payroll and consumption taxes that generate a substantial fraction of total tax revenues are regressive at the top, as income at the top does not come primarily from labor (and payroll taxes are often capped above some earnings levels) and is to a large extent saved rather than consumed (so that consumption is a small fraction of income). Second, the individual income tax is regressive at the top-end (in spite of nominally progressive schedules) because a large fraction of the income of the very rich is sheltered from the individual income tax (separate taxation at preferential flat rates for various capital income categories, or retained profits within corporations or shells).
Third, the corporate income tax, which is the main tax for the very rich (who derive most of their income from business profits) has shrunk over the last 50 years in most OECD countries.
In the United States, the federal corporate tax rate was in excess of 50% in the mid-20th century while it is only 21% today (see Zucman 2014 for a longer discussion).
One can think of several explanations for the demise of progressive taxation. To some extent, this can be viewed as a rational collective response to changes in the nature of wealth. That is, one can observe in the postwar period a decline in inherited wealth, a relative rise of life-cycle wealth, and a compression of wealth inequality [START_REF] Piketty | Capital in the 21st Century[END_REF]). In the extreme case where there is zero inherited wealth and pure lifecycle accumulation, then under preference separability and perfect capital markets assumptions it can indeed be optimal not to tax capital income flows [START_REF] Atkinson | The Design of Tax Structure: Direct Versus Indirect Taxation[END_REF].
This can be only a partial explanation, however. While it is true that inheritance flows were historically low in the 1950s-1960s (at the time Modigliani formulated the pure lifecycle model), this was largely a transitory state due to war shocks, and inheritance flows are now back to higher levels [START_REF] Piketty | On the Long-Run Evolution of Inheritance: France 1820-2050[END_REF]. It is also important to note that the historical decline in wealth concentration has been less spectacular than what some observers tend to imagine. The top 10% wealth shares used to be as much as 80-90% of aggregate wealth in developed countries at the beginning of the 20th century; in the late 20th and early 21st centuries, it is about 60-70% [START_REF] Chancel | The World Inequality Report 2022[END_REF]. The bottom line is that wealth is so concentrated that from a social welfare viewpoint distributional effects are very much likely to dominate intertemporal distortions effects-unless one is ready to assume very high intertemporal elasticities. 24 A more promising line of explanation is a change in the balance of political power. For instance, according to the optimal inheritance tax formulas described above (calibrated with plausible distributional parameters and elasticities), the top inheritance tax rates observed in the United States until the 1970s-1980s were close to optimal from the viewpoint of the bottom two-thirds of the population, while those observed in the 2000s-2010s are closer to the optimum from the viewpoint of the top 10-20% of the distribution. Why and how this change in political power-and also the change in perceptions and beliefs about expected wealth mobility-came about is a complicated and fascinating political science question, and which is indeed attracting growing attention. 25 Finally, there is little doubt that financial globalization and international tax competition have contributed to the decline in capital taxation (and possibly to the shift in the balance of political power). With free capital flows and little reporting of cross-border assets and income, 24 [START_REF] Lucas | Supply-Side Economics: An Analytical Review[END_REF] views the zero-capital-tax result obtained in the zero-shock, infinite-horizon model of [START_REF] Chamley | Optimal Taxation of Capital Income in General Equilibrium with Infinite Lives[END_REF] and [START_REF] Judd | Redistributive Taxation in a Simple Perfect Foresight Model[END_REF] as the "largest genuinely free lunch" brought by economic science. However there is little evidence supporting the infinite long-run elasticity of capital supply implicitly assumed in this class of models.
25 See e.g. [START_REF] Bonica | Why Hasn't Democracy Slowed Rising Inequality?[END_REF].
each country is in effect facing a highly elastic capital tax base. This is particularly true for small European countries, such as the Scandinavian countries, where dual income tax systems were adopted in the 1990s-2000s and where in some cases the inheritance tax was abolished (in spite of strong egalitarian values, such as in Sweden). From a single-country perspective, it might indeed be optimal with perfect capital mobility to opt for zero capital taxes, even though every country would attain higher social welfare from tax coordination and positive capital taxation. From this viewpoint, it is particularly striking to compare the conclusions of the Mirrlees (2011) report (which takes for the most part a single-country, U.K. perspective on the optimal tax system, and therefore recommends pursuing corporate tax cuts and favors a very moderate approach to tax progressivity and inheritance taxation) with the previous British reports on the ideal tax system (e.g., [START_REF] Kaldor | An Expenditure Tax[END_REF][START_REF] Meade | The Structure and Reform of Direct Taxation, Report of a Committee chaired[END_REF], which take a much more progressive perspective.26
As narrated in [START_REF] Saez | The Triumph of Injustice: How the Rich Dodge Taxes and How to Make Them Pay[END_REF] in the case of the United States, the demise of progressive taxation took place in two steps. First, there was a weakening of enforcement that led to an explosion of tax avoidance. This rise in tax avoidance (e.g., the boom of tax shelters in the late 1970s and early 1980s) was then used to argue that "progressive taxation does not work" and to advocate for lower statutory taxes on the rich.
The return of the wealth tax and the future of tax coordination
It is unclear at this stage whether rising tax competition or increased tax coordination will prevail in the future. According to the estimates of [START_REF] Tørsløv | The Missing Profits of Nations[END_REF], close to 40% of multinational profits-profits booked by corporations outside of the country of their headquarters-are shifted to tax havens each year. In spite of ambitious reforms-such as the Base Erosion and Profit Shifting initiative launched under the auspices of the OECD in 2015, and the US tax reform of 2017-there is no sign that this shifting is abating [START_REF] Wier | Global Profit Shifting, 1975-2019[END_REF][START_REF] Garcia-Bernardo | Did the Tax Cuts and Jobs Act Reduce Profit Shifting by US Multinational Companies?[END_REF]. In this context, there is growing recognition that more tax coordination-e.g. in the form of a common minimum corporate tax-might be desirable. In 2021, 136 countries and territories agreed to impose a 15% minimum tax on the country-by-country profits of multinational companies. However, as of the end of 2022, no large country had yet enacted this tax, so it remains to be seen whether this change will happen.
The agreement also has a number of limitations. The 15% rate is a low tax rate (relative to, e.g., typical tax rates on wage income). It applies to a narrow base that allows for significant carveouts for payroll and physical capital deployed abroad. If implemented, this international agreement would undermine artificial profit shifting (moving paper profits to territories with no real economic activity) but would keep tax competition (moving workers and physical capital to low-tax places) very much alive [START_REF] Barake | Minimizing the Minimum Tax? The Critical Effects of Substance Carveouts[END_REF].
There has also been growing awareness of the fact that a rising fraction of household wealth is located in tax havens [START_REF] Zucman | The Missing Wealth of Nations: Are Europe and the U.S. Net Debtors or Net Creditors?[END_REF][START_REF] Zucman | The Hidden Wealth of Nations: The Scourge of Tax Havens[END_REF], most of which is owned by households at the top of the wealth distribution [START_REF] Alstadsaeter | Who Owns the Wealth in Tax Havens? Macro Evidence and Implications for Global Inequality[END_REF]. To address this issue, the United States moved unilaterally in 2010 by passing the Foreign Account Tax Compliance Act (FATCA), which requires all foreign financial institutions to report to US tax authorities balances and income information for all US individual clients they may have. Failure to comply carries heavy penalties that the United States can credibly enforce, as it is the key player in the international financial system, and leak risks for financial institutions are real. The unilateral move by the United States was later followed by an international agreement, the Common Reporting Standard for the Automatic Exchange of Information on financial accounts between tax authorities, which was developed by the OECD in 2014 and which about 100 countries and territories have signed on to. The list of signatories includes many tax havens such as Switzerland, which used to have strict bank secrecy rules that in effect protected foreign tax evaders. Even though implementation has been slow-many tax authorities may not yet have used the reported information aggressively, and important forms of wealth such as real estate are not yet covered by this agreement [START_REF] Alstadsaeter | Who Owns Offshore Real Estate? Evidence from Dubai[END_REF]-it is clear that offshore tax evasion now carries a higher risk (or perceived risk) of detection than before 2010. This change is perhaps the starkest example of how new forms of international cooperation, long deemed utopian, can materialize relatively quickly and change the practicality of taxing capital in a globalized world.
Proposals in favor of a coordinated international registry on financial securities (e.g., as described in [START_REF] Zucman | Taxing across borders: Tracking personal wealth and corporate profits[END_REF][START_REF] Zucman | The Hidden Wealth of Nations: The Scourge of Tax Havens[END_REF] are also becoming increasingly popular. In this context, some form of annual wealth tax-or registration duty-would be a natural way to establish individual property rights and contribute to such a registry. Given the fast growth rates in wealth observed among billionaires, a coordinated wealth tax would also be a logical response (see above). This is particularly evident in Europe, where aggregate wealth-income ratios have been rising steeply over the past decades, so that the wealth tax base is quite dynamic as compared to the income tax base [START_REF] Piketty | Capital is Back: Wealth-Income Ratios in Rich Countries 1700-2010[END_REF][START_REF] Chancel | The World Inequality Report 2022[END_REF].
More generally, it should be noted that annual wealth taxes have been much more present historically in Europe than in the United States (or United Kingdom). Annual progressive wealth taxes have been applied since the early 20th century in countries like Germany, Switzerland or Sweden, and were introduced in the last third of the 20th century in countries like
France or Spain. The top rate was as large as 4% in Sweden in the early 1980s and came in addition to progressive income and inheritance taxes (though these two taxes were less steeply progressive than in English-speaking countries).
The annual wealth taxes experimented with in Continental Europe during the 20th century suffered from four main weaknesses, however (see [START_REF] Saez | The Desirability of Commodity Taxation under Nonlinear Income Taxation and Heterogeneous Tastes[END_REF]Zucman 2019, 2022 for detailed discussions).
First, they did not have a comprehensive base, creating tax avoidance opportunities. The biggest issue was the exemption of the business wealth of owner-managers (broadly defined), making it easy for the ultra-rich to avoid the tax. Some asset classes such as real estate were
often not valued at market prices, creating horizontal inequities. These issues led to the repeal of the wealth tax in Germany and Sweden in the 1990s and 2000s. Second, the wealth taxes were relatively easy to avoid by moving abroad and easy to evade by putting wealth in offshore accounts. Available estimates suggest that in the 2000s, more than 20% of the wealth of the top 0.01% richest Scandinavians was hidden in tax havens [START_REF] Alstadsaeter | Tax Evasion and Inequality[END_REF]. Third, the European wealth taxes had relatively low exemption thresholds (typically $1 million or even less) creating financial hardships for some illiquid millionaires that could be exploited by politicians to repeal the tax. Last, these taxes were based on self-reported wealth (as opposed to pre-populated returns) and enforcement was often weak. In France, tax authorities stopped requiring a detailed tax return from most taxpayers after 2011; taxpayers could simply report their estimate of their net wealth with no further details, increasing opportunities for tax evasion [START_REF] Garbinti | Tax Design, Information, and Elasticities: Evidence from the French Wealth Tax[END_REF].
These issues could be remedied by making the tax base more comprehensive (as Switzerland does for example), taxing expatriates (if not for life as in the United States, at least for a number of years), using the new common reporting standard to tax offshore wealth, increasing the exemption level so that the wealth tax hits only the very rich where liquidity issues are minimal, and strengthening information reporting requirements and enforcement. The recent wealth tax proposal by presidential candidate Elizabeth Warren in 2019 had these characteristics. [START_REF] Saez | Progressive Wealth Taxation[END_REF] estimate that such a tax, if successfully enforced, would dramatically increase the progressivity of the US tax system at the very top.
The redistribution of inheritance and the redistribution of wealth
Let us conclude by stressing that the taxation of capital and wealth has two main purposes: the first one is the raising of fiscal revenue in the most efficient and fair manner; the second one is the redistribution of wealth. While this paper has largely focused upon the first objective, it is important to stress that the second objective is also very important.
If the objective of wealth taxation is not only to raise revenue but also to have a significant impact on the long-run distribution of wealth, then going beyond revenue maximization can make sense. For instance, the 2019 tax plan made by presidential candidate Elizabeth Warren initially proposed a wealth tax at a rate of 2% above $50 million and 3% above $1 billion.
While this is enough to raise very significant tax revenue, it might not be sufficient to affect the wealth distribution in a significant manner, first because top billionaire wealth has been rising much faster than such rates in recent decades (typically around 6-8% per year), and second because the proposal did not include a wealth transfer to lower wealth classes. In contrast, Senator Bernie Sanders proposed a more graduated wealth tax with rates of up to 8% for fortunes above $10 billion. Elizabeth Warren later proposed a higher top rate as well, with a 6% marginal tax above $1 billion instead of the 3% in her original plan. With top tax rates around 6-8%, it becomes possible to reduce top-end wealth concentration, or at least to keep it under control (Saez and Zucman 2019 provide a simple simulation of impact based on the Forbes 400 richest since 1982). The complicated question from a normative perspective is to decide what the target should be in terms of the long-run distribution of wealth: is it desirable to return to the wealth distribution of 1980 or 1950, or do we aim for a more equal distribution of wealth? The models and simulation tools that were recently developed by [START_REF] Blanchet | Uncovering the Dynamics of the Wealth Distribution[END_REF] make it possible to simulate the long-run impact of any progressive wealth tax schedule, an important step toward tackling these complicated questions.
In order to analyze how progressive taxation of capital and wealth can help us reach a more equal distribution of wealth and economic opportunities, it is also critical to analyze how tax revenues can be used to finance wealth transfers to the benefit of lower social groups. The bottom 50% wealth share is extremely small (less than 5%) in pretty much every country and region of the world: it is currently about 4% in Europe, 2% in the US and 1% in Latin America (see [START_REF] Chancel | The World Inequality Report 2022[END_REF]. Given that the distribution of wealth among decedents is approximately the same as among the living (as a first approximation), the distribution of inherited wealth is similarly skewed: bottom 50% children typically receive less than 5% of the total, while top 10% children receive 60-80%. Proposals have been made to use progressive wealth and inheritance taxes to fund a system of minimum universal inheritance equal to 60% of average per-adult wealth, paid at age 25. For example, the minimum inheritance would be equal to €120,000 if average per-adult wealth is equal to €200,000 (as is approximately the case in Western Europe currently). The total annual cost would be about 5% of national income and could be paid for by a mixture of a progressive wealth tax (raising about 4% of national income) and a progressive inheritance tax raising an additional 1% of national income (see Piketty 2022, Table 2, p.161).
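As a rough cross-check of these orders of magnitude, here is a hedged back-of-envelope calculation; the per-adult income figure and the cohort share are stylized assumptions introduced for the example, not numbers taken from Piketty (2022).

```python
# Back-of-envelope cost of a minimum inheritance equal to 60% of average per-adult
# wealth, paid to everyone at age 25. Stylized assumptions: average per-adult wealth
# of 200,000 euros, per-adult national income of about 37,000 euros (a wealth-income
# ratio of roughly 5.5), and a 25-year-old cohort of about 1/60 of the adult population.

avg_wealth = 200_000.0
avg_income = 37_000.0
cohort_share = 1.0 / 60.0          # 25-year-olds as a share of adults (assumption)

grant = 0.6 * avg_wealth           # 120,000 euros per recipient
cost_share = grant * cohort_share / avg_income
print(f"grant per recipient: {grant:,.0f} euros")
print(f"annual cost: {cost_share:.1%} of national income")   # roughly 5%
```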
Needless to say, a fully satisfactory analysis of such redistributive schemes would require substantial progress in terms of theoretical modeling and normative thinking. In particular, the long-run efficiency impact of wealth redistribution requires a proper modeling of credit constraints and capital market imperfections. In principle having bottom 50% children receive around 20-30% of total inheritance (which would be approximately the case with a minimum inheritance equal to 60% of per adult wealth) could make a big difference in terms of investment opportunities as compared to the current situation where they receive less than 5% of total inheritance. But putting numbers on such efficiency gains is certainly a very complicated task.
It is worth pointing out, however, that one would need a minimum inheritance of at least this amount in order to make a substantial difference to the long-run wealth distribution. For example, if the minimum inheritance is only equal to 10% of per-adult wealth, then it is very unlikely to make a big difference to the long-run evolution of the bottom 50% wealth share. Finally, the practical implementation of such redistributive schemes would obviously require major shifts in the balance of political power between social groups. It is worth noting, however, that the magnitude of the tax revenues required for this transformation to take place (5% of national income) is not enormous as compared to the historical rise of public spending in advanced countries (from less than 10% of national income before World War I to about 30-50% today).
In any case, we feel that it is important that scholars working in this area contribute to developing a long-run perspective on these important issues and to setting broad targets for the future.
Figure 3: Top Inheritance Tax Rates 1900-2020. Notes: This figure depicts the top marginal tax rate of the inheritance (or estate) tax in the United States (Federal tax only), the United Kingdom, and France from 1900 to 2020. The tax rate for bequests to children in direct line is reported (in France, rates for other inheritors can be higher). Source: [START_REF] Piketty | Capital in the 21st Century[END_REF] and updates.
Figure 1. Average tax rates by income group: US, France, Netherlands
Figure 3: Top Inheritance Tax Rates 1900-2020
[START_REF] Saez | Progressive Wealth Taxation[END_REF] consider a simple model of billionaires' wealth that includes only mechanical effects and no behavioral responses. Mechanical effects still produce a long-term elasticity of top wealth with respect to the net-of-tax wealth tax rate that can be large. To
see this, if consumption is negligible for billionaires, after T years of taxation at an average rate τ each year, the wealth of a billionaire is mechanically reduced by a factor (1 - τ)^T.
in the case of the United States, and Zucman, 2019, for a review of the global evidence). In advanced economies, wealth to national income ratios have reached levels of about five or more today (up from 2-3 in the mid-20th century). If the top 1% owns 30% of wealth, a 1% average wealth tax on the top 1% raises about 5 × 0.3 × 1% = 1.5% of national income, which is quite large. 13
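As an illustration, the back-of-the-envelope calculations sketched in the surrounding footnotes can be reproduced in a few lines of Python. This is only a sketch of the arithmetic described above (mechanical erosion of a fortune taxed at rate τ for T years, the revenue-maximising rate implied by the inverse elasticity rule, and the revenue from a 1% tax on the top 1%); the inputs e = 15, a wealth-income ratio of 5, and a 30% top-1% wealth share are the figures quoted in the text, not new estimates.

# Back-of-the-envelope wealth tax calculations, as described in the footnotes above.

def mechanical_reduction(tau: float, years: int) -> float:
    """Factor by which a fortune shrinks after `years` of taxation at rate `tau`,
    ignoring consumption and behavioral responses: (1 - tau) ** years."""
    return (1.0 - tau) ** years

def revenue_maximising_rate(elasticity: float) -> float:
    """Inverse elasticity rule: tau* = 1 / (1 + e)."""
    return 1.0 / (1.0 + elasticity)

def wealth_tax_revenue(wealth_income_ratio: float, top_share: float, rate: float) -> float:
    """Revenue as a share of national income from taxing the top group's wealth at `rate`."""
    return wealth_income_ratio * top_share * rate

print(mechanical_reduction(tau=0.0625, years=15))   # about 0.38: the fortune is cut by roughly 60%
print(revenue_maximising_rate(elasticity=15))        # 0.0625, i.e., 6.25%
print(wealth_tax_revenue(5.0, 0.30, 0.01))           # 0.015, i.e., 1.5% of national income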
90% in the United States in the 1950s, the tax rate on capital gains was only 25%. Realized capital gains are often lumpy, making it challenging to apply the ordinary progressive tax schedule on gains that may represent a lifetime of wealth accumulation (e.g., a founder selling his or her business). The United States recognized this problem as early as 1922 when it first gave preferential treatment to capital gains by introducing an alternative tax rate of 12.5 percent for gains on assets held at least two years. Taxing unrealized capital gains is the Haig-Simons theoretical solution, sometimes proposed (most recently by the Biden administration in 2022, US Office of Management and Budget 2022), but never implemented in practice. Instead, some countries, mostly in continental Europe, adopted annual progressive wealth taxes to indirectly tax unrealized gains.
Of course, taxes fund transfers and transfers are progressive, so that the net effect of the tax-and-transfer system is almost always progressive (see Chancel et al., 2022).
See Piketty, Saez and Stantcheva (2014) for a formal modeling of the top tax rate and Piketty and Saez (2013b) for a simpler linear tax rate analysis.
See Atkinson and Stiglitz (1976). See also[START_REF] Saez | The Desirability of Commodity Taxation under Nonlinear Income Taxation and Heterogeneous Tastes[END_REF].
See Saez and Zucman (2019b) and the billionaire tax data leaked to Propublica(Eisinger et al. 2021) for evidence in the case of the United States.
[START_REF] Decerf | Fair Inheritance Taxation[END_REF] use a similar model but using fairness principles instead of standard social welfare maximization. They consider more general inheritance taxation and obtain qualitatively similar results.
The bi-dimensionality of inequality-labor and inheritance-is the key element that breaks the classical Atkinson-Stiglitz result of zero capital taxation (which is obtained in a life-cycle model with only labor income inequality).
Fisher recommended applying the Rignano principle, according to which the entire bequest should be taxed if it has been transmitted for at least three generations. See [START_REF] Fisher | Economists in Public Service: Annual Address of the President[END_REF].
Note that the steeply progressive consumption tax advocated by Kaldor has never been implemented in any country (in part because the proper measurement of individual consumption levels requires the measurement of both income and wealth, i.e., a progressive consumption tax requires the existence of a progressive income tax and a progressive wealth tax). |
04104585 | en | [ "shs" ] | 2024/03/04 16:41:22 | 2023 | https://shs.hal.science/halshs-04104585/file/WorldInequalityLab_WP202301.pdf | Ram Singh
email: [email protected]
Amit Bhaduri
Surjit Bhalla
Nitin Bharti
Keshav Choudhary
K L Krishna
J V Meenakshi
Subhash C Pandey
Thomas Piketty
Ajit Ranade
Soumyajit Ray
Do the Wealthy Underreport their Income? Using General Election Filings to Study the Income-Wealth Relationship in India
Keywords: Income, Wealth, Income-Wealth Ratio, Inequality, Income Tax, Tax Evasion JEL Classification: D31, D63, H24, H26
The income reporting behaviour of different wealth groups is a critical public finance issue that has remained under-researched in the Indian and international contexts. We model and estimate the relationship between wealth and reported income for individuals and families across different wealth groups. We use a new dataset based on affidavits filed by election contestants, the Forbes List of billionaires, and the statistics published by the Indian Tax Department. We show that the wealthier the individual or the family, the lesser is reported income relative to wealth. On average, a 1% increase in family wealth is associated with a decrease of more than 0.5% in the reported income as a ratio of wealth. The total income reported by the bottom 10% of families in the data amounts to more than 188% of their wealth; in contrast, the wealthiest 5% [respectively 0.1%] of families reported incomes that were just 4% [respectively 2%] of their wealth. The total income reported by the wealthiest Forbes list families is less than 0.6% of their wealth. From another perspective, the total income reported by the wealthiest 0.1% of families is only about a fifth of the returns from their capital, and at least 80% of their capital income goes unreported in the income tax returns. For the Forbes-listed 100 families, more than 90% of the capital returns do not figure in their reported incomes! The income-wealth ratios for affluent individuals exhibit very similar patterns. We discuss the processes responsible for the "missing" income of the wealthy groups and show that this "missing" income leads to an underestimation of income inequality. Furthermore, it reduces the tax liability of the wealthiest percentile group to a mere 1% of their wealth. The tax liability of the wealthiest 0.1 centiles and the Forbes-listed families is less than one-tenth of their capital income. Tax paid by these groups relative to their wealth is smaller than the relative tax liability for middle-wealth groups. Finally, we show that ceteris paribus, women report lower incomes than men, and that individuals exposed to greater media and civil society scrutiny report relatively high incomes. Our analysis suggests that recent measures taken by the Indian central government against illicit income and wealth hoarding have delivered the intended results.
Introduction
The income reporting behaviour of taxpayers is a central issue in public finance. Tax revenue depends on the income reported by taxpayers: the higher the reported income, the larger the tax revenue, and vice versa. The taxman, faced with increasingly ambitious targets of tax collection, thus wants to know what fraction of the total income is being reported by the citizens. The issue is also of concern to other government agencies whose ability to sponsor and execute welfare schemes and programmes depends on tax revenue, which is a dominant component of government finances. The income reporting behaviour of different wealth groups is also critical from an equity viewpoint. If wealthy groups can get away with paying tax on a relatively small part of their income, the outcome can be a regressive income tax regime, which in turn can exacerbate income and wealth inequalities.
Of late, these issues have attracted much attention from the media and think-tanks. Media reports abound on how billionaires like Jeff Bezos, Elon Musk, and Warren Buffett pay very little by way of income tax. In India, while movie stars such as Akshay Kumar, Amitabh Bachchan, and Salman Khan are among the top income taxpayers, very few of the wealthiest Indians figure on the list. 1 Yet, the income reporting behaviour of different wealth groups has remained under-researched in Indian and other contexts even though the relationship between national wealth and income has been extensively examined for many countries. 2 There is very little empirical research on the relationship between wealth and reported income at the individual or household levels. 3 The most plausible reason for the absence of empirical microeconomic research on the subject is the lack of data required for this purpose. Data sources that provide information on both individual wealth and income levels are hard to come by.
In this paper, we compile and use a new dataset to examine incomes reported to tax authorities by different wealth groups. This dataset is based on affidavits submitted by the contestants of elections to the Lok Sabha, the house of representatives in Parliament of India. These affidavits are the only simultaneous source of information on the wealth and income levels of a large number of Indians. These documents provide information on the wealth and income of 7,596 households (HH) and their adult members. To our knowledge, this study is the first to use election affidavit data to examine the relationship between wealth and the income reported by individuals and households from across wealth groups. We supplement this source of information with the Forbes' List (FL) of billionaires, and statistics published by the Government of India's Central Board of Direct Taxes (CBDT). These data sources enable us to cover India's entire range of wealth and income distributions.
The coverage of the affidavit data itself is extensive. The HH wealth covered by it ranges from a negative wealth (net liability) of ₹51 crores to more than ₹8,911 crores -a figure not very far below what is seen at the left tail of the wealth distribution for the Forbes-listed families. Similarly, the annual family incomes it lists vary from a paltry sum of ₹178 to as high as ₹206 crores. Further, by virtue of its structure, this dataset is reasonably representative of the Indian context in terms of the regional and rural-urban distribution of the population. As we will discuss in the next section, the dataset includes all leading social, demographic, professional, and educational categories. We will refer to the affidavit data as the General Election (GE) data.
The GE data also exhibits several properties known of the wealth distribution and, separately, the income distribution in India; such as, the concentration of wealth and income in a few hands, 4 and male dominance over family income and wealth, among others. As one would expect, the share of the financial assets such as company stocks and firm ownership increases with wealth levels. The patterns emerging from the affidavit data are also consistent with what can be gleaned from the other independent sources. For instance, the asset portfolios of the wealthiest individuals in the data resemble the portfolios held by the most affluent non-politician Indians on the FL. Furthermore, asset holdings exhibited by the wealthy groups in the affidavit data are very similar to what is observed in international studies such as Piketty (2018, chapter 9), Wolff (2017), OECD (2018), and Chancel, Piketty, Saez, Zucman, et al. (2022).
It is worth emphasising that we do not use GE data to estimate wealth distribution or income distribution. We use it only to examine the relationship between wealth and corresponding income levels reported to tax authorities. From this viewpoint, besides being the only simultaneous source of information on incomes and wealth, the GE data pass the 'smell' tests on several counts in addition to possessing the properties discussed above. For instance, as will be discussed in Section 4, the downward trend in the income-wealth ratio emerging from the data nests well within what can be inferred from other independent data sources such as the FL and the statistics published by the Indian Income Tax Department put together. The trends are also consistent with the inferences gleanable about the leading wealth groups in other countries. 5 Still, there can be legitimate concerns regarding the representativeness of GE data for Indian society. Technically speaking, the income-wealth relationships emerging from the data might not hold in general. Section 3 discusses several aspects of this concern in detail and presents the robustness checks used in this study to address them.
As such, the affidavits are an invaluable source of information as far as information on family wealth and its components are concerned. However, these documents are only partially informative as far as income is concerned. Affidavits provide information only on what can be described as the net "taxed in hand income" reported by the candidates and their family members to the tax department. In particular, these documents do not offer information on the total income reported as "taxable income" in the candidates' income tax returns (ITRs). Additionally, they contain no information on the income reported by the candidates and their families under the head called "tax-exempt income".
To estimate the total income reported by individuals as taxable, we use statistics published by the CBDT. First, we examine the relationship between net taxed income and total taxable income reported to tax authorities; we then use this relationship to estimate the latter from the former. In addition, we use CBDT data to estimate the top income levels reported in the ITRs. For this purpose, we derive the Generalised Pareto Interpolations (GPIs) for the annual statistics published by the Tax Department. These interpolations are used to estimate the top levels for net taxed income and taxable income. To estimate the various forms of exempt income -such as agricultural income, dividend income, long-term capital gains, etc. -we use the affidavit data on asset ownership along with various sources on the rate of returns for different classes of assets. Further details on this are provided in Sections 4 and 5.
We show that the reported income as a proportion of wealth decreases with the wealth. On average, the wealthier a household is, the smaller its income is relative to its wealth. This decreasing trend in the income-wealth ratio persists for all versions of income reported to tax authorities, namely net-taxed income, gross income reported as taxable, and total income declared including the income reported under the category of tax "exempt income". The decreasing trend holds for individuals as well. The income-wealth ratios for HHs and individuals decrease continuously with wealth and fall precipitously at the top wealth levels.
According to our estimates, for the bottom 10% of households, reported income is almost double their wealth. In contrast, for the top 1% of families, the total reported income amounts to just 3-4% of their wealth. For the wealthiest 0.1%, the total reported income adds up to less than 2% of their wealth. For the most affluent ten families on the FL, the reported income adds up to just about half a per cent of their wealth! The relationship between individual wealth and reported income exhibits a very similar pattern. Even if we ignore the labour income and consider only the capital income as a reference point, the income reported in ITRs of the wealthy and super-wealthy groups is a small fraction of the returns from their wealth.
As the dynamics of capital income modelled and empirically examined by us are similar across market economies, our findings should be of interest and relevance beyond the Indian context. Specifically, our study contributes to three kinds of literature. First, by examining the relationship between wealth and reported income for individuals and HHs from across various wealth groups, it contributes to an area that has remained under-researched in the Indian and international contexts. From Dynan (2009), Piketty (2014), OECD (2018), and Chancel, Piketty, Saez, Zucman, et al. (2022), one can infer only aggregate income-wealth ratios for select wealth groups. Our findings show that the broad patterns discernible from these studies hold at the individual and household levels.
Second, affluent Indians, much like their counterparts in other market economies, can choose what fraction of their capital income gets transferred to their individual accounts and in what forms. To minimise tax liability, they transfer only a tiny fraction of the returns from capital to their personal accounts. We show that only a small proportion of their capital income gets accounted for in their tax reports; the larger fraction of these wealthy groups' capital income goes missing from tax data and therefore remains untaxed.
Third, our findings underscore the case for a wholesome assessment of income tax regimes. The standard approach considers a regime to be progressive if the applicable marginal tax rates increase with the reported income. Accordingly, Nayak and Paul (1989), Piketty and Qian (2009), Besley and Persson (2014), CBGA-India (2015), Chancel and Piketty (2019), and Datt, Ray and Teh (2022), among others, conclude that the Indian income tax regime is progressive. However, when the affluent can choose how much of their income gets taxed, the question we should ask is this: Is the tax regime progressive with respect to the actual total income, as opposed to merely the income reported to tax authorities?
We present evidence suggesting that the Indian tax regime is regressive vis-à-vis the total income as opposed to the income reported in the ITRs. Our most generous estimates suggest that the tax paid by the wealthiest 5% of individuals amounts to less than one-fifth of their capital income, and the tax liability of the wealthiest 0.1 percentile is just about one-tenth of their capital income. Super-wealthy Indians on the FL pay tax amounting to a mere 5% of their capital income.
The tax regime is even more regressive with respect to wealth. We show that at top wealth levels, the wealthier a taxpayer, the smaller is the tax paid relative to wealth. For the wealthiest centile, the tax liability amounts to about 1% of their wealth. For the wealthiest 0.1% individuals, the tax liability amounts to approximately 0.7% of the wealth. Super-wealthy Indians on the FL face effective tax liability amounting to just 0.4% of their wealth. The relative tax liability of the ultra-wealthy groups is lower than that of the middle-wealth groups, even after considering the various exemptions granted to the latter under the tax law.
Our results also make relevant contributions to the literature on income inequality in India. Several studies have estimated income inequality using the statistics published by the Income Tax Department and other sources such as the National Sample Survey Office (NSSO), the Central Statistics Office (CSO), and the Reserve Bank of India (RBI). 6 As will be discussed below, the income tax data used by these studies miss a substantial share of the opulent group's income and thus underestimate inequality.
By showing that the set of the top income-rich Indians differs from the country's wealthiest individuals, our results supplement similar findings in international contexts, as seen in Piketty (2014) and Chancel, Piketty, Saez, Zucman, et al. (2022). Our results are also relevant for studies on the differential effect of transparency on reporting behaviour. In line with the broad findings reported in Djankov, La Porta, Lopez-de-Silanes, and Shleifer (2010), Libman, Schultz and Graeber (2016) and Szakonyi (2022), our results suggest that people exposed to media and civil society scrutiny have a stronger incentive to report their incomes truthfully. In addition, we find profession and gender fixed effects. For instance, ceteris paribus, women report smaller incomes than men.
Full-time agriculturists and politicians also report relatively low income levels.
The paper is organised into the following sections. Section 2 introduces a mathematical model that provides an analytical framework for the empirical analysis. Readers not keen on technical details may skip Section 2 and proceed directly to Section 3, which discusses the datasets and the summary statistics used in this study. Section 4 presents our findings on the relationship between the different types of income reported by the taxpayers on the one hand and their wealth on the other. Section 5 presents the regression results on the determinants of income-wealth ratios for individuals and households. Section 6 examines the proportion of total individual income that goes missing from the reports filed to the tax authorities. It also discusses the mechanisms that facilitate partial income reporting by opulent groups. Section 7 discusses the implications of the missing income for the progressivity of the tax regime and the existing estimates of income inequality in India. In Section 8, we offer concluding remarks. The details of methodology used to estimate the various kinds of income reported to the tax authorities are in the Appendix.
Wealth and Income: A conceptual framework
In this section, we develop a conceptual framework to answer the question: What relationship should we expect between different forms of income and wealth? Readers not interested in these technical details may skip to the next section.
Following Djankov, La Porta, Lopez-de-Silanes, and Shleifer (2010), Piketty (2014), Asher and Novosad (2019), and Fisman et al (2019), we consider an individual's wealth to be the market value of all assets owned, net of all debts owed. A household's wealth is simply the sum of the wealth of its members. Our definition of wealth includes all assets (financial and non-financial) together with consumer durables and jewellery on which the ownership rights can be enforced, including the right to sell on the market. Following Piketty (2014), we use the terms wealth and capital interchangeably.
We denote wealth by 𝑊 and income by 𝑌. Personal income, 𝑌 𝑃𝐼 , consists of all the earnings by an individual in a given year. It has two components: labour and capital income. Let 𝑌 𝐿 and 𝑌 𝐾 denote the labour and capital income respectively. Thus, personal income 𝑌 𝑃𝐼 = 𝑌 𝐿 + 𝑌 𝐾 .
6 See Ojha and Bhatt (1964), Banerjee and Piketty (2005), Sarkar and Mehta (2010), Basole (2014), Ahmed and Bhattacharya (2017), Sinha et al. (2017), Assouad, Chancel and Morgan (2018), Chancel and Piketty (2019), Sahasranaman and Jensen (2021), and Datt, Ray and Teh (2022), among others.
The labour income of an individual is the annual total remuneration received for services provided. It includes earnings in the form of salary or wages, commissions, honoraria, etc. The capital income, 𝑌 𝐾 , on the other hand, is the total annual returns from the wealth owned by the individual. Specifically, capital income is the sum of economic returns from all assets combined. It includes dividend income from stocks, interests from deposits, rental income from property, equity income from stakes owned in estates and trusts, and profits from corporations, sole proprietorships, and partnerships. It also includes capital gains from assets owned. We use the terms total capital income and returns from wealth interchangeably.
The total capital income can be split into two categories. The first is what we call direct returns or "direct capital income". This is the "regular" income from capital. For example, rent is a regular direct income from a commercial property. Similarly, a residential property generates direct income as rent if leased out, and as "imputed rent" in the case of self-occupied dwellings. Interest is a direct income from instruments such as bonds, bank deposits, and savings accounts. Profits are a direct income from the ownership of firms, sole proprietorships, and partnerships. Company stocks also provide direct payment in the form of dividends.
In addition, a capital asset provides economic returns in the form of capital gains, defined as the appreciation in the market value of the asset. Wealth assets, such as residential and commercial properties, stocks, and equities, tend to appreciate over time, leading to capital gains for owners. Capital gains from an asset remain unrealised unless the asset is exchanged or sold. If realised, capital gains must be reported to the tax authorities as capital income. We term the unrealised capital gains "indirect capital income". The total income from an asset is the sum of the direct and indirect income.
Note that we have not included the "realised" capital gains as part of capital income. The omission of realised capital gains from the model is deliberate and temporary. We will revisit this issue later in this section.
To formalise the relationship between wealth and capital income, let us suppose that wealth consists of 𝑛 assets, 1, … , 𝑛. Let the market value of these assets be 𝐴 1 , 𝐴 2 , … , 𝐴 𝑛 , respectively. Let 𝐴 = 𝐴 1 + 𝐴 2 + ⋯ + 𝐴 𝑛 . Thus wealth 𝑊 = 𝐴 - 𝐿, where 𝐿 ≥ 0 denotes the liability of the individual or family. By definition, 𝑊/𝐴 ≤ 1, and it is plausible to assume that 𝑊 ′ (𝐴) > 0. We define 𝑠 𝑖 = 𝐴 𝑖 /𝐴, i.e., 𝑠 𝑖 is the share of the 𝑖th asset in the asset portfolio. Let 𝑦 𝑖𝐷 and 𝑦 𝑖𝐼 denote the direct and indirect annual income from asset 𝑖 respectively. The total annual return from an asset is the sum of the direct and indirect income generated by it. So, the total (yearly) return from an asset 𝑖 is 𝑦 𝑖 = 𝑦 𝑖𝐷 + 𝑦 𝑖𝐼 , and the corresponding rate of return is 𝑟 𝑖 = 𝑦 𝑖 /𝐴 𝑖 . The total direct capital income from all assets combined is 𝑌 𝐾𝐷 = 𝑦 1𝐷 + 𝑦 2𝐷 + ⋯ + 𝑦 𝑛𝐷 . The total indirect income is 𝑌 𝐾𝐼 = 𝑦 1𝐼 + 𝑦 2𝐼 + ⋯ + 𝑦 𝑛𝐼 . The total capital income from all assets is 𝑌 𝐾 = 𝑌 𝐾𝐷 + 𝑌 𝐾𝐼 . For simplicity, let us assume 𝑦 𝑖𝐷 > 0 for all 𝑖 = 1, … , 𝑛, so 𝑌 𝐾𝐷 > 0. The available evidence suggests that the riskier an asset is, the higher the rate of return on it, and vice versa. For instance, stocks and shares are riskier assets than commercial properties, which are in turn riskier than fixed-term bank deposits. The rate of returns follows the same order even after factoring in applicable taxes. On average, rates of return on stocks and shares are higher than those on property investments, which are typically more rewarding than fixed-term deposits. 7 Without loss of generality, let us assume that the riskiness of the asset increases with index 𝑖 = 1, … , 𝑛; that is, asset 𝑘 is riskier (more volatile) than asset 𝑗, if 𝑗 < 𝑘. The higher-risk-higher-returns relationship implies that 𝑟 𝑖 increases with the index 𝑖. Formally, if 𝑗 < 𝑘 then 𝑟 𝑗 < 𝑟 𝑘 . As to the relationship between 𝑊 and 𝐴, we assume that the ratio 𝑊/𝐴 increases with 𝐴. Thus, the higher the worth of the assets is, the smaller the liability will be as a ratio of assets. Now, consider two individuals at wealth levels 𝑊 and 𝑊 ̂ with the corresponding asset levels 𝐴 and 𝐴 ̂ respectively. 𝐴 and 𝐴 ̂ may or may not be equal. Suppose the individual at wealth 𝑊 has asset allocation 𝐴 1 , 𝐴 2 , … , 𝐴 𝑛 , and the individual at wealth 𝑊 ̂ has allocation 𝐴 ̂1, 𝐴 ̂2, … , 𝐴 ̂𝑛. The share of the 𝑖th asset in the first portfolio is 𝑠 𝑖 = 𝐴 𝑖 /𝐴, and in the second portfolio it is 𝑠̂𝑖 = 𝐴 ̂𝑖/𝐴 ̂. We say that the second portfolio is riskier than the first if, for every 𝑘 = 1, … , 𝑛,
∑_{𝑖=𝑘}^{𝑛} 𝑠̂𝑖 ≥ ∑_{𝑖=𝑘}^{𝑛} 𝑠 𝑖 (2.1)
The above inequality is strict for at least some 𝑘. Simply put, portfolio (𝐴 ̂1, 𝐴 ̂2, … , 𝐴 ̂𝑛) is riskier than (𝐴 1 , 𝐴 2 , … , 𝐴 𝑛 ), if the former assigns a larger share of investment to the risky assets. Several studies, including this one, show that the wealthier an individual is, the larger is the share of risky assets in their portfolio and vice versa. 8 Accordingly, we assume that individuals exhibit increasing appetite for risky assets as their wealth grows. Specifically, assume that for wealth levels 𝑊 and 𝑊 ̂, whenever 𝑊 < 𝑊 ̂ the relationship in (2.1) holds. 9 This assumption and that 𝑟 𝑖 is increasing in 𝑖, leads to the following inference:
𝑊 ̂ > 𝑊 ⇒ ∑_{𝑖=1}^{𝑛} 𝑟 𝑖 𝑠̂𝑖 > ∑_{𝑖=1}^{𝑛} 𝑟 𝑖 𝑠 𝑖 (2.2)
In other words, the weighted rate of returns increases with wealth. In addition to the effect of decreasing risk aversion on portfolio choices, and the scale effects, the average rate of returns increases with 𝑊 on account of several other factors. For instance, investment opportunities expand with wealth. The wealthy are better at spotting investment opportunities and can even afford to hire financial advisors to earn higher returns on their investment(s), especially from equities, bonds, and other financial assets. Moreover, wealthy individuals have more bargaining power vis-à-vis the lenders. Thus, their relative cost of borrowing -and hence their burden of debt servicing -is relatively low. This, in turn, implies increasing returns to wealth. On all of these counts too, capital income is expected to be an increasing and convex function of wealth.
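A quick numerical check of the implication in (2.2) may help. The asset classes, portfolio shares, and rates of return below are purely illustrative (they are not estimates from the paper); the point is only that when the wealthier portfolio shifts weight toward higher-𝑖 (riskier, higher-return) assets in the sense of (2.1), its weighted rate of return is higher.

# Illustrative check of (2.2): portfolios that are riskier in the sense of (2.1)
# earn a higher weighted rate of return when r_i is increasing in i. Numbers are made up.
r = [0.07, 0.09, 0.12]               # total rates of return, increasing with riskiness i
s_low_wealth  = [0.60, 0.30, 0.10]   # portfolio shares of the less wealthy individual
s_high_wealth = [0.30, 0.30, 0.40]   # shares of the wealthier individual (more weight on risky assets)

def weighted_return(rates, shares):
    return sum(ri * si for ri, si in zip(rates, shares))

# Condition (2.1): cumulative shares of the riskier assets are at least as large.
assert all(sum(s_high_wealth[k:]) >= sum(s_low_wealth[k:]) for k in range(3))
print(weighted_return(r, s_low_wealth))    # 0.081
print(weighted_return(r, s_high_wealth))   # 0.096 > 0.081, as implied by (2.2)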
7 See, for instance, Campbell et al. (2019), who show that larger account holders diversify more effectively and thereby earn higher average returns than holders of smaller accounts. 8 See Guiso and Paiella (2008), and Section 3 of this paper. 9 Formally, we assume individuals are risk averse with von Neumann-Morgenstern utility function 𝑢(𝑊) such that -𝑢 ′′ (𝑊)𝑊/𝑢 ′ (𝑊) is decreasing in 𝑊. For simplicity, assume that there is no discounting. Individual investors choose their asset portfolios to maximise the expected utility of the terminal wealth, including direct and indirect returns. With these assumptions, one can show that the wealthier an individual is, the greater their share of riskier assets is, and vice versa (see Cass and Stiglitz, 1972).
To be clear, the above assumptions are not a logical necessity. They are motivated by what is observed in the data examined in this study and several others. 10 Formally put, the above assumptions imply 𝑌 𝐾 ′ (𝑊) > 0 and 𝑌 𝐾 ′′ (𝑊) > 0. Of course, there can be no capital income without wealth, so 𝑌 𝐾 (0) = 0. It can be seen that 𝑌 𝐾 ′ (𝑊) > 0 and 𝑌 𝐾 ′′ (𝑊) > 0 imply:
𝜕/𝜕𝑊 (𝑌 𝐾 (𝑊)/𝑊) > 0. (2.3)
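For completeness, here is the one-line argument (written in LaTeX) behind (2.3); it uses only the assumptions already stated, namely 𝑌 𝐾 (0) = 0, 𝑌 𝐾 ′ > 0 and 𝑌 𝐾 ′′ > 0:

\[
\frac{\partial}{\partial W}\left(\frac{Y_K(W)}{W}\right)
= \frac{W\,Y_K'(W) - Y_K(W)}{W^2} > 0,
\qquad \text{since } Y_K(W) = Y_K(W) - Y_K(0) = \int_0^W Y_K'(t)\,dt < W\,Y_K'(W)
\]

by the strict convexity of $Y_K$ (i.e., $Y_K'$ is increasing).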
In summary, due to the decrease in risk aversion with wealth, and on account of the other factors discussed above, an increase in wealth leads to more than proportionate increases in total capital income. However, this logic does not extend to each of the two components of capital income: direct and indirect capital income. This is because the relationship between wealth and the rate of direct returns, on one hand, and wealth and indirect returns, on the other, is very different.
To illustrate this point, let us revisit the case of company stocks, real estate, and fixed-term deposits.
As mentioned above, risk and total returns go hand in hand. Accordingly, for any given amount of investment, on average, the total returns are the highest for stocks, followed by real-estate, which are in turn followed by instruments such as fixed-term deposits with banks. Now consider the direct income from these three assets. The direct income from company stocks is dividends. Direct returns from real estate are rents, and from fixed deposits are interest incomes. Save for some exceptions, the rate of direct returns is the highest for fixed-term deposits (upward of 7-8%), relatively low for real estate (2-4%), and the lowest for equity (1-2%). 11 In other words, there is an inverse relationship between the rate of direct returns and the riskiness of an asset. According to available evidence, the inverse relationship between the risk and the rate of direct returns holds for most assets and continues to hold even after we factor in taxes on the direct income.
In Section 6, we will examine the underlying causes behind the inverse association between riskiness and the rate of direct returns. For now, we take as given the observed relationships between risk and the direct rate of returns. Accordingly, we assume that 𝑟 𝑖𝐷 , the rate of direct return on asset 𝑖 (𝑟 𝑖𝐷 = 𝑦 𝑖𝐷 /𝐴 𝑖 ), is decreasing in 𝑖, as the latter is an index of the asset's riskiness. It can be seen that when 𝑟 𝑖𝐷 is decreasing in 𝑖 and the appetite for risk is increasing in wealth (i.e., whenever 𝑊 < 𝑊 ̂, the relationship in (2.1) holds), we get the following result:
𝑊 ̂ > 𝑊 ⇒ ∑_{𝑖=1}^{𝑛} 𝑟 𝑖𝐷 𝑠̂𝑖 ≤ ∑_{𝑖=1}^{𝑛} 𝑟 𝑖𝐷 𝑠 𝑖
While the rate of direct returns decreases with the asset's riskiness, the share of risky assets increases with wealth. These two aspects of the capital income imply that the rate of direct returns, i.e., the ratio of the direct income to wealth, 𝑌 𝐾𝐷 (𝑊)/𝑊, is decreasing in 𝑊. However, from (2.3) we know that the ratio of the total capital income to wealth, 𝑌 𝐾 (𝑊)/𝑊, is increasing in 𝑊. Since 𝑌 𝐾𝐼 = 𝑌 𝐾 - 𝑌 𝐾𝐷 , the ratio 𝑌 𝐾𝐼 (𝑊)/𝑊 must therefore be increasing in 𝑊. Simply put, the rate of indirect returns is increasing in wealth. 12 Thus we have the following result.
10 For a review of the literature on the relationship between riskiness of portfolios, returns, and wealth see Bach, Calvet, and Sodini (2018), Fagereng et al. (2020) and Kopczuk and Zwick (2020).
11 Recently, interest rates have been low but fixed-term interest rates are still in the range of 7-8%. On the other hand, rental incomes tend to be between 2-4% of the property value. In contrast, the average dividend income from stocks of the top 500 private listed companies is less than 2% of the market values of these assets.
Proposition 1: The average rate of direct income, 𝑌 𝐾𝐷 (𝑊)/𝑊, decreases with wealth. The average rate of indirect income, 𝑌 𝐾𝐼 (𝑊)/𝑊, increases with wealth.
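To see Proposition 1 in numbers, consider the following stylised example in Python. The three asset classes and the direct/total return rates used here are hypothetical round numbers in the spirit of the ranges quoted above (deposits, property, equity); they are not estimates from the GE or Forbes data.

# Stylised illustration of Proposition 1. Assets are ordered by riskiness:
# deposits, property, equity. Direct return rates fall with riskiness,
# total return rates rise with it (all numbers hypothetical).
r_direct = [0.075, 0.03, 0.015]
r_total  = [0.075, 0.08, 0.12]

portfolios = {
    "modest wealth": [0.70, 0.25, 0.05],
    "high wealth":   [0.20, 0.30, 0.50],
}

for label, shares in portfolios.items():
    y_kd = sum(rd * s for rd, s in zip(r_direct, shares))   # direct capital income per unit of wealth
    y_k  = sum(rt * s for rt, s in zip(r_total, shares))    # total capital income per unit of wealth
    y_ki = y_k - y_kd                                        # indirect (unrealised) income per unit of wealth
    print(label, round(y_kd, 4), round(y_ki, 4))
# The direct rate falls (about 0.061 -> 0.032) while the indirect rate rises
# (about 0.018 -> 0.068) as the portfolio tilts toward the riskier assets.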
Next, consider the labour income, 𝑌 𝐿 . This depends on several factors such as the quality of individual health, education, work experience, and labour market conditions, among other things. Plausibly, the labour income depends on wealth -a great facilitator of access to quality healthcare and education, and hence an essential determinant of labour income. Moreover, for any given level of education, the wealthy enjoy better employment opportunities with remunerative wages, especially at low- and medium-wealth levels. A considerable body of evidence supports the positive relationship between wealth and labour market outcomes. 13 At very high wealth levels, though, the effect of wealth on wages is expected to be modest at best. Accordingly, we take it that, controlling for other factors, 𝑌 𝐿 is an increasing and concave function of 𝑊. Moreover, labour income can be positive even when the individual has no wealth at all. Formally, we assume 𝑌 𝐿 (0) > 0, 𝑌 𝐿 ′ (𝑊) > 0 and 𝑌 𝐿 ′′ (𝑊) < 0. Now, differentiating shows that the ratio 𝑌 𝐿 (𝑊)/𝑊 is decreasing in 𝑊. As both components of personal income (capital and labour income) increase with wealth, the total personal income, 𝑌, is an increasing function of 𝑊.
The question now is: What can we say about the income-wealth ratio 𝑌/𝑊? Given the above, 𝑌 𝐾 is an increasing and convex function of 𝑊 but 𝑌 𝐿 is an increasing and concave function of 𝑊. Therefore, how the ratio 𝑌/𝑊 varies with 𝑊 cannot be predicted a priori.
At the top wealth levels, however, the 𝑌/𝑊 ratio is expected to be increasing in 𝑊. For the ultra-wealthy groups, the share of 𝑌 𝐿 is relatively small; 𝑌 𝐾 accounts for most of their total income. That is, 𝑌 ≈ 𝑌 𝐾 . As the latter increases more than proportionately with wealth, total income for the wealthy is expected to follow suit. Consequently, their income-wealth ratio is also likely to increase with wealth.
Moreover, one can predict how the sum of labour income and the direct capital income, i.e., [𝑌 𝐿 + 𝑌 𝐾𝐷 ], will vary with wealth. We describe the term 𝑌 𝐿 + 𝑌 𝐾𝐷 as the direct personal income and denote it by 𝑌 𝑃𝐼𝐷 . That is, 𝑌 𝑃𝐼𝐷 = 𝑌 𝐿 + 𝑌 𝐾𝐷 .
Proposition 2: The ratio of the direct personal income to wealth decreases with wealth; that is, 𝜕/𝜕𝑊 (𝑌 𝑃𝐼𝐷 (𝑊)/𝑊) < 0.
Since 𝑌 𝐿 (𝑊)/𝑊 and 𝑌 𝐾𝐷 (𝑊)/𝑊 are both decreasing in 𝑊, so is their sum, 𝑌 𝑃𝐼𝐷 (𝑊)/𝑊.
Proposition 2 provides the basis for framing hypotheses for our empirical analysis. Note that we have not considered the realised capital gains, either as a part of the direct capital income or under the indirect income from capital. However, as is shown in Section 4, the realised capital gains are only a tiny fraction of the direct personal income. Therefore, we expect the prediction in Proposition 2 to hold both with and without factoring in realised capital gains as a part of the direct capital income.
Finally, consider the effect of portfolio churning while holding fixed wealth levels. Holding the wealth constant, we do not expect portfolio choice to exert any significant effects on labour income. However, the rate of direct and indirect returns varies across assets. Therefore, different allocations of a given amount of wealth across asset classes will result in different values of direct and indirect returns. In terms of notations used, for any given wealth 𝑊, different asset portfolios will lead to varying values of 𝑌 𝐾𝐷 , 𝑌 𝐾𝐼 , and hence 𝑌 𝐾 in general. Specifically, we make the following claim.
Proposition 3 For any given level of wealth, the ratio of direct personal income to wealth depends on the shares of various assets in the portfolio.
While concluding this section, it bears emphasising that our definition of direct capital income includes direct returns from all assets constituting the wealth. However, the above predictions will hold even if some of the assets are dropped from the definition of 𝑊 and the corresponding income is excluded from 𝑌 𝐾𝐷 . It is crucial, though, to keep the set of assets consistent between 𝑊 and 𝑌 𝐾𝐷 .
3 Data Sources and Preliminary Findings
Data Sources
We work with several data sources including ProwessIQ, the Forbes List of billionaires, data on Income Tax Returns, and annual accounts of listed companies managed by the wealthiest families in India. Below, we provide a brief description of these data sources and summary statistics relevant to our study.
General Election (GE) Data: The data is based on the sworn affidavits submitted by contestants for elections to the Lok Sabha, the lower house of Parliament of India. Specially appointed returning officers scrutinise the affidavits for accuracy, correctness, and completeness. The GE data is the only source that simultaneously provides information on both income and wealth for individuals and households. "Myneta",14 an online platform run by the Association for Democratic Reforms (ADR), offers easy access to the information contained in these affidavits in the form of digitised records. These records are the primary source of our GE dataset. We have verified the accuracy of the digital records for a small sample of randomly selected affidavits directly taken from the Election Commission of India (ECI) website. 15 Even though there have been 17 General Elections since the independence of India, only in 2011 did the ECI mandate the declaration of wealth and income for election contestants. Therefore, only affidavits filed in the last two GEs -2014 and 2019 -provide information on wealth and incomes of the candidates, their spouses, and dependents (i.e., the candidates' family or household).
Table 3.1 below describes the assets reported in the affidavits and their broad categories. The liabilities comprise all types of loans and dues owed to government agencies. Wealth is defined as the value of all assets owned minus the liabilities. In addition, each contestant must disclose the amount reported as the "total income" in the income tax return (ITR) forms filed for themselves, their spouse, and dependents. In the ITR forms, the term "total income" refers to the net taxed income calculated as the taxable income reported by the taxpayer minus the "deductions" available to them under the tax rules. In other words, this is the net income taxed in the taxpayer's "hands". By definition, the net taxed income is less than the total amount reported taxable. For example, assume a taxpayer who reports a taxable income of ₹10 lakhs but is eligible for tax deductions amounting to ₹2.6 lakhs. So, his taxed in-hand income is ₹10-2.6 lakhs (i.e., ₹7.4 lakhs). If this taxpayer were to contest an election, ₹7.4 lakhs would be the income reported in his affidavit. In other terms, the income reported in an affidavit does not include the part of the reported income that qualifies as deductions under the tax rules. We thus cannot know the total taxable income reported by the candidate to the tax department from an affidavit.
Additionally, the affidavits do not cover income declared under the category called "tax-exempt income", which includes incomes such as agricultural income. In other words, the affidavits do not provide information on the entire income reported by the candidates and their families in the ITRs. Section 4 discusses this in greater detail.
For as many as 8,501 candidates, we found income information to be entirely missing, as these candidates did not report their income. According to the Election Commission, the punishment for inaccurate and incomplete reporting includes fines, imprisonment for up to six months, and disqualification from the contest. While one cannot rule out misreporting altogether, it seems plausible to assume that most cases with missing income pertain to candidates whose families earn less than the ₹2.5 lakh threshold for filing ITRs. We dropped all households (HHs) with missing income from our analysis.
We were able to gather information on HH income for the remaining 7,596 candidates and their households. The General Elections of 2014 and 2019 each account for roughly half of these observations. The HHs from the two GEs have very similar demographic attributes such as genders, castes, educational qualifications, and professions.
Occasionally, we combine observations on wealth and income from GE 2014 and GE 2019 to present an overall picture. As they pertain to two different points in time, the GDP deflator is used to convert wealth and income levels to March 2019 prices. 16 Our main results, however, hold without merging the data from the two GEs.
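For transparency, the price adjustment mentioned above can be written out explicitly. The deflator values below are placeholders (the actual GDP deflator series comes from official national accounts); the sketch only shows how 2014 rupee amounts are expressed at March 2019 prices before the two election rounds are pooled.

# Convert GE 2014 wealth/income to March 2019 prices using a GDP deflator index.
# Index values are placeholders; replace with the official series.
DEFLATOR = {2014: 100.0, 2019: 121.0}   # hypothetical index numbers

def to_2019_prices(amount_2014: float) -> float:
    return amount_2014 * DEFLATOR[2019] / DEFLATOR[2014]

household_wealth_2014 = 5_000_000        # Rs., example figure
print(to_2019_prices(household_wealth_2014))   # 6,050,000 at 2019 prices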
Sources have been mentioned for all the tables and figures. Tables and figures without reference cited have been generated using our own computations on the relevant data. As distributions of wealth and income are both highly skewed towards large values, the density plots are not very revealing. So, in Figure 3.1 we present density plots for the distribution of the log of wealth for GE 2014 and GE 2019 separately as well as combined. 17 The average income and wealth are higher for 2019 than for 2014.
Table 3.2 shows the average HH wealth and income across wealth percentiles. As expected, average income increases with wealth. The share of riskier assets such as equity and commercial properties also increases in tandem with wealth (also see Figure 3.3 below). As can be seen in the Appendix, income in the GE data is less concentrated than wealth -a pattern that has been observed in international contexts too. 18 All Appendices are available online.
Figure 3.2 presents a comparison of the wealth distribution in our sample vis-à-vis the wealth distribution for the Indian population emerging from the All-India Debt Investment Survey (AIDIS) of 2019, at constant prices. As is evident from the plots, an average politician is wealthier than the average Indian. The elected politicians are even wealthier. In 2019, the average family wealth of Lok Sabha members was USD 998,311, whereas the average household wealth (as per the NSSO 77th round) was just USD 26,867. This is in line with what has been observed in other countries. In general, politicians are wealthier than the rest. 19 Of course, the wealth differences between politicians and non-politicians vary across countries. For instance, Indian politicians are much more affluent than the general population. The gap between the average family wealth of politicians and the average wealth of the population is 1 to 37 in India versus 1 to 4 in the case of the US, 20 and possibly even higher in China. 21
Even though the average household in the GE data is wealthier than the average Indian family, the dataset does not cover the top wealth levels. The highest wealth reported in the GE data was US$1.3 billion at 2019 prices. On the other hand, the least wealthy family in the 2019 FL had a total wealth of US$1.4 billion. To cover the entire spectrum of wealth distribution in India, we thus supplement the GE data with data from the FL. Together, the GE and FL data cover the entire range of the wealth spectrum in India: the pooled data includes HHs with negative, zero, and very little wealth as well as those at the very top of the wealth ladder. If anything, its coverage of the right tail is even better.
17 In principle, we should fit a distribution that is left truncated at Rs. 2.5 lakhs. But given the focus of this paper, we estimate log normal distributions based on the observed data alone.
18 See Dynan (2009), Piketty (2014, Chapter 12), Saez and Zucman (2016), and Chancel, Piketty, Saez, Zucman, et al. (2022).
19 For discussion, see Piketty (2014). For a comparison of politicians' wealth vis-à-vis the country averages, for Sweden see Bó et al. (2017); for the United Kingdom see Eggers and Hainmueller (2009); for the USA see Lenz and Lim (2009); for Italy see Gagliarducci, Nannicini, and Naticchioni (2008).
The Forbes List (FL) is an annual listing of the 100 wealthiest Indian families. The list comprises families of business tycoons and some CEOs at the top of the wealth pyramid. In recent years, a few promoters of start-ups and unicorns have also made it to the list. Only six of these families are headed by women. The FL provides information on family wealth. Notably, the definition of wealth used on the FL is the same as that used by us for the GE data.
The concentration of wealth in the hands of the wealthiest Indians on the FL is several orders of magnitude higher than the concentration exhibited by the individuals in the GE data. 22 In recent years, the wealth held by billionaires on the FL has come to account for an increasing fraction of the national income.
Note: a) Safe assets consist of cash, deposits in banks, financial institutions and non-banking financial companies and investments in National Savings Schemes and postal savings; b) Risky assets consist of equity, non-agricultural land, and commercial buildings; c) the two classes are not exhaustive.
20 Calculations are based on the data available at the online forum OpenSecrets for the US, and the Hurun list for China.
21 Figures for China are computed by matching the Hurun list of the wealthiest for 2018 with the NPC delegate list to identify the wealthiest members of the latter. Of course, if the fraction of hidden wealth is higher for Indian politicians than for their foreign counterparts, the actual differences will be smaller.
22 According to Karmali (2021), in 2021 the wealthiest family on the FL had a net worth of $92.7 billion and the least wealthy family's wealth was $1.94 billion.
Figure 3.3: Different assets as a % of total assets across the wealth percentiles Note: a) Safe and risky assets are as defined for Table 2.2 above, b) 'Equity', a part of risky assets, comprises bonds and debentures, shares and units in companies/mutual funds, and firm shares.
Income Tax Returns (ITR) data: For our analysis, we need to estimate the total income reported by different wealth groups. However, as mentioned above, the income disclosed in GE affidavits is only a part of the total income reported in the ITRs. We thus use statistics published by the Central Board of Direct Taxes (CBDT) and other sources to estimate the total income reported by the individuals and families in the GE dataset and those on the FL.
The CBDT statistics used by us are for the category of "individuals".23 These statistics provide information on the number of ITRs, and the average income reported under various income brackets. They cover incomes ranging from zero to more than ₹500 crore. The two types of income covered by this data are what we have described above as the taxed-in-hand income and the reported taxable income.
The statistics on the taxed-in-hand income are extracted by the CBDT from the ITRs and clubbed together for the various income groups. This data is published as tables listing the number of taxpayers and the average incomes for different income brackets. The tax data provide similar information on the taxable income reported by the taxpayers from various income groups. Within an income bracket, the difference between the two types of income arises from multiple deductions and exemptions allowed on the declared value of the taxable income. For instance, in the example cited previously, a candidate's reported taxable income was ₹10 lakhs but the net taxed-in-hand income was ₹7.4 lakhs, with the difference between the two stemming from the deductions worth ₹2.6 lakhs availed by the candidate. Overall, deductions amounting to ₹2.5-₹4.5 lakhs can be availed depending on the investment decisions of the taxpayer. Summing up:
𝑇𝐴𝑋𝐸𝐷 -𝐼𝑁 -𝐻𝐴𝑁𝐷 𝐼𝑁𝐶𝑂𝑀𝐸 = 𝑇𝐴𝑋𝐴𝐵𝐿𝐸 𝐼𝑁𝐶𝑂𝑀𝐸 -𝐷𝐸𝐷𝑈𝐶𝑇𝐼𝑂𝑁𝑆
We use the CBDT statistics to estimate the relationship between the taxed-in-hand income and the reported taxable income for various income groups. This estimated relationship, in turn, is used to compute the reported taxable income for HHs and individuals covered by our study.
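The mapping from taxed-in-hand income to reported taxable income can be estimated in several ways; a minimal version, which simply interpolates the ratio of the two bracket averages published by the CBDT, would look as follows. The bracket figures below are invented placeholders rather than the published statistics, and the actual estimation in the paper may use a richer specification.

import numpy as np

# Hypothetical CBDT-style bracket averages (Rs. lakh): taxed-in-hand vs taxable income.
taxed_in_hand_avg = np.array([2.8, 4.5, 8.0, 20.0, 90.0])
taxable_avg       = np.array([3.1, 5.3, 9.9, 24.0, 101.0])
ratio = taxable_avg / taxed_in_hand_avg

def estimate_taxable(taxed_in_hand: float) -> float:
    """Interpolate the taxable / taxed-in-hand ratio observed across brackets."""
    r = np.interp(taxed_in_hand, taxed_in_hand_avg, ratio)
    return r * taxed_in_hand

print(estimate_taxable(7.4))   # taxable income implied for a taxed-in-hand income of Rs. 7.4 lakh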
In addition, we use the CBDT statistics to estimate the top levels of the taxed-in-hand income and the top levels of taxable income reported in the ITRs, as described in the next section.
Prowess and Annual Accounts of Companies: The CBDT data does not offer any information regarding the income reported under the head "exempt income". This category includes agricultural income and dividends, among several other types of incomes. To estimate the dividend and equity income, we use details of the wealth portfolio available in GE and the dividend yield rates using "Prowess", a database of the financial performance of over 40,000 Indian companies. 24 For the top families on the FL, the equity income is computed directly from the annual accounts of their group companies. We use a similar approach to estimate other types of exempted income. Section 4 and the Appendix contain further details on this approach.
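As a rough sketch of this step, the exempt dividend income can be approximated by applying a dividend-yield rate to the equity holdings reported in the affidavit. The yield figure below is a placeholder rather than the Prowess-based estimate used in the paper, whose actual procedure works with company-level yields.

# Approximate exempt dividend income from reported equity holdings.
# The 1.5% yield is a placeholder; the paper matches dividend yields from Prowess.
def dividend_income(equity_value: float, dividend_yield: float = 0.015) -> float:
    return equity_value * dividend_yield

print(dividend_income(20_000_000))   # Rs. 300,000 on Rs. 2 crore of equity, at a 1.5% yield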
Before concluding this section, we need to address the following important question.
Is GE data representative?
The coverage of the GE data is extensive. By virtue of its structure, the data is reasonably representative of the Indian context in terms of the regional and rural-urban distribution of the population. It also includes all the leading social categories. Reservation of Lok Sabha seats for the members of Scheduled Castes (SCs) and Scheduled Tribes (STs) means that these disadvantaged sections of the society are proportionately represented in the sample. Moreover, the dataset covers a wide range of educational levels (candidates range from being illiterate to holding PhD degrees) and professions that range from landless labourers, farmers, and artisans to landlords in rural areas;
from wage earners and self-employed businesspersons from urban centres to professionals, CEOs, and promoters of big companies.
The data also exhibits several known properties of wealth and income distributions in India, such as the concentration of wealth and income in the hands of the male members of households. 25 Moreover, it is worth emphasising that we do not use the GE data to estimate wealth and income distributions. Instead, we use it to examine the relationship between the reported wealth and income levels. From this viewpoint, in addition to being the only simultaneous source of information on wealth and income, the GE data passes the 'smell' test on several counts.
For instance, shares of the financial assets and commercial property increase with the wealth level. See Figure 3.3. For the top 0.1% in the GE data, the share of equity assets in the total wealth is 83%, which is comparable to the corresponding figure for families on the FL. 26 In other words, the asset holdings of the wealthiest in the GE data resemble what can be seen from alternative data sources about the most affluent non-politician Indians. The asset holding trends exhibited in the GE data are also in line with patterns observed in many international studies on the composition of wealth at the top.
As we will see in Section 4, the income-wealth ratio emanating from the GE data is very high for low-wealth groups but takes relatively small values for the wealthiest groups. Moreover, the income-wealth ratios observed in the GE data are decreasing in wealth (see Sections 4 and 5 below). These findings are consistent with whatever evidence is available on this issue in the context of other countries. 27 Moreover, the decreasing trend observed in the GE data squares well with what can be inferred from other independent data sources such as the FL and ITR data put together.
If the decreasing trend in the income-wealth ratios emanating from the GE data hold for the entire population, we would expect the ratios to be the lowest for families on the FL. In particular, the income-wealth ratio for these families should be smaller than the wealthiest in the GE data, as the wealth owned by the former groups is relatively large. This is precisely what we find. The average ratios for FL families are significantly smaller than that for the wealthiest group in the GE data. In keeping with these decreasing trends, the income-wealth ratios for the top 10 families on the FL are smaller than the rest of the list. The income-wealth relationship in the GE data is also consistent with the evidence available from media reports. Periodic media reports suggest that the wealthiest Indians are not among the top income tax-paying individuals. As discussed earlier, the list of top income tax filers is dominated by movie stars, cricketers, etc., while the most affluent members of the FL do not figure on the list. This anomaly between the top wealth and the top reported income levels is also evident in the GE data. Table 3.3 shows the income ranks of the wealthiest 100 HHs and individuals in the GE data. Out of the 100 wealthiest individuals in the GE data, only 35% have reported income levels belonging to the top 100 income levels in the dataset. This partly explains why the top 10% of HH in GE data account for 80% of the total wealth, but only 66% of total taxed income. The wealthiest individuals and families are not the same as those who report the highest incomes.
Notwithstanding these properties of the GE data, there can be concerns about its representativeness for the Indian society. Many people consider the election contestants a breed different from the rest of society. Thus, technically, the concern is that the income-wealth relationships emerging from the GE data might not hold in general.
Several studies argue that politicians are subjected to greater scrutiny and thus have stronger incentives to report their finances more truthfully than the general public. 28 In India, the income reported by a non-politician is private information between them and the tax department. In contrast, the income declared by politicians in their affidavits can easily be accessed by the media and other third parties. Still, there is a perception that politicians report a relatively small share of their total income and wealth. In any case, it is possible that the relationship between reported income and wealth is different for politicians qua politicians, i.e., the reported income-wealth ratio is different for politicians just because they have political abilities.
One way to account for this possibility is to use a measure of the political ability of candidates and check if it has a bearing on the income-wealth ratio. To this end, we use vote share as a proxy for political ability of a candidate. Controlling for wealth and other correlates, we find that the vote share has a statistically significant bearing on the income-wealth ratio. However, our results presented in Section 5 are quite counterintuitive.
The Income-Wealth Ratios: The Cobra curves
In this section, we present our empirical findings on the relationship among the various types of income reported by taxpayers and their wealth. Our focus is the total reported income and the income reported as taxable. However, to present the overall picture we also consider the net taxedin-hand income and the total personal income. From Section 2 we expect the ratio of personal income to wealth to be decreasing in the latter. However, the ratio of the total reported income to wealth is a matter of empirical investigation. The same is the case with the ratio of wealth and the income reported as taxable.
For our empirical analysis, we include all assets including consumer durables and jewellery as part of wealth. We decided to include durables and jewellery for two reasons. First, durables increase productivity of labour and therefore have an indirect bearing on income. Second, ownership of gold and jewellery can not only help ease credit constraints but actually help earn direct capital income. 29 Therefore, these assets are relevant for our study. Additionally, for our empirical analysis, it is difficult to get disaggregated information on the distribution of different types of assets within a family, especially for the families on the FL. To keep the definition of wealth consistent across individuals and HHs, we define wealth as the value of total assets minus the total liabilities.
Moreover, our definition of wealth is consistent with several empirical works based on the affidavits filed by election contestants. 30 Given the small share of consumer durables and jewellery, their exclusion from the definition of wealth is not expected to make much difference to the relationship between income and wealth, especially for the affluent groups for whom the share of these assets is negligible.
As to income, all Indians with taxable income report their earnings to the tax authorities under two leading categories: the income taxable in the hands of the recipient (i.e., the filer of the returns), and the income legally treated as tax exempt in the hands of the recipient. For concreteness, let:
𝑌 𝑇 denote the income taxable in the hands of the recipient, 𝑌 𝐸𝑥 denote the income treated as tax exempt in the hands of the recipient, and 𝑌 𝑅 denote the total reported income. By definition, 𝑌 𝑅 = 𝑌 𝐸𝑥 + 𝑌 𝑇.
The taxable income, 𝑌 𝑇 , is the sum of all types of income reported by a taxpayer as taxable in their hands, i.e., the income on which the recipient themselves are liable to pay tax. It includes salary and other forms of labour income, professional income, interest income, rentals, capital gains, as well as capital income from businesses and other sources not included in the exempt category. In effect, a part of 𝑌 𝑇 becomes tax free due to the various tax deductions and exemptions available to taxpayers.
We have defined the taxed-in-hand income, 𝑌 𝑇𝑑 , as that part of the 𝑌 𝑇 on which the taxpayer actually pays tax. In other words, for a tax unit (an individual or a household), 𝑌 𝑇𝑑 is equal to 𝑌 𝑇 minus the tax deductions availed by the unit. 31 The 𝑌 𝑇 and 𝑌 𝑇𝑑 do not cover several types of income reported under the head called the (tax) "exempt income" such as agriculture income, and profits from firms and partnerships, for which the recipient is not liable to pay tax. Therefore, the exempt income is a class separate from 𝑌 𝑇 and hence 𝑌 𝑇𝑑 .
Simply put, the total income reported by a taxpayer is the sum of the income reported as taxable and the income reported as tax exempt. By definition, the taxable income, 𝑌 𝑇 , is only a part of the total income reported by taxpayers in their ITRs, i.e., of 𝑌 𝑅 . The exact relationship among the different types of income in the ITRs can be expressed as: 𝑌 𝑅 = 𝑌 𝐸𝑥 + 𝑌 𝑇 = 𝑌 𝐸𝑥 + 𝑌 𝑇𝑑 + 𝑌 𝐷𝑑 , where 𝑌 𝐷𝑑 denotes the tax deductions availed by the unit.
The direct personal income, 𝑌 𝑃𝐼𝐷 , as defined in Section 2, is the sum of the labour income and the direct capital income including the imputed rent on self-occupied dwellings. There is a direct relationship between 𝑌 𝑃𝐼𝐷 and 𝑌 𝑅 . The latter includes the entire labour income and all of the direct capital income except the imputed rent on self-occupied dwellings. Formally, 𝑌 𝑃𝐼𝐷 = 𝑌 𝑅 + 𝑌 𝐻 , where 𝑌 𝐻 is the imputed rent from buildings used for self-housing. 32 Summing up, 𝑌 𝑃𝐼𝐷 > 𝑌 𝑅 > 𝑌 𝑇 > 𝑌 𝑇𝑑 . Moreover, we have the following relationship between the 𝑌 𝑃𝐼𝐷 and the other types of income examined by us: 𝑌 𝑃𝐼𝐷 = 𝑌 𝑅 + 𝑌 𝐻 = 𝑌 𝑇 + 𝑌 𝐸𝑥 + 𝑌 𝐻 = 𝑌 𝑇𝑑 + 𝑌 𝐷𝑑 + 𝑌 𝐸𝑥 + 𝑌 𝐻 .
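To make these accounting identities concrete, the short Python sketch below encodes them directly; the rupee figures are hypothetical placeholders rather than estimates from our data.

    # Minimal sketch of the income identities defined above.
    # All rupee figures are hypothetical placeholders, not data.
    def taxable_income(y_td, y_dd):
        # Y_T = Y_Td + Y_Dd (taxed-in-hand income plus deductions availed)
        return y_td + y_dd

    def total_reported_income(y_t, y_ex):
        # Y_R = Y_T + Y_Ex
        return y_t + y_ex

    def direct_personal_income(y_r, y_h):
        # Y_PID = Y_R + Y_H (adds imputed rent on self-occupied dwellings)
        return y_r + y_h

    y_td, y_dd, y_ex, y_h = 800_000, 150_000, 100_000, 60_000   # illustrative unit
    y_t = taxable_income(y_td, y_dd)
    y_r = total_reported_income(y_t, y_ex)
    y_pid = direct_personal_income(y_r, y_h)
    assert y_pid > y_r > y_t > y_td   # the ordering stated in the text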
Our estimation proceeds in the following order: we start with 𝑌 𝑇𝑑 and use it to estimate 𝑌 𝑇 , which, along with estimates of 𝑌 𝐸𝑥 , is used to estimate 𝑌 𝑅 and 𝑌 𝑃𝐼𝐷 .
We obtain exact information on 𝑌 𝑇𝑑 for individuals and HHs in the GE dataset directly from the affidavits. To estimate 𝑌 𝑇𝑑 for the FL individuals, first we use the Generalised Pareto Interpolations (GPIs) to estimate the right tail of the distribution of taxed-in-hand incomes reported to the Tax Department. 33 The GPIs are then used to precisely isolate the group averages for the top income levels: the top 10, the next 11-20, the top 100, the next 101-200, 201-300, and 301-400, and so on.
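As a stylised illustration of how bracket averages can be read off a fitted right tail, the sketch below assumes a simple Pareto tail with a constant shape parameter. The actual Generalised Pareto Interpolation method is considerably more flexible (the tail coefficient is allowed to vary across the distribution), so this is only a simplified stand-in; the scale, shape parameter and number of filers are assumed purely for illustration.

    # Simplified stand-in for tail interpolation: a constant-alpha Pareto tail.
    # The scale, shape parameter and number of filers below are assumptions.
    def pareto_quantile(p, x_m, alpha):
        """Income level below which a fraction p of tail units fall."""
        return x_m * (1.0 - p) ** (-1.0 / alpha)

    def bracket_average(p_lo, p_hi, x_m, alpha):
        """Average income of units between ranks p_lo and p_hi (0 <= p < 1)."""
        def tail_mass(p):
            # (1 - p) * E[X | X > Q(p)] for a Pareto tail with alpha > 1
            return (1.0 - p) * alpha / (alpha - 1.0) * pareto_quantile(p, x_m, alpha)
        return (tail_mass(p_lo) - tail_mass(p_hi)) / (p_hi - p_lo)

    x_m, alpha, n_filers = 1_000_000, 2.2, 60_000_000   # hypothetical tail
    top_10 = bracket_average(1 - 10 / n_filers, 1 - 1e-12, x_m, alpha)
    next_90 = bracket_average(1 - 100 / n_filers, 1 - 10 / n_filers, x_m, alpha)
    print(f"avg of top 10 filers: {top_10:,.0f}; avg of ranks 11-100: {next_90:,.0f}")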
To estimate 𝑌 𝑇 for individuals and HHs in the GE data, we use the statistical relation between 𝑌 𝑇 and 𝑌 𝑇𝑑 derived from the statistics published by the Income Tax Department. To estimate 𝑌 𝑇 for FL individuals we use the GPIs derived from the tax data. We have estimated the total reported income, 𝑌 𝑅 , by supplementing the 𝑌 𝑇 of the concerned unit (individual or household) with the estimated value of their 𝑌 𝐸𝑥 . To estimate the imputed rent based on the value of the residential property, we use average rental rates. The methodological details of the estimation process have been discussed in Appendix I.
Here, we present some plots showing the relationship between the above-discussed four versions of income reported by different wealth groups. Plots below show the entire range of the estimated 𝑌 𝑅 . For other versions of income, we depict only the point estimates. See Figure 4.1.
The income plots appear flat for the bottom 99%, even though in reality they are not. The apparent flatness is due to the relatively massive increase in income levels for the top wealth groups - the top 1% in the GE data and the FL. To better illustrate the increasing trend in income at all wealth levels, we present below plots for the 𝑝 5 -𝑝 10 to 𝑝 85 -𝑝 90 groups of households and candidates; the income-wealth relationship for the wealthiest members shows similar patterns. As can be seen from Figure 4.2, average income increases with wealth. As explained earlier, at low and medium income levels, deductions and exemptions are a significant proportion of 𝑌 𝑇 , whereas they are a small fraction of the taxable income of the richest taxpayers. Therefore, for the low- and middle-wealth groups, 𝑌 𝑇𝑑 is expected to be significantly smaller than 𝑌 𝑇 , while for the wealthy the two should be approximately equal. Expectedly, the difference between 𝑌 𝑇 and 𝑌 𝑇𝑑 decreases with wealth, but the relative difference between 𝑌 𝑇 and 𝑌 𝑅 increases with wealth. It is instructive to note that for the wealthy and super-wealthy groups, our estimates of the total reported income, 𝑌 𝑅 , are significantly higher than their taxable income, 𝑌 𝑇 . In other words, a significant part of the income reported by the wealthy groups falls under the category of tax-exempt income. For the top 1% of HHs in the GE data, the estimated 𝑌 𝑅 is about 112% of their 𝑌 𝑇 ; for the FL, 𝑌 𝑅 is nearly 150% of the 𝑌 𝑇 . The relationship between 𝑌 𝑇 and 𝑌 𝑅 is also very similar for individuals.
By contrast, the relationship between 𝑌 𝑃𝐼𝐷 (which includes the imputed rent) and 𝑌 𝑅 does not follow a consistent pattern. As wealth increases, the imputed rent initially increases relative to 𝑌 𝑅 ; it reaches its peak for the wealth groups at p90-p95 and decreases continuously thereafter. The initial increase in 𝑌 𝑃𝐼𝐷 vis-à-vis 𝑌 𝑅 reflects the dominance of residential property in the asset portfolio of the middle-wealth groups. At very high wealth levels, the share of residential property is comparatively tiny. As shown in Plots B and C in Figure 4.3, a similar relationship holds between 𝑌 𝑃𝐼𝐷 and 𝑌 𝑅 for individuals.
For all categories of income, the average income reported by the wealthiest Indians is significantly higher than that reported by other groups. However, the trends are different when we compare incomes relative to wealth: the HH income-wealth ratio (the HH income expressed as a percentage of the HH wealth) decreases with family wealth. This decreasing trend persists for all categories of income: 𝑌 𝑇𝑑 , 𝑌 𝑇 , 𝑌 𝑅 and 𝑌 𝑃𝐼𝐷 . The income-wealth ratios for individuals exhibit similar patterns: the wealthier an individual, the smaller the reported income relative to wealth.
To give a sense of the magnitude of reported income relative to wealth, we first present the income-wealth ratios for various groups. To this end, we employ two approaches. Under Approach 1, we first compute the income-wealth ratio at the unit level, i.e., for each HH and individual separately, and then average these ratios within the group. For example, assume there are two individuals in a wealth group. Let the income and wealth reported by the first individual be 100 and 150 respectively, and the corresponding figures for the second individual be 200 and 250. Under Approach 1, the group ratio is the average of the individual income-wealth ratios, i.e., (100/150 + 200/250)/2 ≈ 0.73. Under Approach 2, on the other hand, the income-wealth ratio is computed for different wealth groups, say for the wealthiest 1%, the bottom 5%, etc., as the total income of all units in the group divided by their total wealth. In the above example, the income-wealth ratio is (100 + 200)/(150 + 250) = 300/400 = 0.75. It is easy to see that, with a large enough set of individuals, the two approaches are expected to lead to very similar results.
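The two averaging approaches can be stated compactly in code; the sketch below simply reproduces the two-person example from the text.

    # Two ways of computing a group's income-wealth ratio.
    incomes = [100, 200]
    wealths = [150, 250]

    # Approach 1: average of unit-level ratios
    ratio_1 = sum(y / w for y, w in zip(incomes, wealths)) / len(incomes)

    # Approach 2: ratio of group totals
    ratio_2 = sum(incomes) / sum(wealths)

    print(round(ratio_1, 2), round(ratio_2, 2))   # approximately 0.73 and 0.75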
Given the similarity of the ratios emerging from the two approaches, and to optimise space, all tables in this section other than Table 4.1 are generated using Approach 1, while all plots presented hereafter are based on Approach 2; together, they provide a sense of the numbers produced by both approaches.
In the rest of this section, we show that the income-wealth ratios are decreasing in wealth for all three versions of the reported income, regardless of the method used to compute the ratio. The downward trend is very pronounced at the super-high and ultra-high wealth levels, even if we use the most generous estimates of the income reported by the wealthy groups.
4.1 The Taxable Income-Wealth Ratio (𝐘 𝐓 /𝑾)
We now present the reported taxable income, 𝑌 𝑇 , as a proportion of wealth. Group-level income-wealth ratios are presented in Table 4.1. While reading the tables, it helps to remember that the wealth percentiles have been calculated separately for households, candidates, and wealthiest members. Furthermore, we have dropped individuals with zero income or wealth from the analysis, which makes the number of individuals (candidates and wealthiest members) smaller than the number of HHs. For the FL, the candidates' income-wealth ratios are not relevant. The negative income-wealth ratios under Approach 1 are due to negative aggregate wealth, i.e., aggregate liabilities exceeding assets. We have therefore dropped the bottom p0-p5 units from the plots to preserve the scale and visual clarity. Moreover, for the FL, our plots show income-wealth ratios corresponding to the leading scenario; the range of the estimated ratios can be seen in the relevant tables.
Table 4.1 shows the 𝑌 𝑇 /𝑊 ratios for individuals and households across the wealth spectrum. The ratios in columns 2 and 3 of the table are based on the first approach for computing the income-wealth ratio; ratios in columns 4 and 5 are based on Approach 2. As can be seen from the plots below, the income-wealth ratios for the wealthiest members are very similar to the ones for HHs and candidates. To avoid clutter, we have dropped the ratios for the wealthiest members of HHs from the tables; the Appendix has all the details. Note: (a) Units in the top 1% (p99-p100) and 0.1% (p99.90-p100) are a subset of observations in the top 5% (p95-p100). The second approach is used for the FL; (b) In several instances, while the family wealth and income are positive, candidates have reported zero income or wealth. The two approaches produce noticeably different ratios only for the bottom 5% of HHs and individuals. Income heterogeneity is relatively much higher for the low-wealth groups: the HH wealth levels in these groups remain low, but the family income can vary significantly. When the HH wealth is negligible, the income-wealth ratio can jump violently depending on the income. This explains the very high ratio generated by the second method.
Overall, the two methods used to compute the income-wealth ratios produce consistent downward trends. Under both methods, the income-wealth ratio decreases with wealth: on average, the taxable income reported by a HH as a proportion of its wealth falls continuously with household wealth. In other words, the wealthier a HH, the smaller the taxable income it reports relative to its wealth.
The magnitudes of the ratios produced by the two approaches are also comparable. The taxable income reported by low-wealth households in the bottom 5-10 percentiles is more than 170%, i.e., more than 1.7 times their wealth. In contrast, for the top 5% HHs, the ratio reduces to merely 3.2%, i.e., the reported taxable income amounts to only 3.2% of their wealth. The ratio drops to less than 2% for the wealthiest 10% of the top percentile of the HHs in the GE data.
In line with these overall trends, the FL families report the smallest taxable income relative to their wealth. For the wealthiest 100 families on the FL, the estimated ratio is in the range of 0.4-0.6%, i.e., the reported taxable income is at most 0.6% of their wealth. 35 For the wealthiest 10 Indian families, the reported taxable income is at most 0.4% of their wealth.
The income-wealth ratios for individuals (wealthiest members of households and the candidates) follow very similar patterns. On average, the more affluent the individual is, the smaller the reported value of their taxable income tends to be.
4.2 The Total Reported Income-Wealth Ratio (𝒀 𝑹 /𝑾)
Now we consider the total reported income as a ratio of the reported wealth. As discussed above, the total reported income, 𝑌 𝑅 , is the sum of taxable income reported to tax authorities plus the income declared as "exempt income". In terms of notations, 𝑌 𝑅 = 𝑌 𝑇 + 𝑌 𝐸𝑥 . The latter category includes agricultural income and a part of capital income in the form of dividends and profits from firms and partnerships. As detailed in the Appendix, we have estimated capital income using the value of the underlying assets and their rates of returns. For instance, the dividend income of a household is estimated as the value of stocks owned multiplied by the average dividend yield rate for the top 100 private listed companies; rental income from non-agricultural properties is calculated as the value of the property times the average rents (as a proportion of the property value), and so on. Based on these estimates of the capital income and the reporting rules under the tax law, we have estimated the capital income reported as tax exempt.
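Mechanically, this estimation amounts to multiplying each asset class by an average rate of return. The sketch below illustrates the idea; the yield rates shown are indicative placeholders (the 0.85% dividend yield and the 2% farm-income rate are quoted elsewhere in the paper, while the rental yield here is assumed), and the exact rates we use are documented in the Appendix.

    # Illustrative estimation of direct capital income from reported asset values.
    # Yield rates are placeholders, not the exact rates used in our estimation.
    YIELD_RATES = {
        "equity": 0.0085,        # average dividend yield, top 100 listed companies
        "agri_land": 0.02,       # leading scenario for farm income
        "comm_property": 0.03,   # assumed average rental yield
    }

    def estimated_capital_income(assets):
        """assets: dict mapping asset class -> reported value (in rupees)."""
        return sum(value * YIELD_RATES.get(kind, 0.0) for kind, value in assets.items())

    household = {"equity": 5_000_000, "agri_land": 2_000_000, "comm_property": 3_000_000}
    print(f"estimated direct capital income: Rs {estimated_capital_income(household):,.0f}")

Of these components, rental income is taxable in the recipient's hands and so is already part of 𝑌 𝑇 ; the equity and farm components feed into the exempt category, as discussed in the next paragraph.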
It is relatively easy to estimate the total income reported by the bottom 95% of HHs and individuals in the GE data. From Section 4.1, we already have a precise estimate of the taxable income, 𝑌 𝑇 , reported by these groups. To compute their total reported income, we only need to estimate their tax-exempt capital income. Rental income is taxable in the hands of the recipient and therefore already included in 𝑌 𝑇 , as are profits from self-owned enterprises. Of the other forms of capital income, agricultural income is the dominant form reported as exempt by the bottom 95% of HHs and individuals. Their entire equity income (from partnership firms and companies) also qualifies as tax exempt: all profits from partnerships are tax exempt in the hands of the recipient, and their dividend income falls well below the ₹10 lakh threshold for the taxability of dividends.
Specifically, for the bottom 0-95 percentile of HHs and individuals, 𝑌 𝐸𝑥 = 𝑌 𝐸𝑞 + 𝑌 𝐴 . Therefore, their total reported income is the sum of the taxable income, the entire equity income, and the agricultural income, i.e., 𝑌 𝑅 = 𝑌 𝑇 + 𝑌 𝐸𝑞 + 𝑌 𝐴 . In the absence of any direct source of information, profit rates from the equity in partnerships are taken to be the same as the dividend yield rates. Simply put, for all units in the GE data, the entire equity income, 𝑌 𝐸𝑞 , is estimated as the value of equity multiplied by the average dividend yield rates for the top 100 private listed companies. In view of the evidence presented in Section 2 on returns from various assets, we take the agricultural income to be 0.08-4% of the land value. Accordingly, we estimate farm income corresponding to the three leading rates: 2% being the most plausible case, 4% the absolute upper bound, and 0.08% the absolute lower bound on agriculture income.
It is challenging to estimate the exempt income reported by the wealthiest groups (say, the top 5% of HHs and individuals in the GE data and those on the FL). As elaborated upon in Appendix I, these groups receive a major share of their capital income in the name of financial intermediaries such as limited liability partnerships (LLPs), associations of persons (AOPs), and bodies of individuals (BOIs). To the extent that the income received in the intermediaries' accounts is distributed to the partners as partnership shares, it must be reported in the ITRs under the category of exempt income. However, the part of the income that is retained in the accounts of the intermediaries does not get reported at all in the partners' ITRs.
The point is that there is uncertainty about the fraction of the direct capital income reported by super-wealthy groups, and also the capital gains realised by them. In terms of notations, we are not sure about the proportions of 𝑌 𝐸𝑞 , 𝑌 𝐴 , 𝑌 𝑃 , and 𝑌 𝐶𝑔 received by the wealthy groups in their own name and in the account of financial intermediaries used by them. Given the high degree of uncertainty, we consider a range of possibilities around the reporting of direct capital income by these wealthy groups. This range includes scenarios where most of their 𝑌 𝐸𝑞 , 𝑌 𝐴 , 𝑌 𝑃 , and 𝑌 𝐶𝑔 get reported in the ITRs. It also includes the case where most of these capital incomes remain un-reported in the tax returns.
Going by the evidence discussed in Appendix I, the most plausible assumption about the reporting of capital income by the wealthiest groups (the top 5% of HHs and individuals in the GE data, and the FL) can be summarised as follows:
One-fourth of the total [𝑌 𝐸𝑞 + 𝑌 𝐴 + 𝑌 𝑃 + 𝑌 𝐶𝑔 ] (the sum of direct capital income received in individual accounts or in the accounts of financial intermediaries) is reported as exempt income.
Under this assumption the total reported income for the household and individuals in the top 5% of the GE data and the FL is estimated as: 𝑌 𝑇 + 0.25[𝑌 𝐸𝑞 + 𝑌 𝐴 + 𝑌 𝑃 + 𝑌 𝐶𝑔 ].
As noted earlier, this estimate of the wealthy groups' total reported income appears to be the most plausible. However, given the uncertainty over the income reporting behaviour of these wealthy groups and for the sake of completeness, we also estimate what we consider as the absolute upper and the absolute lower bounds on the total income reported by the wealthiest 5% of units in GE data and the FL.
Absolute lower-bound estimates are derived from the following assumption: only 5% of the sum [𝑌 𝐸𝑞 + 𝑌 𝐴 + 𝑌 𝑃 + 𝑌 𝐶𝑔 ] is reported as exempt income, so the lower bound on the total reported income is 𝑌 𝑇 + 0.05[𝑌 𝐸𝑞 + 𝑌 𝐴 + 𝑌 𝑃 + 𝑌 𝐶𝑔 ]. Absolute upper-bound estimates assume instead that 95% of the gross capital income (whether received in individual accounts or through intermediaries) is reported as exempt income, so the total reported income of the wealthiest groups is estimated as 𝑌 𝑇 + 0.95[𝑌 𝐸𝑞 + 𝑌 𝐴 + 𝑌 𝑃 + 𝑌 𝐶𝑔 ].
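Translated into a computation, the three scenarios differ only in the assumed share of gross direct capital income that shows up in the ITRs; the sketch below makes this explicit with hypothetical income components.

    # Scenario estimates of the total reported income for a wealthy unit.
    # y_t and the capital components (Y_Eq, Y_A, Y_P, Y_Cg) are hypothetical.
    def reported_income(y_t, capital_components, reported_share):
        return y_t + reported_share * sum(capital_components)

    y_t = 50.0                             # taxable income, Rs crore (hypothetical)
    capital = [400.0, 5.0, 20.0, 300.0]    # Y_Eq, Y_A, Y_P, Y_Cg (hypothetical)

    lower   = reported_income(y_t, capital, 0.05)   # absolute lower bound
    leading = reported_income(y_t, capital, 0.25)   # most plausible scenario
    upper   = reported_income(y_t, capital, 0.95)   # absolute upper bound
    print(lower, leading, upper)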
Our absolute upper bound is very likely an overestimate of the reported exempt income, and hence of the total income reported by the wealthy groups, on at least two counts. First, the capital income itself is overestimated due to the unrealistically high assumption about the rental income from all types of land and commercial properties (assumed to be 4% of the property value). Second, the assumed share of the capital income reported as exempt (95% of the total) is much above what is supported by the available evidence and common sense. The upper-bound scenario assumes that the wealthiest groups transfer almost all of the income received in intermediaries' accounts to their individual accounts. If they did so, it would defeat the very purpose of using the financial intermediaries; besides, it would increase their tax obligation compared to a situation where the capital income is received directly in individual accounts. The plots below also show the range generated by the lower and the upper bounds on the total reported income. Even if we consider the absolute upper bound of the range presented in these tables and plots, i.e., even if we assume that the super- and ultra-wealthy report most of their capital income, their income-wealth ratios turn out to be the lowest. Within the GE data, the ratio is lowest for the top 0.1% of units, and it takes its lowest value for the wealthiest 10 families on the FL. Therefore, the uncertainty over the part of the capital income that gets reported in the ITRs does not change the fact that the wealthiest groups in the country report the lowest income in relative terms.
Overall, the total reported income as a ratio of wealth decreases continuously for the HHs as well as the individuals. As shown in Table 4.2, in our most likely scenario, the reported income is more than 187% of wealth for the HHs in the lowest decile. In contrast, for the top 1% of HHs in the GE data, the total reported income, including the labour income, amounts to just 3-4% of their wealth. For the top 0.1%, the ratio drops to less than 2%. For the wealthiest 10 families on the FL, the reported income adds up to just about half a percent of their wealth! The income-wealth ratio for individuals follows a similar pattern: the wealthier an individual, the smaller the reported income.
4.3 The Personal Income-Wealth Ratio (𝑌 𝑃𝐼𝐷 /𝑊)
The 𝑌 𝑅 /𝑊 ratios in the previous subsection give us a good sense of the total income reported by different strata relative to their wealth. However, these ratios do not serve as a basis for empirically testing Proposition 2 proved above. The reason is the inconsistency between the types of assets covered by the definition of 𝑌 𝑅 on the one hand and of 𝑊 on the other. By definition, 𝑊 comprises all assets, but 𝑌 𝑅 does not include the (imputed) income from self-occupied residential property (the imputed rent enjoyed by taxpayers is not reported in their ITRs).
Therefore, to empirically examine Proposition 2, we consider 𝑌 𝑃𝐼𝐷 (= 𝑌 𝑅 + 𝑌 𝐻 ). It includes income from labour and from all forms of assets that contribute to the direct income from wealth as defined in Section 2, including residential properties. Specifically, we examine the 𝑌 𝑃𝐼𝐷 /𝑊 ratio. 36 As explained above, 𝑌 𝑅 , and hence 𝑌 𝑃𝐼𝐷 , includes realised capital gains, but this amount is a negligible fraction of the total reported income (approximately 2.6% of 𝑌 𝑇 ). So we expect the ratio 𝑌 𝑃𝐼𝐷 /𝑊 to decrease with 𝑊. This indeed is the case, as can be seen from Table 4.3 and Figure 4.9 below.
As expected, the effect of the imputed rent on the total income is noticeable only for the middle-wealth groups, for whom residential property is a significant component of total wealth. The share of residential properties is relatively small at high wealth levels and reduces to a negligible level for those on the FL; consequently, for these groups, the income-wealth ratios with and without imputed rent are comparable. For the top 1% of HHs in the GE data, the reported personal income, including labour income and imputed rent, amounts to about 3.5% of wealth. For the wealthiest 10 families on the FL, the reported personal income is slightly above half a percent of their wealth. The income-wealth ratio for individuals follows a similar pattern.
Before concluding this section, a few remarks are in order. Our estimates of the exempt income do not include the tax-exempt long-term capital gains under Section 54 of the Indian Income Tax Act (ITA), which mainly comprise capital gains from the sale of housing property. We do not have any source of information on this form of income. However, at any point in time, such capital gains accrue to only a minuscule fraction of individuals, so their inclusion or exclusion is not expected to significantly affect the income-wealth ratios presented here. At any rate, this form of capital income can be a significant fraction of the total income only for the middle-wealth groups; for the wealthy and the super-wealthy, residential property itself is a tiny fraction of wealth holdings. This means that by not including the capital gains under Section 54, we might have slightly underestimated total income at the middle wealth levels; by implication, their inclusion would make the fall in the ratio even sharper. The other forms of tax-exempt income accrue to taxpayers rather infrequently; we expect them to be a negligible fraction of the taxable income and not to bias our results.
36 Alternatively, we could work with a refined ratio 𝑌 𝑅 /𝑊, where the value of the residential property is subtracted from 𝑊. However, this approach makes the wealth negative for a significant number of observations at low, middle and upper-middle wealth levels, generating large negative ratios even for upper-middle groups.
5. The Correlates of the Income-Wealth Ratio
In this section we attempt to identify the key determinants of the income-wealth ratios presented in the previous section. To this end, we consider all leading versions of the income discussed above, though our focus is on the total income reported to tax authorities, 𝑌 𝑅 .
Given the predictions emanating from our model in Section 2, wealth is expected to be a significant predictor of the ratio of the personal income relative to wealth. Indeed, our findings in Section 4 show that all versions of income-wealth ratios decrease with wealth. Besides, in Section 2, we discussed why capital income varies across assets. Specifically, for any given level of wealth, from Proposition 3, we know that the shares of different assets in the wealth portfolio have a bearing on the ratio of the direct personal income to wealth. Accordingly, we consider asset shares as possible determinants of the income-wealth ratios. This gives us regressors Equity, Banking, Advances, Agri_land, and Com_prop as shares of equity, bank deposits, personal advances, farmland, and commercial property respectively. These are defined in Table 5.1 below.
Summing up, the discussion in Sections 2 and 4 offers the following hypothesis for households.
H1: The income-wealth ratio is: decreasing in wealth; increasing in the share of bank deposits and personal advances; decreasing in the share of agricultural land and commercial property; decreasing in the share of equity assets; and relatively low for the general category.
The distribution of 𝑊 is skewed towards large values. As can be seen from Figure 3.1 and Figure A5.2 in Appendix I, 𝑊 and all versions of 𝑌/𝑊 follow a log-normal distribution. So, following the approach in Asher and Novosad (2019) and Fisman et al. (2019), we use 𝑙𝑜𝑔 𝑊 instead of 𝑊 as an explanatory variable. For the same reason, we use the 𝑙𝑜𝑔 of the income-wealth ratio as the dependent variable. Specifically, for the households in our datasets, including the FL families, we use the following specification to test the above hypothesis:
𝑙𝑜𝑔(𝑌 𝑖 /𝑊 𝑖 ) = 𝛼 0 + 𝛼 1 𝑙𝑜𝑔 𝑊 𝑖 + 𝛽. 𝑆 𝑖 + 𝛽 2019 𝐷 2019,𝑖 + 𝛽 𝐺 𝐷 𝑈𝑛𝑟𝑒𝑠𝑒𝑟,𝑖 + 𝜖 𝑖     (5.1)
where 𝑆 is the vector of regressors representing shares of income-yielding assets such as bank deposits, personal advances (loans), equity, agricultural land, and commercial properties. The residual category includes assets that do not yield income directly, such as gold, jewellery, durables, and properties used for housing. We use the dummy 𝐷 𝑈𝑛𝑟𝑒𝑠𝑒𝑟 for the social category as a possible explanatory variable; it takes value 0 for the SCs and STs, and 1 for the rest. The year dummy 𝐷 2019 is used to examine year fixed effects: 𝐷 2019 = 1 for the income and wealth reported in 2019, and 0 otherwise.
The above specification is estimated for different versions of the income, i.e., by taking 𝑌 𝑖 to be 𝑌 𝑇𝑑 , 𝑌 𝑇 , 𝑌 𝑅 and 𝑌 𝑃𝐼𝐷 . Results for 𝑌 𝑖 = 𝑌 𝑇𝑑 are omitted from the main text.
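For concreteness, a minimal sketch of estimating specification (5.1) by OLS with heteroscedasticity-robust standard errors is given below. The column names are illustrative and the data are simulated, so the sketch only shows the mechanics, not our actual estimates.

    # Minimal sketch of estimating specification (5.1); simulated data, illustrative names.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 500
    df = pd.DataFrame({
        "W": np.exp(rng.normal(13, 2, n)),          # toy wealth, log-normal
        "Equity": rng.uniform(0, 0.5, n),
        "Banking": rng.uniform(0, 0.4, n),
        "Advances": rng.uniform(0, 0.1, n),
        "Agri_land": rng.uniform(0, 0.4, n),
        "Com_prop": rng.uniform(0, 0.3, n),
        "D_2019": rng.integers(0, 2, n),
        "D_Unreser": rng.integers(0, 2, n),
    })
    # toy income: the ratio falls with wealth, plus noise (illustration only)
    df["Y_R"] = 1e3 * 0.5 * df["W"] ** 0.4 * np.exp(rng.normal(0, 0.5, n))

    df["log_ratio"] = np.log(df["Y_R"] / df["W"])
    df["log_W"] = np.log(df["W"])

    fit = smf.ols(
        "log_ratio ~ log_W + Equity + Banking + Advances + Agri_land + Com_prop"
        " + D_2019 + D_Unreser",
        data=df,
    ).fit(cov_type="HC1")    # heteroscedasticity-robust standard errors
    print(fit.params)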
The Empirical Strategy
A host of factors work together to generate any given level of income and wealth for individuals and households. For HHs, data limitations do not allow us to model several factors of interest, and endogeneity thus remains a serious concern.
To reduce the likelihood of omitting a variable of interest, we consider a comprehensive list of regressors for the category of individuals (candidates). We have information on several demographic and other characteristics such as the age, educational attainment, gender, and profession of candidates. 37 Thus, we examine the income-wealth ratio controlling for demographic characteristics in addition to the variables used in (5.1). As is evident from the literature, holding other factors constant, age and education have a favourable effect on income. 38 The variables used to capture these individual characteristics are described in Table 5.1 above. In addition, we factor in individuals' professions, as the frequency and size of cash-based transactions differ across professions, which in turn means that the ability to underreport income can vary across occupations. We use profession dummies to examine profession fixed effects on the reported income.
In addition to these, we examine if the reported ratios are different for candidates with a criminal history. To check this, we use the number of criminal cases against a candidate as an explanatory variable.
Given this large set of regressors for candidates, endogeneity becomes a smaller concern for the regression results presented in Tables 5.3-5.4. Still, it is conceivable that we may have omitted some relevant factors. One approach to address this concern and bolster inferences of causality would have been to use an instrumental variable (IV); however, identification of a suitable IV for our main regressors has proved to be challenging given the available data sources. In view of this limitation, we rely on additional evidence (presented in Sections 2, 3 and 4 above) to support our empirical finding that the income-wealth ratio decreases with wealth. In the next section, we will introduce more evidence to support and contextualise our inferences.
Another concern for our empirical analysis is that politicians may exhibit reporting behaviours that are very different from the rest of the population, i.e., the income-wealth ratios candidates report may differ from the rest of the population because they have political abilities. To examine if the political ability affects the reported ratios, we use a candidate's vote share (Vote) as a regressor to proxy for political ability assuming that the larger a candidate's vote share is, the greater is their political ability. Further, we examine the ratio for candidates with "extraordinary" political abilities. This set consists of winners of both General Elections and is represented by the dummy variable 𝐷 𝑊𝑖𝑛𝑛𝑒𝑟 .
Yet another concern arises on account of possible underreporting of income and wealth. As to the underreporting of wealth, its scope exists mainly for tangible assets such as land and buildings.
Several studies have established that people tend to underreport land and property values. 39 Income from these assets can be manipulated with even greater ease. For instance, it is widely believed that people use farmland to (mis)report income from non-farm sources under the disguise of tax-exempt agricultural income. Another strand of literature suggests that people underreport rental income from commercial properties. In contrast, it is relatively difficult to underreport financial assets such as bank deposits and equity, and the incomes arising from them. To capture the effect of such underreporting, we use the shares of farmland and commercial properties - the primary channels of underreporting - as control variables. Given that underreporting income is easier than underreporting wealth, we expect these shares to affect the income-wealth ratios.
Still, in the absence of suitable information, the exact extent of underreporting is hard to estimate. As a robustness check, we revisit the ratios presented in Section 4 after simply inflating the declared values of land and buildings by 25%. We find that the income-wealth ratios are still decreasing in wealth.
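The robustness check is mechanically simple: scale up the declared land and building values, recompute wealth, and re-derive the ratios. A sketch, with assumed column names:

    # Robustness check: inflate declared land and building values by 25%.
    # The column names are assumptions about how the data are organised.
    import pandas as pd

    def wealth_with_inflated_property(df: pd.DataFrame, factor: float = 1.25) -> pd.Series:
        """Adjusted wealth after scaling land/building values by `factor`."""
        property_cols = ["agri_land_value", "comm_property_value", "residential_value"]
        adjustment = (factor - 1.0) * df[property_cols].sum(axis=1)
        return df["wealth"] + adjustment

    # usage (df is assumed to also hold an 'income' column):
    # df["wealth_adj"] = wealth_with_inflated_property(df)
    # df["ratio_adj"] = df["income"] / df["wealth_adj"]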
Summing up, for individuals, in addition to Hypothesis 1, we propose the following hypothesis.
H2: The income-wealth ratio is: decreasing in the vote share; different across professions;
increasing in age and education; and falling in the degree of criminality.
To test the above hypothesis for individuals, we use the following specification:
39 See Singh (2012 and 2013).
𝑙𝑜𝑔(𝑌 𝑖 /𝑊 𝑖 ) = 𝛼 0 + 𝛼 1 𝑙𝑜𝑔 𝑊 𝑖 + 𝛽. 𝑆 𝑖 + 𝛾. 𝑋 𝑖 + 𝛾 𝑉 𝑉𝑜𝑡𝑒 𝑖 + 𝛾 𝑊 𝐷 𝑊𝑖𝑛𝑛𝑒𝑟,𝑖 + 𝛾 𝑀 𝐷 𝑀𝑎𝑙𝑒,𝑖 + 𝛽 2019 𝐷 2019,𝑖 + 𝛽 𝐺 𝐷 𝑈𝑛𝑟𝑒𝑠𝑒𝑟,𝑖 + 𝜖 𝑖     (5.2)
where 𝑋 is the vector of individual characteristics such as age, education, the number of criminal cases, and whether the candidate contested the elections as a nominee of a national party. The dummy 𝐷 𝑀𝑎𝑙𝑒 = 1 for male candidates, and = 0 otherwise. 40 As background to our empirical analysis, t-tests reveal that the average wealth levels are relatively low for the HHs and individuals belonging to the SC and ST categories. Candidates from national/state political parties are wealthier than the rest. We do not find statistically significant differences in the wealth levels of candidates across professions, gender, and election years. Further details are provided in Table A5.7 in Appendix II.
Results
We now present our econometric results. In the main text, we present results for three versions of the income discussed above, namely the taxable income (𝑌 𝑇 ), the total income reported to the tax department (𝑌 𝑅 ), and the direct personal income (𝑌 𝑃𝐼𝐷 ). Results for 𝑌 𝑇𝑑 are very similar and, to save space, are presented in Appendix II.
Tables 5.2-5.3 present ordinary least squares (OLS) estimates for HHs, with and without the FL families. Table 5.4 shows the regression results for individuals. Model 1 uses specifications (5.1) and (5.2) without fixed effects; Model 2 uses the full specification with fixed effects. The figures reported in brackets are heteroscedasticity-robust standard errors.
As can be seen from the regression tables, for households and also for individuals, all versions of the income-wealth ratio are decreasing functions of 𝑊 . As expected, the ratios are increasing in the share of bank deposits and personal advances, when other factors are held fixed.
Specifically, on average, a 1% increase in wealth is associated with approximately a 1.5% [respectively 1.6%] decrease in the 𝑌 𝑅 /𝑊 ratio for HHs [respectively individuals]. From Section 4, we know that the steepness of the fall in the ratio declines at the top wealth levels, indicating a non-linear relationship between income and wealth. A 1% increase in the equity share leads to a 0.06-0.08% increase in the 𝑌 𝑅 /𝑊 ratio reported by the HHs; the corresponding figure for individuals is 0.07%. For every 1% increase in the farmland share, there is a 0.03% [respectively 0.02%] decrease in the ratio for HHs [respectively individuals]. A 1% increase in the commercial property share is associated with a 0.03% decrease in the ratio for HHs and individuals.
On the face of it, these results seem counterintuitive. The rental yield is generally higher than the dividend yield, and, as discussed in Section 2, the rate of direct returns tends to be the lowest for equity compared to the other income-yielding assets: the ratio of dividend yields to the value of equity is less than the ratio of rental income to the property value, which in turn is lower than the ratio of interest income to the value of the corresponding instruments (e.g., bank deposits). Thus, at any given level of wealth, the direct capital income, and hence 𝑌 𝑅 /𝑊, is expected to decrease with the equity share. However, we find the coefficient of the equity share to be positive: holding constant wealth levels and other factors, the larger the share of equity, the higher the reported income. On the other hand, larger shares of farmland and commercial property are correlated with lower reported incomes, and vice versa. This result supports the widely held belief that people underreport agricultural and rental income.
40 We have also estimated models (5.1) and (5.2) with an added term (log 𝑊)². See Appendix. Under this version of the models, the coefficients of log 𝑊 and (log 𝑊)² are both significant, with positive and negative signs respectively, thereby suggesting a flattening of the decreasing pattern in the income-wealth ratios; otherwise, the results are very similar to the ones presented above. However, the effect of (log 𝑊)² on the income-wealth ratios becomes comparable to the effect of log 𝑊 only at values of 𝑊 three times the maximum wealth level observed in the data. Moreover, the values of 𝑅² do not change much. So, the term (log 𝑊)² is not included in the main model.
Most agricultural income and a significant fraction of rentals are received in cash. These incomes do not create a verifiable trail of transactions and can easily be misreported. Underreporting rental income reduces the tax burden on recipients and, in the process, pulls down the reported values of the taxable as well as the total income declared. The negative and significant coefficient of the commercial property share for all versions of 𝑌/𝑊 supports this inference. Given that the coefficient of equity is positive, it follows that the reported rental income is even smaller than the dividend income at a corresponding level of wealth. In the absence of underreporting, we would not expect this result.
The negative and statistically significant coefficient of the agricultural land share has somewhat different implications. Farm income is tax exempt, so taxpayers have very little incentive to underreport it. However, farmland may offer owners the opportunity to disguise a part of their taxable income as (non-taxable) agricultural income, thus pulling down the declared value of the taxable income and, in turn, the reported value of the total income. As can be seen from the regression results, all versions of the 𝑌/𝑊 ratio are decreasing in the farmland share. This result is supported by government audit reports showing such misuse of farmland by taxpayers to reduce their tax burden. 41 The dummy variable 𝐷 𝑈𝑛𝑟𝑒𝑠𝑒𝑟 is insignificant, indicating that the social identity of a candidate or HH does not consistently affect the reported ratios. Age, similarly, does not appear to have a significant bearing on the reported ratios. Among the other individual characteristics, educational qualification has a positive correlation with the 𝑌/𝑊 ratios, while the degree of criminality has a negative correlation: ceteris paribus, the larger the number of criminal cases, the smaller the reported income.
Our results are interesting with respect to candidates' political ability measured as the vote share.
Ceteris paribus, the larger a candidate's vote share, the higher is their reported ratio, and vice versa.
In other words, the stronger a candidate's political ability, the higher the reported income relative to wealth. To make sense of this result, we should bear in mind that media and civil society scrutiny are stricter for candidates with a serious chance of winning an election. 42 To avoid falling foul of the Election Commission and risking their chances at the hustings, these candidates have a stronger incentive to report their incomes as well as their wealth truthfully. That the ratios are increasing in the vote share also implies that media and official scrutiny have a more pronounced effect on reported incomes than on reported wealth.
This result corroborates an earlier inference that underreporting income is easier than misreporting wealth. Besides, it suggests that non-politician citizens (who presumably have little or no political ability) report smaller incomes than similarly placed politicians. This is likely to hold across wealth groups, except for those who cannot avoid the media glare and official scrutiny.
Controlling for the vote share, the income-wealth ratios for "super-politicians" -i.e., candidates who won both GEs -are not different from the rest. The positive (and significant) coefficient of the variable 𝐷 𝑝𝑎𝑟𝑡𝑦 shows that the average reported income is relatively high for candidates belonging to national-and state-level parties, even though they tend to be wealthier than the other candidates. This finding further underscores the role of scrutiny and enforced transparency on income reporting behaviours.
We find significant year, profession, and gender fixed effects. Ceteris paribus, full-time agriculturists and politicians report relatively low incomes. On average, women tend to report smaller incomes than men. This latter finding appears to be a consequence of two factors. First, at any given wealth level, labour market outcomes, including wages, are worse for women; in all, they receive less than one-fifth of the national labour income. Second, women own a larger share of non-income-yielding assets like gold and jewellery. The income-wealth ratios are thus expected to be relatively low for women.
Our findings for 𝑌 𝑇 and 𝑌 𝑃𝐼𝐷 are very similar to those for 𝑌 𝑅 .
In conclusion, we would like to add that our results are robust to the inclusion/exclusion of dummies, fixed effects, and individual characteristics. However, some of our results are sensitive to the specification of the dependent variable. As discussed above, we use the log of the income-wealth ratio as the dependent variable to avoid the effect of the skewed distribution of the variables; given the extremely high and volatile ratios for the bottom 50% of the data points, such changes in the results are not surprising.
6. The Missing Income at the Top: How much and how come?
As is clear from our findings in Sections 4 and 5, the reported income relative to the corresponding wealth decreases continuously and sharply with the latter. Incomes reported by the bottom 25% of wealth groups are several times their wealth. In contrast, incomes reported by the groups at the top of the wealth pyramid are a minuscule fraction of their wealth. The same is true for the ratio of the estimated personal income reported by the different groups vis-à-vis their wealth. The personal income relative to wealth decreases continuously until it is reduced to a negligible fraction of wealth for super-wealthy groups.
On the face of it, these findings do not seem surprising. In view of Proposition 2 and the related discussion in Sections 2 and 3, we expect the personal income-wealth ratio, and the reported income-wealth ratio, to be decreasing in wealth. Moreover, we expect the income-wealth ratio to be very high for the poor. Consider a rural landless household living off an annual wage income of ₹1.20 lakh. Assume the only asset owned by the household is a tiny house worth ₹40,000 and that it owes a debt of ₹10,000. In this case, the income is 400% of the family wealth. The ratio can be even higher for households with comparable income but less or no wealth. In contrast, consider a Forbes List family with a net equity wealth of, say, ₹10,000 crore. Assume the rate of total returns on its capital is 15% (a high rate of returns by all means), giving it a capital income of ₹1,500 crore. Even if this household earns another ₹500 crore as labour income, its cumulative income-wealth ratio will be (1,500 + 500)/10,000 = 0.2, i.e., the income will be just 20% of the family wealth. More generally, we expect the income-wealth ratio to be relatively low for wealthy groups simply because of their supersized wealth holdings.
Yet, the income-wealth ratios reported by wealthy Indians seem inexplicably low. For one, these ratios are far below the national average. In the decade spanning the two GEs studied by us (i.e., during 2010-20), the average national income was 18-20% of the average private wealth. 43 The income levels reported by the wealthy groups pertain to the same period but are much smaller by comparison. Figure 6.1 depicts the 𝑌 𝑅 /𝑊 ratios for wealthy groups compared to the average national income as a ratio of national wealth (18%). It is evident that the income-wealth ratios reported by wealthy groups are far below the national average. The total income-wealth ratio reported by the wealthiest 20% is less than a third of the national average. The estimated ratio for the wealthiest 0.1% is just 12% of the national average. For families on the FL, it is merely one-twentieth of the national average!
Figure 6.1: Ratio of total reported income to wealth, i.e., 𝑌 𝑅 /𝑊, versus the national average of 𝑌/𝑊
43 In other words, during this period, the ratio of private wealth to the national income hovered in the range of 5-5.6. See GIR (2022, page 78).
Even considering the average returns on capital, incomes reported by wealthy groups are far below the expected levels. The rate of return on capital is the capital income expressed as a percentage of the value of the capital stock. At the national level, the average rate of return on the aggregate stock of capital can be estimated as the capital share of the national income times the ratio of national income to national wealth. The larger the share of capital in the national income, or the higher the income-wealth ratio, the higher the rate of return on wealth, and vice versa. For the decade relevant to this study (2010-20), the capital share of the national income has been upward of 40%. In the same period, the national income has been in the range of 18-20% of the national wealth, most of which is private wealth. 44 Therefore, even by a conservative approach, the national average of the rate of return on private capital turns out to be at least 7.2% (= 0.4 × 0.18 × 100). Formally put, for the country as a whole, the average ratio of capital income to wealth was upward of 7.2% in the decade covered by our study. During that period, one could easily get this kind of return even from fixed deposit accounts with commercial banks; in other words, one could ensure returns comparable to the national average by liquidating one's assets and putting the proceeds in fixed deposits. The returns from mutual funds and direct equity investments were much higher.
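The back-of-the-envelope calculation above can be written out explicitly; the inputs are the capital share and the national income-wealth ratio quoted in the text.

    # National average rate of return on private capital, as computed in the text.
    capital_share = 0.40          # capital share of national income (conservative)
    income_wealth_ratio = 0.18    # national income as a share of national wealth

    avg_return_on_capital = capital_share * income_wealth_ratio
    print(f"average return on private capital: {avg_return_on_capital:.1%}")   # 7.2%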
As the rate of returns on capital is increasing in wealth, the rate of returns for the wealthy should be greater than the national average. Thus, for wealthy groups, capital income should be significantly higher than 7.2% of their wealth. Using returns from equity-oriented mutual funds as a reference point, the rate of returns on the capital owned by the top wealth groups, say, for the wealthiest 20%, should be upward of 10%. This assumption is additionally justified given the fact that during the last two decades, the average Indian growth rate has been upward of 6%, and historically, the rate of returns on capital has been several percentage points higher than the economic growth rate. The massive fortunes enjoyed by the wealthiest groups in recent years also suggest a higher than 10% rate of returns on their capital. Moreover, total income also includes labour income in addition to the capital income.
In simpler terms, even if we disregard the labour income earned by the super- and ultra-wealthy, their total income is expected to be greater than 10% of their wealth simply on account of their capital income. However, the income levels they report present a strikingly different picture.
Figure 6.2: Total reported income, 𝑌 𝑅 , as a percentage of the capital income (7.2% of wealth).
Figure 6.2 shows the reported income, 𝑌 𝑅 , as a ratio of the capital income, taken to be 7.2% of wealth.
As is evident from the figure, for the top 15% of families and individuals, the reported income is less than the return on their capital. The total income reported by the top 5% of HHs and individuals is about half their capital income, and the total income reported by the top 0.1% adds up to less than one-third of the returns on the capital owned by this group. If we assume the rate of return on the capital of the wealthy groups to be 10% (a very plausible assumption), the total income reported by the top 5% of HHs and individuals is about a third of their capital income, and the total income reported by the top 0.1% adds up to only a fifth of the returns on their capital. The FL families' total reported income is less than 10% of their capital income.
44 On the shares of national income, see FRED Economic Data (2022). The ILO (2018) put the wage share at 35.4% in 2013. For the national income-wealth ratio, see Chancel, L., Piketty, T., Saez, E., Zucman, G. et al. (2022).
In other words, even after factoring in all types of income declared by the top 0.1% of families and individuals, their reported income amounts to just one-fifth of what they earn from capital alone. Since the reported capital income is less than the reported total income, this means that the capital income reported by this group is less than 20% of the returns from their capital; by implication, at least 80% of their capital income goes unreported in the ITRs. By similar logic, more than 95% of the capital income of families on the FL goes unreported!
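The implied lower bound on unreported capital income follows directly from these ratios; the snippet below reproduces the computation for the top 0.1%, where the reported income is about one-fifth of the estimated capital income.

    # Lower bound on the share of capital income missing from the ITRs, given that
    # the reported capital income cannot exceed the total reported income.
    def unreported_capital_share(total_reported_income, capital_income):
        return max(0.0, 1.0 - total_reported_income / capital_income)

    # Top 0.1% in the GE data: reported income is roughly one-fifth of capital income.
    share = unreported_capital_share(0.2, 1.0)
    print(f"at least {share:.0%} of capital income goes unreported")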
Given the lion's share of capital income in the total income of wealthy groups, the difference between the reported and the actual income is mostly on account of the capital income that goes unreported. Therefore, the above numbers show that the share of the unreported income increases with wealth.
Furthermore, the difference between the total income reported by wealthy groups and their actual total income is more significant than what gets captured through the above figures. There are two reasons for this. First, we have quantified only the capital income of wealthy groups but their actual total income also includes labour income, and hence is greater than the capital income. Second, the rate of returns on capital owned by the wealthy groups seems to be much higher than the 10% rate assumed by us.
In view of the above-discussed large proportions of the income of wealthy groups going unreported in the individual tax accounts, we have to ask: What explains the vast proportions of the missing income at the top? The answer to this question lies in the types of assets owned by the wealthy groups, the forms of capital income received, and the reporting requirements for various kinds of capital income.
From Sections 2 and 3, we know that the wealthy groups in GE data hold most of their wealth as equity, non-agricultural land, and commercial properties. This class of assets enables owners to manipulate the split of the capital income between what is required to be reported and what can go legally unreported.
To understand this, it is helpful to bear in mind an essential consequence of the dominance of equity and commercial property in the asset portfolio of wealthy groups. It means that the capital gains, i.e., the appreciation in the market value of the assets, is a dominant form of capital income for the wealthy groups. The market value of commercial properties, stocks, and equities tend to appreciate over time, leading to the accumulation of massive capital gains for the owners. For accounting purposes, the capital gains from an asset are treated as "unrealised" unless they are exchanged or sold. 45 Under Indian tax law, only realised capital gains from a sale or a transfer of an asset are taxable. Unrealised capital gains are thus neither taxable nor required to be reported in the ITRs. This means that as long as an investment is not sold out, it is not a tax liability regardless of the quantum of appreciation in the asset's value on account of the unrealised capital gains. Even when the asset is finally sold or transferred to the next generation, the effective tax rate on the accumulated capital gains is much lower than the tax on other forms of realised income. Therefore, to reduce their tax liability, wealthy groups have a strong incentive to avoid realising capital gains. They do so by staying invested in the equity and commercial properties. Their motivation to stay invested is matched by their ability to do so.46 This is the primary reason for realised capital gains often being a tiny fraction of the capital income of the wealthy. Appendix II contains a deeper discussion of this point.
Guided by similar considerations, wealthy groups manipulate other forms of capital income. Take, for instance, direct income from equity assets in the form of dividends, i.e., profits distributed among the stockholders of a company. Profits are taxed in the accounts of the company; additionally, profits distributed as dividends are taxed either through the Dividend Distribution Tax (DDT) or as a tax liability for the recipient. Reinvested profits, on the other hand, not only do not invite any additional tax but also boost the market value of company stocks, leading to hefty capital gains for the owners. As discussed above, these capital gains remain unrealised and untaxed for the most part. Therefore, reinvested profits bring two benefits to stockholders: they reduce the tax burden while propelling the value of equity capital. Eyeing these gains, wealthy groups prefer to reinvest most of their profits in the group companies by keeping dividend pay-outs as low as possible. Wealthy individuals, such as CEOs, board members, or promoters of group companies, decide whether and how much of the profits will be distributed as dividends. Again, their incentive to reinvest profits is backed by the authority they enjoy in the hierarchy of corporate governance. Such manipulation of capital income in response to dividend taxation is an international phenomenon. 47 However, compared to several other countries, dividend pay-outs by Indian companies are meagre. 48 The average dividend yield of the top 100 private listed companies amounts to a dividend income of just 0.85% of the value of their equity assets; the dividend yield for companies controlled by the top 10 families on the FL is even smaller. Deliberately suppressed dividend yields are one of the leading reasons why the reported equity income is a small fraction of the total equity income of the wealthy groups. On top of this, a part of the dividend income might be received indirectly and thus retained in the accounts of entities like LLPs. Consequently, most of the equity income of the wealthy groups goes unreported, in the form of undistributed profits and unrealised capital gains. 49 This logic also applies to the income from non-agricultural land and commercial properties. Only rentals and farm income are required to be reported; capital gains are not. Rental yields are generally higher than dividend yields. 50 Thus, direct capital income is expected to increase with the share of commercial property in wealth. However, rental transactions often do not create a verifiable record trail and hence can be manipulated and underreported easily. As can be seen from our regression analysis, at any given level of wealth, the reported income decreases as the share of land and property increases. Our results suggest that the reported rental income is even smaller than the dividend income at a corresponding level of wealth.
As a result of the above-discussed manipulations and underreporting, only a tiny proportion of the capital income of the wealthy groups gets reported, while a more significant fraction of the returns from their capital goes missing from the tax data. The ability of wealthy groups to choose their income levels extends to their labour income. A case in point is the labour income of the wealthiest Indian, who has kept his labour income fixed at ₹15 crore annually since 2008-09. The amount includes salary, perquisites, allowances, and commission from all his business empire. 51
Before concluding this section, we find it pertinent to highlight that the relatively high income reported by the middle- and low-wealth groups does not mean that these groups report all returns from their capital. Our results in Section 5 suggest that, ceteris paribus, the reported income decreases with the share of agricultural land. Our results and the available evidence 52 suggest that people across wealth groups report a part of their taxable labour income as agricultural income to avoid paying tax.
Even then, income-wealth ratios are very high for the low- and middle-wealth groups, due to two interrelated factors. Given the small amounts of wealth held by these groups, their capital income is small relative to their labour income, so the scope for manipulating the former is limited. Besides, their wealth is smaller than their income, leading to high income-wealth ratios.
7. Two Implications of the Decreasing Income-Wealth Ratio
This section discusses two implications of the decreasing income-wealth ratio and the income missing from the top.
7.1 Tax regressivity
The Indian tax regime is considered to be progressive in that the marginal tax rate (the rate applicable on each additional unit of income) increases with the reported income. 53 However, as we have seen above, income levels reported by individuals and HHs in their ITRs are only a fraction of their total income. The difference between the declared and actual total income can be huge, especially for the high-wealth groups. This calls for a re-examination of the tax regime to see if it is progressive with respect to the total income as opposed to the reported income, which is typically used as a reference point. Moreover, as wealth is an important determinant of capital income, labour income, and social status, it is meaningful to ask: How does the tax liability of different groups compare to their wealth?
Below, we explore these issues in brief. We examine the tax liabilities of the wealthiest members of the HHs - one member from each HH, including the families on the FL. The reason for choosing the wealthiest members is that, in the ITR files, a tax unit is generally an individual; only a few families file joint returns, and we do not have the data to estimate tax liability at the household level. Tax liability for the candidates can easily be computed, but that exercise cannot cover the individuals on the FL; in any case, it yields results very similar to the ones presented here for the wealthiest members.
First, consider the tax on the income taxable in the hands of the individual receiving it, i.e., 𝑌 𝑇 . Tax liability on 𝑌 𝑇 is essentially the tax liability for 𝑌 𝑇𝑑 ; recall that, out of 𝑌 𝑇 , only 𝑌 𝑇𝑑 is taxed, and we have the precise information on the latter. The computations are done using the online calculator provided by the income tax department for the assessment year 2019-20. 54 It is assumed that the taxpayer is a male aged below 65 with a "resident" status. Moreover, the source of taxed income is taken to be salary. These assumptions mean that we have estimated the highest direct tax liability applicable to each group.
51 This amount is just 1% of the family dividend income, which, in turn, is not even half a percent of their family wealth. See https://economictimes.indiatimes.com/news/company/corporate-trends/mukeshambani-keeps-salary-capped-at-rs-15-cr-for-12th-yr-in-a-row/articleshow/76533898.cms?
52 See Compliance Audit of Union Government Department of Revenue Direct Taxes by the Comptroller and Auditor General of India (2019) on the use of agricultural land to exaggerate the reported exempt income.
53 Under the old regime, the effective marginal tax rate including the surcharge increased up to ₹1 crore. Under the new regime, the effective rates increase up to ₹5 crore. See https://cleartax.in/s/income-tax-slabs
Figure 7.1 shows the estimated tax liability as a ratio of the income reported as taxable, i.e., 𝑌 𝑇 , and as a ratio of wealth. As can be seen, the tax regime is progressive with respect to the reported taxable income, but not with respect to wealth. At the top wealth levels, the wealthier the taxpayer, the smaller the tax liability relative to wealth.
Figure 7.1: Tax liability as percentage of reported taxable income (𝑌 𝑇 ) and wealth
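For illustration, the minimal Python sketch below applies a progressive slab schedule to a hypothetical taxpayer and expresses the resulting liability as a share of taxable income and of wealth, mirroring the ratios plotted in Figure 7.1. The slab, surcharge, and cess rates hard-coded here are assumptions made for the sketch, not the official schedule; the paper's estimates come from the Income Tax Department's online calculator for AY 2019-20.

```python
# Illustrative sketch only: the rates below are assumptions, not the official
# AY 2019-20 schedule used in the paper (which relies on the Income Tax
# Department's online calculator).

def tax_liability(taxable_income):
    # Progressive slabs (assumed): 0% up to 2.5 lakh, 5% up to 5 lakh,
    # 20% up to 10 lakh, 30% above.
    slabs = [(250_000, 0.00), (500_000, 0.05), (1_000_000, 0.20), (float("inf"), 0.30)]
    tax, lower = 0.0, 0.0
    for upper, rate in slabs:
        if taxable_income > lower:
            tax += (min(taxable_income, upper) - lower) * rate
        lower = upper
    # Assumed surcharge on the tax amount: 10% if income exceeds 50 lakh,
    # 15% if it exceeds 1 crore.
    if taxable_income > 10_000_000:
        tax *= 1.15
    elif taxable_income > 5_000_000:
        tax *= 1.10
    return tax * 1.04  # assumed 4% health and education cess

# Hypothetical taxpayer: taxable income of Rs 40 lakh, wealth of Rs 5 crore.
y_t, wealth = 4_000_000, 50_000_000
t = tax_liability(y_t)
print(f"tax / taxable income = {t / y_t:.1%}")
print(f"tax / wealth         = {t / wealth:.2%}")
```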
Next, consider the total tax liability of the wealthy groups as a ratio of their capital income.
We take the capital income to be 7.2% of the wealth -an underestimation of the capital income and hence of the total income of the wealthy groups. As can be seen from Figure 7.2, there is an inverted U-shaped relationship between the tax liability and capital income. Tax paid by the wealthy groups in the 95-99 percentiles amounts to less than one-fifth of their capital income, and hence of their total income. By a similar argument, the average tax liability of the top 0.1 centiles in the GE data is less than one-tenth of their income. This is even smaller than the liability for individuals in the 80-85 percentiles. The tax liability for the super-wealthy Indians on the FL is not even 5% of their income! For the tax regime to be progressive with reference to wealth, the reported taxable income of the wealthiest 0.1% has to go up by 100%, and the income reported by the FL has to be 12 times what can be observed in the data.

Figure 7.2: Tax liability as percentage of capital income (7.2% of wealth)

We should point out that the tax liability discussed here does not factor in all of the tax paid on the income received by an individual. The reason is that the income reported as taxable in ITRs, i.e., 𝑌 𝑇 , does not include all types of individual income subjected to taxation. It leaves out the individual income not taxed in the hands of the recipient, such as dividend income amounting to less than ₹10 lakhs. An examination of the progressivity of the tax regime with respect to the total income requires an estimation of the total tax liability for various income groups. It is a complex exercise and requires a separate study.

54 Income tax calculations are done at https://www.incometaxindia.gov.in/pages/tools/income-taxcalculator.aspx. Computations are based on the tax rate applicable for an adult resident Indian.
However, the ratios presented in Figure 7.2, along with the fact that most of the income of the wealthiest groups is capital income, provide persuasive evidence that the tax liability as a ratio of the total income decreases with wealth at the right tail of the distribution. Since average total income is increasing in wealth, the tax liability as a ratio of total income is also decreasing in total income, making the effective tax regime regressive, at least at the top.
Even if we go by a conservative estimate of the capital income in Figure 7.2, for the tax regime to be progressive, the taxable income reported by the wealthiest 0.1% would have to go up by at least 60%, and the income reported by the FL would have to be at least four times what can be observed in the data.
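As a back-of-the-envelope illustration of the ratios behind Figure 7.2, the short snippet below imputes capital income at the assumed 7.2% yield for a hypothetical wealth level and compares it with a hypothetical tax payment; both figures are placeholders rather than estimates from the data.

```python
# Placeholder numbers, for illustration only.
wealth = 1_000_000_000            # Rs 100 crore of wealth (hypothetical)
capital_income = 0.072 * wealth   # capital income imputed at the assumed 7.2% yield
tax_paid = 12_000_000             # hypothetical tax paid on the reported income

print(f"tax / capital income = {tax_paid / capital_income:.1%}")  # about 17%
print(f"tax / wealth         = {tax_paid / wealth:.2%}")          # 1.20%
```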
Underestimated Inequality
Our results highlight two serious issues with the existing estimates of income inequality in India. As discussed in the introduction to this study, most existing estimates of income inequality rely on taxable income reported in the ITRs, i.e., 𝑌 𝑇 , in terms of our notations.
We have shown that 𝑌 𝑇 is less than the total income reported by the taxpayers in their ITRs, i.e., 𝑌 𝑅 .
We have also shown that the difference between 𝑌 𝑇 and 𝑌 𝑅 increases with wealth and also income levels. Further, as is evident from Figure 4.3, for the rich and the super-rich, 𝑌 𝑇 is quite small compared to the total income reported by these groups in their ITRs. For instance, for the top 10% of candidates in the GE data, the total reported income is 10-11% larger than their 𝑌 𝑇 . For families on the FL, 𝑌 𝑅 is 60-70% larger than their 𝑌 𝑇 . Given these findings, it is clear that existing studies on income inequality have missed accounting for a substantial part of the income reported by the wealthy groups in their ITRs.
On top of this, as discussed in Section 6, the total reported income, 𝑌 𝑅 , itself is a small fraction of the total income of the rich and the super-rich groups. Our study points to a staggering level of difference between the income metrics that feed into existing studies on inequality and the actual income of the most prosperous Indians. According to our estimates, the total income reported by the wealthiest 5% of individuals and households is less than a third of their capital income. It is an even smaller fraction of their total income. The income reported by the top 0.1 centiles adds up to less than one-tenth of their actual total income. For the individuals and families on the FL, the total reported income is not even 5% of their total income. By capturing only a small fraction of the total income at the top, these existing studies have underestimated the levels of income inequality in the country. 55

The second issue pertains to identifying financially elite groups using income tax data. The (income) richest groups are commonly also considered the wealthiest. Our study shows that the top income earners identified by the income tax data are not necessarily the wealthiest; neither are the wealthiest the highest income reporters. As shown in Section 3, most of the 100 (income) richest individuals in India do not feature among the wealthiest 100 individuals, and vice versa.
Conclusions and Remarks
In this paper, we have modelled and estimated the relationship between wealth and reported income for more than 7,600 families and their adult members. We have used several data sources, including affidavits filed by election contestants, the ProwessIQ dataset, the Forbes List of billionaires, the annual statistics published by the Income Tax Department, as well as the annual accounts of listed companies managed by the wealthiest Indian families. Our analysis shows that the wealthier a household, the lower the income it reports relative to its wealth. Formally, the reported income-wealth ratio decreases with family wealth.
This decreasing trend persists whether we consider the income reported as taxable by households or the total income declared by them, including the income reported as tax-exempt. These decreasing income-wealth ratios are consistent with what our model predicts. However, the magnitude of the ratios is strikingly small, particularly for the affluent groups.
According to our estimates, the average income reported as taxable by the bottom 10% of households is equivalent to more than 170% of the family wealth. In contrast, for the top 5% of HHs, the reported taxable income amounts to less than 4% of their wealth. For the top 0.1 percentile of HHs, the reported taxable income is less than 2% of their family wealth. For the wealthiest ten families on the FL, the reported taxable income is less than 0.6% of their wealth.
The results are very similar for the total income reported to tax authorities. For the bottom 10% of families, the total reported income amounts to more than 188% of the family wealth. In contrast, for the top 0.1%, this ratio drops to about 2%. For the top 100 families on the FL, the total reported income is less than 0.6% of family wealth.
The income-wealth ratios for individuals also exhibit very similar patterns -the wealthier an individual, the smaller is the reported income relative to their wealth. On average, the total income reported by the bottom 10% of individuals is more than 120% of their wealth; for the wealthiest 5% of individuals, it is just about 3.7% of their wealth. For the top 0.1% of the most affluent, the total reported income is only about 2% of their wealth. The ultra-wealthy individuals on the FL report the lowest income -about 0.5% of their wealth.
We have shown that the low values of the reported income-wealth ratios for affluent groups are primarily because a large share of their total income goes unreported. The problem of missing income is most pronounced for the capital gains enjoyed by them. According to our estimates, the total income reported by the wealthiest 5% of individuals is approximately only a third of the returns from their capital. The total income reported by the top one-tenth of the top centile adds up to just about one-fifth of the returns from their capital. In other words, even after factoring in all types of declared income, their total reported income amounts to less than 20% of their capital income; at least 80% of returns from their capital go unreported. For the families on the FL, more than 90% returns from their capital do not figure in their reported income. The proportions of the missing income are much higher if we compare the reported income with the total (labour plus capital) income.
The missing income of the affluent groups underscores the case for going beyond the standard approach to assessing the progressivity of taxation. There is a case for considering the total income, and not just the reported income, for this purpose. We have shown that the tax paid by the wealthiest 5% amounts to less than one-fifth of their capital income. The average tax liability of the wealthiest 0.1 centiles is just one-tenth of the returns from their capital. The tax liability of the super-wealthy Indians on the FL is less than 5% of their capital income!
We also find profession, year, and gender effects. Moreover, our empirical analysis suggests that people across the wealth spectrum underreport their rental income and misreport part of their taxable income by disguising it as tax-free farm income. Ceteris paribus, women tend to report lower incomes than men, and full-time agriculturists and politicians report relatively low levels of income. Further, holding other factors constant, people with criminal records also report relatively low incomes. In contrast, individuals exposed to higher levels of media and civil society scrutiny report relatively high levels of income.
Moreover, we have shown that the effective tax rate is not progressive with respect to wealth. At the top wealth levels, the wealthier an individual is, the smaller their relative tax liability tends to be. Even with the most generous estimates, the tax liability of the top centile amounts to 1% of their wealth. For the top one-tenth of the top centile, the total tax liability amounts to less than 0.8% of their wealth. The super-wealthy Indians on the FL pay tax that is less than 0.2% of their wealth, much smaller than the tax liability for individuals at middle wealth levels.
These findings should be of interest beyond just the Indian context since the dynamics of capital income modelled and empirically examined by us are similar across market economies. Indeed, going by the available evidence,56 it will not be surprising if similar research in other countries leads to similar findings, with some variations, of course, in the proportions of missing incomes.
Specific to the Indian context, the missing income at the top has implications for the existing estimates of income inequality. 57 Studies on the subject typically rely on the statistics on taxable income as published by the Indian Tax Department. Our study points to a significant difference between the income levels that are fed into these studies and the actual income of the affluent Indians. By failing to capture a non-negligible fraction of the top total income levels, these studies may have severely under-estimated inequality levels in the country.
In conclusion, we note some limitations of this study. Our regression analyses show that the income-wealth ratio is increasing in the vote share of political candidates; since ordinary citizens do not command such vote shares, our results are likely to have an upward bias. In other words, for any given level of wealth, except at the very top, the income reported by an average Indian is probably smaller than what is seen in our estimates. We cannot verify whether this is the case. Moreover, due to a lack of more detailed data, our categorisation of individuals among different professions and educational qualifications is not very precise. Further, endogeneity is also a concern. A study based on a bigger database with more granular information might produce different results.
Finally, it is pertinent to discuss the implications of possible under-reporting of wealth by different wealth groups. In the absence of such information, we have revisited the income-wealth ratios presented in Section 4 by simply inflating the declared values of land and buildings by 25%. We find that the income-wealth ratios still decrease sharply and continuously with wealth, but the fall is less steep. Future studies based on objective estimates of asset values may lead to results somewhat different from ours.
Let 𝑟 𝑖 = 𝑦 𝑖 /𝐴 𝑖 denote the rate of annual returns on asset 𝑖. Now we can express 𝑌 𝐾 as 𝑌 𝐾 = ∑_{𝑖=1}^{𝑛} 𝑦 𝑖 = ∑_{𝑖=1}^{𝑛} 𝑟 𝑖 𝐴 𝑖 . Moreover, we can rewrite 𝑟 𝑖 = 𝑦 𝑖𝐷 /𝐴 𝑖 + 𝑦 𝑖𝐼 /𝐴 𝑖 = 𝑟 𝑖𝐷 + 𝑟 𝑖𝐼 , i.e., the rate of total returns is simply the sum of the rates of direct and indirect returns from the asset.
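A minimal numerical illustration of this decomposition, written in Python with made-up asset values and yields (the variable names mirror the notation above; none of the figures come from the data):

```python
# Each asset i has value A_i, direct (realised) income y_iD and indirect
# (unrealised) income y_iI, so that r_i = r_iD + r_iI and Y_K = sum_i r_i * A_i.
# All numbers below are placeholders.
assets = {
    "equity":   {"A": 100.0, "y_D": 0.8, "y_I": 7.0},  # low dividend, large capital gain
    "property": {"A": 50.0,  "y_D": 1.5, "y_I": 2.0},
    "deposits": {"A": 20.0,  "y_D": 1.2, "y_I": 0.0},
}

Y_K = 0.0
for name, a in assets.items():
    r_D, r_I = a["y_D"] / a["A"], a["y_I"] / a["A"]
    r = r_D + r_I
    Y_K += r * a["A"]  # equals y_D + y_I for this asset
    print(f"{name}: r_D = {r_D:.1%}, r_I = {r_I:.1%}, r = {r:.1%}")

print(f"total capital income Y_K = {Y_K:.1f}")
```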
Figure 3.1: Wealth and income distribution of HHs for GE 2014, GE 2019, and combined. (a) Wealth (b) Income
Figure 3.2: Household wealth distribution, GE vs AIDIS
Figure 4.1: Average 𝑌 𝑇𝑑 , 𝑌 𝑇 , 𝑌 𝑅 and 𝑌 𝑃𝐼𝐷 income reported by different wealth groups (households)
Figure 4.2: Average income 𝑌 𝑇𝑑 , 𝑌 𝑇 , 𝑌 𝑅 and 𝑌 𝑃𝐼𝐷 reported across wealth groups. Plot A: Household
Figure 4.4: Household income (𝑌 𝑇 ) vs wealth scatter plots. (a) Lower Bound (b) Most Plausible (c) Upper Bound. The panels show log-log scatter plots of 𝑌 𝑇 /𝑊 ratios reported by households versus their wealth; Plots A, B and C correspond, respectively, to the most plausible, the lowest bound and the upper bound on the total taxable income reported by the FL families. Plots for individuals are very similar and omitted from presentation.
Figure 4.5: Reported taxable income as a percentage of wealth, across groups (Approach 2). Plot A: p5-p10 to the top 10 families on the FL
Figure 4.6: Household total reported income (𝑌 𝑅 ) vs. wealth scatter plots. (a) Lower Bound (b) Most Plausible (c) Upper Bound. The panels show scatter plots of the 𝑌 𝑅 /𝑊 ratios reported by households versus their wealth; Plots A, B and C correspond, respectively, to the most plausible, the lowest bound and the upper bound on the total income reported by households. Plots for individuals are very similar.
Let the individual with asset 𝐴 choose allocation (𝐴 1 , 𝐴 2 , … , 𝐴 𝑛 ), with 𝑠 𝑖 = 𝐴 𝑖 /𝐴, and let the individual with asset 𝐴̂ choose allocation (𝐴̂ 1 , 𝐴̂ 2 , … , 𝐴̂ 𝑛 ), with 𝑠̂ 𝑖 = 𝐴̂ 𝑖 /𝐴̂. We consider the asset allocation (𝐴̂ 1 , 𝐴̂ 2 , … , 𝐴̂ 𝑛 ) to be riskier relative to the allocation (𝐴 1 , 𝐴 2 , … , 𝐴 𝑛 ) if the following holds: for all 𝑘 ∈ {1, … , 𝑛},
∑_{𝑖=1}^{𝑘} 𝑠̂ 𝑖 ≤ ∑_{𝑖=1}^{𝑘} 𝑠 𝑖 .
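The cumulative-share condition above is straightforward to check mechanically. The sketch below assumes assets are indexed from safest to riskiest (an assumption about the intended ordering) and tests whether one allocation is riskier than another in this sense:

```python
# Assets are assumed to be ordered from safest (index 0) to riskiest (index n-1).
# allocation_hat is "riskier" than allocation if, for every k, the cumulative
# share it places on the k safest assets is no larger than the other's.
def is_riskier(allocation_hat, allocation):
    total_hat, total = sum(allocation_hat), sum(allocation)
    cum_hat = cum = 0.0
    for a_hat, a in zip(allocation_hat, allocation):
        cum_hat += a_hat / total_hat
        cum += a / total
        if cum_hat > cum + 1e-12:  # tolerance for floating-point noise
            return False
    return True

# Placeholder portfolios over (deposits, jewellery, property, equity):
print(is_riskier([10, 10, 30, 50], [30, 20, 30, 20]))  # True: weight shifted to risky assets
print(is_riskier([40, 20, 20, 20], [30, 20, 30, 20]))  # False
```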
Table 3.1: Categorisation of GE asset types (assets category: GE assets type)
Land: Agricultural Land + Non-Agricultural Land
Durables: Motor Vehicles + Other assets, such as values of claims/interests
Buildings: Commercial Buildings + Residential Buildings + Other Immovable Assets
Shares: Bonds, Debentures and Shares in companies and firms
Deposits: Cash + Deposits in Banks, Financial Institutions and Non-Banking Financial Companies + NSS (National Savings Schemes), Postal Savings, etc. + LIC or other Insurance Policies
Jewellery: Gold and Jewellery
Receivable: Personal loans/advances given
Table 3.2: Household wealth and income (GE data) across percentiles
Wealth Percentile | Avg. HH Wealth | Avg. HH Income | 'Safe Assets' (% of Total Assets) | 'Risky Assets' (% of Total Assets)
p0-p5 | -3,951,493 | 641,146 | 56.2 | 5.1
p5-p10 | 290,253 | 351,043 | 39.8 | 2.8
p10-p15 | 709,479 | 448,498 | 30.6 | 6.7
p15-p20 | 1,314,795 | 488,736 | 25.2 | 9.0
p20-p25 | 2,054,144 | 520,649 | 22.5 | 9.5
p25-p30 | 3,002,482 | 542,202 | 19.9 | 14.3
p30-p35 | 4,018,420 | 573,599 | 17.4 | 12.3
p35-p40 | 5,278,640 | 733,610 | 17.2 | 13.8
p40-p45 | 6,779,390 | 781,518 | 15.7 | 13.2
p45-p50 | 8,668,396 | 897,422 | 15.9 | 14.2
p50-p55 | 11,024,009 | 941,918 | 15.6 | 16.9
p55-p60 | 13,962,219 | 1,172,538 | 15.4 | 17.1
p60-p65 | 18,136,096 | 1,275,765 | 15.0 | 19.3
p65-p70 | 23,960,570 | 1,454,769 | 15.6 | 21.2
p70-p75 | 32,221,960 | 1,782,296 | 12.6 | 22.7
p75-p80 | 42,845,496 | 2,505,900 | 12.4 | 24.0
p80-p85 | 60,533,676 | 2,657,566 | 10.1 | 27.5
p85-p90 | 93,946,888 | 4,270,143 | 9.3 | 29.9
p90-p95 | 183,624,992 | 7,824,278 | 9.0 | 32.8
p95-p100 | 1,131,648,128 | 35,099,180 | 6.0 | 42.2
p99-p100 | 3,576,561,408 | 98,649,136 | 3.7 | 47.3
p99.90-p100 | 18,062,749,696 | 351,848,448 | 1.1 | 68.9
Table 3.3: Income rank of the wealthiest HHs and individuals in GE data
Income Rank | Top 100 Wealthiest Households (2014 / 2019 / Overall) | Top 100 Wealthiest Individuals (2014 / 2019 / Overall)
Top 100 | 47% / 48% / 34% | 42% / 42% / 35%
101-200 | 18% / 17% / 22% | 18% / 17% / 19%
201-300 | 8% / 7% / 10% | 8% / 8% / 6%
Greater than 300 | 27% / 28% / 34% | 32% / 33% / 40%
Figure 4.3: Ratio of 𝑌 𝑇𝑑 /𝑌 𝑇 , 𝑌 𝑅 /𝑌 𝑇 and 𝑌 𝑃𝐼𝐷 /𝑌 𝑇 reported by different wealth groups. Plot A: Household; Plot B: Candidate; Plot C: Wealthiest member. Series shown: Taxed-in-Hand/Taxable, Total Reported/Taxable, Direct Personal/Taxable.
Table 4.1: Reported taxable income-wealth ratio across wealth groups, (𝑌 𝑇 /𝑊) × 100
Absolute Upper-bound estimates are derived from the following assumption: The wealthiest groups report 95% of the sum [𝑌 𝐸𝑞 + 𝑌 𝐴 + 𝑌 𝑃 + 𝑌 𝐶𝑔 ] as exempt income. That is, the total income reported is estimated as: 𝑌 𝑇 + 0.95[𝑌 𝐸𝑞 + 𝑌 𝐴 + 𝑌 𝑃 + 𝑌 𝐶𝑔 ]. Simply put, under the absolute upper bound, almost all of this income is reported as exempt income. Under the absolute lower bound, only 5% of the sum [𝑌 𝐸𝑞 + 𝑌 𝐴 + 𝑌 𝑃 + 𝑌 𝐶𝑔 ] is reported as exempt income. This means that an absolute lower bound on the total reported income is estimated as: 𝑌 𝑇 + 0.05[𝑌 𝐸𝑞 + 𝑌 𝐴 + 𝑌 𝑃 + 𝑌 𝐶𝑔 ].
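These bounds are simple linear combinations of the reported and imputed income components and can be computed directly; the figures used below are placeholders, not estimates:

```python
# Placeholder components (in Rs crore) of a wealthy group's income.
Y_T = 10.0                                              # income reported as taxable
other = {"Y_Eq": 40.0, "Y_A": 2.0, "Y_P": 3.0, "Y_Cg": 60.0}
S = sum(other.values())

upper_bound = Y_T + 0.95 * S  # almost all other income reported as exempt
lower_bound = Y_T + 0.05 * S  # almost none of it reported as exempt
print(f"upper bound on total reported income: {upper_bound:.2f}")
print(f"lower bound on total reported income: {lower_bound:.2f}")
```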
Table 4.2: Total reported income-wealth ratio, (𝑌 𝑅 /𝑊) × 100, across wealth groups (Approach 1)
Note: Range of income-wealth ratio for the FL is attributed to (a) range of values in 𝑌 𝑇 and (b) scenarios of asset bifurcation and their different yields. Most Plausible Estimates assume 𝛿 𝐴 = 2%, 𝛿 𝑃 = 2.5%; Absolute Upper Bound assumes 𝛿 𝐴 = 4%, 𝛿 𝑃 = 4%; Lower Bound assumes 𝛿 𝐴 = 0.08%, 𝛿 𝑃 = 0.08%.
Table 4.3: The ratio of personal income to wealth, (𝑌 𝑃𝐼𝐷 /𝑊) × 100 (Approach 1)
Note: Range of income-wealth ratio for the FL is attributed to (a) range of values in 𝑌 𝑇 and (b) scenarios of asset bifurcation and their different yields. Most Plausible Estimates assume 𝛿 𝐴 = 2%, 𝛿 𝑃 = 2.5%; Absolute Upper Bound assumes 𝛿 𝐴 = 4%, 𝛿 𝑃 = 4%; Lower Bound assumes 𝛿 𝐴 = 0.08%, 𝛿 𝑃 = 0.08%.
Table 5.1: Description of variables used in the regression analysis and their data sources
𝑙𝑜𝑔 𝑊: Natural log of wealth. (Source: GE Affidavit Data)
𝐵𝑎𝑛𝑘𝑖𝑛𝑔: Share of banking assets in total assets, defined as the value of "cash + deposits in bank + NSS + postal savings" divided by the value of all assets combined. (Source: GE Affidavit Data)
𝐸𝑞𝑢𝑖𝑡𝑦: Share of equity in total assets. Equity comprises bonds, debentures, and shares/stocks owned. (Source: GE Affidavit Data)
𝐴𝑑𝑣𝑎𝑛𝑐𝑒𝑠: Share of personal advances in total assets. Personal advances are private loans given out to others. (Source: GE Affidavit Data)
𝐴𝑔𝑟𝑖_𝐿𝑎𝑛𝑑: Share of agricultural land in total assets, i.e., "value of the agricultural land" as a ratio of the value of all assets. (Source: GE Affidavit Data)
𝐶𝑜𝑚_𝑃𝑟𝑜𝑝: Share of commercial property, defined as commercial building + non-agricultural land. (Source: GE Affidavit Data)
D 2019: D 2019 = 1 if W and Y are reported for the year 2019; D 2019 = 0 otherwise. (Source: GE Affidavit Data)
𝐷 𝑈𝑛𝑟𝑒𝑠𝑒𝑣: 𝐷 𝑈𝑛𝑟𝑒𝑠𝑒𝑣 = 1 if the (social) category is "General", i.e., Unreserved (UR); 𝐷 𝑈𝑛𝑟𝑒𝑠𝑒𝑣 = 0 otherwise. (Source: ECI Results Data)
𝑉𝑜𝑡𝑒: For candidate 𝑖 contesting in constituency 𝑗, 𝑉𝑜𝑡𝑒 𝑖𝑗 = (votes received by candidate 𝑖)/(votes received by winner of constituency 𝑗). (Source: ECI Results Data)
𝐶𝑟𝑖𝑚𝑖𝑛𝑎𝑙: Number of criminal cases registered against the candidate. (Source: GE Affidavit Data)
𝐸𝑑𝑢𝑐𝑎𝑡𝑖𝑜𝑛: Variable capturing the highest educational degree attained by the candidate; the higher the degree, the larger the value taken by this variable. (Source: GE Affidavit Data)
𝐷 𝑊𝑖𝑛𝑛𝑒𝑟: 𝐷 𝑊𝑖𝑛𝑛𝑒𝑟 = 1 if the candidate won both the 2014 and 2019 GE elections; 0 otherwise. (Source: ECI Results Data)
𝐷 𝑃𝑎𝑟𝑡𝑦: 𝐷 𝑃𝑎𝑟𝑡𝑦 = 1 if the candidate contested elections as a registered state or national party nominee; 0 otherwise. (Source: ECI Results Data)
Profession Agriculture: = 1 if the candidate's profession is Agriculture and allied activities. (Source: GE Affidavit Data)
Profession Politicians: = 1 if the candidate's profession is politics. (Source: GE Affidavit Data)
𝑙𝑜𝑔(𝑌 𝑖 /𝑊 𝑖 ) = 𝛼 0 + 𝛼 1 𝑙𝑜𝑔 𝑊 𝑖 + 𝛽·𝑆 𝑖 + 𝛽 2019 𝐷 2019,𝑖 + 𝛽 𝐺 𝐷 𝑈𝑛𝑟𝑒𝑠𝑒𝑟𝑣,𝑖 + 𝛽 𝑀 𝐷 𝑀,𝑖 + 𝛾·𝑋 𝑖 + 𝜖 𝑖    (5.2)
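Specification (5.2) is a standard log-linear model that can be estimated by OLS with heteroskedasticity-robust standard errors. A minimal sketch follows; the file and column names are hypothetical, and the paper's own estimation details may differ.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# df is assumed to hold one row per household/candidate with the variables of
# Table 5.1 (all column names here are hypothetical).
df = pd.read_csv("ge_affidavits.csv")
df["log_ratio"] = np.log(df["income_T"] / df["wealth"])
df["log_W"] = np.log(df["wealth"])

model = smf.ols(
    "log_ratio ~ log_W + Banking + Equity + Advances + Agri_Land + Com_Prop"
    " + D_2019 + D_Unreserved",
    data=df,
).fit(cov_type="HC1")  # robust standard errors, as in the reported tables
print(model.summary())
```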
Table 5.2: Households without the FL, income-wealth ratio
Variable | log(𝑌 𝑇 /𝑊) Model 1 | log(𝑌 𝑇 /𝑊) Model 2 | log(𝑌 𝑅 /𝑊) Model 1 | log(𝑌 𝑅 /𝑊) Model 2 | log(𝑌 𝑃𝐼𝐷 /𝑊) Model 1 | log(𝑌 𝑃𝐼𝐷 /𝑊) Model 2
𝑙𝑜𝑔 𝑊 | -0.554*** (0.007) | -0.553*** (0.007) | -0.530*** (0.007) | -0.529*** (0.007) | -0.484*** (0.007) | -0.483*** (0.007)
𝐵𝑎𝑛𝑘𝑖𝑛𝑔 | 0.009*** (0.001) | 0.009*** (0.001) | 0.009*** (0.001) | 0.009*** (0.001) | 0.007*** (0.001) | 0.007*** (0.001)
𝐸𝑞𝑢𝑖𝑡𝑦 | 0.013*** (0.002) | 0.013*** (0.002) | 0.014*** (0.001) | 0.014*** (0.001) | 0.010*** (0.001) | 0.010*** (0.001)
𝐴𝑑𝑣𝑎𝑛𝑐𝑒𝑠 | 0.012*** (0.001) | 0.012*** (0.001) | 0.012*** (0.001) | 0.012*** (0.001) | 0.009*** (0.001) | 0.008*** (0.001)
𝐴𝑔𝑟𝑖_𝐿𝑎𝑛𝑑 | -0.008*** (0.001) | -0.008*** (0.001) | -0.003*** (0) | -0.003*** (0) | -0.005*** (0) | -0.006*** (0)
𝐶𝑜𝑚_𝑃𝑟𝑜𝑝 | -0.002*** (0.001) | -0.002*** (0.001) | -0.002** (0.001) | -0.002** (0.001) | -0.004*** (0) | -0.004*** (0)
D 2019 | - | 0.104*** (0.022) | - | 0.105*** (0.02) | - | 0.100*** (0.018)
𝐷 𝑈𝑛𝑟𝑒𝑠𝑒𝑣 | - | 0.002 (0.025) | - | 0.009 (0.023) | - | 0.012 (0.021)
𝐶𝑜𝑛𝑠𝑡𝑎𝑛𝑡 | 11.331*** (0.118) | 11.268*** (0.118) | 10.938*** (0.111) | 10.874*** (0.111) | 10.440*** (0.107) | 10.378*** (0.107)
𝑅 2 | 0.655 | 0.656 | 0.668 | 0.669 | 0.683 | 0.684
𝑂𝑏𝑠𝑒𝑟𝑣𝑎𝑡𝑖𝑜𝑛𝑠 | 7433 | 7433 | 7433 | 7433 | 7433 | 7433
Note: Robust standard errors in parentheses. Significance level (p-value): *0.05 **0.01 ***0.001
Table 5.3: Households (including the FL), income-wealth ratio
Variable | log(𝑌 𝑇 /𝑊) Model 1 | log(𝑌 𝑇 /𝑊) Model 2 | log(𝑌 𝑅 /𝑊) Model 1 | log(𝑌 𝑅 /𝑊) Model 2 | log(𝑌 𝑃𝐼𝐷 /𝑊) Model 1 | log(𝑌 𝑃𝐼𝐷 /𝑊) Model 2
𝑙𝑜𝑔 𝑊 | -0.541*** (0.007) | -0.541*** (0.007) | -0.517*** (0.007) | -0.516*** (0.007) | -0.474*** (0.007) | -0.474*** (0.007)
𝐵𝑎𝑛𝑘𝑖𝑛𝑔 | 0.009*** (0.001) | 0.009*** (0.001) | 0.009*** (0.001) | 0.009*** (0.001) | 0.008*** (0.001) | 0.008*** (0.001)
𝐸𝑞𝑢𝑖𝑡𝑦 | 0.019*** (0.001) | 0.019*** (0.001) | 0.021*** (0.001) | 0.021*** (0.001) | 0.015*** (0.001) | 0.015*** (0.001)
𝐴𝑑𝑣𝑎𝑛𝑐𝑒𝑠 | 0.011*** (0.002) | 0.011*** (0.002) | 0.011*** (0.001) | 0.010*** (0.001) | 0.008*** (0.001) | 0.007*** (0.001)
Table 5.4: Individuals' income-wealth ratio
Variable | log(𝑌 𝑇 /𝑊) Model 1 | log(𝑌 𝑇 /𝑊) Model 2 | log(𝑌 𝑅 /𝑊) Model 1 | log(𝑌 𝑅 /𝑊) Model 2 | log(𝑌 𝑃𝐼𝐷 /𝑊) Model 1 | log(𝑌 𝑃𝐼𝐷 /𝑊) Model 2
𝑙𝑜𝑔 𝑊 | -0.632*** (0.008) | -0.687*** (0.009) | -0.605*** (0.007) | -0.654*** (0.008) | -0.555*** (0.007) | -0.600*** (0.008)
𝐵𝑎𝑛𝑘𝑖𝑛𝑔 | 0.007*** (0.001) | 0.006*** (0.001) | 0.008*** (0.001) | 0.006*** (0.001) | 0.006*** (0.001) | 0.005*** (0.001)
𝐸𝑞𝑢𝑖𝑡𝑦 | 0.012*** (0.002) | 0.011*** (0.002) | 0.014*** (0.002) | 0.013*** (0.002) | 0.010*** (0.001) | 0.009*** (0.001)
𝐴𝑑𝑣𝑎𝑛𝑐𝑒𝑠 | 0.011*** (0.001) | 0.009*** (0.001) | 0.011*** (0.001) | 0.009*** (0.001) | 0.008*** (0.001) | 0.006*** (0.001)
𝐴𝑔𝑟𝑖_𝐿𝑎𝑛𝑑 | -0.007*** (0.001) | -0.006*** (0.001) | -0.002*** (0) | -0.001** (0) | -0.005*** (0) | -0.004*** (0)
𝐶𝑜𝑚_𝑃𝑟𝑜𝑝 | -0.002*** (0.001) | -0.002*** (0.001) | -0.002** (0.001) | -0.002*** (0) | -0.004*** (0) | -0.004*** (0)
𝑉𝑜𝑡𝑒 | - | 0.462*** (0.041) | - | 0.425*** (0.036) | - | 0.375*** (0.033)
𝐶𝑟𝑖𝑚𝑖𝑛𝑎𝑙 | - | -0.004*** (0.001) | - | -0.002*** (0.001) | - | -0.002*** (0.001)
𝐸𝑑𝑢𝑐𝑎𝑡𝑖𝑜𝑛 | - | 0.022*** (0.005) | - | 0.020*** (0.005) | - | 0.019*** (0.005)
𝐴𝑔𝑒 | - | -0.002* (0.001) | - | -0.003** (0.001) | - | -0.002 (0.001)
𝐷 𝑊𝑖𝑛𝑛𝑒𝑟 | - | 0.024 (0.05) | - | 0.023 (0.045) | - | 0.022 (0.041)
D 2019 | - | 0.128*** (0.022) | - | 0.121*** (0.02) | - | 0.113*** (0.018)
𝐷 𝑀𝑎𝑙𝑒 | - | 0.134*** (0.039) | - | 0.132*** (0.035) | - | 0.118*** (0.033)
𝐷 𝑈𝑛𝑟𝑒𝑠𝑒𝑣 | - | 0.054* (0.025) | - | 0.061** (0.024) | - | 0.067** (0.022)
𝐷 𝑃𝑎𝑟𝑡𝑦 | - | 0.147*** (0.027) | - | 0.130*** (0.025) | - | 0.111*** (0.023)
Agriculture | - | -0.205*** (0.039) | - | -0.184*** (0.033) | - | -0.161*** (0.03)
Politicians | - | -0.162*** (0.037) | - | -0.151*** (0.033) | - | -0.139*** (0.03)
Technically, our results likely have an upward bias; we overestimate the reported income across all wealth groups. By implication, the income reported by ordinary citizens is likely smaller than what our results indicate. This result points toward a general tendency to underreport income across wealth groups.
41 See Compliance Audit of Union Government Department of Revenue Direct Taxes by the Comptroller and Auditor General of India (2019).
42 It seems plausible to assume that the candidates have a good sense of their electoral prospects. So, candidates with better prospects end up with relatively high vote shares ex-post.
In view of the skewed distributions of 𝑌 𝑖 /𝑊 𝑖 , and in the interest of fitness of the model, we have chosen to work with 𝑙𝑜𝑔(𝑌 𝑖 /𝑊 𝑖 ) as the dependent variable. If we substitute 𝑌 𝑖 /𝑊 𝑖 for 𝑙𝑜𝑔(𝑌 𝑖 /𝑊 𝑖 ) as the dependent variable and restrict the analysis to the top 50% of the GE data points, our results still remain very similar to the ones presented above in terms of sign and significance levels of the coefficients. However, some results change if the substitution of 𝑌 𝑖 /𝑊 𝑖 for 𝑙𝑜𝑔(𝑌 𝑖 /𝑊 𝑖 ) is applied to the entire dataset (see Tables A5.3-A5.6 in Appendix II), but wealth remains the most important determinant of 𝑌 𝑖 /𝑊 𝑖 ratios.
See ProPublica June
2021, Indian Express, and India Today, July 2022. 2 For a review of the literature on the subject, see[START_REF] Piketty | Capital in the twenty-first century[END_REF]. For the evolution of the income-wealth relationship in India, see[START_REF] Kumar | The evolution of wealth-income ratios in India 1860-2012[END_REF].3 A few studies do inform us about the income-wealth relationship but only for broad categories of wealth groups in the United States and Europe. See[START_REF] Dynan | Changing household financial opportunities and economic security[END_REF],[START_REF] Piketty | Capital in the twenty-first century[END_REF][START_REF] Chancel | World Inequality Report[END_REF].
See Lancet andPiketty (2018) and[START_REF] Sahasranaman | Dynamics of reallocation within India's income distribution[END_REF], among others.
See Dynan (2009) and[START_REF] Chancel | World Inequality Report[END_REF].
Several studies show that the unrealised capital gains (indirect returns) are a significant component of the increasing returns on the wealth. For details, seePiketty (2014, Chapter 12), Saez and Zucman (2016), and[START_REF] Kaymak | Accounting for Wealth Concentration in the US[END_REF].
See, for example, references in https://www.urban.org/sites/default/files/publication/49116/2000178-How-are-Income-and-Wealth-Linked-to-Health-and-Longevity.pdf
https://www.myneta.info
https://affidavit.eci.gov.in/
For this purpose, we take the value of the adjusted deflator in January-March 2014 (around 97.28) and during January-March 2019 (around 112.35).
We use the tax data for Assessment Year (AY) 2013-14 to AY 2018-19. This period covers the two GEs studied by us. Statistics for after this period have not been released as of January 2022.
The list includes all listed companies and most unlisted public companies, and private companies of all ownership groups.
In 72% of HHs, the male share of wealth is greater than that of the female; only in 28% HHs do women have a larger share of wealth.
Top income levels reported in GE (up to ₹194 crores). See Section 3.
For a discussion, seePiketty (2018, chapter 2). To our knowledge, the overall patterns of the income-wealth ratio for individuals and household levels have not been examined comprehensively.
For a review of this literature see[START_REF] Libman | Tax return as a political statement[END_REF] andSzakonyi (2022).
The Gold Monetization Scheme 2015 enables gold owners to earn a tax-free interest income of up to 2.5% of the value by depositing gold with the government.
See literature cited in[START_REF] Fisman | The private returns to public office[END_REF] and Bhavnani (2012).
In the terminology of the income tax returns (ITR) forms used by the Indian Tax Department, 𝑌 𝑇 is called the "gross total income" (GTI). 𝑌 𝑇𝑑 is called the "total income" (TI), and is commonly referred to as the "returned income" by professionals such as accountants.
𝑌 𝑅 also includes realised capital gains, which are a negligible share of the total reported income and hence of 𝑌 𝑃𝐼𝐷 .
Compared to the alternatives available, the GPIs are better suited for estimating the right tail of income distribution. See[START_REF] Blanchet | Generalized Pareto Curves: Theory and Applications[END_REF].
For the FL individuals and households, we do not have unit-level information on the reported income. So we have worked with the average wealth and income for the top 100 and the top 10, thus rendering methods 1 and 2 estimates of the income-wealth ratio for the FL the same.
As explained in Section 3, for the FL families, we have estimated a range of the income-wealth ratios on account of uncertainty related to the household income, and for the FL individuals on account of uncertainty related to their share in the family wealth.
We do not have such detailed information for non-candidate members of the household in the dataset. Therefore, our empirical analysis of individuals is restricted to the candidates.
See Piketty (2018, chapter 7).
In the parlance of commerce, unrealised capital gains are a part of the economic income from capital but not the accounting income. The latter includes only the actually realised income.
In contrast, middle-wealth groups have to sell their assets to meet other financial needs and pay capital gains tax in the process.
For a discussion, see[START_REF] Chetty | Dividend taxes and corporate behavior: Evidence from the 2003 dividend tax cut[END_REF],[START_REF] Kari | Tax treatment of dividends and capital gains and the dividend decision under dual income tax[END_REF], and[START_REF] Boissel | Dividend taxes and the allocation of capital (No. w30099[END_REF].
Compared to the assets of Indian companies, their declared profits are also low. For more information, see[START_REF] Kanojia | Dividend and Payout Policy: An Empirical Analysis of Listed Indian Companies[END_REF]. Also see[START_REF] Labhane | Dividend policy decisions in India: Standalone versus business groupaffiliated firms[END_REF].
As an aside, since most of the profits remain undistributed, one can understand why top Indian corporations want to reduce debt on their books: Given the huge cash in their accounts, they do not need to borrow much.
The rents tend to be in the range of 2-4% of the property value. While the rental income is only a fraction of the total returns from the property, it is still more than three times the rate of realised income from the equity assets.
Since our focus is not inequality per-se, we refrain from estimating the magnitude of underestimation.
See ProPublica June 2021.
See Ojha and Bhatt, (1964),[START_REF] Banerjee | Top Indian incomes, 1922-2000[END_REF],[START_REF] Basole | Dynamics of income inequality in India: Insights from world top incomes database[END_REF],Ahmed and Bhattacharya (2017),[START_REF] Sinha | Income distribution, growth and basic needs in India[END_REF],[START_REF] Assouad | Extreme Inequality: Evidence from Brazil, India, the Middle East, and South Africa[END_REF],[START_REF] Chancel | Indian Income Inequality, 1922-2015: From British Raj to Billionaire Raj?[END_REF], and[START_REF] Sahasranaman | Dynamics of reallocation within India's income distribution[END_REF] for an overview of these findings.
for excellent research support, Saveri Sargam, Nandini Kumar and Richa Udayana for useful inputs, and the CDE for institutional support. The research presented here has been facilitated by the ICSSR research grant F.No. 02/68/GN/2021-22/ICSSR/RP /MJ.
Abbreviations and Notations
ADR: Association for Democratic Reforms
AIDIS: All-India Debt Investment Survey
AOP: Association of Persons
AY: Assessment Year
BOI: Body of Individuals
CAG: Comptroller and Auditor General of India
CBDT: Central Board of Direct Taxes
OECD: Organisation for Economic Co-operation and Development
RBI: Reserve Bank of India
SC: |
04104601 | en | [
"shs"
] | 2024/03/04 16:41:22 | 2023 | https://shs.hal.science/halshs-04104601/file/WorldInequalityLab_WP202302.pdf | Thomas Blanchet
Jess Benhabib
François Bourguignon
Laurent Bach
Frank Cowell
Xavier D'Haultfoeuille
Thomas Piketty
Muriel Roger
Uncovering the Dynamics of the Wealth Distribution
I introduce a new way of decomposing the evolution of the wealth distribution using a simple continuous time stochastic model, which separates the effects of mobility, savings, labor income, rates of return, demography, inheritance, and assortative mating. Based on two results from stochastic calculus, I show that this decomposition is nonparametrically identified and can be estimated based solely on repeated cross-sections of the data. I estimate it in the United States since 1962 using historical data on income, wealth, and demography. I find that the main drivers of the rise of the top 1% wealth share since the 1980s have been, in decreasing level of importance, higher savings at the top, higher rates of return on wealth (essentially in the form of capital gains), and higher labor income inequality. I then use the model to study the effects of wealth taxation. I derive simple formulas for how the tax base reacts to the net-of-tax rate in the long run, which nest insights from several existing models, and can be calibrated using estimable elasticities.
In the benchmark calibration, the revenue-maximizing wealth tax rate at the top is high (around 12%), but the revenue collected from the tax is much lower than in the static case.
Introduction
Wealth inequality has sharply increased in the United States. By combining income tax returns with macroeconomic balance sheets, Saez and Zucman (2020a) find that the share of wealth owned by the top 1% has increased by more than 10 pp. since the late 1970s. 1 [START_REF] Kuhn | Income and Wealth Inequality in America, 1949-2016[END_REF] find a similar trend using survey data (Figure 1).
Figure 1. Sources: Saez and Zucman (2020a), [START_REF] Kuhn | Income and Wealth Inequality in America, 1949-2016[END_REF].
But despite this growing amount of data documenting the historical evolution of the distribution of wealth, our understanding of the drivers of this trend remains elusive. Is it solely a consequence of the rise of labor income inequality? Or is it also the result of higher rates of return? What about capital gains? Does it have anything to do with the decline of the estate tax? Does it reflect changes to the distribution of saving rates? These are basic questions, and if we had long-run longitudinal data on income and wealth, they would be fairly easy to answer.
After all, they simply ask how observable phenomena affect the process of wealth accumulation in direct, mechanical ways. But because longitudinal wealth data is scarce and limited, in practice, answering them has remained a challenge. That we don't have a straightforward understanding of such proximate causes of wealth inequality is a problem in and of itself, but it also impedes our ability to answer several related questions. It makes it more difficult to adequately calibrate economic models of the wealth distribution, which are used to investigate the deeper, underlying causes of rising inequality. It limits our understanding of the effect of widely discussed policies, such as wealth and inheritance taxation.
Contribution This paper addresses this issue by introducing a new way to decompose the evolution of wealth inequality in terms of individual-level factors, and in spite of the lack of panel data. The decomposition accounts for demography, inheritance, assortative mating, labor income, rates of return, consumption, and, importantly, mobility. It does so in a way that only requires repeated cross-sections, and therefore can be applied to historical data. The approach is tractable and allows for a transparent, visual identification of the key parameters.
I estimate this decomposition in the United States since the 1960s, and in doing so, I establish the main direct drivers of the rise of wealth inequality over that period. To that end, I use historical microdata on the distribution of income and wealth (Saez and Zucman, 2020a), which I combine with a large set of data from censuses, surveys, demographic tables, and macroeconomic accounts. For example, I construct measures of assortative mating as well as age-specific marriage and divorce rates since the 1960s to estimate their effect on the wealth distribution. I also microsimulate the entire demographic history of the United States since the mid-19th century to statistically reconstruct intergenerational linkages, and combine them with distributional parameters for intergenerational wealth transmission as a way to measure the effect of inheritance and of the estate tax.
Finally, I develop a way to incorporate wealth taxation within the framework of this paper, derive a simple formula for how the wealth distribution would react to any wealth tax in the long run, and use my empirical estimates to calibrate it for the United States. The framework of the paper presents several advantages here as well. A typical difficulty when analyzing wealth or capital taxation is that the standard models lead to a menagerie of "corner solutions," and the associated policy recommendations are often extreme, fragile, and hard to interpret. Small changes in parameters (e.g., an intertemporal elasticity of substitution just above or below one) lead to diametrically opposite results (e.g., an optimal tax of 0% or 100%) [START_REF] Straub | Positive Long-Run Capital Taxation: Chamley-Judd Revisited[END_REF]. 3 In light of this situation, [START_REF] Saez | Generalized Social Marginal Welfare Weights for Optimal Tax Theory[END_REF] have argued for a simpler framework -one that centers the same equity-efficiency trade-off that governs the theory of optimal labor income taxation (Piketty and Saez, 2013b) and, in practice, dominates policy considerations. In this framework, the elasticity of capital supply with respect to the net-of-tax rate directly dictates the optimal level of capital taxation. 4 But finding a way to model this elasticity in a realistic yet tractable way, without facing the same degeneracy issues as earlier models, remains difficult. Here, because the model is stochastic and features mobility, it organically leads to well-behaved steady-states and finite elasticities of wealth with respect to the net-of-tax rate, under a wide range of economic behaviors, and without the need to resort to ad hoc modeling devices such as wealth in the utility function (e.g., [START_REF] Saez | Generalized Social Marginal Welfare Weights for Optimal Tax Theory[END_REF].
and is independent of the shape of the density. This distinction is not a mere technicality: it is the central piece of the mechanism that allows steady-state distributions to emerge in the first place, and it has empirically measurable consequences. By looking at how the local evolution of the wealth density relates to its current shape, I can therefore estimate both parameters.
I estimate the full model, while also accounting for income and the role played by demography, assortative mating, and inheritance. I show that the model correctly reproduces the past evolution of wealth inequality. When I compare my findings to the (albeit scarce and limited) longitudinal survey data on wealth that exists in the United States, I find that my results are consistent. I use the model to decompose the evolution of wealth inequality and to generate simple counterfactuals showing how wealth inequality would have evolved if certain factors (e.g., labor income inequality, capital gains) had stayed at their 1980 levels.
Finally, I include wealth taxation into the model, and derive a simple formula for how the wealth distribution would react to an arbitrary wealth tax in the long run, which connects the field of capital taxation to the theories of the wealth distribution. This formula depends on three factors: mobility across the wealth distribution, tax avoidance, and savings responses.
Mobility is built into the framework and calibrated using this paper's estimate. For tax avoidance and savings responses, I rely on reduced-form expressions that depend on simple, estimable behavioral elasticities, which I calibrate using quasi-experimental evidence from the literature [START_REF] Brülhart | Taxing Wealth: Evidence from Switzerland[END_REF][START_REF] Seim | Behavioral Responses to Wealth Taxes: Evidence from Sweden[END_REF][START_REF] Jakobsen | Wealth Taxation and Wealth Accumulation: Theory and Evidence From Denmark*[END_REF][START_REF] Zoutman | The Elasticity of Taxable Wealth: Evidence from the Netherlands[END_REF][START_REF] Ring | Wealth Taxation and Household Saving: Evidence from Assessment Discontinuities in Norway[END_REF][START_REF] Londoño-Vélez | Enforcing Wealth Taxes in the Developing World: Quasi-Experimental Evidence from Colombia[END_REF]. A comprehensive tool to simulate the effect of arbitrary wealth taxes in the United States, using this paper's formulas, is available online at https: //thomasblanchet.github.io/wealth-tax/.
Findings After applying the decomposition to the United States since 1962, I find that the main drivers of the rise of the top 1% wealth share since the 1980s have been, in decreasing level of importance, higher savings at the top, higher rates of return on wealth (essentially in the form of capital gains), and higher labor income inequality. Other factors have played a minor role.
Notably, the wealth accumulation process that I estimate features a rather large heterogeneity of savings for a given level of wealth and, therefore, a large degree of mobility across the distribution. That amount of mobility is comparable to what I independently find in longitudinal survey data from the Survey of Consumer Finances (SCF) and the Panel Study of Income Dynamics (PSID), and in a panel of the 400 richest Americans from Forbes. This finding cuts against the "dynastic view" of wealth accumulation, and suggests that existing models of the wealth distribution underestimate the extent of wealth mobility. This has several implications.
First, the large heterogeneity of savings implies that high levels of wealth inequality can be sustained even if the wealthy consume, on average, a sizeable fraction of their wealth every year. Second, high mobility offers a straightforward answer to a puzzle pointed out by [START_REF] Gabaix | The Dynamics of Inequality[END_REF]: that the usual stochastic models that can explain steady-state levels of inequality are unable to account for the speed at which inequality has increased in the United States. 5 But high mobility leads to faster dynamics and, therefore, can account for the dynamic of wealth we observe in practice. 6The study of wealth taxation also yields new insights. First, the degree of mobility across the wealth distribution -which explicitly appears in the formula I derive -generates a mechanical effect that fixes many degeneracy issues from deterministic models. And it is a crucial determinant of the response of the wealth stock to a tax. Indeed, without mobility, an annual wealth tax would affect the same people repeatedly, and absent behavioral responses, the tax base would eventually go to zero. With mobility, new, previously untaxed wealth continuously enters the tax base, which avoids that phenomenon. Saez and Zucman (2019) study this effect, but within a very restrictive setting -my formula extends and operationalizes their idea to more complex and realistic cases, and this affects some of the conclusions. 7 In light of this result, the significant mobility that I find suggests that a wealth tax could raise more revenue, all other things being equal. Second, even in the simplest model, the reduced-form elasticity of wealth with respect to the net-of-tax rate is nonconstant -much higher for low rates than higher ones. (This directly results from the fact that the complete formula depends on both the average and the marginal tax rate.) As a consequence, the revenue-maximizing rate for a linear wealth tax above $50m is high (12% in my benchmark calibration), yet the revenue raised from said tax in the long run is quite low, only about a quarter of what the tax would raise in the absence of response. 8 Third, my simulations suggest that the effects of an annual wealth tax differ in fundamental ways from those of an inheritance tax of seemingly comparable magnitude. I show in a simple model that this distinction is driven by the lifetime of generations: the longer people live, the less potent the estate tax compared to a wealth tax.
2 Literature Review
Models of the Wealth Distribution
The conventional way of studying the determinants of wealth inequality is to construct a model of an economy where people face various individual shocks. The prototypical model for this line of work comes from [START_REF] Aiyagari | Uninsured Idiosyncratic Risk and Aggregate Saving[END_REF] and [START_REF] Huggett | The risk-free rate in heterogeneous-agent incomplete-insurance economies[END_REF], who study the distribution of wealth in [START_REF] Bewley | The permanent income hypothesis: A theoretical formulation[END_REF] type models in which people face idiosyncratic uninsurable labor income shocks. In these models, people accumulate wealth for precautionary or consumption smoothing purposes. But these motives quickly vanish as wealth increases, so they cannot rationalize large wealth holdings. As a result, these models notoriously underestimate the real extent of wealth inequality.
The literature, then, has searched for ways to extend these models and match the data. One possibility involves additional saving motives for the rich, such as a taste for wealth [START_REF] Carroll | Why Do the Rich Save So Much? Tech[END_REF] or bequests [START_REF] Nardi | Wealth Inequality and Intergenerational Links[END_REF]. Another solution is to introduce idiosyncratic stochastic rates of returns in the form of entrepreneurial risk [START_REF] Quadrini | Entrepreneurship, Saving, and Social Mobility[END_REF][START_REF] Cagetti | Entrepreneurship, Frictions, and Wealth[END_REF][START_REF] Benhabib | The Distribution of Wealth and Fiscal Policy in Economies With Finitely Lived Agents[END_REF]). Yet another option introduces heterogeneous shocks to the discount rate [START_REF] Krusell | Income and Wealth Heterogeneity in the Macroeconomy[END_REF].
Recent contributions have zoomed in on specific features and issues. Following the finding from [START_REF] Bach | Rich Pickings? Risk, Return, and Skill in Household Wealth[END_REF] (in Sweden) and [START_REF] Fagereng | Heterogeneity and Persistence in Returns to Wealth[END_REF] (in Norway) that the wealthy enjoy higher returns, several papers have examined this mechanism in the United
States. [START_REF] Xavier | Wealth Inequality in the US: the Role of Heterogeneous Returns[END_REF] finds that heterogeneous returns play a critical role in explaining the current levels of wealth inequality. [START_REF] Cioffi | Heterogeneous Risk Exposure and the Dynamics of Wealth Inequality[END_REF] embeds a portfolio choice model in a wealth accumulation model. In this paper, because of heterogeneous portfolio composition, households have different exposures to aggregate risks. The wealthy invest more in high-risk, high-return assets like equity. As a result, [START_REF] Cioffi | Heterogeneous Risk Exposure and the Dynamics of Wealth Inequality[END_REF] finds that abnormally high stock market returns have been a critical driver of the rise in wealth inequality in the United States. [START_REF] Hubmer | A Comprehensive Quantitative Theory of the U.S. Wealth Distribution[END_REF], on the other hand, use a different model and conclude that changes in tax progressivity, rather than changes in rates of return, explain the rise in wealth inequality.
In general, models of the wealth distribution seek to explain two facts: that wealth inequality is high (that is, higher than income inequality) and that its top tail is shaped like a power law.
Existing models differ across many dimensions, but when it comes to explaining these facts, they usually share a common mechanism: namely, that wealth accumulates through a series of individual multiplicative random shocks, with frictions at the bottom. Assuming that the shocks have adequate properties, such models can generate both high wealth inequality and power-law tails. In discrete time, we can formulate this idea using the theory of [START_REF] Kesten | Random difference equations and Renewal theory for products of random matrices[END_REF] processes.
Assume that the wealth w_{i,t} of person i evolves according to a linear recurrence equation with random coefficients: w_{i,t+1} = a_{i,t} w_{i,t} + b_{i,t}, where a_{i,t} is a multiplicative shock (typically reflecting stochastic returns or preference shocks) and b_{i,t} is a friction term (typically reflecting labor income). Then, under very general conditions, wealth admits a steady-state distribution with a power-law tail, whose fatness is determined by the properties of the multiplicative shock a_{i,t} [START_REF] Kesten | Random difference equations and Renewal theory for products of random matrices[END_REF][START_REF] Grincevićius | One limit distribution for a random walk on the line[END_REF][START_REF] Vervaat | On a stochastic difference equation and a representation of non-negative infinitely divisible random variables[END_REF][START_REF] Goldie | Implicit Renewal Theory and Tails of Solutions of Random Equations[END_REF][START_REF] Grey | Regular Variation in the Tail Behaviour of Solutions of Random Difference Equations[END_REF]. We can formulate a continuous-time version of this mechanism using stochastic calculus [START_REF] Gabaix | Power Laws in Economics and Finance[END_REF].9 This formalism presents major advantages in terms of flexibility and tractability, notably because of the [START_REF] Kolmogorov | Über die analytischen Methoden in der Wahrscheinlichkeitsrechnung[END_REF] equation, which establishes a straightforward relationship between the distribution of the individual random shocks and the evolution of the wealth distribution in the aggregate.
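A minimal simulation of such a Kesten-type process (all parameter values below are arbitrary, chosen only so that a stationary distribution exists) illustrates how multiplicative shocks combined with an additive friction generate a fat, approximately Pareto upper tail:

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 100_000, 400                            # individuals and periods (illustrative)
w = np.ones(n)

for _ in range(T):
    a = np.exp(rng.normal(-0.02, 0.15, n))     # multiplicative shock (returns/preferences)
    b = rng.uniform(0.5, 1.5, n)               # additive friction (labor income)
    w = a * w + b

# Rank-size check of the upper tail: log(rank) against log(wealth) is roughly
# linear, with slope equal to minus the Pareto tail exponent.
top = np.sort(w)[::-1][:1000]
slope = np.polyfit(np.log(top), np.log(np.arange(1, len(top) + 1)), 1)[0]
print(f"estimated Pareto tail exponent ≈ {-slope:.2f}")
```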
That these two parameters are sufficient to characterize the dynamics of the distribution can be rigorously proven using Gyöngy's (1986) theorem. Then, I can feed these parameters into the [START_REF] Kolmogorov | Über die analytischen Methoden in der Wahrscheinlichkeitsrechnung[END_REF] equation, which describes how the wealth density evolves.
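For reference, the Kolmogorov forward (Fokker-Planck) equation alluded to here takes the standard form below, where f(w, t) is the cross-sectional wealth density and μ(w, t) and σ²(w, t) denote the drift and diffusion of individual wealth; the notation is a generic rendering rather than the paper's own.

```latex
\frac{\partial f(w,t)}{\partial t}
  \;=\; -\frac{\partial}{\partial w}\Bigl[\mu(w,t)\,f(w,t)\Bigr]
  \;+\; \frac{1}{2}\,\frac{\partial^2}{\partial w^2}\Bigl[\sigma^2(w,t)\,f(w,t)\Bigr]
```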
This paper takes the reverse path, as characterized by the two red arrows in Figure 2. I start from the data on the evolution of the wealth distribution and then use the [START_REF] Kolmogorov | Über die analytischen Methoden in der Wahrscheinlichkeitsrechnung[END_REF] equation to infer the parameters of the wealth transition equation. Many different microfounda-tions can lead to the same wealth transition equation; therefore, in my approach, the complete model of the economy remains unknown. Yet knowing just the wealth transition equation already opens many possibilities. It is sufficient to study many mechanisms, counterfactuals, and policies. And when more complete models remain necessary, the approach makes it easier to discriminate between them.
We can, in particular, compare this paper to two recent studies that also use a primarily empirical approach within the framework of stochastic wealth accumulation models. [START_REF] Benhabib | Wealth Distribution and Social Mobility in the US: A Quantitative Approach[END_REF] fit a model of stochastic wealth accumulation to data for the United States. They start from an explicitly microfounded model of consumption and use the method of simulated moments to identify the parameters that best replicate observed data. These parameters determine the shape of the utility function and the process for the rate of return. There are four main differences between their approach and mine. First, they estimate consumption by estimating a two-parameter utility function, while I estimate a nonparametric profile of saving rates by wealth. Their approach is more structural and tightly parameterized; my approach is more flexible and reduced form. Second, they use direct information on wealth mobility in a given year, from the 2007-2009 panel of the SCF, to estimate their model; I validate my estimate of mobility against the SCF, but I do not use it directly. Third, their model is nonlinear and estimated using the method of moments; my approach works by fitting a univariate linear equation for each level of wealth, making the source of identification highly explicit and easy to check graphically. Fourth, their primary goal is to replicate the levels rather than the trends in wealth inequality, so their main model assumes that the economy is at its steady state. In a separate exercise, they additionally replicate trends. My approach directly reproduces both levels and trends by construction -and in fact, I use the trends as an essential source of identification. [START_REF] Gomez | Decomposing the Growth of Top Wealth Shares[END_REF] decomposes the evolution of top wealth shares in the United States while accounting for the role of demography and mobility. Like this paper, [START_REF] Gomez | Decomposing the Growth of Top Wealth Shares[END_REF] decomposes changes in the wealth distribution that are caused by the first moment of wealth growth rates (average savings) and by the second moment (mobility).10 Like this paper, [START_REF] Gomez | Decomposing the Growth of Top Wealth Shares[END_REF] finds an important role for mobility. His methodology, however, requires longitudinal data, for which he uses the Forbes 400 ranking of the wealthiest Americans. 11 In contrast, I use historical data on income and wealth to directly infer this parameter, allowing me to analyze the rise of wealth inequality since 1962. In contrast, the Forbes 400 rankings started in 1982. My analysis uses the entire wealth distribution, not only the top of the tail. Finally, I also account for labor income, inheritance taxation, and assortative mating. [START_REF] Gomez | Decomposing the Growth of Top Wealth Shares[END_REF] and this paper provide complementary evidence of the dynamics of wealth accumulation and find a similar result: that mobility across the wealth distribution is sizeable and plays a crucial role in shaping the wealth distribution.
Synthetic Saving Rates
Another way of using wealth distribution data to study the drivers of wealth inequality is to construct synthetic saving rates for the different parts of the wealth distribution (e.g., Saez and Zucman, 2016;[START_REF] Kuhn | Income and Wealth Inequality in America, 1949-2016[END_REF][START_REF] Garbinti | Accounting for Wealth-Inequality Dynamics: Methods, Estimates, and Simulations for France[END_REF]. Synthetic savings are constructed as if each bracket of the wealth distribution was a single infinitely lived individual: for example, if the average wealth of the top 1% is $10m at the start of the year and $11m at the end, then the top 1% had $1m in synthetic savings. As such, synthetic savings are a composite measure that combines the effects of average wealth growth with the effect of mobility. They do not distinguish between them.
Since my approach explicitly accounts for mobility, it lets me disentangle the two effects. In the spirit of [START_REF] Gomez | Decomposing the Growth of Top Wealth Shares[END_REF], I show that synthetic savings are the sum of several terms: one that depends on average wealth growth, one that depends on mobility, and additional terms that account for other factors, such as demography. This decomposition makes it possible to model synthetic savings more realistically.
Notably, I find that the way synthetic saving rates combine the effect of mobility with other factors is endogenous to the wealth distribution itself. That is, synthetic savings will differ depending on whether inequality is high or low, even if people's actual behavior is the same. To eliminate this endogeneity, we would have to assume zero mobility in the wealth distribution.
Wealth and Capital Taxation
A well-known, early result in the theory of optimal taxation is that, in a standard neoclassical model, the optimal linear tax rate on capital is zero, even from the perspective of workers who own no capital [START_REF] Chamley | Optimal Taxation of Capital Income in General Equilibrium with Infinite Lives[END_REF][START_REF] Judd | Redistributive taxation in a simple perfect foresight model[END_REF]. 12 This result has been overturned in more sophisticated models which introduce, for example, uncertainty [START_REF] Aiyagari | Uninsured Idiosyncratic Risk and Aggregate Saving[END_REF], incomplete markets [START_REF] Farhi | Capital Taxation and Ownership When Markets Are Incomplete[END_REF], heterogeneous altruism [START_REF] Farhi | Estate Taxation with Altruism Heterogeneity[END_REF], tax progressivity [START_REF] Saez | Optimal progressive capital income taxes in the infinite horizon model[END_REF] or capital accumulation by workers [START_REF] Bassetto | Redistribution, taxes, and the median voter[END_REF]. More recently, [START_REF] Straub | Positive Long-Run Capital Taxation: Chamley-Judd Revisited[END_REF] have reanalyzed the models of [START_REF] Chamley | Optimal Taxation of Capital Income in General Equilibrium with Infinite Lives[END_REF] and [START_REF] Judd | Redistributive taxation in a simple perfect foresight model[END_REF],
and have overturned many of their conclusions: they show that significant capital taxes are in fact often desirable within these two models.
Overall, the main issue with the theory of capital taxation is that its models tend to behave erratically. Many of its results focus on edge cases and corner solutions, which are highly dependent on the specific primitives of the economy, can be hard to interpret, and imply unrealistic behavior. As a result, the theory has remained of limited use for guiding policy. For example, a common interpretation of the zero-tax result of [START_REF] Chamley | Optimal Taxation of Capital Income in General Equilibrium with Infinite Lives[END_REF] and [START_REF] Judd | Redistributive taxation in a simple perfect foresight model[END_REF] is that their model implicitly assumes the capital stock to be infinitely elastic. 13 This assumption makes capital taxation infinitely costly, so the equity gain from taxation is never worth the efficiency cost. Such a setting arguably constitutes an extreme edge case. In more realistic models featuring a finite elasticity, the usual equity-efficiency trade-off would determine the desirability of capital taxation (Piketty and Saez, 2013a;[START_REF] Saez | Generalized Social Marginal Welfare Weights for Optimal Tax Theory[END_REF]. In line with this view, [START_REF] Saez | Generalized Social Marginal Welfare Weights for Optimal Tax Theory[END_REF] ultimately argued that the long-run elasticity of the capital stock to the net-of-tax rate is a sufficient statistic for the optimal design of capital taxation and developed formulas for optimal tax rates, similar to those that exist for labor income [START_REF] Saez | Optimal progressive capital income taxes in the infinite horizon model[END_REF]Piketty and Saez, 2013b).
However, the value of that elasticity remains elusive. In the short run, several empirical papers have used quasi-experimental settings to estimate it: [START_REF] Seim | Behavioral Responses to Wealth Taxes: Evidence from Sweden[END_REF] in Sweden, Londoño-Vélez and Ávila-Mahecha (2021) in Colombia, [START_REF] Brülhart | Taxing Wealth: Evidence from Switzerland[END_REF] in Switzerland, [START_REF] Zoutman | The Elasticity of Taxable Wealth: Evidence from the Netherlands[END_REF] in the Netherlands and [START_REF] Jakobsen | Wealth Taxation and Wealth Accumulation: Theory and Evidence From Denmark*[END_REF] in Denmark. With some exceptions, these elasticities tend to be small. This is consistent with the view that a government trying to raise revenue with a one-off, unexpected wealth tax could choose a very high marginal rate. But the policy-relevant parameter is the long-run elasticity, which is far more uncertain and likely larger.
Indeed, the short-run elasticity only captures tax avoidance or short-run saving responses. But over time, wealth taxes also keep people from accumulating wealth, either through mechanical (lower post-tax rates of return) or behavioral effects (lower savings). This process slowly erodes the tax base. But because it takes a long time to materialize, it is hard to cleanly identify it in the data.
This paper provides a useful framework for addressing this issue. We can start from the empirically estimated wealth transition equation, which reproduces the true evolution of the wealth distribution. Then we can modify that equation to incorporate a wealth tax, as well as an arbitrary set of behavioral responses. For this paper, I focus on two effects (tax avoidance and a decrease in savings), but the framework is flexible enough to accommodate many other extensions. Using the modified equation for the evolution of wealth, we can simulate counterfactual evolutions of the wealth distribution and therefore estimate how the tax base would react to any wealth tax. We can fully simulate the model to observe transitory dynamics.
However, if we focus on the long run, we can obtain simple analytical formulas that characterize the alternative steady-state. In my model, the long-run elasticity of capital supply remains finite in all cases because of mobility, but setting the mobility parameter to zero recovers the infinite elasticity as in [START_REF] Chamley | Optimal Taxation of Capital Income in General Equilibrium with Infinite Lives[END_REF] and [START_REF] Judd | Redistributive taxation in a simple perfect foresight model[END_REF].
We can connect the approach in this paper to two recent papers that try to assess the long-run effects of wealth taxes. [START_REF] Jakobsen | Wealth Taxation and Wealth Accumulation: Theory and Evidence From Denmark*[END_REF] use their short-run elasticity estimates to calibrate a structural model of savings at the top. They indeed find a higher elasticity in the long run. The second paper, Saez and Zucman (2019), studies the mechanical effect of wealth taxation under a tax expressed as an average rate over a threshold. In comparison, my model allows for arbitrary tax schedules, including the more standard case of a constant marginal rate above a threshold. Far above the tax threshold, the marginal and the average tax rate are similar, so their model behaves similarly to the mechanical component of mine. Close to the threshold, however, the distinction between them matters. In particular, I find that under a purely mechanical model, even confiscatory wealth tax rates would raise non-negligible revenue from people just above the tax threshold who recently entered the tax base, so the Laffer curve never goes to zero. As a result, unlike Saez and Zucman (2019), I conclude that the mechanical model remains insufficient to characterize revenue-maximizing tax rates: doing so requires a behavioral response.
Theory
Time is continuous, indexed by t. Individuals are indexed by i. Individual wealth w i t evolves stochastically. I model this evolution using an Itô process, which can flexibly model most continuous-time stochastic processes. Such a process is locally characterized by two parameters:
the drift µ_it and the diffusion σ²_it. Over a small period of time dt, the change dw_it in the value of the wealth w_it of individual i at time t has mean µ_it dt and variance σ²_it dt. This process is represented in the form of an SDE, commonly written as follows:

dw_it = µ_it dt + σ_it dB_it
This section will explain how the parameters for drift and diffusion at the individual level can be connected to the aggregate evolution of the wealth distribution (while accounting for heterogeneity, demography, etc.), and therefore how they can be inferred from changes in the shape of the wealth distribution.
Income and Consumption
Evolution of Individual Wealth
Individual i has idiosyncratic consumption, labor income, and rate of return on their wealth.
Similarly to [START_REF] Carroll | Buffer-Stock Saving and the Life Cycle/Permanent Income Hypothesis[END_REF] interpretation of [START_REF] Friedman | A Theory of the Consumption Function[END_REF] permanent income hypothesis, labor income is the sum of a permanent (i.e., slow-moving) component y_it, and of transitory shocks with variance υ²_it (with y_it and υ²_it both being arbitrary bounded stochastic processes). Consumption and rates of return follow a similar model, with slow-moving components c_it and r_it, and transitory shocks with variances γ²_it and φ²_it. I express all monetary quantities as a fraction of average national income, which grows at rate g_t. As a result, wealth w_it evolves according to the SDE: 15

dw_it = ( y_it + (r_it − g_t) w_it − c_it ) dt + ( υ²_it + φ²_it w²_it + γ²_it )^(1/2) dB_it    (1)
That is, individual wealth is a stochastic process with a drift µ i t equal to y i t + (r i tg t )w i tc i t and a diffusion σ 2 i t equal to υ 2 i t + φ 2 i t w 2 i t + γ 2 i t . Note that the economy's growth rate g t appears in the drift term due to the normalization of all the quantities by the economy's average income.
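To make the accumulation process concrete, the following sketch simulates equation (1) by Euler-Maruyama discretization. All parameter values (income, return, consumption, shock variances) are illustrative placeholders, not estimates from this paper.

```python
import numpy as np

def simulate_wealth(n=10_000, T=40, dt=0.1, seed=0):
    """Euler-Maruyama simulation of the wealth SDE in equation (1).

    All quantities are expressed as multiples of average national income.
    Parameter values below are illustrative placeholders only.
    """
    rng = np.random.default_rng(seed)
    w = np.ones(n)                     # initial wealth: one year of average income
    y, c = 1.0, 0.9                    # permanent labor income and consumption
    r, g = 0.04, 0.02                  # rate of return and economy-wide growth
    var_y, var_r, var_c = 0.05, 0.03, 0.02   # transitory shock variances

    for _ in range(int(T / dt)):
        drift = y + (r - g) * w - c
        diffusion = np.sqrt(var_y + var_r * w**2 + var_c)
        w = w + drift * dt + diffusion * np.sqrt(dt) * rng.standard_normal(n)
    return w

wealth = simulate_wealth()
print("mean wealth:", wealth.mean())
print("top 1% share:", np.sort(wealth)[-100:].sum() / wealth.sum())
```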
Evolution of the Distribution of Wealth
We can directly relate the evolution of individual wealth described by equation (1) to the overall distribution of wealth. A standard result of stochastic calculus states that, if a large number of stochastic processes follow the same SDE, then the probability density function (PDF) that describes the distribution of their value at a given time follows a partial differential equation (PDE) known as the [START_REF] Kolmogorov | Über die analytischen Methoden in der Wahrscheinlichkeitsrechnung[END_REF] forward equation [START_REF] Gabaix | Power Laws in Economics and Finance[END_REF]. This result provides a direct way to connect the evolution of individual wealth (as in equation ( 1)) with the aggregate distribution of wealth (as is observed in historical data). However, I need to account for the possibility that the drift µ i t and the diffusion σ 2 i t vary across individuals i. To that end, I will apply a result from Gyöngy (1986), which states that it is possible to average out the heterogeneity of individual wealth processes and still retrieve the same wealth distribution in the aggregate. After applying that result, I use the [START_REF] Kolmogorov | Über die analytischen Methoden in der Wahrscheinlichkeitsrechnung[END_REF] forward equation to connect equation (1) to the evolution of the wealth distribution.
Reduction to a Single Equation using Gyöngy's (1986) Theorem In simple terms, Gyöngy's (1986) theorem states the following. Consider a large number of stochastic processes w_it, each following an SDE with their own drift µ_it and diffusion σ²_it. Then the PDF describing the distribution of the value of these processes will behave exactly as if their drift µ_it and their diffusion σ²_it were replaced by the conditional expectations µ_t(w) = E[µ_it | w_it = w] and σ²_t(w) = E[σ²_it | w_it = w].
This result, plus some basic rules of stochastic calculus, makes it possible to reduce the arbitrarily complex nature of individual wealth accumulation into a single SDE that characterizes the entire wealth distribution. I can state the following result.
Proposition 1. Let y t (w), r t (w) and c t (w) be the average labor income, rate of return and consumption, conditional on wealth w at time t. Similarly, let υ 2 t (w), φ 2 t (w) and γ 2 t (w) be the variance of labor income, rates of return and consumption, conditional on wealth w at time t.
Then, the stochastic process governed by the SDE with deterministic coefficients:
dw_it = µ_t(w_it) dt + σ_t(w_it) dB_it,

with drift µ_t(w) = y_t(w) + (r_t(w) − g_t) w − c_t(w) and diffusion (or mobility) σ²_t(w) = υ²_t(w) + φ²_t(w) w² + γ²_t(w),
has the same marginal distribution as the process described by equation (1).
Proof. See Appendix A.1.
Proposition 1 reduces the dynamics of the wealth distribution to a single SDE, in which everyone with the same wealth w faces the same drift µ t (w) and the same diffusion σ 2 t (w). In that equation, the diffusion σ 2 t (w) becomes easily interpretable as a mobility parameter. If it were equal to zero, everyone with the same wealth would face the same wealth growth, so there would be no movement across the distribution. But when that parameter is not zero, the approach can account for the mobility across the wealth distribution, which is a sizeable phenomenon [START_REF] Gomez | Decomposing the Growth of Top Wealth Shares[END_REF]. From now on, I will therefore refer to σ 2 t (w) as a mobility parameter.
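The practical content of Gyöngy's (1986) theorem can be checked numerically: simulate many heterogeneous processes, then simulate a single "mimicking" process whose drift and diffusion at wealth w are the conditional means of the individual drifts and diffusions given w, and compare the resulting marginal distributions. The sketch below does this in a stylized setting with two types of agents; all parameter values are illustrative, and the match is only approximate given the binning and the discrete time step.

```python
import numpy as np

rng = np.random.default_rng(1)
n, steps, dt = 20_000, 200, 0.05
edges = np.linspace(-3.0, 6.0, 46)           # wealth bins used for conditioning

# Heterogeneous population: two types with different drift/diffusion (illustrative).
is_a = rng.random(n) < 0.5
mu_i = np.where(is_a, 0.05, -0.05)
sig_i = np.where(is_a, 0.40, 0.15)

w_het = np.ones(n)                            # heterogeneous processes
w_mim = np.ones(n)                            # mimicking (reduced) processes

def binned_stat(x, values, edges):
    """E[values | bin of x], with the overall mean used for empty bins."""
    idx = np.clip(np.digitize(x, edges), 1, len(edges) - 1) - 1
    sums = np.bincount(idx, weights=values, minlength=len(edges) - 1)
    counts = np.bincount(idx, minlength=len(edges) - 1)
    return np.where(counts > 0, sums / np.maximum(counts, 1), values.mean())

for _ in range(steps):
    # Conditional moments of the heterogeneous system, as functions of wealth.
    mu_bar = binned_stat(w_het, mu_i, edges)
    var_bar = binned_stat(w_het, sig_i ** 2, edges)
    # Advance the mimicking process with those deterministic coefficients.
    idx_mim = np.clip(np.digitize(w_mim, edges), 1, len(edges) - 1) - 1
    w_mim = w_mim + mu_bar[idx_mim] * dt + np.sqrt(var_bar[idx_mim] * dt) * rng.standard_normal(n)
    # Advance the heterogeneous processes themselves.
    w_het = w_het + mu_i * dt + sig_i * np.sqrt(dt) * rng.standard_normal(n)

print("quantiles (heterogeneous):", np.round(np.quantile(w_het, [0.1, 0.5, 0.9, 0.99]), 2))
print("quantiles (mimicking):    ", np.round(np.quantile(w_mim, [0.1, 0.5, 0.9, 0.99]), 2))
```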
The Kolmogorov Forward Equation Using the reduced SDE of Proposition 1, I can now apply the Kolmogorov forward equation. The density f t which describes the distribution of wealth at time t obeys the PDE:
∂_t f_t(w) = −∂_w[ µ_t(w) f_t(w) ] + (1/2) ∂²_w[ σ²_t(w) f_t(w) ]    (2)
This equation describes the evolution of a quantity that is observable in the historical data (the density of wealth f t ) while connecting it to parameters that characterize wealth accumulation at the individual level: the drift µ t (w), and the diffusion σ 2 t (w). Thus, it directly connects individual economic behavior with the distribution of wealth.
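Equation (2) can also be solved numerically with a standard finite-difference scheme, which is useful for the simulations later in the paper. The sketch below advances a wealth density forward in time given drift and mobility profiles; the profiles and grid are illustrative placeholders, and the explicit scheme requires a small time step relative to the grid spacing for stability.

```python
import numpy as np

def kolmogorov_forward(f0, w, mu, sigma2, dt, steps):
    """Explicit finite-difference scheme for the Kolmogorov forward equation (2):
        df/dt = -d/dw[ mu(w) f ] + 0.5 d2/dw2[ sigma2(w) f ].
    `w` is a uniform grid; `mu` and `sigma2` are arrays on that grid.
    The time step must be small relative to dw**2 / max(sigma2) for stability.
    """
    dw = w[1] - w[0]
    f = f0.copy()
    for _ in range(steps):
        adv = np.gradient(mu * f, dw)                       # d/dw [mu f]
        dif = np.gradient(np.gradient(sigma2 * f, dw), dw)  # d2/dw2 [sigma2 f]
        f = np.clip(f + dt * (-adv + 0.5 * dif), 0.0, None)
        f /= f.sum() * dw                                   # keep f a density
    return f

# Illustrative example (placeholder parameter profiles, not the paper's estimates):
w = np.linspace(0.01, 20.0, 400)
f0 = np.exp(-(np.log(w)) ** 2 / 2.0) / w                    # lognormal-ish start
f0 /= f0.sum() * (w[1] - w[0])
mu = 0.05 * (3.0 - w)                                        # reversion toward w = 3
sigma2 = 0.05 * (1.0 + w)                                    # mobility rising in w
fT = kolmogorov_forward(f0, w, mu, sigma2, dt=1e-3, steps=5_000)
cdf0, cdfT = np.cumsum(f0) * (w[1] - w[0]), np.cumsum(fT) * (w[1] - w[0])
print("90th percentile before:", np.interp(0.9, cdf0, w))
print("90th percentile after :", np.interp(0.9, cdfT, w))
```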
Interpretation of the Different Effects
The previous section derived equation (2) using results from stochastic calculus. This section provides a more direct and intuitive understanding of the equation. The purpose is twofold.
First, it provides an understanding of the central equation that does not require familiarity with stochastic calculus. Second, it gives a more transparent basis for how and why the decomposition introduced in this paper works.

Integration of the Kolmogorov Forward Equation Integrating equation (2) with respect to w, from −∞ to w, and dividing by the density f_t(w), yields:

−∂_t F_t(w)/f_t(w) = µ_t(w) − (1/2) ∂_w σ²_t(w) − (1/2) σ²_t(w) ∂_w f_t(w)/f_t(w)    (3)
Each of these terms captures a different mechanism. I discuss them in turn below. To fix ideas, set w = $1bn and normalize the population to one, so that 1 − F_t(w) is the number of billionaires, and −∂_t F_t(w) is the change in the number of billionaires.
Local Effect of Average Change in Wealth
Assume that wealth growth is a deterministic function of wealth: everyone with the same wealth experiences the same wealth change, so there is no mobility (σ t ≡ 0). Consider the case where wealth growth is positive at the top, so that the number of billionaires increases.
Over a short period, the number of people crossing the $1bn threshold will be proportional to (i) f t (w), the number of people that were initially at the threshold, and (ii) µ t (w), the pace at which their wealth increases. Therefore, we get -∂ t F t (w) = µ t (w) f t (w), which corresponds to equation (3) when σ t ≡ 0.
This formula is known as a transport equation. If wealth growth is uniform (µ t ≡ µ), then it translates the entire wealth distribution by a factor µ dt over a period dt. The general formulation makes it possible to consider non-uniform wealth growth.16
Local Effect of Mobility Now, consider the opposite thought experiment. Everyone, even if they have the same wealth, experiences a different wealth growth. But they are as likely to go up or down, so, on average, wealth growth is zero (µ t ≡ 0). Furthermore, assume that the amplitude of wealth variations is uniform (σ t ≡ σ). Under these conditions, the number of billionaires will still change.
Indeed, some people just below $1bn will see their wealth increase and become billionaires.
This flow is proportional to (i) f t (w -), the number of people right below $1bn, and (ii) the amplitude σ 2 by which their wealth varies. Conversely, some people just above $1bn will see their wealth decrease and drop out of the list of billionaires. This flow is proportional to (i) f t (w + ), the number of people right above $1bn and (ii) the amplitude σ 2 by which their wealth varies.
In general, these two flows will not cancel out because there are not as many people right below and right above $1bn. This difference between the population on each side of the threshold is effectively captured by the derivative of the density f_t. Mathematically, this derivative appears from writing the difference between the two flows, and then taking the limit as w+ → w and w− → w. After applying the correct proportionality factor, we get −∂_t F_t(w) = −(1/2) σ²_t(w) ∂_w f_t(w), which corresponds to equation (3) when µ_t ≡ 0 and σ_t ≡ σ. Unlike the equation modeling the effect of average wealth growth, in this equation the effect on the number of billionaires depends not on the value of the density but on its gradient. This formula is known as a diffusion equation and is best understood as a transformation that flattens the wealth density.
To make sense of this effect, consider two extremes. First, if the wealth density is flat at the top, then the number of people who cross the $1bn threshold from both sides will cancel out. Hence, even though wealth changes at the individual level, the overall effect on the distribution is nil. Now, assume that, on the contrary, the wealth density is infinitely steep, so that there are no billionaires but many people just below $1bn. Some people will see their wealth increase and become billionaires. But since there is initially no one above $1bn, there can be no countervailing flow of people leaving the group. And therefore, the number of billionaires will increase very fast.
Local Effect of the Mobility Gradient
If the amplitude of wealth mobility is not uniform, there is a third effect on the wealth distribution. Indeed, assume that there is more variation in wealth growth above $1bn than below: then, downward mobility will exceed upward mobility. Thus, even with no average growth and a flat density, people who drop out of the billionaire group will outnumber those who enter it. This phenomenon creates an additional effect on -∂ t F t (w), which depends on how mobility varies throughout the distribution. Hence, it is a function of the mobility gradient ∂ w σ 2 t (w), and is equal to -1 2 f t (w)∂ w σ 2 t (w).
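The three local effects can be computed directly for a given density. The sketch below evaluates each term of equation (3), scaled by f_t(w), for a Pareto tail; the Pareto coefficient, the drift, and the mobility profile are all illustrative values, not the paper's estimates.

```python
import numpy as np

def local_effects(w, alpha, w_min, mu, sigma2, dsigma2_dw):
    """Contributions to -dF_t(w)/dt (change in the number of people above w),
    following equation (3), for a Pareto(alpha) density with scale w_min."""
    f = alpha * w_min**alpha / w**(alpha + 1)       # Pareto density at w
    df_dw = -(alpha + 1) * f / w                    # its derivative
    drift_effect = mu * f
    gradient_effect = -0.5 * f * dsigma2_dw
    mobility_effect = -0.5 * sigma2 * df_dw
    return drift_effect, gradient_effect, mobility_effect

# Illustrative values at w = 100 times average income (not the paper's estimates):
w = 100.0
alpha, w_min = 1.5, 10.0
mu = -2.0                      # average wealth decline of 2 income-units per year
sigma2 = 0.16 * w**2           # mobility rising with wealth squared
dsigma2_dw = 2 * 0.16 * w      # its gradient
d, g, m = local_effects(w, alpha, w_min, mu, sigma2, dsigma2_dw)
print(f"drift: {d:+.5f}  mobility gradient: {g:+.5f}  mobility: {m:+.5f}")
print(f"total change in population above w: {d + g + m:+.5f}")
```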
Phase Portrait and The Dynamics of Wealth Inequality
Consider the case where the drift and mobility parameters are constant over time:
µ t (w) ≡ µ(w)
and σ 2 t (w) ≡ σ 2 (w). Equation ( 3) is best represented as a curve that relates the current level to the current evolution of inequality, as in Figure 3. This curve -a phase portrait -lets us picture the dynamics of wealth inequality in a simple way.
The term ∂ w f t (w)/ f t (w) on the right-hand side of equation (3) measures the relative slope of the density. In the upper tail of the wealth distribution, the density is sloping downward, so ∂ w f t (w)/ f t (w) < 0. This value is a good proxy for top wealth inequality. If it is high, then the density decreases at a slow rate: i.e., the tail is fat, and inequality is high. Conversely, if its value is low, then inequality is low as well. 17 We can analogously interpret the term
−∂_t F_t(w)/f_t(w) as a proxy for the change in inequality. 18

17 If wealth follows a Pareto distribution with coefficient α, then ∂_w f_t(w)/f_t(w) = −(1 + α)/w. So, as α → 1 (the tail becomes fatter), ∂_w f_t(w)/f_t(w) increases.

[Figure 3. Note: This diagram is a phase portrait of the dynamics of inequality for a given wealth level w located towards the top of the wealth distribution. These dynamics can be pictured in a two-dimensional space where each axis represents a quantity associated with the wealth distribution, whose PDF is f_t(w) and whose CDF is F_t(w). The x-axis is equal to ∂_w f_t(w)/f_t(w) and is a proxy for inequality levels (high values mean high inequality). The y-axis is equal to −∂_t F_t(w)/f_t(w) and is a proxy for inequality changes (high values mean increasing inequality). The diagram represents the transition from a low inequality level (x_0) to a high inequality level (x_∞), with constant µ(w) and σ²(w). During this transition, the system moves along the orange line with slope −(1/2)σ²(w) and intercept µ(w) − (1/2)∂_w σ²(w). When the system reaches the x-axis, where −∂_t F_t(w)/f_t(w) = 0, it is at its steady state. When it exceeds zero, the tail becomes fatter, and inequality increases.]

In Figure 3, the first quantity ∂_w f_t(w)/f_t(w) (inequality level, in blue) is on the x-axis and the second quantity −∂_t F_t(w)/f_t(w) (inequality change, in green) is on the y-axis. Equation (3) states that, with stable parameters, the data points representing the current state of inequality must lie along a straight line (in orange), with intercept µ_t(w) − (1/2)∂_w σ²_t(w) and slope −(1/2)σ²_t(w). Where this line crosses the x-axis, inequality is neither increasing nor decreasing: this is the distribution's steady state.
Dynamics of Wealth Inequality and Convergence to a Steady State Consider the transition
to a high inequality steady state. On Figure 3, start from the inequality level x 0 . This level is below the steady-state level x ∞ , so inequality will change. We can decompose this change into (i) the effect of average wealth growth and of the mobility gradient, which the intercept captures, and (ii) the effect of mobility, which the slope captures. As we can see on the graph, (i) decreases inequality, whereas (ii) increases it. At first, the effect of (ii) is stronger, so inequality increases overall by y 0 . In the next period, inequality is higher, equal to x 1 . A higher inequality means a fatter tail, hence a flatter density. Having a flatter density weakens the effect of (ii).
The effect of (i), on the other hand, remains unchanged. Overall, inequality still increases, but by a lower amount, y 1 . The process repeats. Inequality keeps increasing, which flattens the density, weakens the effect of (ii) but leaves the effect of (i) unchanged. Asymptotically, we reach x ∞ . At this point, the effect of (ii) has been weakened to the point that it perfectly counterbalances (i). We have reached the steady state.
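The convergence described above can be pictured with a few lines of code: iterate the inequality proxy x = ∂_w f_t(w)/f_t(w) using the linear phase relation, treating the change in x as proportional to the inequality-change proxy y. This is only a qualitative sketch (the exact mapping from y to the change in x depends on the full distribution), and the intercept, slope, and adjustment speed below are illustrative assumptions.

```python
import numpy as np

# Phase-line sketch: y = intercept + slope * x, with
#   x = d/dw f(w) / f(w)   (inequality proxy; higher means a fatter tail)
#   y = -dF(w)/dt / f(w)   (inequality-change proxy)
# We iterate x assuming dx/dt is proportional to y, purely to visualize the
# convergence toward the steady state x_inf = -intercept / slope.
# All numbers are illustrative, not estimates from the paper.
intercept = -0.12        # mu(w) - 0.5 * d sigma^2 / dw
slope = -0.08            # -0.5 * sigma^2(w)
speed = 1.0              # ad hoc conversion from y to dx/dt
x = -2.0                 # start from a steep density (low inequality)
for year in range(0, 60, 10):
    y = intercept + slope * x
    print(f"t={year:2d}  x={x:+.3f}  y={y:+.4f}")
    x = x + speed * y * 10          # ten-year step
print("steady state x_inf =", -intercept / slope)
```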
Rationale for the Steady State
This framework provides a robust justification for the emergence of a steady state -one that doesn't preclude but also doesn't require behavioral responses.
Without mobility, the only way to get a steady-state distribution of wealth is for behavioral responses (and general equilibrium effects) to lead to a point where saving rates as a proportion of wealth become identical throughout the distribution. Any deviation from this situation leads to a degenerate steady state in the long run. In contrast, here, the distribution eventually stabilizes because of the mechanical interaction between mobility and drift, so the model remains well-behaved for a wider range of economic behaviors.
Distinction Between Drift and Mobility
This framework shows how drift and mobility have different impacts on the distribution and why that distinction matters. In a model without mobility, the orange line in Figure 3 would be flat. The only way to account for inequalities of wealth that are neither increasing nor decreasing at a constant rate is to assume that the intercept µ_t(w) changes with each period (which usually implies some form of behavioral change). In contrast, once we introduce mobility, it becomes possible to account for any linear downward-sloping evolution of inequality in the phase portrait of Figure 3 with a much more parsimonious model that only involves two time-invariant parameters: one for drift and one for mobility.
Other Processes Affecting the Wealth Distribution
I account for other phenomena impacting the wealth distribution besides drift and diffusion (which capture individual income and consumption). The three phenomena that I consider in this paper are birth and death, inheritance, and assortative mating. I first introduce these effects in equation (2) and then give a version of equation (3) that includes them as well.
Births and Deaths
At any time t, a fraction δ t of people die. Let g t be the density of their wealth. Simultaneously, a fraction β t of people appears with a random initial endowment drawn from a distribution with density h. The total population grows at a rate n t = β tδ t . This process impacts ∂ t f t (w), i.e., the change in the wealth density at wealth w and time t, by:
ζ_t(w) ≡ β_t h(w) − δ_t g_t(w) − n_t f_t(w)

where β_t h(w) is the injection of newly born individuals, δ_t g_t(w) captures deaths, and n_t f_t(w) is the normalization by population growth. Equation (2) then becomes:

∂_t f_t(w) = −∂_w[ µ_t(w) f_t(w) ] + (1/2) ∂²_w[ σ²_t(w) f_t(w) ] + ζ_t(w)

where the first two terms capture income and consumption and ζ_t(w) captures demography.

Inheritance I model inheritance as a jump process. With probability π_t(w), people see their wealth jump from w to w + λ, where λ is the amount of inheritance received, net of taxes.
Let s_t(λ|w) be the density of the value of the inheritance, conditional on the value of wealth, and conditional on receiving an inheritance. We can model the jump process as a death with rate π_t(w) and, for each inheritance size λ, as an injection with rate π_t(w − λ) f_t(w − λ) s_t(λ | w − λ) dλ. So, the effect of inheritance on ∂_t f_t(w), i.e., the change in the wealth density at wealth w and time t, is:

ξ_t(w) ≡ ∫ π_t(w − λ) f_t(w − λ) s_t(λ | w − λ) dλ − π_t(w) f_t(w)
and equation (2) with both demography and inheritance is:

∂_t f_t(w) = −∂_w[ µ_t(w) f_t(w) ] + (1/2) ∂²_w[ σ²_t(w) f_t(w) ] + ζ_t(w) + ξ_t(w)

where the first two terms capture income and consumption, ζ_t(w) captures demography, and ξ_t(w) captures inheritance.

Marriages, Divorces and Assortative Mating Finally, I account for the effect of marriages and divorces. I adopt the convention that wealth is split equally among spouses. At any time t, a fraction θ_t of people get married. Let p_t(w) be the density of their wealth as individuals, and q_t(w) be the density of their wealth as couples. The strength of assortative mating is captured by the relationship between p_t and q_t: at the limit, if people always choose a spouse with identical wealth, then p_t(w) = 2q_t(2w). The effect of marriages on the wealth distribution is therefore 2θ_t q_t(2w) − θ_t p_t(w). We can construct an opposite process for divorces. Let χ_t(w) be the combined effect of marriages and divorces on ∂_t f_t(w). Then equation (2) with the effects of demography, inheritance, marriages, and divorces becomes:
∂_t f_t(w) = −∂_w[ µ_t(w) f_t(w) ] + (1/2) ∂²_w[ σ²_t(w) f_t(w) ] + ζ_t(w) + ξ_t(w) + χ_t(w)    (4)

where the terms capture, in order, income and consumption, demography, inheritance, and marriages and divorces.

Integrated Version Rewrite equation (4) in an integrated form, similar to (3). Define
F_t(w) = ∫_{−∞}^{w} f_t(s) ds, Z_t(w) = ∫_{−∞}^{w} ζ_t(s) ds, Ξ_t(w) = ∫_{−∞}^{w} ξ_t(s) ds and X_t(w) = ∫_{−∞}^{w} χ_t(s) ds. After integrating equation (4) and re-arranging terms, we get:

−∂_t F_t(w)/f_t(w) + Z_t(w)/f_t(w) + Ξ_t(w)/f_t(w) + X_t(w)/f_t(w) = µ_t(w) − (1/2) ∂_w σ²_t(w) − (1/2) σ²_t(w) ∂_w f_t(w)/f_t(w)    (5)
This equation is similar to (3), with additional correction terms on the left-hand side to account for demography, inheritance, and assortative mating. The fundamental dynamics of wealth inequality remain similar to those pictured in the phase portrait in Figure 3, except that the y-axis needs to be adapted to include the correction terms. This equation will serve as the basis for the decomposition introduced in this paper.
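As an illustration, the inheritance correction ξ_t(w) can be evaluated numerically on a wealth grid. In the sketch below, the inheritance probability π_t(w) and the conditional size density s_t(λ|w) are simple placeholders, not the estimates used later in the paper (which simulate inheritance directly in the microdata).

```python
import numpy as np

def inheritance_term(w, f, pi, s):
    """xi_t(w) = ∫ pi(w-l) f(w-l) s(l | w-l) dl  -  pi(w) f(w), on a uniform grid.
    s[j, i] is the density of an inheritance of size w[j], conditional on wealth w[i]."""
    dw = w[1] - w[0]
    n = len(w)
    inflow = np.zeros(n)
    for k in range(n):               # target wealth level w[k]
        for j in range(k + 1):       # inheritance size w[j]; donor wealth index k - j
            i = k - j
            inflow[k] += pi[i] * f[i] * s[j, i] * dw
    return inflow - pi * f

# Illustrative setup (placeholders, not the paper's estimates):
w = np.linspace(0.0, 50.0, 251)
dw = w[1] - w[0]
f = np.exp(-w / 5.0); f /= f.sum() * dw               # exponential wealth density
pi = np.full_like(w, 0.01)                            # 1% chance of inheriting per year
s = np.tile(np.exp(-w / 3.0), (len(w), 1)).T          # size density, same for all wealth
s /= s.sum(axis=0, keepdims=True) * dw                # normalize conditional densities
xi = inheritance_term(w, f, pi, s)
print("mass check (≈ 0 up to truncation at the top of the grid):", xi.sum() * dw)
```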
Decomposition of the Different Effects
I will use equation ( 5) to decompose the various factors affecting the distribution of wealth. All the terms in the equation can be directly observed or separately estimated, except those related to consumption. Therefore, these unobserved parameters have to be estimated as a type of "residual." This section explains how.
Estimating Equation for the Complete Model Let us go back to equation (5). Separate the drift µ_t(w) and the diffusion σ²_t(w) into their observed component (income) and their unobserved component (consumption):

µ_t(w) = μ̃_t(w) − c_t(w), where µ_t(w) is the total drift (the average wealth change), μ̃_t(w) is the mean income conditional on wealth (observed), and c_t(w) is the mean consumption (unobserved);

σ²_t(w) = σ̃²_t(w) + γ²_t(w), where σ²_t(w) is the total mobility (the variance of wealth changes), σ̃²_t(w) is the variance of income (observed), and γ²_t(w) is the variance of consumption (unobserved).
Move the observed components of drift and mobility to the left-hand side of equation ( 5). The complete equation for the left-hand side becomes:
LHS_t(w) ≡ −∂_t F_t(w)/f_t(w) + Z_t(w)/f_t(w) + Ξ_t(w)/f_t(w) + X_t(w)/f_t(w) − μ̃_t(w) + (1/2) ∂_w σ̃²_t(w) + (1/2) σ̃²_t(w) ∂_w f_t(w)/f_t(w)

where the first term captures the change in inequality, the next three terms the corrections for demography, inheritance, and marriages and divorces, and the last three terms the drift and mobility induced by income.
All the components of LHS t (w) are either directly observable or separately estimable. We can therefore rewrite (5) in its final form:
LHS_t(w) = [ c_t(w) − (1/2) ∂_w γ²_t(w) ] − (1/2) γ²_t(w) × ∂_w f_t(w)/f_t(w)    (6)

where the bracketed term is the intercept and −(1/2)γ²_t(w) is the slope of the linear relationship.
Identification Equation ( 6) provides the basis for estimating the parameters. It shows a relationship similar to that shown in Figure 3. We require three assumptions to get point estimates of the mean {c t (w)} w∈R and the variance {γ 2 t (w)} w∈R of consumption conditional on wealth over a period [t 0 , t 1 ].
Assumption 1. For all w, we can observe (or separately estimate) LHS t (w) and
∂ w f t (w)/ f t (w) at k distinct times t k ∈ [t 0 , t 1 ] with k ≥ 2.
Assumption 2. For all w, the parameters c t (w) and γ 2 t (w) are constant over time, i.e., c t (w) ≡ c(w) and γ 2 t (w) ≡ γ 2 (w) for t ∈ [t 0 , t 1 ].
Assumption 3. The wealth distribution is changing over time, in the sense that, for all w, we observe the distribution for at least two periods r, s ∈ [t_0, t_1] such that ∂_w f_r(w)/f_r(w) ≠ ∂_w f_s(w)/f_s(w).
These three assumptions ensure we can meaningfully fit a line through the different points as in Figure 3. Assumption 1 states that we need to observe the wealth distribution and its evolution during two different periods, a consequence of the fact that we need at least two points to be able to fit a line. In practice, and in the presence of statistical noise, more than two points are preferable to get robust estimates. Assumption 2 ensures that the parameters that govern the relationship (6) remain stable over the period of time under consideration.
Finally, Assumption 3 states that the distribution of wealth must change over the period where we observe it. This assumption comes from the fact that we are using local variations in the flatness of the density to disentangle the drift from mobility. This strategy only works if the flatness does, in fact, vary. If these three assumptions are satisfied, then we can proceed with the estimation in two steps:
Step 1 Estimate c * t (w) = c t (w) -1 2 ∂ w γ 2 t (w) and γ 2 t (w) for every w using a line-fitting method.
Step 2 Using the estimate for γ 2 t , estimate the mobility gradient ∂ w γ 2 t , and use it to get the estimate of c t (w) = c * t (w) + 1 2 ∂ w γ 2 t (w).
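These two steps can be implemented with a line fit per wealth bin, as shown below. The paper uses Deming regressions to allow for errors on both sides of equation (6); ordinary least squares is used here only to keep the sketch short, and the input arrays are synthetic placeholders built from known parameters so the recovery can be checked.

```python
import numpy as np

def estimate_consumption(lhs, slope_density, w_grid):
    """Step 1: for each wealth bin, regress LHS_t(w) on the density slope
    d/dw f_t(w) / f_t(w) across years, giving an intercept c*(w) and a slope
    -0.5 * gamma^2(w).  Step 2: recover c(w) = c*(w) + 0.5 * d/dw gamma^2(w).

    lhs, slope_density : arrays of shape (n_years, n_bins)
    w_grid             : array of shape (n_bins,)
    """
    n_years, n_bins = lhs.shape
    c_star = np.empty(n_bins)
    gamma2 = np.empty(n_bins)
    for b in range(n_bins):
        X = np.column_stack([np.ones(n_years), slope_density[:, b]])
        beta = np.linalg.lstsq(X, lhs[:, b], rcond=None)[0]
        c_star[b] = beta[0]                  # intercept: c(w) - 0.5 * d gamma^2 / dw
        gamma2[b] = -2.0 * beta[1]           # slope is -0.5 * gamma^2(w)
    dgamma2_dw = np.gradient(gamma2, w_grid)
    c = c_star + 0.5 * dgamma2_dw            # Step 2
    return c, gamma2

# Tiny synthetic example with known parameters (purely illustrative):
rng = np.random.default_rng(0)
w_grid = np.linspace(1.0, 10.0, 10)
true_c, true_g2 = 0.05 * w_grid, 0.02 * w_grid**2
slopes = -1.5 / w_grid + 0.3 * rng.standard_normal((30, 10)) / w_grid
lhs = (true_c - 0.5 * np.gradient(true_g2, w_grid)) - 0.5 * true_g2 * slopes
c_hat, g2_hat = estimate_consumption(lhs, slopes, w_grid)
# Recovers the true profiles up to discretization error at the grid edges.
print(np.round(c_hat - true_c, 3))
print(np.round(g2_hat - true_g2, 3))
```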
Ex-ante Estimate of the Effect's Magnitude I use variations in the flatness of the density over time to disentangle the effect of drift from that of mobility. But are these variations large enough to provide reliable empirical estimates? Some rough but simple calculations can give a general idea of the magnitude of the effects at play. Between its high point in the 1970s and today, the Pareto coefficient of wealth in the upper tail went from α ≈ 2 to α ≈ 1.5. Assume a mobility parameter σ² ≈ 0.16w² at the top, which matches the estimates in this paper (and separate estimates from the SCF and the PSID). The effect of mobility under these conditions is equal to (1/2) × 0.16 × (1 + α) × w. A useful way to interpret this number is to say that mobility has the same effect on the distribution as an average wealth growth of (1/2) × 0.16 × (1 + α), which went from (1/2) × 0.16 × (1 + 2) = 24% to (1/2) × 0.16 × (1 + 1.5) = 20% of wealth, a 4 percentage point change.
The change in the effect of mobility that is attributable to the flattening of the wealth density at the top is, therefore, sizable -equivalent to the mechanical effect that a permanent 4% wealth tax would have.
Potential Limitations
The main limitation of the method is that it requires the parameters for the drift c(w) and the mobility γ 2 (w) induced by consumption to be constant (or at least not to have a downward or upward trend) over sufficiently long periods. Estimates may be biased if this is not the case. For example, assume that the drift decreases over time. On the phase portrait in Figure 3, the linear relationship (in orange) moves down over time. As a result, the observed data points on the phase portrait will lie on a curve that could take many different shapes, but, in general, will appear more downward-sloping than the true relationship.
If we were to estimate the decomposition in that context, we would therefore overestimate the diffusion parameter.
In practice, there are two ways to address this issue. The first one is to check that the empirical phase portrait to which we fit the line remains relatively close to linearity. While this does not guarantee that the assumption of a constant c(w) and γ 2 (w) is verified, it can identify some problematic cases. The second one is to verify that the parameters estimated from the decomposition are consistent with external estimates of savings and mobility. I perform both checks in this paper's empirical section, suggesting that the simple model with a constant c(w) and γ 2 (w) since the 1980s works well. Income and Wealth For income and wealth, I rely on the Distributional National Accounts (DINA) tax-based microdata from Saez and Zucman (2020a). These data distribute the entirety of national income and wealth, as measured by the national accounts, every year since 1962, to adult individuals (20 and older). They infer the distribution of wealth from capital income flows, following the capitalization method of Saez and Zucman (2016). 19 This data does not distribute capital gains since they are not part of national income. I incorporate them in the data by assuming a constant capital gains rate by asset class, as in Robbins (2018). The information on age in the DINA microdata is also limited, so I replace it with information from the Survey of Consumer Finances (SCF). I match DINA and SCF observations one-to-one based on their rank in the wealth distribution. I use this to estimate a rank in the age distribution by sex for each observation and attribute to them the age that matches this rank according to official demographic data. (Haines, 1998). To make projections for the future, I rely on the forecast (medium variant) of the World Population Prospects (United Nations, 2019). For male fertility rates, which are not a standard demographic parameter, I combine the female fertility rates with the joint age distribution of mixed-sex couples in the IPUMS census microdata (Ruggles et al., 2022).
Estimation for the United States
Demography and Intergenerational Linkages
First, I use this data to simulate deaths, assuming that people die at random according to their age and sex-specific mortality rate. I also account for births by assuming that people enter the population at age 20 with a constant exogenous wealth distribution estimated from the data. 20,21 Second, I simulate intergenerational linkages. For every person in the data, I assume this person had children according to their sex, age, year, and birth-order-specific fertility rates.
Then I assume these children experience mortality in line with their sex, year, and age-specific mortality rates. This methodology generates a distribution for the age and sex of the direct descendants of every person in the sample, which I use to distribute inheritances.
Inheritance and Estate Taxation When someone dies, I assume that their wealth is transmitted to their spouse (if any), without estate tax, or to their children, after payment of the estate tax. To calculate the estate tax, I collect complete statutory schedules of the federal estate tax over the second half of the 20th century. I assume that wealth is split equally among descendants, as is the norm in the United States [START_REF] Menchik | Primogeniture, Equal Sharing, and the U.S. Distribution of Wealth[END_REF]. I account for the possibility that wealthier people are more likely to inherit and receive larger inheritances. I use the SCF to estimate the relative probability of receiving an inheritance, as well as the rank in the inheritance distribution, as a function of the rank in the wealth distribution, conditional on age. Within a given age group, I simulate intergenerational wealth transmission according to these parameters.
Marriages, Divorces and Assortative Mating I collect data on the aggregate rate of divorce and marriage from the National Vital Statistics System (NVSS) (Center for Disease Control, 2022). In addition, I reconstruct age and sex-specific rates using population data disaggregated by marital status from IPUMS census microdata (Ruggles et al., 2022).
I determine the extent of assortative mating by estimating the joint distribution of the ranks in the wealth distribution at the time of marriage using the Survey of Income and Program Participation (SIPP) panel (2013)(2014)(2015)(2016). I also consider the impact of assortative mating on divorce by estimating the distribution of the share of wealth owned by each couple member before they get divorced using the same data.
Then, I simulate the process of marriage and divorce by randomly selecting people to get wedded in a given year according to age and sex-specific marriage rates, and then marry them to one another to reproduce the joint distribution of wealth ranks at marriage in the SIPP. Similarly, I simulate the effect of divorce by randomly selecting people to get separated according to their age and sex-specific divorce rates, and split the couple's wealth among both spouses according to the SIPP data.
Estimation
Wealth Distribution First, I normalize the distribution of wealth by the average national income per adult. Then, I transform it using the inverse hyperbolic sine function (asinh). 22This transformation makes the distribution of wealth easier to manipulate empirically. And because we operate in a continuous time framework, it creates no practical difficulty: Itô's lemma establishes a direct correspondence between the parameters of the process for w i t and for asinh(w i t ).23 I select a range of values (from -1 to 2000 times average income) which the wealth microdata consistently covers over the entire period.24 I divide this range into bins of equal size, each representing 0.1 units of asinh(wealth). 25 Figure 4a plots the density of wealth, as estimated by the frequency of these bins. I display two years: 1978, which has the lowest inequality, and 2019, which has the highest inequality and is also the most recent data available.
I use the logarithm of the density, so changes in the top tail of the wealth distribution are more clearly visible. It appears linear in the top tail, which follows from the fact that large wealth holdings follow a power law. 26 Note that ∂_w log f_t(w) = ∂_w f_t(w)/f_t(w), i.e., the derivative of the logarithm of the density is equal to the quantity of interest on the right-hand side of equation (6). Hence, our interest lies in the slopes of the lines, which I estimate by running locally weighted linear regressions through the values of Figure 4a. 27 The top tail has driven most of the changes in the wealth distribution, and indeed this is where we observe most of the variation. In 1978, when inequality was at its lowest, the density at the top was quite steep, with a slope around -2. By 2019, inequality had increased dramatically, leading to a fatter tail and a flatter density, with a slope around -1.5.
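These slope estimates can be reproduced from microdata with a few lines: transform wealth by asinh, histogram it into 0.1-unit bins, and run a locally weighted regression of the log frequency on the bin midpoints. The snippet below sketches this on synthetic Pareto-tailed data; the kernel weighting and bandwidth are illustrative choices, not the exact specification used in the paper.

```python
import numpy as np

def log_density_slope(wealth, weights, bin_width=0.1, bandwidth=0.5):
    """Slope of log f(asinh wealth), estimated by locally weighted least squares."""
    x = np.arcsinh(wealth)
    edges = np.arange(x.min(), x.max() + bin_width, bin_width)
    hist, _ = np.histogram(x, bins=edges, weights=weights)
    mids = 0.5 * (edges[:-1] + edges[1:])
    keep = hist > 0
    mids, logf = mids[keep], np.log(hist[keep] / (weights.sum() * bin_width))
    slopes = np.empty_like(mids)
    for i, m in enumerate(mids):
        wgt = np.sqrt(np.exp(-0.5 * ((mids - m) / bandwidth) ** 2))  # Gaussian kernel
        X = np.column_stack([np.ones_like(mids), mids - m])
        beta = np.linalg.lstsq(X * wgt[:, None], logf * wgt, rcond=None)[0]
        slopes[i] = beta[1]                                          # local slope at m
    return mids, slopes

# Synthetic Pareto-tailed wealth data (illustrative only):
rng = np.random.default_rng(0)
wealth = (rng.pareto(1.7, 200_000) + 1.0) * 0.5
mids, slopes = log_density_slope(wealth, np.ones_like(wealth))
print("slope of log-density near the top of the grid:", round(slopes[int(0.9 * len(slopes))], 2))
```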
Next, I estimate −∂_t F_t(w)/f_t(w), which is the relevant measure for the evolution of the wealth distribution as it appears on the left-hand side of equation (6). Let us begin with log(1 − F_t(w)), the logarithm of the numerator (see Figure 4b for the bin corresponding to 500 times the average income).
The raw data shows two clear trends, one on each side of the year 1978, which correspond to the decreasing part and the increasing part of the U-shaped evolution of wealth inequality. I use a parametric approximation to filter out the short-run variations around these trends (which are not my focus here). Using nonlinear least squares, I fit a logistic growth model for each bin, separately on each side of the year 1978. 28,29 From this parametric approximation, I estimate the time derivative of log(1 -F t (w)). Finally, I retrieve the quantity of interest using the fact
that ∂_t F_t(w) = −(1 − F_t(w)) ∂_t log(1 − F_t(w)).
28 The logistic curve can be written as L : t → L(t) = x_∞ / (1 + (x_∞/x_0 − 1) exp(−ρt)), where (x_0, x_∞, ρ) are the three parameters which capture, respectively, the initial value, the asymptotic value, and the rate of convergence.

29 This model is attractive for three reasons: (i) empirically, it fits the data well; (ii) it captures the features that we expect, i.e., that of a process that grows at first and then settles to a steady state; and (iii) if we locally approximate the distribution with an exponential distribution with rate parameter λ_t, then equation (3) collapses to a logistic differential equation for λ_t, whose solution is the logistic curve. Note that this approximation can only be valid locally and does not provide a global solution to the partial differential equation (3).
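This trend filtering can be reproduced with a standard nonlinear least-squares routine: fit the logistic curve from footnote 28 to the series log(1 − F_t(w)) for a given bin and differentiate the fitted curve to get its time derivative. The sketch below uses scipy's curve_fit on synthetic placeholder data; the parameter values and starting points are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, x0, x_inf, rho):
    """Logistic growth curve: L(0) = x0, L(inf) = x_inf, convergence rate rho."""
    return x_inf / (1.0 + (x_inf / x0 - 1.0) * np.exp(-rho * t))

def fit_trend(years, series, t_ref):
    """Fit the logistic curve to `series` (e.g., log(1 - F_t(w)) for one bin)
    and return the fitted values and their time derivative."""
    t = years - t_ref
    p, _ = curve_fit(logistic, t, series, p0=(series[0], series[-1], 0.1), maxfev=10_000)
    fitted = logistic(t, *p)
    dt = 1e-3
    deriv = (logistic(t + dt, *p) - logistic(t - dt, *p)) / (2 * dt)
    return fitted, deriv

# Synthetic example for the post-1978 period (placeholder numbers):
years = np.arange(1978, 2020)
true = logistic(years - 1978, -6.0, -4.5, 0.08)
noisy = true + 0.05 * np.random.default_rng(0).standard_normal(len(years))
fitted, deriv = fit_trend(years, noisy, t_ref=1978)
print("fitted change in log(1 - F) per year in 2019:", round(deriv[-1], 4))
```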
Other Processes I separately estimate the effects of income, demography, inheritance, marriage, and divorce, using the microdata and, when needed, microsimulations based on the microdata. For income, I directly calculate the mean and the variance within each wealth bin and use this to estimate the drift and the mobility induced by income. For demography, inheritance, marriage, and divorce, I simulate the processes in the microdata and use the difference between the CDFs before and after simulation to estimate the effects.
Estimation of Drift and Mobility
Having estimated all the observable components of equation (6), we can plot the empirical counterpart to the phase portrait in Figure 3, for every wealth bin. The result, for each year and for the bin corresponding to a wealth level of 500 times the average income, is shown in Figure 5a. Due to the inverse hyperbolic sine transform of wealth and to the inclusion of effects besides drift and mobility, the interpretation of the parameters is not as straightforward as in Figure 3. But the fundamental linear relationship should hold.
Two facts stand out. First, there has been a structural break between 1962-1977 and 1978-2019. Indeed, it is impossible to account for wealth's evolution during both periods by assuming the same linear relationship on the phase portrait: the underlying accumulation process (i.e., the parameters of the propensity to consume) must have changed. Second, within each of these periods, the relationship between the left-hand side and the right-hand side of equation ( 6) is indeed linear. Therefore, a parsimonious model, with a constant mean and variance of consumption by wealth, can account for the trajectory of wealth since 1978, and, separately, for the trajectory between 1962 and 1977. If we focus, for example, on the post-1978 period, we can see the dynamics described in Figure 3 at play. We start in 1978, with a low but rapidly increasing inequality level. But as inequality goes up, the pace at which it increases slows down progressively.
Note that, while there is unmistakable evidence that the intercept of the linear relationship (which captures drift) has changed between periods, there is no clear sign that the slope (which captures mobility) is different. 30 In light of this, and to get more robust estimates, I assume the same mobility parameter over both periods and only let the drift vary. I apply the same model within all wealth bins: for each of them, I fit two linear relationships with the same slope. I use [START_REF] Deming | Statistical adjustment of data[END_REF] regressions to account for the presence of error terms on both sides of equation (6).
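Deming regression has a simple closed form once the ratio of error variances δ = var(y errors)/var(x errors) is specified. A minimal sketch is shown below, using δ = 1 (orthogonal regression) and synthetic data; the paper's exact implementation and error-variance choices are those described in Appendix C.1, not this simplified version.

```python
import numpy as np

def deming(x, y, delta=1.0):
    """Deming regression of y on x with error-variance ratio delta.
    Returns (intercept, slope). With delta = 1 this is orthogonal regression."""
    x_bar, y_bar = x.mean(), y.mean()
    s_xx = ((x - x_bar) ** 2).mean()
    s_yy = ((y - y_bar) ** 2).mean()
    s_xy = ((x - x_bar) * (y - y_bar)).mean()
    slope = (s_yy - delta * s_xx +
             np.sqrt((s_yy - delta * s_xx) ** 2 + 4 * delta * s_xy ** 2)) / (2 * s_xy)
    return y_bar - slope * x_bar, slope

# Example: noisy versions of both sides of equation (6) (synthetic numbers):
rng = np.random.default_rng(0)
x_true = np.linspace(-2.0, -1.5, 40)            # density slopes over the years
y_true = -0.10 - 0.08 * x_true                  # LHS implied by a linear relationship
x_obs = x_true + 0.02 * rng.standard_normal(40)
y_obs = y_true + 0.02 * rng.standard_normal(40)
intercept, slope = deming(x_obs, y_obs)
print("intercept:", round(intercept, 3), "slope:", round(slope, 3))
```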
Appendix C.1 provides details of the procedure, alongside robustness checks. I extract the coefficients from these regressions and transform them so that they can be interpreted in terms of the mean and variance of consumption (see Step 2 in Section 3.3, as well as Appendix C.3 for additional adjustments to account for the inverse hyperbolic sine transform of wealth). This finally allows me to plot Figure 5b, the profile of the mean and the variance of consumption by wealth. This figure also displays 95% confidence intervals, calculated using a bootstrap procedure, which accounts for the presence of error terms on both sides of equation (6), as well as autocorrelations across years and wealth bins, described in Appendix C.2.

[Figure 5a. Source: Author's estimations. Note: This scatter plot shows the empirical counterpart to the phase portrait described by equation (6), for a level of wealth that corresponds to 500 times the average national income (around $40m in 2022). The two fitted lines, for 1962-1977 and 1978-2019, correspond to the relationship defined by equation (6). See main text and Appendix C for details on the estimation procedure.]
Several findings emerge from Figure 5b. First, the variance of consumption is large, which implies a significant role for mobility in the wealth distribution. Second, on average, people consume a significant fraction of their wealth, even at the top, and even in periods of increasing wealth inequality. This matters, in particular, for our understanding of the wealth distribution in the steady state. Significant consumption levels at the top, in general exceeding income, create a tendency for large wealth holdings to revert toward the mean. In the long run, this reversion toward the mean counterbalances mobility's effect, making it possible for a steady-state distribution to emerge. 31

31 The presence of demographic effects also contributes to the existence of a nondegenerate steady state.

We can characterize each model of wealth accumulation by its drift and its mobility, ignoring the other, less important phenomena (mobility gradient, demography, etc.). Figure 6 summarizes the situation. The various models of the literature can be schematically represented
on a two-dimensional plane, where the x-axis corresponds to the amount of drift (µ), and the y-axis corresponds to the amount of mobility (σ 2 ). In this representation, all models with a nondegenerate steady-state lie on the top left quadrant, pictured in Figure 6. The bottom half is not meaningful because it implies negative mobility; the top right quadrant implies an infinite steady-state inequality because there is no reversion towards the mean at the top. 32 What wealth distribution is implied by the different points? At the steady-state, the derivative of the wealth distribution with respect to time is zero, and therefore equation (3) becomes:
0 = µ(w) − (1/2) σ²(w) ∂_w f(w)/f(w)    (7)
This equation characterizes a set of straight diagonal lines for every distribution of wealth, passing through the origin of the plane. Each of them is an "isoinequality" line, defining the set of parameter values that lead to the same steady-state distribution of wealth. These isoinequality lines indicate that models can attain any long-run inequality level, either using a high-savings/low mobility regime (points close to the origin) or using a low-savings/high mobility regime (points far away from the origin).
The set of lines that roughly match the inequality levels typically seen in the United States is colored in orange in Figure 6. Combinations of parameters above this line correspond to higher inequality; combinations that lie below, to lower inequality. Importantly, models on the same isoinequality line still differ when it comes to dynamics. High-savings/low-mobility regimes feature slow transitions between steady states, while the opposite holds for low-savings/high-mobility regimes. 33

32 The drift term µ is normalized by the economy's growth rate, so it is still possible to have a nondegenerate steady state if people at the top experience positive wealth growth on average, as long as that growth remains below the economy's growth rate. Demography and the mobility gradient are other phenomena that make it possible to sustain a steady state with positive drift at the top. In any case, it remains true that the emergence of a steady state requires limited wealth growth at the top.

33 Figure 3 can demonstrate this. Increasing the slope of the line while keeping the same point of intersection with the x-axis leads to the same steady state but with higher derivatives of the distribution with respect to time (on the y-axis), and therefore faster transitions.

We can now study where the different models in the literature stand and compare them to this paper's estimate. Start from the [START_REF] Aiyagari | Uninsured Idiosyncratic Risk and Aggregate Saving[END_REF] and similar [START_REF] Bewley | The permanent income hypothesis: A theoretical formulation[END_REF] models (item 1, Figure 6). These models notoriously underestimate inequality for two reasons. First, people in these models accumulate wealth only for precautionary or consumption-smoothing motives, so they have no reason to accumulate the type of large wealth holdings we observe in practice. Second, because everyone earns the same rate of return, mobility at the top is only the result of labor income shocks. Since labor income is small compared to wealth at the top of the distribution, there is limited mobility as well. These facts put these models squarely in the bottom left corner of Figure 6. To fix this problem, a second set of models (item 2, Figure 6) introduced additional saving motives [START_REF] Carroll | Why Do the Rich Save So Much? Tech[END_REF][START_REF] Nardi | Wealth Inequality and Intergenerational Links[END_REF], such as a taste for wealth, or for bequests. These models manage to match observed inequality levels by increasing savings at the top but do not fundamentally change the extent of wealth mobility. This puts them within the area of realistic steady-state inequality levels (the orange diagonal) by moving them to the right of the [START_REF] Aiyagari | Uninsured Idiosyncratic Risk and Aggregate Saving[END_REF] models, in the bottom right corner. A third set of models (item 3, Figure 6) also introduces idiosyncratic stochastic returns [START_REF] Quadrini | Entrepreneurship, Saving, and Social Mobility[END_REF][START_REF] Cagetti | Entrepreneurship, Frictions, and Wealth[END_REF][START_REF] Benhabib | The Distribution of Wealth and Fiscal Policy in Economies With Finitely Lived Agents[END_REF]. This increases mobility at the top because people with the same initial wealth may now move up or down the distribution depending on whether they get high or low returns. That being said, mobility remains quite limited because it is only the result of heterogeneous labor and capital income.
Conditional on wealth, however, consumption remains essentially homogeneous because of consumption smoothing. This limited amount of mobility implies slow dynamics, as was identified by [START_REF] Gabaix | The Dynamics of Inequality[END_REF] for income inequality. This paper (item 4, Figure 6) finds that to match the dynamics of inequality that we observe, we need even higher mobility (and consequently lower savings). 34 We can also relate the synthetic savings method (Saez and Zucman, 2016; [START_REF] Kuhn | Income and Wealth Inequality in America, 1949-2016[END_REF][START_REF] Garbinti | Accounting for Wealth-Inequality Dynamics: Methods, Estimates, and Simulations for France[END_REF]) to this paper. In equation (3), define the synthetic saving as μ̄_t(w) ≡ µ_t(w) − (1/2) σ²_t(w) ∂_w f_t(w)/f_t(w), and then apply the change of variable w = Q_t(p), where 0 < p < 1 is a fractile and Q_t = F_t^{-1} is the quantile function. We get:
∂_t Q_t(p) = μ̄_t(Q_t(p))

which indeed corresponds to the traditional definition of synthetic savings. Note, however, that the definition of μ̄_t(w) depends on the distribution of wealth, so a more accurate formula would be ∂_t Q_t(p) = μ̄_t(Q_t(p), ∂_p Q_t(p)). Synthetic saving rate methods can either choose to ignore the dependency on ∂_p Q_t(p), or explicitly eliminate it by setting σ_t(w) = 0.
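The relation above implies that synthetic saving rates combine a drift part with a distribution-dependent mobility part. The sketch below computes and decomposes them for a Pareto example; the drift and mobility profiles are illustrative assumptions, not the paper's estimates.

```python
import numpy as np

def synthetic_saving(p, alpha, w_min, mu_fn, sigma2_fn):
    """Synthetic saving at fractile p for a Pareto(alpha, w_min) wealth distribution:
    mu_bar(w) = mu(w) - 0.5 * sigma2(w) * f'(w)/f(w), evaluated at w = Q(p)."""
    q = w_min * (1.0 - p) ** (-1.0 / alpha)         # Pareto quantile function
    slope = -(alpha + 1.0) / q                      # f'(w)/f(w) for a Pareto density
    drift_part = mu_fn(q)
    mobility_part = -0.5 * sigma2_fn(q) * slope
    return q, drift_part, mobility_part

# Illustrative parameters (not the paper's estimates):
mu_fn = lambda w: -0.02 * w                         # average wealth decline at the top
sigma2_fn = lambda w: 0.16 * w ** 2                 # mobility rising with wealth
for p in (0.90, 0.99, 0.999):
    q, drift, mob = synthetic_saving(p, alpha=1.5, w_min=1.0, mu_fn=mu_fn, sigma2_fn=sigma2_fn)
    print(f"p={p:.3f}  Q(p)={q:8.1f}  drift={drift:8.2f}  mobility={mob:8.2f}  synthetic={drift + mob:8.2f}")
```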
Validation
We can assess the validity and consistency of the model in two different ways. First, we can look at its internal consistency. (If we simulate the evolution of the wealth distribution using the estimated parameters, do we reproduce the observed data?) Second, we can look at its external consistency. (Are the estimated parameters consistent with external observations?) In this section, I address both questions.
Replication of Observed Wealth Inequality Dynamics
Starting from the distribution of wealth in 1962, and assuming that all the factors which affect the wealth distribution remain at their observed value, I can use the mean and variance of the propensity to consume estimated from the model to simulate the evolution of the wealth distribution. An elementary requirement for the general validity of the approach is that the evolution of the simulated wealth distribution matches the one observed in reality.

[Table 1. Source: Author's calculations using the Survey of Consumer Finances (SCF) and the Panel Study of Income Dynamics (PSID). Notes: All numbers are yearly values: data from the SCF and the PSID, which are calculated between several years, are rescaled by a factor ∆t (for the means) and √∆t (for the standard deviation), where ∆t is the number of years, to make estimates comparable. I winsorize the bottom and the top 2.5% of survey-based consumption estimates to limit the impact of measurement error. For the model, values may differ from Table 2 because they are population-weighted, rather than wealth-weighted.]

Consumption in the panel surveys can, in principle, be estimated as the difference between income and the variation of wealth between two consecutive interviews. In practice, doing so involves considerable difficulties, and estimates based on that approach only exist to give rough orders of magnitude. 36 Table 1 nonetheless provides the mean and variance of consumption in the surveys, as a fraction of wealth, for three brackets, estimated so as to be as comparable as possible to the model's parameters. Overall, we observe broadly similar numbers. In particular, this exercise confirms our two notable findings: significant consumption levels (including at the top) and important mobility throughout the distribution. One of the largest discrepancies between the model and the surveys concerns the mean consumption of the top 1%, which is twice as high in the SCF as according to the model. This difference could be explained by the fact that the survey was conducted over the Great Recession and that the consumption estimate in the SCF might be polluted by asset price declines.
Wealth Mobility Another way to check the external validity of my approach is to compare the mobility implied by the model with the mobility we find in the survey data. This approach is less detailed, but more robust than attempts to estimate consumption, because it does not require information on income. Figure 8 compares wealth mobility in the model with the SCF and the PSID. On the x-axis, I group observations according to their wealth rank, and on the y-axis, I plot the distribution of the wealth ranks for each group in the following survey wave, using the median rank and the interquartile range. Then I estimate comparable quantities using the model. For the SCF (Figure 8a), I estimate mobility over two years to match the frequency of the survey and show results up to the top 0.1%. For the PSID, the interval is five years, and given the smaller sample size and lack of oversampling at the top, I only go up to the top 5%.
Once again, the model is broadly consistent with the panel survey data. There is a fair amount of persistence in the wealth rank over time. On average, an observation in wave n + 1 remains close to its rank in wave n. But there is noticeable variability around this central tendency, which shows that there is still significant movement in the wealth distribution over time. The magnitude of this variability, as shown by the interquartile ranges in Figure 8, is similar in the data and the model.
The Drivers of Wealth Inequality
Decomposition of Wealth Growth
I can use equation (6) of the model to get a straightforward decomposition of the growth of any part of the wealth distribution. This decomposition is similar to that of Gomez (2023), with the difference being that it is estimated directly from comprehensive wealth data in the United States, and accounts for more factors. To understand the decomposition, define the p-th quantile of wealth Q_t(p) by F_t(Q_t(p)) = p. Taking the derivative of this expression with respect to time, and letting w = Q_t(p), we get that ∂_t Q_t(p) = −∂_t F_t(w)/f_t(w). Therefore, the left-hand side of equation (6) is equal to the variation of the p-th quantile. Let p = 99% and let W_t(p) be the average wealth of the top 1%. We can write the rate of growth of W_t(p) as the average of the growth of the individual fractiles that make up the top 1%, i.e., ∂_t W_t(p) = (1/(1 − p)) ∫_p^1 ∂_t Q_t(r) dr. Therefore, if we average the effects on the right-hand side of equation (6), we can decompose the growth of the top percentile of wealth. (In practice, the decomposition is computed in discrete time; discretizing the underlying continuous-time process means that larger time steps lead to coarser approximations.)
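For readers who want the intermediate step spelled out, the following display works through the implicit differentiation, under the assumption that the density f_t is positive at the quantile of interest (notation as above).

```latex
F_t\big(Q_t(p)\big) = p
\;\Longrightarrow\;
\partial_t F_t\big(Q_t(p)\big) + f_t\big(Q_t(p)\big)\,\partial_t Q_t(p) = 0
\;\Longrightarrow\;
\partial_t Q_t(p) = -\,\frac{\partial_t F_t(w)}{f_t(w)},
\qquad w = Q_t(p),
\qquad\text{and}\qquad
\partial_t W_t(p) = \frac{1}{1-p}\int_p^1 \partial_t Q_t(r)\,dr .
```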
Table 2 shows this decomposition separately for the period of decreasing inequality (1962-1978) and the period of increasing inequality (1979-2019). For each period, I express the total rate of growth of the top wealth percentile as the sum of its different components. One virtue of this presentation is that it puts all the effects we consider here on the same scale, although they are all conceptually very distinct. We can say, for example, that demography over 1979-2019 had the same effect as a 1.8 pp. decrease in the rate of return would have had.
Two facts stand out in Table 2. First, drift and mobility dominate the other factors by far. The two have opposite effects, so when combined, they partially cancel out. But this should not make us overlook that these effects are each very sizable on their own and that the trajectory of wealth inequality is effectively determined by temporary imbalances between the two. Second, the increase in the drift accounts for most of the change in the growth of top wealth between the two periods.

While Table 2 provides a valuable overview of the drivers of top wealth growth, it is limited in its ability to tell us how the different factors have truly impacted wealth inequality. Indeed, many of the terms in equation (6) are endogenous to the distribution of wealth itself, and wealth evolves through several feedback loops between each side of equation (6). For this reason, we cannot simply calculate a counterfactual growth rate for the wealth of the top 1% by changing specific terms in the decomposition presented in Table 2. The next section goes deeper by performing counterfactual simulations where we change some parameters.
Counterfactuals
In this section, I change various parameters of the wealth accumulation process and observe how the distribution of wealth would have evolved under these different circumstances. I stress that these counterfactuals exist to explore the direct, proximate consequences of the effects under study. In particular, when I consider changes in, say, income or taxation, I do so while explicitly keeping consumption unchanged, so that I can explore one specific channel at a time. Therefore, this exercise should not be interpreted as a full counterfactual, which would incorporate both direct and indirect effects. It remains a powerful way to explore many mechanisms and can be used to clarify the facts that more exhaustive models would need to match.
Benchmark   Figure 9 shows the evolution of the top 1% wealth share according to the model under different scenarios. I run the simulation from the beginning of the data (in 1962) to the year 2070. In each case, I compare the result to a benchmark scenario, which estimates the future evolution of wealth assuming the characteristics of the economy are held fixed after 2019. Specifically, I assume that economic growth remains at its 2010-2019 average and that the distribution of labor income and capital rates of return remain at their 2019 values. I simulate the effect of demography in the future using the projections (medium variant) from the World Population Prospects (United Nations, 2019), and otherwise assume that the correlation between age and wealth remains constant after 2019. In this benchmark scenario, the top 1% wealth share stabilizes around 37-38% in the long run.

Labor Income Inequality   In Figure 9a, I estimate what the distribution of wealth would look like today if the distribution of labor income had stayed the same after 1978 as it was over the 1962-1978 period. That is, I give people with a given rank in the wealth distribution after 1978 the average mean and variance of labor income of people with the same rank over 1962-1978. This implies that the distribution of labor income is held fixed. I find that the increase in the top 1% wealth share since the 1980s would have been significantly lower under those conditions and would be about 6pp. lower in the long run. The distribution of labor income has thus been an important driver of rising wealth inequality, but remains far from explaining the full rise.
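As an illustration of the rank-based reassignment used in this counterfactual, here is a minimal sketch. The reference arrays and the normality assumption for the income draw are simplifying assumptions made for the example, not the paper's procedure.

```python
import numpy as np

def counterfactual_labor_income(wealth, reference_mean_by_rank, reference_sd_by_rank, rng=None):
    """Assign each observation the reference-period mean/sd of labor income of its wealth rank.

    `reference_mean_by_rank` and `reference_sd_by_rank` are length-100 arrays indexed by
    wealth percentile, averaged over the 1962-1978 reference period (hypothetical inputs,
    assumed to be precomputed elsewhere).
    """
    rng = np.random.default_rng(0) if rng is None else rng
    ranks = wealth.argsort().argsort() / (len(wealth) - 1)   # percentile rank in [0, 1]
    pct = np.minimum((ranks * 100).astype(int), 99)          # percentile bin 0..99
    mean = reference_mean_by_rank[pct]
    sd = reference_sd_by_rank[pct]
    # Draw counterfactual labor income with the reference mean and variance.
    return rng.normal(mean, sd)
```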
Rates of Return of Wealth
In Figure 9b, I maintain the rates of return on capital at their average 1962-1978 value. I find an effect on the top 1% wealth share that is similar to that of labor income. Importantly, capital gains drive the entire effect.

Taxation   In Figure 9c, I focus my attention on taxation. Note that tax rates do not explicitly intervene in my decomposition, since its parameters depend directly on the post-tax distribution of labor income and post-tax rates of return. To explore the effects of taxation, I therefore construct alternative distributions of labor income and capital returns, for which I assume that pretax distributions evolved as they did in real life, but for which average effective tax rates by wealth percentile are maintained at their average 1962-1978 level. Because tax progressivity has decreased since the 1980s (Saez and Zucman, 2020b), the wealthiest people in this scenario end up with less after-tax labor income and lower after-tax rates of return. As a result, they accumulate less wealth. Figure 9c shows the outcome of this process. I find that this direct effect of taxation is real but also significantly more muted than the overall effect of labor income or rates of return. This aligns with the fact that the rise in post-tax income inequality is mostly explained by rising pretax inequality, and that the decrease in tax progressivity only plays a secondary role.

Economic Growth   Economic growth also plays an important role in equation (6). Indeed, people at the top of the wealth distribution derive most of their income from capital, so the rate at which they accumulate wealth follows the rate of return r_t. On the other hand, people at the bottom of the wealth distribution derive most of their income from labor, so their ability to accumulate wealth follows the growth of labor income, which matches g_t in the long run. This mechanism explicitly manifests itself through the fact that wealth accumulation in equation (1) depends on r_t^i − g_t. This dependence of wealth inequality on r − g was emphasized by Piketty (2014) and Piketty and Zucman (2015). In Figure 9e, I explore the role that the slowdown of growth since the 1980s had on the wealth distribution. To that end, I increase economic growth by a constant factor over 1979-2019 to match the average growth over 1962-1978. I find a limited but noticeable effect: had economic growth not slowed down, the top 1% wealth share would be about 3pp. lower.
Savings   Figure 9d considers the role of the changes in savings estimated by the model. Of the different effects I consider, this one plays the most prominent role. Assuming the same average consumption as in 1962-1978, the top 1% wealth share would be about 9pp. lower in the long run. This implies that savings have played a large role in explaining today's levels of wealth inequality: the wealthy are wealthier today than in the 1960s and 1970s in large part because they have saved more.37
Estate Taxation   A priori, one might think that taxing wealth every year at a net-of-tax rate 1 − τ is equivalent to taxing wealth every n years at a net-of-tax rate (1 − τ)^n. This exercise shows this is not the case: taxing wealth once every generation, as the estate tax does, fundamentally alters the nature of the tax. The typical lifetime of a generation is the main determinant of this fact: the longer generations live, the more different the estate tax is from a wealth tax. Section 6.3 develops a simplified model where this finding can be derived analytically.
Other Effects
The other effects considered in the model (demography, assortative mating) have had a negligible impact on the distribution of wealth.
The Taxation of Wealth
In this section, I use the model of this paper to assess the long-run effect of wealth taxes at the top of the distribution. Understanding this effect is crucial to understanding capital taxation in general and wealth taxation in particular, as recent studies have argued that the long-run elasticity of wealth with respect to the net-of-tax rate is a sufficient statistic for optimal capital taxation (Saez and Stantcheva, 2016; Piketty and Saez, 2013a). However, while short-run responses to wealth taxation can be measured empirically (e.g., Brülhart et al., 2022; Seim, 2017; Jakobsen et al., 2020; Zoutman, 2018; Ring, 2020; Londoño-Vélez and Ávila-Mahecha, 2021), the long-run responses are more elusive.
This paper provides a powerful framework for addressing that question. Indeed, it can estimate counterfactual steady-state wealth distributions, in which we assume different rates of return or different savings. This problem is, effectively, equivalent to estimating a counterfactual wealth distribution under a wealth tax (which decreases post-tax rates of return) with or without behavioral responses (which modify saving rates). A key advantage of this approach -which is often absent from studies of optimal capital taxation -is the presence of mobility. Not only is mobility a desirable feature on its own, but it also ensures that the model is well-behaved under a wide range of economic behaviors, because it naturally leads to nondegenerate steady-states for the wealth distribution. This makes it a realistic, easy and tractable way to explore the equity-efficiency trade-offs that are involved in wealth taxation. In contrast, many models without mobility generate infinite responses of capital supply to taxation; under these conditions, the efficiency concerns always dominate the equity concerns, which leaves no room to discuss trade-offs.
One way to use this paper's model is to fully simulate the evolution of the wealth distribution under various wealth tax assumptions, similarly to what was done in Section 5.2. This solution is the most complete but also the most computationally demanding. This section pursues an alternative path, where I focus on the long run and use a slightly simplified model. In these conditions, it becomes possible to derive analytical formulas for how the wealth distribution would eventually react to any wealth tax. This makes it easy to draw Laffer curves for wealth taxation and determine what would be the revenue-maximizing wealth tax rate at the top.38
Theoretical Results
Wealth Taxation Without Behavioral Responses
Consider the introduction of a nonlinear wealth tax τ(w) on wealth w. I will first assume that there is no behavioral response following the introduction of the wealth tax, so the evolution of wealth is simply characterized by dw_t^i = (µ(w_t^i) − τ(w_t^i)) dt + σ(w_t^i) dB_t^i. I can state the following result.
Proposition 2. Let f be the steady-state density of wealth without a wealth tax. Then the steady-state density f * with a wealth tax is equal to the steady-state density without a wealth tax, reweighted by a factor θ (w):
f*(w) ∝ θ(w) f(w), where θ(w) = exp( −∫_{−∞}^{w} 2τ(s)/σ²(s) ds )
In particular, if τ(w) = τ(w − w₀)₊ (i.e., the wealth tax is linear with rate τ above a threshold w₀), and if σ(w) = σw for w ≥ w₀ (i.e., diffusion is proportional to wealth above w₀), then θ(w) simplifies to:

θ(w) = exp[ −(2τ/σ²)(w₀/w − 1) ] · (w/w₀)^(−2τ/σ²)   for all w > w₀,
and θ (w) = 1 otherwise.
Proof. See Appendix A.2.
That result makes it possible to estimate how the tax base would react to a wealth tax in the long run, effectively by reweighting the steady-state distribution of untaxed wealth using the function θ. The setting mentions the introduction of a new wealth tax where there previously was none, but we could apply the same result to an increase or a decrease of an existing wealth tax by redefining τ as a change in the rate of the wealth tax.39

The result emphasizes the role of mobility, as explained by Saez and Zucman (2019). The impact on the tax base depends on τ(w)/σ²(w) and not just τ(w). Doubling the parameter σ(w) quadruples the parameter σ²(w), which implies that a tax rate four times as high would lead to the same change in the tax base. The intuition is the same as in Saez and Zucman (2019):
high mobility means that people only get taxed for a short period and that new, previously untaxed wealth keeps entering the tax base. As a result, the tax base does not react too much to wealth taxation. When mobility goes to zero, however, the same wealth from the same people is taxed repeatedly so that the tax base eventually goes to zero.
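To make Proposition 2 operational, the sketch below computes the reweighting factor θ(w) for a linear tax above a threshold and uses it to reweight an observed wealth sample; the normalization of the weights is an assumption of this illustration, not a statement of the paper's implementation.

```python
import numpy as np

def theta_linear_tax(w, tau, w0, sigma):
    """Reweighting factor from Proposition 2 for a linear wealth tax at rate tau
    above a threshold w0, assuming diffusion sigma(w) = sigma * w above w0."""
    w = np.asarray(w, dtype=float)
    theta = np.ones_like(w)
    above = w > w0
    ratio = w[above] / w0
    theta[above] = np.exp(-2 * tau / sigma**2 * (1 / ratio - 1)) * ratio ** (-2 * tau / sigma**2)
    return theta

def longrun_tax_base(wealth, weights, tau, w0, sigma):
    """Long-run taxable wealth above w0 once the distribution has adjusted,
    obtained by reweighting the observed (untaxed) distribution by theta(w)."""
    wealth = np.asarray(wealth, dtype=float)
    weights = np.asarray(weights, dtype=float)
    new_weights = weights * theta_linear_tax(wealth, tau, w0, sigma)
    # Keep total population fixed: theta reweights the density up to a constant.
    new_weights *= weights.sum() / new_weights.sum()
    return np.sum(new_weights * np.maximum(wealth - w0, 0.0))
```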
Proposition 2 carries one important difference compared to the result of Saez and Zucman (2019). In the second part of the result, the reweighting factor is the product of two terms: exp{-2τ(w 0 /w -1)/σ 2 } and (w/w 0 ) -2τ/σ 2 . The first impacts the distribution near the threshold w 0 while the second impacts the distribution away from the threshold. The discrete-time formula of Saez and Zucman (2019) only contains an equivalent of the second term, (w/w 0 ) -2τ/σ 2 . This difference is because they consider the taxation of billionaires not at a marginal rate τ, but at an average rate τ. This distinction is important because the long-run mechanical effect of a wealth tax explicitly depends on the average tax rate and not, like behavioral responses, on the marginal rate. At the very top of the distribution, the distinction between the average and the marginal rate becomes negligible, hence the term (w/w 0 ) -2τ/σ 2 similar to the formula in Saez and Zucman (2019). But close to the threshold w 0 , the distinction does matter and, in fact, has important consequences for the overall response of the tax base to the tax, as we will see in Section 6.2.
Behavioral Responses
Tax Evasion and Tax Avoidance   People can react to a wealth tax by hiding some of their wealth, either through tax evasion or tax avoidance. Assume that, in response to a marginal tax rate τ′(w), people only report a fraction α(w) = [1 − τ′(w)]^ε of their wealth. The parameter ε is the elasticity of declared wealth with respect to the marginal net-of-tax rate 1 − τ′(w). For a small rate τ′(w) ≪ 1, people react by approximately hiding a fraction ε τ′(w) of their wealth. When ε = 0, people truthfully report all of their wealth. As ε goes to infinity, people start hiding all of their wealth to avoid paying the tax. With tax avoidance, people that own w in wealth pay τ(w)α(w) instead of τ(w). In effect, this is equivalent to having a wealth tax with a lower rate. Therefore, the results for the purely mechanical model hold with minimal modifications.
Consumption People may also react to a wealth tax by accumulating less wealth. Changes to savings have different implications than tax evasion. Indeed, tax evasion affects both the dynamic of wealth and the tax base. Savings, on the other hand, affect the dynamic of wealth but do not directly reduce the tax base.
Theory provides few constraints regarding how a wealth tax ought to affect saving rates,
given the many settings and mechanisms we could consider. In broad terms, there can be a substitution effect that decreases savings (because a wealth tax makes deferring consumption more expensive). And there can be an income effect that increases savings (because a wealth tax makes people poorer, and they compensate by investing more). In the case of, say, labor supply, the widely accepted view is that substitution effects dominate. No such consensus exists for savings.
The following reduced-form specification can nonetheless account for the overall effect in a direct and intuitive way. Assume that, in response to a tax with marginal rate τ′(w) on wealth, people increase their consumption by a factor β(w) = [1 − τ′(w)]^(−η). The parameter η captures the elasticity of consumption with respect to the marginal net-of-tax rate 1 − τ′(w).40 The drift in the dynamic of wealth now includes an additional term c(w)[1 − β(w)], where c(w) is the average propensity to consume. The savings behavioral response amplifies the impact of the wealth tax.
40 I will ignore the cases where η < 0 (i.e., income effects dominate), even though they are a theoretical possibility and even though some studies find this result (Ring, 2020). Indeed, it is problematic to assume in a taxation context that the tax base responds positively to the tax. Moreover, the elasticity has to change sign at some point, otherwise a 100% wealth tax would correspond to infinite savings. However, if true, it would imply that wealth tax rates could be higher.
Complete Model
Behavioral responses change the drift term, which is analogous to a change in the effective rate of the wealth tax. Therefore Proposition 2 can be directly extended to account for behavioral effects. The reweighting factor in the full model becomes:
θ(w) = exp( −∫_{−∞}^{w} 2 τ(s)α(s)/σ²(s) ds − ∫_{−∞}^{w} 2 c(s)[β(s) − 1]/σ²(s) ds )
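A minimal sketch of how this full reweighting factor could be used to trace a Laffer curve is given below. The wealth sample, the constant propensity to consume, and the numerical integration grid are illustrative assumptions; the behavioral specification simply follows the reduced forms above.

```python
import numpy as np

def theta_full(w, tau, w0, sigma, c_rate, eps, eta):
    """Reweighting factor with tax avoidance (elasticity eps) and consumption
    responses (elasticity eta), for a linear marginal tax rate tau above w0,
    with sigma(s) = sigma*s, tax amount tau*(s - w0)+ and consumption c_rate*s."""
    w = np.asarray(w, dtype=float)
    grid = np.linspace(w0, max(w.max(), w0 * 1.01), 5000)
    alpha = (1 - tau) ** eps                 # share of wealth reported
    beta = (1 - tau) ** (-eta)               # consumption scaling factor
    drag = 2 * (tau * np.maximum(grid - w0, 0.0) * alpha
                + c_rate * grid * (beta - 1)) / (sigma * grid) ** 2
    # Cumulative trapezoidal integral of the drag term along the grid.
    cum = np.concatenate([[0.0], np.cumsum(0.5 * (drag[1:] + drag[:-1]) * np.diff(grid))])
    theta = np.ones_like(w)
    above = w > w0
    theta[above] = np.exp(-np.interp(w[above], grid, cum))
    return theta, alpha

def laffer_curve(wealth, weights, w0, sigma, c_rate, eps, eta, rates):
    """Long-run revenue for a grid of linear tax rates above w0."""
    wealth = np.asarray(wealth, dtype=float)
    weights = np.asarray(weights, dtype=float)
    revenue = []
    for tau in rates:
        theta, alpha = theta_full(wealth, tau, w0, sigma, c_rate, eps, eta)
        new_w = weights * theta
        new_w *= weights.sum() / new_w.sum()
        taxable = np.maximum(wealth - w0, 0.0) * alpha   # people pay tau(w)*alpha
        revenue.append(np.sum(new_w * tau * taxable))
    return np.array(revenue)
```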
Calibrations for the United States
Baseline Wealth Model   To illustrate the formulas of this section, I will consider the case of a linear tax on estates above $50m. I consider a model of wealth accumulation with a mobility parameter and an average propensity to consume that match this paper's estimates for the United States over 1979-2019. I directly use the 2019 data as an estimate of the steady-state wealth distribution, given that the simulation in Section 5.2 suggests that wealth inequality is close to its long-run value.
Behavioral Elasticities   To calibrate ε and η, I rely on the recent empirical literature that exploits various quasi-experimental settings to assess behavioral reactions to a wealth tax.
Several of these papers present bunching evidence (Seim, 2017; Jakobsen et al., 2020; Londoño-Vélez and Ávila-Mahecha, 2021). Bunching provides the cleanest estimates of the pure tax avoidance elasticity. Indeed, the true value of wealth in the short run tends to follow unpredictable asset movements, so it would be hard for a household to precisely bunch at kink points. Seim (2017) finds an elasticity of 0.5 in Sweden, and Jakobsen et al. (2020) find elasticities that are even lower in Denmark. Londoño-Vélez and Ávila-Mahecha (2021) find a higher estimate (2-3) in Colombia.
As their main identification strategy, Jakobsen et al. (2020) pursue a difference-in-difference approach that exploits various tax reforms. This allows them to compute elasticities that incorporate dynamic and saving responses over larger time spans. Over an 8-year time frame, they find a sizable elasticity at the top of about 18 with respect to the net-of-tax rate. The authors argue that most (90%) of it can be attributed to a behavioral effect (as opposed to a mechanical effect). Assuming that the elasticities cumulate multiplicatively over time, this would correspond to a yearly behavioral elasticity of 1.4 for both the saving and tax avoidance response. Zoutman (2018), also using a difference-in-difference strategy, finds a much higher elasticity of almost 14. Seim (2017) also analyzes saving responses to a wealth tax but does not find any. Ring (2020) exploits geographic discontinuities in the exposure to wealth taxation to estimate savings responses and actually finds an increase in savings in response to wealth taxation. Brülhart et al. (2022) find a much higher overall elasticity (23-34) in Switzerland, using both between-canton variation of the tax rate and within variation in the Bern canton.
They also look at bunching evidence but find much lower effects there.
Note that the tax avoidance elasticity is not a pure structural parameter, but also results from how strongly a wealth tax is enforced. For the baseline calibration, I will consider a limited tax avoidance response (ε = 1), which is around the values found by Seim (2017), Londoño-Vélez and Ávila-Mahecha (2021), and Jakobsen et al. (2020). I will also consider a medium savings response (η = 1), in line with Jakobsen et al. (2020), and around the median of existing studies.
I will also consider an alternative scenario with a higher saving response (η = 3) and a higher tax avoidance response (ε = 3).

Results   Figure 10a shows the Laffer curves for the wealth tax, i.e., the amount of tax revenue raised as a function of the tax rate. As a point of reference, we can start from the short-run, static revenue estimate (in orange). This estimate assumes the tax base to be entirely nonreactive, and as a result, the wealth tax can raise a very large amount of revenue. The second curve (in purple) accounts for how the wealth tax progressively eats away its tax base in the long run, even without behavioral responses. As argued by Saez and Zucman (2019), mobility is the central factor determining that curve's shape. In a model without mobility (e.g., Jakobsen et al., 2020), this curve would be equal to zero for all positive tax rates. This dynamic mechanical effect is sizable and significantly lowers tax revenue in the long run. However, it is not strong enough to produce the usual, inverted U-shaped Laffer curve: the tax base never goes to zero, even with a 100% tax rate. As a consequence, that model is insufficient to generate guidance on the revenue-maximizing rate.42 The reason behind the result is that the dynamic mechanical effect depends on the average tax rate and not, like behavioral responses, on the marginal rate. Hence, close to the exemption threshold, the impact of the tax remains limited even with very high rates. So, in theory, it remains possible to keep raising revenue by increasing the tax rate ad infinitum. To get a more realistic and actionable model, we need to incorporate behavioral responses, which I do for the last two curves (in blue). In the benchmark calibration, the revenue-maximizing rate remains high (≈ 12%), but the amount of revenue raised is only a fraction of what a static scoring finds.
There is still a fair amount of uncertainty regarding the value of behavioral elasticities, so in Figure 10b, I estimate the revenue-maximizing rates for a wide range of elasticity values.
A notable result here is that the consumption elasticity has a stronger impact than the tax avoidance elasticity. This is because tax avoidance actually has an ambiguous effect on the tax base. On the one hand, it directly lowers the tax base since people underreport their assets.
But on the other hand, it increases the post-tax rate of return, allowing people to accumulate more, which grows the tax base in the long run.
Comparison with Inheritance Taxation
How does the annual taxation of wealth compare to inheritance taxation? In simulations, I found in Section 5.2 that the estate tax has a limited impact on the distribution. To illustrate and clarify this finding, I develop a simple model that compares the mechanical effects of an annual wealth tax to the effects of an inheritance tax. Assume that wealth w evolves according to the following SDE, where wealth is taxed at a constant rate τ, and µ is the rate of wealth growth without tax: dw = (µ − τ)w dt + σw dB_t. Assume a reflecting barrier at the bottom, so that wealth cannot go below w₀. This is the simplest model that generates a Pareto distribution of wealth, with a PDF of the form f(w) = α w₀^α / w^(α+1) (Gabaix, 2009).
Assume that people die at a Poisson rate δ. When they do, their only child pays the estate tax, inherits their wealth, and replaces them in the distribution. Let (1 − χ)^(1/δ) be the net-of-tax rate of the estate tax. (This specification makes the rates τ and χ a priori comparable.) At the steady state, the wealth density f satisfies the Kolmogorov forward equation with the Poisson process included, and with the time derivative set to zero:
0 = −(µ − τ) ∂_w[w f(w)]   (drift)
  + ½ σ² ∂²_w[w² f(w)]   (mobility)
  − δ f(w)   (deaths)
  + δ (1 − χ)^(−1/δ) f( w (1 − χ)^(−1/δ) )   (births)
The steady-state still follows a Pareto distribution. Substituting f with its value f(w) = α w₀^α / w^(α+1), we get the following equation for the Pareto coefficient α, which is a measure of inequality (higher values mean less inequality):
0 = µ + ½ (α − 1) σ² − τ − (δ/α) [1 − (1 − χ)^(α/δ)]   (8)
Consider the following calibration: µ = −0.04, σ = 0.4 and δ = 1/50. We can solve equation (8) numerically to get the steady-state Pareto coefficient under any value of τ and χ. Figure 11 shows the results from this exercise. It compares the Pareto coefficients for two scenarios: an annual wealth tax (without an inheritance tax) and an inheritance tax (of comparable magnitude, without an annual wealth tax). With an annual wealth tax, the Pareto coefficient increases linearly with the tax rate (therefore, inequality decreases). With an inheritance tax, by contrast, the effect on the Pareto coefficient is much weaker.

Equation (8) provides the intuition behind that result. For a small estate tax rate χ, a first-order approximation gives (δ/α)[1 − (1 − χ)^(α/δ)] ≈ χ, and therefore the estate tax has an effect similar to a wealth tax τ. But for higher rates χ, this is no longer true. In practice, the approximation does not work unless χ is extremely small, because the ratio α/δ is high. As the estate tax rate χ increases, the Pareto coefficient converges to a maximum value α₁, which we obtain by solving (8) for χ = 1:
α₁ = ½ ( α₀ + √( α₀² + 8δ/σ² ) )
where α₀ = 1 − 2(µ − τ)/σ² is the Pareto coefficient in the absence of inheritance tax. The estate tax is, therefore, the least efficient when the ratio 8δ/σ² is small, in which case the Pareto coefficient for χ = 1 is almost the same as the coefficient for χ = 0. This can happen for two reasons: δ being low and σ² being high. When σ² is high, there is a lot of mobility within generations, so estate taxation is ineffective because wealth quickly recovers from it. But this effect also exists for annual wealth taxes (see Section 6.1.1), so it does not drive the gap between annual and inheritance taxes. When δ is low, people die and children inherit their wealth at a low rate, so estate taxation is ineffective because it happens too rarely. We can also see this effect by looking directly at (8): taking the estate tax rate χ to 100% has the same effect as an annual wealth tax with a rate δ/α. So for α ≈ 1.5, a 100% estate tax, which occurs every 50 years on average, is similar to a 3% annual wealth tax, and cannot get higher.43
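The comparison can be reproduced numerically by solving equation (8) for α. The sketch below uses the calibration reported with Figure 11 (µ = −0.04, σ = 0.4, δ = 1/50); the root-finding bracket is an assumption of this illustration.

```python
import numpy as np
from scipy.optimize import brentq

def pareto_coefficient(tau, chi, mu=-0.04, sigma=0.4, delta=1/50):
    """Solve equation (8) for the steady-state Pareto coefficient alpha,
    given an annual wealth tax rate tau and an estate tax rate chi."""
    def f(alpha):
        return (mu + 0.5 * (alpha - 1) * sigma**2 - tau
                - delta / alpha * (1 - (1 - chi) ** (alpha / delta)))
    # alpha stays above 1 in the relevant calibrations; widen the bracket if needed.
    return brentq(f, 1.0001, 50.0)

# Example: compare a 2% annual wealth tax with a 100% estate tax.
print(pareto_coefficient(tau=0.02, chi=0.0))   # annual tax only
print(pareto_coefficient(tau=0.0, chi=1.0))    # estate tax only
```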
Conclusion
This paper introduces a new way of decomposing the evolution of wealth inequality. This decomposition accounts for the different processes affecting the wealth distribution, including mobility. It only requires repeated cross-sections and can therefore be applied to available historical data.
Applying this decomposition to the United States, I find that the rise of wealth inequality has been driven by higher savings at the top, higher rates of capital gains, and higher labor income inequality, with other factors playing a minor role. Applying the framework to the study of wealth taxation, I derive simple formulas that describe how the wealth distribution would react to a wealth tax in the long run. I use them to estimate revenue-maximizing tax rates for a linear tax on high estates. The website https://thomasblanchet.github.io/wealth-tax/ provides a simulator to apply these formulas in the United States.
The approach developed in this paper offers several avenues for future research. One would be to extend the decomposition of this paper along several dimensions. Recent work (Blanchet, Saez, and Zucman, 2022) has started to estimate distributional national accounts, including national wealth, disaggregated by race. This paper's methodology could be applied to such data and provide new insights that combine our understanding of the dynamics of the racial wealth gap (Derenoncourt et al., 2022) with our understanding of the dynamics of wealth inequality. To the extent that wealth can be meaningfully divided between household members (Frémeaux and Leturcq, 2020), the same could be done for gender.
In future research, the study of capital taxation that is sketched out in this paper could also be further expanded into a full-fledged theory of optimal capital taxation, one that integrates our theories of the wealth distribution with various social objectives (Saez and Stantcheva, 2016) and with the broader effects of wealth taxation on the macroeconomy (Gaillard and Wangner, 2021).
of order dt, and so as dt → 0, the former is negligible compared to the latter.
A.2 Proposition 2
Recall that wealth evolves according to dw_t^i = (µ(w_t^i) − τ(w_t^i)) dt + σ(w_t^i) dB_t^i, where τ(w) is the wealth tax. Let f* be the steady-state density of wealth with the wealth tax. It has to obey the Kolmogorov forward equation with the time derivative terms set to zero:
0 = −∂_w[ (µ(w) − τ(w)) f*(w) ] + ½ ∂²_w[ σ²(w) f*(w) ]
Solving this differential equation, we can write:
f*(w) ∝ exp( −2 ∫₀^w [σ(s)σ′(s) − µ(s)]/σ²(s) ds ) · exp( −∫₀^w 2τ(s)/σ²(s) ds )
Note that the steady-state density f without the wealth tax corresponds to the case τ(w) ≡ 0, and therefore f*(w) ∝ f(w)θ(w), where θ(w) ≡ exp( −∫₀^w 2τ(s)/σ²(s) ds ).

Capital Gains   We can measure capital gains when they accrue to individuals or when they are realized. For our purpose, accrued capital gains are more useful than realized ones, because they are the ones that reconcile changes in the value of the balance sheet with national income and savings. On the other hand, whether a capital gain is realized now or later is the result of various tax and economic incentives that are not relevant here and do not correspond to any meaningful economic aggregate.
The DINA data only records taxable capital gains, which is essentially a measure of realized capital gains. These are a poor proxy for accrued capital gains (Alstadsaeter et al., 2016).
Instead, I estimate them individually using the capitalization approach of Robbins (2018).
I retrieve the capital gains rate by year and asset type from the national accounts (Piketty, Saez, and Zucman, 2018, table TSD1 in appendix). Then, I assume that for a given asset type, everyone gets the same rate of capital gains. By construction, these micro-level capital gains estimates are consistent with macro totals. Their distribution follows the logic of the Saez and
Zucman (2016) capitalization method.1 Robbins (2018) provides a thorough discussion of why that measure is more appropriate to analyze the impact of asset price changes on inequality and the economy.
National income including capital gains can be quite volatile (figure B.1a), but on average their inclusion matters on several fronts. Robbins (2018) shows that their inclusion overturns certain stylized facts about the United States economy (such as the long-run decline of saving rates) and strengthens others (such as the rising capital share and increase in income inequality).
As shown in figure B.1b, capital gains were dampening the top 1% share of post-tax national income during most of the 1970s, but since then, they have consistently increased it.
Wealth by Age The age information in the DINA data is very limited, so I cannot use it. Instead, I import it from the SCF and demographic estimates using constrained statistical matching. I calculate the rank in the wealth distribution in both the DINA and the SCF data, and the rank in the age distribution by sex and household type (single or couple) in the SCF data. Then, I match the DINA observations one by one to SCF observations based on their wealth rank to give them a rank in the age distribution.2 Finally, I use the population structure from the demographic data to attribute an age to every DINA observation. By construction, the method preserves the wealth distribution in DINA, the population by age and sex from demographic sources, and the copula between wealth and age from the SCF. Sometimes, data is only available by age group (e.g., of five years) or a subset of years (e.g., every ten years). Whenever necessary, I interpolate estimates with a monotonic cubic spline [START_REF] Fritsch | Monotone Piecewise Cubic Interpolation[END_REF] to get data for every single year and age.
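The matching step can be sketched as follows; stratification by sex and household type, and the use of survey weights, are omitted for brevity, so this is only a simplified illustration of the rank-matching logic, not the paper's code.

```python
import numpy as np
import pandas as pd

def impute_age_rank(dina_wealth, scf_wealth, scf_age_rank):
    """Rank-based statistical matching sketch: each DINA observation receives
    the age rank of the SCF observation with the nearest wealth rank."""
    dina_rank = pd.Series(dina_wealth).rank(pct=True).to_numpy()
    scf = pd.DataFrame({
        "wealth_rank": pd.Series(scf_wealth).rank(pct=True).to_numpy(),
        "age_rank": np.asarray(scf_age_rank, dtype=float),
    }).sort_values("wealth_rank")
    pos = np.searchsorted(scf["wealth_rank"].to_numpy(), dina_rank)
    pos = np.clip(pos, 0, len(scf) - 1)
    return scf["age_rank"].to_numpy()[pos]
```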
B.2 Demography
Population by Age and Sex Before 1900, I directly estimate the population pyramid using the decennial census microdata from the IPUMS USA database (Ruggles et al., 2022)
B.3 Inheritance and Estate Tax
Part of the inheritance process is determined by the demographics and the distribution of wealth, while other parts have to be modeled separately. I assume that people die at random, conditional on their age and sex, so the distribution of inheritances corresponds to the distribution of wealth, weighted by mortality rates. I then assume that the wealth of decedents is either redistributed to their spouse (if any) or to their descendants (if they have no living spouse), after payment of the estate tax. The demography gives the age and sex of decedents (see section B.2). I assume that inheritance is split equally between children, as is the norm in the United States (Menchik, 1980).

Figure B.2: Modeling of Inheritance. Source: own computations using the Survey of Consumer Finances (SCF). Gray ribbons correspond to the 95% confidence intervals. In figure B.2d, opacity is proportional to the weight of observations.
Extensive Margin Let D i = 1 if individual i receives inheritance, and D i = 0 otherwise. Let A i be their age and W i their wealth. Assume that:
P{D_i = 1 | A_i = a, W_i = w} = P{D_i = 1 | A_i = a} · φ( F_{A_i=a}(w) )   (9)

where F_{A_i=a} is the CDF of wealth conditional on age, and ∫₀¹ φ(r) dr = 1. By construction, the expected value of the right-hand side of (9) conditional on age is equal to P{D_i = 1 | A_i = a}, so that the specification makes probabilistic sense.5 Note that F_{A_i=a}(w) is the rank of w in the wealth distribution (conditional on age), which is how we can make the formula (9) consistent regardless of the shape of the wealth distribution.
The value of P{D i = 1|A i = a} is determined by demography, so we only need to estimate φ. I start by calculating a rank in the wealth distribution conditional on age by running a nonparametric quantile regression of wealth on age for every percentile (see figure B.2b).
I then regress the dummy D i for having received inheritance on that rank, multiplied by P{D i = 1|A i = a}. I use ordinary least squares (OLS) and a cubic polynomial with coefficients constrained so that its integral over [0, 1] equals one (see figure B.2c). As we can see, even after partialling out the effect of age, wealthier people still experience a higher probability of receiving an inheritance. I use that polynomial as my estimate of φ.
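One way to impose the integral constraint in the OLS step is to substitute it directly into the regression, as in the sketch below; the variable names are placeholders and this is only an illustration of the constrained fit.

```python
import numpy as np

def fit_phi(received, prob_by_age, rank, degree=3):
    """Fit phi(r) = c0 + c1*r + ... + c_d*r^d with the constraint that its
    integral over [0, 1] equals one, by OLS after substituting the constraint."""
    received = np.asarray(received, dtype=float)   # dummy for receiving inheritance
    p = np.asarray(prob_by_age, dtype=float)       # P(D=1 | age)
    r = np.asarray(rank, dtype=float)              # wealth rank conditional on age
    # Constraint: c0 = 1 - sum_k c_k / (k+1). Substituting gives
    # D - p = sum_k c_k * p * (r^k - 1/(k+1)) + error.
    X = np.column_stack([p * (r**k - 1.0 / (k + 1)) for k in range(1, degree + 1)])
    coefs, *_ = np.linalg.lstsq(X, received - p, rcond=None)
    c0 = 1.0 - sum(c / (k + 1) for k, c in zip(range(1, degree + 1), coefs))
    return np.concatenate([[c0], coefs])            # coefficients of phi, constant first

def phi(rank, coefs):
    return sum(c * rank**k for k, c in enumerate(coefs))
```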
Intensive Margin I account for the intensive margin by modeling the joint distribution of the ranks in the wealth distribution and the inheritance distribution (i.e., the copula), conditional on age and on having received an inheritance. I take the subsample of inheritance receivers and calculate their rank in the wealth and inheritance distribution using a nonparametric quantile regression as I did for the extensive margin.
The dependence between the two ranks is weak but significant (see figure B.2d): their Kendall's τ is equal to 8.3%. I represent this dependency parametrically using a bivariate copula. I select the most appropriate model out of a large family of 15 single-parameter copulas by finding the best fit according to the Akaike information criterion (AIC), which is the Joe copula. 67 I estimate its parameter so as to match the empirical value for Kendall's τ.
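For illustration, the Joe copula parameter can be calibrated to a target Kendall's τ using the standard Archimedean-generator formula; the sketch below relies on that formula and is not the paper's code.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def joe_kendall_tau(theta):
    """Kendall's tau of the Joe copula, via tau = 1 + 4 * int_0^1 phi(t)/phi'(t) dt,
    with Archimedean generator phi(t) = -log(1 - (1 - t)**theta)."""
    def integrand(t):
        u = (1.0 - t) ** theta
        return np.log1p(-u) * (1.0 - u) / (theta * (1.0 - t) ** (theta - 1.0))
    value, _ = quad(integrand, 0.0, 1.0)
    return 1.0 + 4.0 * value

def calibrate_joe(target_tau):
    """Find the Joe copula parameter matching an empirical Kendall's tau."""
    return brentq(lambda th: joe_kendall_tau(th) - target_tau, 1.0 + 1e-6, 50.0)

# Example: match the empirical Kendall's tau of 8.3% reported above.
theta_hat = calibrate_joe(0.083)
```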
Estate Tax   I account for the federal estate tax using the complete estate tax schedule and exemption amount for each year. The top marginal estate tax rate has followed a clear inverted U-shape (see figure B.3). Subsequent reforms reduced the top tax rate and increased the exemption amount, so that by 1990 the very top and the upper middle of the wealth distribution were facing lower average tax rates, but $10m estates were actually facing slightly higher average tax rates. By now, however, the estate tax has been lowered so much that its profile is unambiguously less progressive than in the 1950s.

5 It is the direct result of a change of variable r = F_{A_i=a}(w) and using the fact that ∂_w F_{A_i=a}(w) = f_{A_i=a}(w), so that ∫_{−∞}^{+∞} φ(F_{A_i=a}(w)) f_{A_i=a}(w) dw = ∫₀¹ φ(r) dr = 1.
6 The list of copulas includes the Gaussian copula, Student's t copula, the Clayton copula, the Gumbel copula, the Frank copula, the Joe copula, and rotated versions of these copulas.
7 The Joe copula has the parametric form C_θ(u, v) = 1 − [ (1 − u)^θ + (1 − v)^θ − (1 − u)^θ (1 − v)^θ ]^(1/θ).
B.4 Marriages, Divorces and Assortative Mating
Aggregate Marriages and Divorce Rates by Age   I calculate population pyramids disaggregated by marital status using census microdata from the IPUMS USA database (Ruggles et al., 2022). I use this data to estimate the rate at which people get married and divorced by age and sex. This is possible assuming that mortality rates are the same regardless of marital status, and assuming that marriage rates are the same for people that are divorced as for people that have never been married. Under these conditions, the marriage and divorce rates for each sex can be recovered from the year-to-year changes in the population counts by marital status.

Assortative Mating at Marriage   I represent assortative mating at marriage with a bivariate copula for the positions of both spouses in the wealth distribution, and I estimate its parameter so as to match the empirical Kendall's τ, which is equal to 28%.
I also estimate the effect of assortative mating at divorce by looking at how wealth is divided between both spouses right before they separate. I also use the SIPP, select people who are married in year t but separated in year t + 1, and extract the distribution of the share of the household's wealth that each spouse has.
C Detailed Estimations and Results
Overall, the estimation procedure in this paper only involves fitting a series of lines. In practice, there are several methods for fitting lines, and this paper uses the approach that makes the most sense given the nature of the problem. This section describes the procedure used and discusses the sensitivity of estimates to the various hyperparameters.
C.1 Estimation Procedure
Recall that the estimation procedure in this paper requires fitting a linear relationship of the form given by equation ( 5), which I will simplify as follows to emphasize the line-fitting aspect:
∀t ∈ {1, …, T}:  y_{it} = a_i + b_i x_{it}   (10)

for each wealth bin i ∈ {1, …, n}. The data are y_{it} and x_{it}. The parameters are a_i and b_i. While fitting this equation using OLS is an option, it is not a natural choice here for two reasons.
First, the fact that this equation does not hold perfectly in practice results from error terms that can be attributed simultaneously to y_{it} and x_{it}. In that sense, the setting is closer to an error-in-variable model, for which OLS is known to be biased. Second, and relatedly, there is an invariance in equation (10) that OLS breaks. Unlike usual econometric regressions, there is no obvious distinction between the outcome variable and the explanatory variable. The equation would make as much sense if the roles of x_{it} and y_{it} were reversed (and the meaning of the parameters a_i and b_i changed accordingly). But with OLS, regressing y on x yields a different result than regressing x on y.
The solution is to use a Deming (1943) regression, a generalization of orthogonal regression, which is a standard way of estimating error-in-variable models. The Deming (1943) regression estimates a_i and b_i by minimizing a weighted sum of squared deviations in both variables, where δ is a hyperparameter, defined separately, that captures the relative variance of the measurement error on y_{it} and x_{it}. When δ = 0 the estimator collapses to OLS, and when δ = +∞ it collapses to OLS as well, but with the left-hand side and the right-hand side reversed.
This estimator has an analytical solution, which is:

b_i = [ s_yy − δ s_xx + √( (s_yy − δ s_xx)² + 4δ s_xy² ) ] / (2 s_xy),   a_i = ȳ − b_i x̄

where:

x̄ = (1/T) Σ_t x_{it},  ȳ = (1/T) Σ_t y_{it},  s_xx = (1/T) Σ_t (x_{it} − x̄)²,  s_xy = (1/T) Σ_t (x_{it} − x̄)(y_{it} − ȳ),  s_yy = (1/T) Σ_t (y_{it} − ȳ)².

I estimate a variance of the measurement error on x_{it} and y_{it} by taking a five-year moving average of these values (in each wealth bin), and calculating the variance of the difference between the values and their moving average. This allows me to estimate δ. This is a rough estimate, so I check that results are robust to it in Section C.4. The choice of δ turns out to have limited impact, because the line actually fits the data closely: error terms are all small, and therefore assuming that they are due to x_{it} or y_{it} makes little difference.
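The closed-form estimator and the rough δ estimate can be written compactly as follows. The moving-average window and the direction of the variance ratio are assumptions of this sketch, not a statement of the paper's exact conventions.

```python
import numpy as np

def deming_fit(x, y, delta):
    """Deming regression slope and intercept from the closed-form solution above."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xbar, ybar = x.mean(), y.mean()
    sxx = np.mean((x - xbar) ** 2)
    syy = np.mean((y - ybar) ** 2)
    sxy = np.mean((x - xbar) * (y - ybar))
    b = (syy - delta * sxx + np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy**2)) / (2 * sxy)
    a = ybar - b * xbar
    return a, b

def estimate_delta(x, y, window=5):
    """Rough estimate of delta from deviations around a moving average (window in years);
    the ratio's direction follows the convention implied by the closed-form above."""
    def noise_var(series):
        s = np.asarray(series, dtype=float)
        smooth = np.convolve(s, np.ones(window) / window, mode="same")
        return np.var(s - smooth)
    return noise_var(y) / noise_var(x)
```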
C.2 Estimation of Standard Errors
This paper's setting requires a nonstandard procedure for estimating standard errors for two reasons. First, because I use a Deming (1943) regression, which implies different standard errors than OLS. Second, because the error terms are not independent: they are correlated both across time and across wealth bins. To deal with these issues, I use a parametric bootstrap procedure that efficiently simulates error terms that reproduce the key features of the data.
First, I estimate an error term for each observation. Note that in geometrical terms, the Deming (1943) regression performs an oblique projection of observations onto the fitted line, according to a direction set by the hyperparameter δ. This projection is associated with the following scalar product in R²: ⟨(x₁, y₁), (x₂, y₂)⟩_δ = δ x₁ x₂ + y₁ y₂. Given this behavior, the error terms that the Deming (1943) regression estimates for x_{it} and y_{it} are perfectly correlated, so it would not make sense to model them separately. Instead, I define a single error term e_{it} for every observation as the scalar product between (i) the unit vector orthogonal to the fitted line, and (ii) the residual vector (x_{it} − x*_{it}, y_{it} − y*_{it}). The vector that is normal to the fitted line is (b, −δ)/√(δ(b² + δ)), where b is the slope of the fitted line, hence:
e_{it} = ⟨(b, −δ), (x_{it} − x*_{it}, y_{it} − y*_{it})⟩_δ / √(δ(b² + δ)) = √(δ/(b² + δ)) · [ b(x_{it} − x*_{it}) − (y_{it} − y*_{it}) ]
Then, for each wealth bin i, I estimate a coefficient ρ i of autocorrelation across time as:
ρ_i = [ (1/(T − 1)) Σ_t e_{it} e_{i,t−1} ] / [ (1/T) Σ_t e_{it}² ]
And I also estimate a correlation coefficient r of autocorrelation across wealth bins:
r = (1/T) Σ_t { [ (1/(n − 1)) Σ_i e_{it} e_{i−1,t} ] / [ (1/n) Σ_i e_{it}² ] }
Unlike ρ_i, I do not vary the coefficient r across time. Indeed, there is no empirical evidence, and also no good a priori reason, for it to vary as such. Instead, I average the value across all periods, which gives a more robust estimate. I assume that the autocorrelations across time and across wealth bins each follow an AR(1) process. For each wealth bin i, the correlation matrix across time is Ω^w_i = Toeplitz[1, ρ_i, ρ_i², …, ρ_i^T]. The correlation matrix across wealth bins is the same for all periods and is equal to Ω^t = Toeplitz[1, r, r², …, r^n]. I assume that the correlation matrix Ω between all error terms, across all periods and all wealth bins, has a two-way AR(1) structure, which can be constructed as:
Ω = diag(Ω^w_1, Ω^w_2, …, Ω^w_n)^(1/2) · diag(Ω^t, Ω^t, …, Ω^t) (T times) · diag(Ω^w_1, Ω^w_2, …, Ω^w_n)^(1/2)
where (·)^(1/2) denotes the matrix square root for symmetric positive definite matrices. I estimate a standard error σ_i in every wealth bin i as σ_i = √( (1/T) Σ_t e_{it}² ). The estimated parameters are then transformed to account for the data transformations (see Appendix C.3), which gives estimates that can directly be interpreted as the mean and the variance of consumption.
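A compact sketch of the parametric bootstrap described above is given below. The bin-major ordering of observations and the exact way the two correlation structures are composed are assumptions of this illustration.

```python
import numpy as np
from scipy.linalg import toeplitz, sqrtm, block_diag

def simulate_bootstrap_errors(rho, r, sigma, T, rng=None):
    """Simulate error terms with AR(1) correlation across time (rho, one per bin)
    and across wealth bins (r), and bin-specific standard deviations sigma.
    Observations are ordered bin-major: index = bin * T + t."""
    rng = np.random.default_rng(0) if rng is None else rng
    n = len(rho)
    # Within-bin (across time) correlation matrices, stacked block-diagonally.
    omega_w = block_diag(*[toeplitz(rho_i ** np.arange(T)) for rho_i in rho])
    # Across-bin correlation at identical dates.
    omega_t = np.kron(toeplitz(r ** np.arange(n)), np.eye(T))
    half = np.real(sqrtm(omega_w))
    omega = half @ omega_t @ half
    scale = np.repeat(np.asarray(sigma, dtype=float), T)
    cov = omega * np.outer(scale, scale)
    return rng.multivariate_normal(np.zeros(n * T), cov)

def bootstrap_sample(x_fit, y_fit, errors, b, delta):
    """Add simulated errors back along the direction normal to the fitted line."""
    norm = np.sqrt(delta * (b**2 + delta))
    x_sim = x_fit + errors * b / norm
    y_sim = y_fit - errors * delta / norm
    return x_sim, y_sim
```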
C.4 Robustness Checks
The estimation involves several hyperparameters: they include δ in the Deming (1943) regression (see Section C.1), as well as a series of bandwidths that control the degree of smoothing involved when estimating specific densities and derivatives. In practice, the amount of noise in the data guides the choice of these hyperparameters. This section lists their default values and, more importantly, provides checks that confirm that the results are robust to variations of these parameters.

Bandwidth for smoothing income over time   The average amount of income received by each wealth bin can vary sensibly over time with the business cycle. This value is smoothed over time to focus on the long-run dynamics instead. Default: 12.5 years; locally constant regression with rectangular kernel.

Bandwidth for income variance   The estimate of the variance of income conditional on wealth is smoothed across wealth, both to get a more stable estimate and to be able to estimate the derivative with respect to wealth. Locally constant regression with rectangular kernel.

Bandwidth for the left-hand side of equation (6)   The inverse of the survival function of wealth (F(w)/f(w)) is computed as an intermediary step to obtain the left-hand side of equation (6). This estimate is smoothed over time using this bandwidth parameter. Locally constant regression with rectangular kernel.

Bandwidth for auxiliary effects   The impact of auxiliary effects (demography, etc.) on the CDF of wealth is microsimulated for every year, and then smoothed over time using this bandwidth. Default: 10 years; locally constant regression with rectangular kernel.

Bandwidth for estimating the measurement error variance   To estimate the hyperparameter δ in the Deming (1943) regression, I estimate the variance of the measurement error for both variables in the equation, using the variance of the residual between observed values and a smoothed version obtained with this bandwidth parameter. Locally constant regression with rectangular kernel.

Bandwidth for the derivative of the diffusion   To adjust the value of the drift, I need to estimate the derivative of the diffusion with respect to wealth, using this bandwidth parameter.

Figure C.5a varies the hyperparameter δ, which controls how the residuals of the regression are attributed between the two sides of equation (6). But the regression we run explains most of the data's variance, so the residuals remain small in all cases. As a result, where we attribute them has a limited impact.
Figure C.5b does a similar exercise, this time focusing on the bandwidth parameters (with δ now being automatically calculated using these bandwidth parameters). I perform 100 estimations of the parameters, where I separately vary each bandwidth parameter at random by a coefficient α ∈ {1/5, 1/4, 1/3, 1/2, 1, 2, 3, 4, 5}.9 Then, I show the range within which 95% of the final estimates fall. These ranges are wider than the ones for the δ parameter alone, but the main results remain valid.

C.5 Wealth Taxation

This section provides some details on how a wealth tax would impact the distribution of wealth in the long run. For this exercise, I focus on a revenue-maximizing linear tax above $50m.
Figure 1: Top 1% Wealth Share in the United States

Figure 2: The Different Approaches for Studying the Wealth Distribution

Figure 3: Phase Portrait: Convergence to a Higher-Inequality Steady State
4.1 Data

I estimate the parameters in decomposition (6) for the United States. I start in 1962, when sufficiently detailed microdata on the distribution of income and wealth in the United States becomes available. I primarily rely on the tax-based microdata from Saez and Zucman (2020a), which I complement in a number of ways. Using the data collected, I perform microsimulations of mortality, birth, inheritance, marriage, and divorce, which I use to estimate the impact of the corresponding phenomena on the wealth distribution. I briefly review the data and methodology below and provide more details in Appendix B.

I estimate the entire demographic history of the United States since 1850 by year, age, and sex, including population counts, mortality rates, and female and male fertility rates by birth order. I start the estimation in 1850, long before the income and wealth data starts (in 1962), because I use the demographic data to simulate intergenerational linkages between parents and children. So if a centenarian dies in 1962, I must be able to retrace that person's entire fertility history since their birth and retrace the mortality history of that person's children. I construct this data by collecting and harmonizing data from official sources (United States Census Bureau), historical databases (Human Mortality Database, Human Life Table Database, Human Fertility Database, Human Fertility Collection) and academic publications.
Figure 4: Distribution of Wealth and Its Evolution. Source: own computation using the Distributional National Accounts (DINA) microdata from Saez and Zucman (2020a). Note: wealth is always expressed as a multiple of national income; densities in Figure 4a are estimated as histograms with 91 bins of size 0.1 on the asinh(wealth) scale, ranging from −1 to 2,000 times average income (−0.9 to 8.3 on the asinh scale); one panel reports the log CCDF at 500 times average income.

Figure 5: Estimation of Consumption by Wealth. Panel (a): empirical phase portrait, plotting the LHS (≈ inequality change plus other effects) against the RHS (≈ inequality level). Panel (b): consumption by wealth, showing the average and the standard deviation of consumption by wealth, as estimated from the slope and the intercept of the linear relationships in the left panel, for every wealth bin; parameters have been adjusted to account for the data transformations, as explained in Appendix C.3; areas around the lines indicate 95% confidence intervals, estimated using the bootstrap procedure described in Appendix C.2. Source: author's estimations.
Figure 6: Estimated Parameters and Their Relationship to the Literature
Figure 8: Comparison of Mobility in the Model with Panel Survey Data. Source: own computation using the panel Survey of Consumer Finances (SCF) 2007-2009 and the Panel Study of Income Dynamics (PSID) 1984-2019.

Figure 9: Counterfactual Evolution of Wealth Inequality. Note: see main text for details; the benchmark simulation uses the demographic projection (medium variant) from the World Population Prospects (United Nations, 2019) and otherwise assumes that economic parameters remain fixed at their latest observed values; the counterfactual projections change one parameter and leave the others constant; each model simulation involves randomly simulated values, so to filter out the resulting statistical noise I simulate the model five times and take the median of the simulations.
Figure 10: Effects of a Wealth Tax on Estates Above $50m
Figure 11: Pareto Coefficients under an Annual Wealth Tax vs. an Inheritance Tax of Comparable Magnitude. Note: calibration µ = −0.04, σ = 0.4, δ = 1/50; the two series correspond to an annual tax only (χ = 0) and an inheritance tax only (τ = 0).
For the income and wealth data, I primarily rely on the DINA microdata from Piketty, Saez, and Zucman (2018), which are based on the Internal Revenue Service (IRS) individual public-use microdata files, and which have been continuously updated to reflect methodological changes and improvements (Saez and Zucman, 2020a; Saez and Zucman, 2020b). These files are annual (except for 1963 and 1965) since 1962. Each observation corresponds to an adult individual (20 or older), and each variable corresponds to an item of the national accounts distributed to the entire adult population. These files distribute the entirety of the income and wealth of the United States.
Figure B.1: The Impact of Capital Gains on National Income and Its Distribution

Figure B.3: Estate Tax
These standard errors form a vector σ = (σ₁, …, σ_n). In line with the evidence, I assume that the error term is heteroscedastic over wealth bins but homoscedastic over time. The final, complete covariance matrix Σ for the error terms is, therefore, equal to Σ = AΩA, where A = Diagonal(σ, …, σ) (T times). I simulate the error terms ẽ_{it} according to a multivariate normal distribution with mean zero and covariance Σ. The two-dimensional residual is the product of this value with the unit vector normal to the fitted line, i.e., (b, −δ)/√(δ(b² + δ)). So I obtain bootstrap replications of the sample by adding the simulated error terms to the fitted values as follows: x̃_{it} = x*_{it} + ẽ_{it} b/√(δ(b² + δ)) and ỹ_{it} = y*_{it} − ẽ_{it} δ/√(δ(b² + δ)). I get bootstrap replications of the parameter values by running the Deming (1943) regression again on the simulated data.
Bandwidth for the derivative of the log density of wealth   This bandwidth parameter is used to calculate the derivative of the log density of wealth based on the histogram estimate of the density.
Figure C.5: Robustness Checks. Panel (a): δ hyperparameter in the Deming regression, showing the full range of estimates obtained by varying δ by factors α ∈ {1/5, 1/2, 1, 2, 5}. Panel (b): 95% interval range of the estimates obtained by varying the bandwidth parameters separately at random by factors α ∈ {1/5, 1/4, 1/3, 1/2, 1, 2, 3, 4, 5}. See text for details.
Figure C.6: Long-run Changes of the Wealth Distribution with a Wealth Tax and Lump-sum Rebate. Panels show the initial distribution and the long-run distributions with a 12% wealth tax above $50m (no rebate), with a 12% wealth tax above $50m (with lump-sum rebate), and with a 12% wealth tax above $1m (with lump-sum rebate). Note: this figure assumes benchmark parameters for behavioral responses (elasticity of tax avoidance ε = 1 and elasticity of consumption η = 1). For each bracket, the first bar shows the initial distribution of wealth, which corresponds to the year 2019 of the DINA data from Saez and Zucman (2020a). The second bar corresponds to the long-run distribution with a wealth tax of 12% above $50m (which maximizes revenue in the benchmark calibration). The third bar shows the same distribution when the government rebates the tax revenue lump sum every year. The fourth shows the same effect for a tax with a much larger base (all wealth above $1m).
Kolmogorov Forward Equation. For empirical purposes, it is useful to rewrite equation (2) in its integrated version, which involves the cumulative distribution function (CDF) of wealth, F_t(w). After re-arranging terms, we get:

−∂_t F_t(w)/f_t(w) = µ_t(w) − (1/2) ∂_w σ²_t(w) − (1/2) σ²_t(w) ∂_w f_t(w)/f_t(w)

where the left-hand side is the local change in the wealth distribution, and the three terms on the right-hand side are, respectively, the local effect of the average change in wealth, the local effect of the mobility gradient, and the local effect of mobility.
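As a purely illustrative numerical reading of this decomposition, the sketch below evaluates the three right-hand-side terms on a wealth grid with finite differences; the drift, diffusion, and density used as inputs are placeholders, not the paper's estimates.

```python
# Illustrative evaluation of the three local terms of the integrated
# Kolmogorov forward equation; all inputs below are placeholder functions.
import numpy as np

w = np.linspace(1.0, 100.0, 400)          # wealth grid (multiples of average income)
mu = 0.02 * w                             # placeholder drift mu_t(w)
sigma2 = (0.15 * w) ** 2                  # placeholder diffusion sigma_t^2(w)
f = 1.5 * w ** (-2.5)                     # placeholder Pareto(1.5) density

d_sigma2_dw = np.gradient(sigma2, w)
d_logf_dw = np.gradient(np.log(f), w)     # equals (d_w f) / f

drift_term    = mu                        # local effect of the average change in wealth
gradient_term = -0.5 * d_sigma2_dw        # local effect of the mobility gradient
mobility_term = -0.5 * sigma2 * d_logf_dw # local effect of mobility

# The sum equals -d_t F_t(w) / f_t(w): positive values indicate rising inequality locally.
local_change = drift_term + gradient_term + mobility_term
print(local_change[:5])
```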
(Carroll, 1998; De Nardi, 2004)
If consumption at the top were too low, then wealth at the top would grow without bounds, and so would inequality. Note that at no point did I restrict the parameter values to force the existence of a steady state: a nondegenerate steady state arises naturally from the data in a model with constant drift and constant mobility. Finally, we see that changes in the average consumption between 1962-1978 and 1979-2019 are most significant at the top of the distribution (i.e., wealth above 50 times the average income), which aligns with the view that top wealth holders have been the primary drivers of rising wealth inequality.
Schematic comparison of models in the plane defined by the drift parameter µ (→ average savings) and the diffusion parameter σ² (→ mobility): higher values mean higher inequality and faster transitions, lower values mean lower inequality and slower transitions. Aiyagari (1994) and other Bewley (1977) models underestimate inequality because savings and mobility at the top are too low. Models with stochastic returns (Quadrini, 2000; Cagetti and De Nardi, 2006; Benhabib, Bisin, and Zhu, 2011) also feature higher mobility, while models with bequest motives and/or a taste for wealth achieve higher inequality through higher savings alone. To get steady-state inequality levels that match reality, models must have parameters close to a diagonal (beyond which the steady state is degenerate), but models on that diagonal still feature different transition speeds. This paper finds that to match both the levels and the transition speeds of inequality, we actually need lower savings but even higher mobility.
Consumption. To evaluate the external validity of the model, I now compare the model's estimates of consumption to external data. It is important to note that direct evidence on the value of these parameters is scant; this is, in fact, a central motivation for the indirect approach taken in this paper. Nonetheless, we can use two surveys to shed some light on these values. First, there is the SCF, which is usually cross-sectional but had a panel wave between 2007 and 2009. Then there is the PSID, which has been recording wealth every five years since 1984.
Table 1 compares the mean and the standard deviation of the propensity to consume (in % of wealth) for the 50-90%, 90-99%, and top 1% wealth groups, as simulated by the model over 1962-1977 and 1978-2019 and as observed in the SCF (2007-2009) and the PSID (1984-2019).
Figure 7: Comparison of the Model with Observed Dynamics. (a) Top 1% Share; (b) Growth Incidence Curves.
Note: The simulation of the model involves randomly simulated values: to filter out the resulting statistical noise, I simulate the model five times and take the median of the simulations. See main text for details. After 2019, the simulation uses the demographic projection (medium variant) from the World Population Prospects (United Nations, 2019) and otherwise assumes that economic parameters remain fixed at their latest observed values.

Figure 7 confirms that this is the case. Figure 7a compares the evolution of the top 1% wealth share in real and simulated data, and shows that we reproduce both the decrease in wealth inequality over 1962-1978, and the increase over 1979-2019. Figure 7b goes further by showing the growth incidence curves (GICs) for the top 30% (a group that has consistently owned about 90% of total wealth since the 1960s). Again, we reproduce the observed growth rates for every percentile (and for fractions of a percentile within the top 1%), both over 1962-1977 and over 1978-2019.
Table 1: Distribution of the Propensity to Consume: Comparison With Other Sources

Because these surveys record wealth and income longitudinally, I can use them to estimate the consumption of each respondent. In principle, I can calculate consumption as the difference between income and the change in wealth.
Table 2: Decomposition of Wealth Growth in the Top 1%

The higher growth rate of the top 1% is primarily driven by two factors: an increase in capital gains and a decrease in average consumption. The increase in capital gains comes largely from the fact that wealth holders experienced capital losses over 1962-1978. On the other hand, regular capital incomes cannot account for the rise of wealth inequality since they have been lower during 1979-2019 than during 1962-1978.
The effect seen in Figure 9b is in fact similar to what we would obtain by assuming zero capital gains since 1978.
Estate taxation has undergone many reforms, leading to a top marginal rate that is half as high today as it was in the 1960s. What role has estate taxation played in the increase in wealth inequality? I explore this issue in Figure 9f, in which I freeze the estate tax schedule after 1978. I find virtually no impact of this change on the wealth distribution. Two factors account for this finding. First, the evolution of the overall progressivity of the estate tax over the 20th century has actually been more ambiguous than what the trajectory of the top marginal tax rate would suggest (see Figure B.3 in the appendix). The very high marginal tax rates of the 1960s did not kick in until extremely high levels of wealth. Following the reforms in the 1980s, estates in the order of tens of millions of dollars were actually taxed more heavily. Estates needed to reach hundreds of millions of dollars to benefit from the reforms. It wasn't until the 2000s that the estate tax became unambiguously less progressive. Second, estate taxation has an intrinsically weak impact on the wealth distribution. Weaker, say, than an annual wealth tax of seemingly comparable magnitude. As an illustration, running the model with a radical estate tax (100% tax on estates above $100k) would only reduce the top 1% wealth share by 1.5pp. in the long run. A naive view would suggest that taxing
This data has several advantages. It provides distributional estimates that are consistent with macroeconomic aggregates. It has rather large samples (from about 175 000 in the 1960s to about 300 000 today), with oversampling of the richest. And because it is based on tax data, it captures the top tail of the distribution well. But it does have some drawbacks. First, it has limited socio-demographic information: in particular, age information is only available in the form of very broad age groups. Second, the data does not include capital gains because they are not part of national income as defined by the national accounts. For these reasons, I make some adjustments and imputations to these data, using the SCF and national accounts.

I use post-tax national income as my income concept of reference. It corresponds to income after all taxes and transfers. It also distributes government expenditures and the income of the corporate sector to individuals so as to sum up to net national income.
Figure panels: (a) Net National Income and Capital Gains; (b) Top 1% Post-tax National Income Share, with and without Capital Gains.
Source: Author's computations using the DINA microdata and table TSD1 (online appendix) from Piketty, Saez, and Zucman (2018).
Note: The unit of analysis is the adult individual (20 or older). Income is split equally between members of couples. Capital gains are estimated assuming a constant rate of capital gains by asset type.
I compute the entire demography of the United States from 1850 to 2100. Although the income and wealth data does not start until 1962, the model requires demographic data that starts much earlier. Indeed, I need to simulate how wealth gets transmitted from one generation to the next. Therefore, if a supercentenarian dies in the 1960s, I have to be able to simulate their entire life history to know how many live children they have and how old they are. For all years and all ages, I estimate data on the population structure by age and sex, mortality (i.e., life tables), fertility (for both sexes), and intergenerational ties (age and sex of children).
From 1900 to 1932, I use the National Intercensal Tables from the United States Census Bureau. From 1933 to 2016, I use population estimates from the Human Mortality Database. After 2016, I use the projections from the World Population Prospects (United Nations, 2019).

Age-Specific Fertility Rates by Birth Order. I estimate age-specific fertility rates by birth order for both sexes. For women, they are directly available from 1933 to 2016 from the Human Fertility Database. From 1917 to 1932, I use data from the Human Fertility Collection. That same source provides fertility rates going back to 1895-1899, but without the breakdown by birth order. Therefore, before 1917, I assume that the birth order composition remains constant. Before 1895, there is no age-specific data available, so I use the data on the total fertility rate and rescale the age profile from 1895-1899 to that value. Unlike female fertility rates, male fertility rates are not a standard demographic indicator, so they are not directly available from any source. To estimate them, I combine the age-specific female fertility rates with the joint distribution of the age of opposite-sex couples since 1850, calculated using the decennial census microdata from the IPUMS USA database (Ruggles et al., 2022).

Age and Sex of Children. I simulate the distribution of the number, age, and sex of living children for each year after 1962 (when income and wealth data start). I do this for every age and both sexes, which allows me to realistically model how wealth gets transmitted from one generation to the next. To that end, I combine all the data above. I make every person have children randomly over their past lifetime according to year-, age-, and sex-specific fertility rates. Because I have the breakdown by birth order, I can take into account how the decision to have another child depends on the number of children that one already has. Then, I make each child go through life and die randomly according to their year-, age-, and sex-specific mortality rate. As a result, I can tie every individual in the database to fictitious descendants that are, on average, representative of the true composition of descendants.
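As a toy illustration of this simulation step, the sketch below gives one person children according to age-specific fertility rates over their past lifetime and then subjects each child to age-specific mortality. The rates, age ranges, and the omission of the year and birth-order dependence used in the paper are all simplifying assumptions made here.

```python
# Toy sketch of the descendant simulation; fertility and mortality rates are
# placeholders, and the year and birth-order dependence of the paper is omitted.
import numpy as np

rng = np.random.default_rng(0)
fertility = {a: 0.10 if 20 <= a <= 40 else 0.0 for a in range(121)}   # per-year birth prob.
mortality = {a: 0.005 + 0.0005 * a for a in range(121)}               # per-year death prob.

def simulate_children(age_now):
    """Ages of the living children of a person currently aged `age_now`."""
    children = []
    for past_age in range(15, age_now):                  # loop over the parent's past lifetime
        if rng.random() < fertility[past_age]:           # a child is born that year
            child_age = age_now - past_age
            alive = all(rng.random() > mortality[a] for a in range(child_age))
            if alive:
                children.append(child_age)
    return children

print(simulate_children(age_now=70))
```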
Life Tables Before 1900, I use the historical life tables from Haines (1998). From 1900 to
1932, I use the Human Life Table Database, and from 1933 to 2016, life tables from the Human
Mortality Database. After 2016, I rely on projections from the World Population Prospects
(United Nations, 2019). All tables are broken down by sex.
While the demographic aspect of inheritance is endogenously determined by demography, I still need to model separately how wealth gets distributed for a given age and sex. This facet of the problem captures intergenerational wealth mobility, in the sense that wealthier people might also have wealthier parents and thus inherit more. There are two aspects to this question: the extensive margin (how likely are you to receive an inheritance in a given year?)
and the intensive margin (how much inheritance do you receive?). To address this question, I use data from the SCF, which has been recording inheritance consistently since 1989. Because the probability of receiving an inheritance in a given year is very low overall (about half a percent, see Figure B.2a), I have to pool all the 1989-2016 waves to get sufficient sample sizes.

Figure B.2: (a) Probability of Receiving Inheritance by Age; (b) Rank in the Wealth Distribution by Age; (c) Relative Probability of Inheritance (Conditional on Age); (d) Ranks in the Wealth and in the Inheritance Distribution (Conditional on Age).
Figure B.3: (a) Top Marginal Estate Tax since 1916; (b) Average Estate Tax Rate, by Wealth.
Source: Own computation using the statutory schedules of the federal estate tax.
(marriage rate)_{a,t} = 1 − (fraction never married)_{a+1,t+1} / (fraction never married)_{a,t}

(divorce rate)_{a,t} = [(fraction divorced)_{a+1,t+1} − (1 − (marriage rate)_{a,t}) × (fraction divorced)_{a,t}] / (fraction married)_{a,t}

where a is age, and t is the year. This estimate being quite noisy, I winsorize the bottom 10% and the top 10% of values, and then apply a moving average with a ten-year age and year window. This gives the results of Figure B.4a. For the simulation, we anchor these numbers to the aggregate data on the crude rate of marriage and divorce from the NVSS (Center for Disease Control, 2022) (Figure B.4b).

Assortative Mating. I calculate the extent of assortative mating by estimating the copula between the rank of each spouse in the wealth distribution, conditional on age, at the time they get married. To that end, I use data from the SIPP, which collected data on wealth between 2013 and 2016. I select all people who just got married (i.e., people single in year t − 1 but married in year t) and fit a parametric copula to the joint rank of each spouse in the wealth distribution, conditional on age (Figure B.4c). As in Section B.3, I select the best copula out of a large family of 15 single-parameter copulas according to the AIC, which is the Frank copula.
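For illustration, the sketch below fits a Frank copula by maximum likelihood to simulated pseudo-ranks, using the closed-form Frank density. The data, the parameter bounds, and the function names are assumptions made here, not the paper's estimation code.

```python
# Illustrative Frank-copula fit on fake pseudo-ranks; not the paper's code.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import rankdata

def frank_neg_loglik(theta, u, v):
    """Negative log-likelihood of the Frank copula density at parameter theta."""
    num = theta * (1 - np.exp(-theta)) * np.exp(-theta * (u + v))
    den = ((1 - np.exp(-theta))
           - (1 - np.exp(-theta * u)) * (1 - np.exp(-theta * v))) ** 2
    return -np.sum(np.log(num / den))

rng = np.random.default_rng(0)
n = 1000
u = rng.uniform(size=n)                                   # rank of one spouse
v = rankdata(0.5 * u + 0.5 * rng.uniform(size=n)) / (n + 1)  # dependent pseudo-rank

res = minimize_scalar(frank_neg_loglik, bounds=(0.01, 30.0), args=(u, v), method="bounded")
theta_hat = res.x
aic = 2 * 1 + 2 * frank_neg_loglik(theta_hat, u, v)       # AIC with one parameter
print(theta_hat, aic)
```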
Table 3: List and Default Values for Hyperparameters (columns: Hyperparameter; Role; Default value; Estimation method and kernel, if applicable). The hyperparameter δ determines the ratio of the variance of the error terms between both sides of the [START_REF] Deming | Statistical adjustment of data[END_REF] regression; its default value depends on the wealth bin (see Section C.1, as well as the line "Bandwidth for estimating measurement error variance" in the table). The remaining hyperparameters are bandwidths, including the bandwidth for mean income over time.

Table 3 describes the hyperparameters used in the estimation. Besides δ, all of them are bandwidth parameters used for smoothing out noise or short-term variations in the data. I apply smoothing using locally constant regressions with a rectangular kernel (i.e., moving averages). When I need to estimate derivatives, I use the coefficient from a locally linear regression with a rectangular kernel as well.
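The smoothing described here amounts to moving averages and windowed slopes. The sketch below shows a minimal version of both; the bandwidth, the test signal, and the function names are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of locally constant and locally linear regressions with a
# rectangular kernel; bandwidths and test data are illustrative only.
import numpy as np

def local_constant(x, y, x0, bandwidth):
    """Moving average of y over the window |x - x0| <= bandwidth."""
    mask = np.abs(x - x0) <= bandwidth
    return y[mask].mean()

def local_linear_slope(x, y, x0, bandwidth):
    """Slope of an OLS line fitted on the window |x - x0| <= bandwidth."""
    mask = np.abs(x - x0) <= bandwidth
    slope, intercept = np.polyfit(x[mask], y[mask], 1)
    return slope

x = np.linspace(0, 10, 200)
y = np.sin(x) + np.random.default_rng(0).normal(scale=0.1, size=x.size)
print(local_constant(x, y, 5.0, 1.5), local_linear_slope(x, y, 5.0, 1.5))
```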
Figure: mean (1962-1978), mean (1979-2019), and standard deviation, plotted against wealth (multiple of average income).
Saez and Zucman (2020a) is a revision of Saez and Zucman (2016), which accounts for heterogeneous returns, as well as the more important role of private business wealth at the top found by [START_REF] Smith | Top Wealth in America: New Estimates under Heterogeneous Returns[END_REF]. See also Saez and Zucman (2020b).
Other estimates, notably [START_REF] Smith | Top Wealth in America: New Estimates under Heterogeneous Returns[END_REF], also find a similar increase of the top 1% share, although they also find a somewhat more muted increase than Saez and Zucman (2020a) for the top 0.01% and narrower top groups.
This is for example the case in the reanalysis of the model of[START_REF] Judd | Redistributive taxation in a simple perfect foresight model[END_REF] by[START_REF] Straub | Positive Long-Run Capital Taxation: Chamley-Judd Revisited[END_REF] in the case with no government spending.
As long as this elasticity is finite, positive capital taxes are desirable. We can interpret earlier arguments that capital should never be taxed[START_REF] Chamley | Optimal Taxation of Capital Income in General Equilibrium with Infinite Lives[END_REF][START_REF] Judd | Redistributive taxation in a simple perfect foresight model[END_REF] as the consequence of assuming an infinite elasticity
[START_REF] Gabaix | The Dynamics of Inequality[END_REF] primarily focus on income inequality, but their formal findings apply to wealth as well.
This solution differs from the solutions proposed by[START_REF] Gabaix | The Dynamics of Inequality[END_REF], which involve introducing temporary changes to the drift to accelerate dynamics at the beginning of a transition.
Saez and Zucman (2019) consider a wealth tax at a constant average rate above a threshold. My formulas apply to arbitrary wealth taxes, including the more common case of a constant marginal rate within a bracket.
If the elasticity were constant, a 12% revenue-maximizing rate would be associated with a revenue reduction of 60% compared to the inelastic case, as opposed to 75% here. 6
[START_REF] Saporta | Tail of a linear diffusion with Markov switching[END_REF] rigorously study the continuous time version of [START_REF] Kesten | Random difference equations and Renewal theory for products of random matrices[END_REF] processes. In this paper I consider simpler (and less literal) continuous time analogs to [START_REF] Kesten | Random difference equations and Renewal theory for products of random matrices[END_REF] processes.
The decomposition in Gomez (2022) also accounts for higher moments of the distribution of wealth growth rates (i.e., skewness, kurtosis, etc.), but he finds that these have a negligible impact on the wealth distribution.
[START_REF] Gomez | Decomposing the Growth of Top Wealth Shares[END_REF] also applies his methodology without direct panel data but with a separate estimate of the variance of wealth growth. However, estimating this variance still requires some form of longitudinal data.
Another famous rationale for not taxing capital comes from[START_REF] Atkinson | The design of tax structure: Direct versus indirect taxation[END_REF]. In their model, there is no heterogeneity of wealth conditional on income. Therefore, assuming income can be taxed, taxing wealth provides no additional equity gains while also distorting intertemporal choices. This justification for not taxing capital would not apply to my model since wealth is heterogeneous conditional on income.
See Auerbach and Hines (2001),Piketty and Saez (2013a), and[START_REF] Saez | Generalized Social Marginal Welfare Weights for Optimal Tax Theory[END_REF], although this interpretation is not universally accepted, see[START_REF] Straub | Positive Long-Run Capital Taxation: Chamley-Judd Revisited[END_REF].
This formulation assumes that the transitory shocks are uncorrelated. We could account for correlated shocks by including additional covariance terms, as in Bienaymé's identity. To simplify the exposition, I focus on the uncorrelated case.
Note that with the change of variable p = F t (w t ), we get ∂ t w t (p) = µ t (p). Hence, µ t (p) is the growth of the pth quantile.
If wealth follows a Pareto distribution with coefficient α_t, then −∂_t F_t(w)/f_t(w) = −(w log w)(∂_t α_t /α_t). Therefore, −∂_t F_t(w)/f_t(w) > 0 is associated with a decrease in α_t, which corresponds to an increase in inequality.
Their latest revision (Saez and Zucman, 2020a) accounts for heterogeneous returns.
People aged 20 have very low levels of wealth, so in practice, this is close to assuming that people start with zero wealth.
The birth rate is estimated here as a residual between population growth and the crude death rate, so in effect, it also incorporates the effect of immigration.
I.e., asinh : x ↦ log(x + √(x² + 1)).
Itô's lemma states that, if a process x_t follows the SDE dx_t = µ_t dt + σ_t dB_t, then φ_t(x_t) follows the SDE dφ_t(x_t) = [∂_t φ_t(x_t) + µ_t ∂_x φ_t(x_t) + (1/2) σ²_t ∂²_x φ_t(x_t)] dt + σ_t ∂_x φ_t(x_t) dB_t. See Appendix Section C.3 for details.
The range goes from -0.9 to 8.3 on the asinh scale.
This represents 91 bins.
At the top, the wealth distribution is approximately Pareto, and the inverse hyperbolic sine is approximately logarithmic, so the distribution of transformed wealth is approximately exponential.
In the benchmark specification, I use a rectangular kernel and a bandwidth of 1.5. See Appendix C.4 for robustness checks.
This is partly the result of a smaller sample size over 1962-1977.
This is a parsimonious alternative to the solutions suggested by[START_REF] Gabaix | The Dynamics of Inequality[END_REF], which involve the introduction of additional short-run dynamics at the beginning of transition periods.
Average wealth below the 70th percentile is very low, even zero or negative for some percentiles, and therefore it is not meaningful to calculate their growth rates.
First, the surveys do not record the income earned between interview years, so I have to assume that the respondent's incomes have not changed between interviews. Second, some forms of income, especially accrued capital gains, are not always properly recorded in the surveys, so changes in asset prices might affect the results. Third, since we are calculating consumption as a residual, we are more sensitive to measurement error, which can be a significant issue in surveys. Fourth, surveys record their data at time intervals that are less frequent than this paper's yearly data. I rescale the survey estimates of the mean and the variance of consumption by a factor ∆t, where ∆t is the time between interviews, to annualize all numbers and make them comparable. But that doesn't
Naturally, such changes in consumption could themselves be the result of changes to income and taxation. But the current exercise maintains the distinction between the direct and the indirect effects of income and taxes on wealth inequality.
This section does not develop a complete theory of optimal wealth taxation, which would be beyond the scope of this paper. Instead, it focuses on the observable responses of the wealth distribution to a wealth tax. I leave the issue of integrating this framework with a complete optimal tax model for future research.
This result does not consider how the government uses the wealth tax. We can adapt the formula to include the redistribution of a lump-sum amount and then solve an equation numerically to ensure that we redistribute as much as we tax. In practice, the impact would be negligible as long as we focus on the top of the distribution. Indeed, much more wealth is taxed in that part of the distribution than would be redistributed. The lump sum rebate would have an impact at the bottom, however. Section C.5 in appendix addresses this question.
Very large behavioral responses, as found by[START_REF] Brülhart | Taxing Wealth: Evidence from Switzerland[END_REF], are of limited interest for the current exercise, since they make long-run dynamic effects negligible compared to the immediate static effect of behavioral responses.
Saez and Zucman (2019), who consider a somewhat analogous model, do obtain an estimate for the revenue-maximizing rate. This is because in their model, top wealth holders are taxed at an average, not marginal, rate τ (see also Section 6.1.1 for a discussion).
Intuitively, δ creates a cap on the potency of the estate tax: when it is low, wealth is rarely transmitted, and so its effect is limited. The parameter α modulates that cap. When α → 1, inequality tends to infinity. Only one dynasty owns meaningful wealth, and the estate tax is maximally efficient because it can repeatedly affect that one dynasty. On the other hand, when α → +∞, inequality tends to zero. Everyone owns a similar amount of wealth. The estate tax struggles to discriminate between the richest and the poorest in these conditions.
Although the income measure in the DINA data does not include capital gains, it does distribute income from the corporate sector to the owners of capital, which partly accounts for changes in asset prices. My capital gains measure is net of retained earnings, so there is no double counting.
Note that both datasets are weighted, so that observations end up being duplicated and partially matched to one another. When the samples contain M and N observations, respectively, the resulting dataset contains at most M + N -1 observations.
See https://www.mortality.org/hmd/USA/DOCS/ref.pdf for detailed primary sources.
See Gapminder: https://www.gapminder.org/news/children-per-women-since-1800-ingapminder-world/
The formula of the Frank copula is C(u, v) = −(1/θ) log[1 + (exp(−θu) − 1)(exp(−θv) − 1)/(exp(−θ) − 1)].
I make an exception for the bandwidth parameter used to calculate the derivative of the diffusion. I only vary that parameter by a factor α ∈ {1, 2, 3, 4, 5}, because undersmoothing in that part of the estimation leads to unrealistically noisy estimates.
data and replication codes are available at https://github.com/
A Proofs Omitted from Main Text
A.1 Proposition 1
The proof of proposition 1 is a straightforward consequence of Gyöngy's (1986) theorem and of the rules of Itô calculus. Let us start by providing the full statement of Gyöngy's (1986) theorem.
Theorem (Gyöngy, 1986). Let X_t be an n-dimensional stochastic process satisfying:
where X t and Y t have the same marginal distributions for each t. We can construct Y t by setting:
To simplify notations, consider all expectations conditional on w i t = w. Write z i t = µ i t dt + σ i t dB i t . For the drift term, we have directly E
Note that the proof rests on an approximation made possible by this paper's continuous-time formalism. Namely, it allows us to identify the average variance of individual shocks E[σ²_it] with the overall variance of wealth growth Var(z_it). This is because over an interval of time dt, the part of the variance that is explained by w is of order (dt)² while the rest of the variance is
C.3 Transformation of Parameters Due to the Inverse Hyperbolic Sine Transform
To facilitate the manipulation of the data and the estimations, I transform wealth using the inverse hyperbolic sine function, which we can define as asinh(x) = log(x + √(x² + 1)).
This function is useful because it is bijective, behaves logarithmically at large scales, but still tolerates zero or negative values, for which it behaves linearly. A helpful feature of continuoustime stochastic processes is that the dynamics of wealth and the dynamics of asinh(wealth) are directly related to one another through Itô's lemma. Concretely, assume that wealth follows a SDE with the following drift and diffusion:
where:
are, respectively, the drift and the diffusion induced by income. Then asinh(wealth) follows a SDE as well with the following drift and diffusion:
To perform the estimation, I must first transform the income parameters to conform to the asinh scale:
Then I estimate c(w) and γ(w) using zt (w) as the income-induced drift, and ψt (w) as the income-induced diffusion. I transform these parameters back into the linear scale:
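The sketch below illustrates this change of variables numerically: given placeholder drift and diffusion functions for wealth, it applies Itô's lemma with φ = asinh to obtain the drift and diffusion on the transformed scale; the inverse map follows by dividing instead of multiplying by asinh′(w). The wealth dynamics used are assumptions for illustration only.

```python
# Sketch of the asinh change of variables via Ito's lemma; the example
# mu(w) and sigma(w) below are placeholders, not estimated parameters.
import numpy as np

def asinh_prime(w):
    return 1.0 / np.sqrt(1.0 + w ** 2)

def asinh_second(w):
    return -w / (1.0 + w ** 2) ** 1.5

def to_asinh_scale(w, mu, sigma):
    """Drift and diffusion of asinh(w_t) when dw_t = mu dt + sigma dB_t."""
    mu_tilde = mu * asinh_prime(w) + 0.5 * sigma ** 2 * asinh_second(w)
    sigma_tilde = sigma * asinh_prime(w)
    return mu_tilde, sigma_tilde

w = np.linspace(0.0, 1000.0, 5)
mu, sigma = 0.02 * w, 0.2 * np.maximum(w, 1.0)   # placeholder wealth dynamics
print(to_asinh_scale(w, mu, sigma))
```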
$50m, which in the benchmark calibration corresponds to a linear tax rate of 12%. The government rebates the tax lump sum every year, so the dynamics of wealth are now:
where τ is the average tax revenue. The steady-state wealth distribution is now characterized by:
where θ(w) is defined similarly as in the no-rebate case (and, in the current example, accounts for behavioral responses). The value τ must satisfy the equation:
which can be solved numerically. Figure C.6 shows the results from this exercise. It reports the wealth shares of four brackets (the bottom 50%, the middle 40%, the next 9%, and the top 1%) and four scenarios (no wealth tax, wealth tax without rebate, wealth tax with rebate, broader wealth tax with rebate). The wealth distribution remains highly unequal in all cases:
the bottom 50% has virtually no wealth. Still, the wealth tax operates a significant amount of redistribution. The wealth tax alone primarily redistributes away from the top 1% to the benefit of the middle 40% and the next 9%. But it does not affect the bottom 50%. With the rebate, the bottom 50% more than doubles its share, but from a very low baseline (0.5%, to 1.3%).
The tax with a broader base (all wealth above $1m) increases it a bit more, to 3.6%.
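To show what "solved numerically" involves, here is a generic sketch of the revenue-consistency fixed point: find the rebate such that the amount redistributed equals the wealth-tax revenue generated under the steady state it induces. The revenue function below is a toy stand-in, since the actual formula comes from the steady-state distribution given above.

```python
# Generic sketch of the fixed-point step; revenue_given_rebate is a toy
# stand-in for the steady-state revenue formula of the appendix.
from scipy.optimize import brentq

def revenue_given_rebate(rebate):
    """Toy stand-in: average wealth-tax revenue per adult under the steady
    state induced by a given lump-sum rebate."""
    return 5000.0 / (1.0 + 0.0001 * rebate)

# Fixed point: redistribute exactly as much as is taxed.
tau_bar = brentq(lambda r: revenue_given_rebate(r) - r, 0.0, 1e6)
print(tau_bar)
```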
Appendix References
Alstadsaeter |
01287528 | en | [
"spi.meca.ther"
] | 2024/03/04 16:41:22 | 2015 | https://hal.science/hal-01287528/file/An%20adapted%20steady%20RANS%20RSM%20wall-function.pdf | Lucie Merlier
email: [email protected]
Frédéric Kuznik
Gilles Rusaouën
Julien Hans
An adapted steady RANS RSM wall-function for building external convection
Keywords:
Computational Fluid Dynamics (CFD) can improve usual estimates of building external convective heat transfer coefficients (h c,w ) by accounting for the geometry of constructions, the aerodynamic field around them and the nature of convection, and by providing high resolution data. However, the limitations of usual mean flow descriptions and near wall treatments make the accurate prediction of h c,w challenging. Hence, this paper evaluates the ability of steady RANS Reynolds Stress (RSM) and k-ε realizable models to predict h c,w in the case of isolated cubical obstacles. The accuracy of usual CFD methods and turbulence models, as well as of fine grid near wall models and usual temperature wall functions (TWFs), is examined by comparison with experimental and detailed numerical data.
When used with a low Reynolds number model (LRNM), both turbulence models accurately predict h c,w on the front and rear faces of the obstacle. However, they show different behaviors on the other faces and highlight issues related to the dynamic behavior of real flows. Moreover, h c,w predictions obtained using standard TWFs substantially deviate from the validated LRNM results. Therefore, a customized TWF suited for use with the RSM and forced convection problems is proposed by extending studies of Defraeye et al. (An adjusted temperature wall function for turbulent forced convective heat transfer for bluff bodies in the atmospheric boundary layer; Building and Environment, 2011, 46, 2130-2141). Customized TWFs substantially improve WF-based h c,w predictions with respect to LRNM results while keeping their cost effectiveness, and provide satisfactory results even for high z * .
ρ Gas density [kg·m⁻³]
C_p Specific heat [J·kg⁻¹·K⁻¹]
C_µ Model constant [-]
h Surface heat transfer coefficient [W·m⁻²·K⁻¹]
k Turbulent kinetic energy [m²·s⁻²]
Introduction
The accurate knowledge of building interior and exterior convective heat transfers is required to properly evaluate the building thermal and energy behavior [START_REF] Zhai | Performance of coupled building energy and CFD simulations[END_REF][START_REF] Zhai | Sensitivity analysis and application guides for integrated building energy and CFD simulation[END_REF][START_REF] Emmel | New external convective heat transfer coefficient correlations for isolated low-rise buildings[END_REF][START_REF] Bouyer | Microclimatic coupling as a solution to improve building energy simulation in an urban context[END_REF][START_REF] Allegrini | Analysis of convective heat transfer at building facades in street canyons and its influence on the predictions of space cooling demand in buildings[END_REF][START_REF] Merlier | On the interactions between urban structures and air flows: a numerical study of the effects of urban morphology on the building wind environment and the related building energy loads[END_REF]. Only considering external convective heat transfers, they are also important to study the performance of energy systems [START_REF] Palyvos | A survey of wind convection coefficient correlations for building envelope energy systems modeling[END_REF] including the convective cooling of solar panels, the drying behavior of surfaces [START_REF] Saneinejad | Analysis of convective heat and mass transfer at the vertical walls of a street canyon[END_REF] or the turbulent thermal transfers in cities, which influence the urban heat island [START_REF] Oke | Boundary layer climates, 2nd Edition[END_REF][START_REF] Allegrini | Influence of morphologies on the microclimate in urban neighbourhoods[END_REF].
Convective heat transfers depend on different parameters related to the properties of the fluid, flow and surface. Considering air and a basic boundary layer configuration over a flat plate, they directly depend on the flow velocity and turbulence as well as on the surface roughness. On the contrary, sharp edged obstacles generally involve complex separated flows. As a consequence, convective heat transfers are greatly determined by the different properties of the flow structures that develop next to the obstacles [START_REF] Cole | The convective heat exchange at the external surface of buildings[END_REF][START_REF] Meinders | Experimental study of the local convection heat transfer from a wall mounted cube in turbulent channel flow[END_REF][START_REF] Nakamura | Local heat transfer around a wallmounted cube in the turbulent boundary layer[END_REF]. In particular, heat entrapment into the recirculation phenomena decreases heat transfers whereas high wind speeds and temperature differences between the fluid and the wall improve heat transfers in flow impinging regions. Similarly, the intermittent flow reattachment on the building surfaces improves convective heat transfers. Therefore, convective heat transfers distribution around bluff bodies such as buildings can completely differ from those developing over flat plates.
According to Ref. [START_REF] Defraeye | Convective heat transfer coefficients for exterior building surfaces: Existing correlations and CFD modelling[END_REF], existing correlations linking external convective heat transfer coefficient (h c,w ) with a reference wind speed and that are applicable for building outer walls are either based on reduced-scale experiments undertaken for bluff bodies located in a turbulent boundary layer [START_REF] Nakamura | Local heat transfer around a wallmounted cube in the turbulent boundary layer[END_REF][START_REF] Nakamura | Local heat transfer around a wallmounted cube at 45 to flow in a turbulent boundary layer[END_REF], full-scale measurements taken on building facades [START_REF] Hagishima | Field measurements for estimating the convective heat transfer coefficient at building surfaces[END_REF][START_REF] Liu | Full-scale measurements of convective coefficient on external surface of a low-rise building in sheltered conditions[END_REF] or computational fluid dynamics (CFD) modeling [START_REF] Emmel | New external convective heat transfer coefficient correlations for isolated low-rise buildings[END_REF][START_REF] Blocken | High-resolution CFD simulations for forced convective heat transfer coefficients at the facade of a low-rise building[END_REF][START_REF] Defraeye | CFD analysis of convective heat transfer at the surfaces of a cube immersed in a turbulent boundary layer[END_REF]. Nonetheless, according to [START_REF] Palyvos | A survey of wind convection coefficient correlations for building envelope energy systems modeling[END_REF] and [START_REF] Mirsadeghi | Review of external convective heat transfer coefficient models in building energy simulation programs: Implementation and uncertainty[END_REF], the different models commonly used in building thermal engineering and building energy simulation programs to compute building external convective heat transfers are generally derived from reduced-scale or full-
scale experimental studies, and numerous and diverse models are reported. The fact that reduced-scale models often involve flat plates casts doubts on their applicability for building physics problems. Correlations derived from field measurements seem more suitable but their applicability is often limited as they correspond to specific configurations and experimental conditions. Therefore, further detailed experimental and computational studies are necessary to better understand the building energy behavior.
Compared to full-scale and wind-tunnel studies, CFD approaches can provide high resolution information and take into account most of the factors that influence h c,w , including the different features of the approach flow, the effective dimensions, geometry and thermal properties of the built environment or the different natures of convection. However, usual CFD approaches have to deal with their intrinsic drawbacks due to their mathematical and physical assumptions [START_REF] Moonen | Urban Physics: Effect of the micro-climate on comfort, health and energy demand[END_REF][START_REF] Blocken | 50 years of Computational Wind Engineering: Past, present and future[END_REF]. In particular, commonplace steady RANS methods cannot reproduce the intermittent flow behavior and turbulence models consider differently the effects of turbulence on the mean flow. Furthermore, h c,w predictions also greatly depend on the near wall treatment as, in addition to turbulent transport processes in the general mass flow, flow thermal features strongly vary in the wall viscous and buffer layers [START_REF] Launder | Numerical computation of convective heat transfer in complex turbulent flows: time to abandon wall functions?[END_REF][START_REF] Launder | On the Computation of Convective Heat Transfer in Complex Turbulent Flows[END_REF]. Therefore, accurately modeling building external h c,w is even more complicated than modeling building aerodynamics alone, and finding an appropriate combination of physical model and model resolution is challenging. Hence, considering isolated cubical case studies, this paper examines and discusses the accuracy of h c,w predictions obtained using (i) a steady RANS approach while dealing with strongly intermittent convective processes; (ii) first- or second-order turbulence models (the k-ε realizable (Rk-ε) or Reynolds stress (RSM) models); and (iii) fine grid near wall models (LRNM) or wall functions (WF). A customized temperature wall-function (CTWF) suitable for use with the RSM is then proposed, extending the methodology proposed by [START_REF] Defraeye | An adjusted temperature wall function for turbulent forced convective heat transfer for bluff bodies in the atmospheric boundary layer[END_REF].
To address these different issues, this paper is organized as follows. Sec. 1
presents the modeling bases. Sec. 2 examines the accuracy of the LRNM model with respect to experimental data and analyzes the performance of implementing a steady RANS approach together with the Rk-ε or RSM models in predicting h c,w over an isolated cube. Then, Sec. 3 discusses deviations in h c,w predictions observed when using a standard TWF instead of the LRNM. Sec. 4 presents improvements of usual TWFs, and proposes a CTWF suitable for use with the RSM. Finally, Sec. 5 synthesizes the different results and methodological challenges highlighted through the study and opens perspectives.
Note that this paper only focuses on convective heat transfers predictions. The validation of the different aerodynamic models used in the following can be found in Ref. [START_REF] Merlier | On the interactions between urban structures and air flows: a numerical study of the effects of urban morphology on the building wind environment and the related building energy loads[END_REF][START_REF] Merlier | On the accuracy of steady RANS models and the LBM LES method: Comparison of the predicted flows around an isolated rectangular block[END_REF], in which the predicted flow fields around an isolated rectangular block immersed in a turbulent boundary layer are examined with respect to detailed wind-tunnel measurements of the CEDVAL [START_REF] Hamburg | Compilation of experimental data for validation of microscale dispersion models[END_REF][START_REF] Leitl | Validation data for microscale dispersion modeling[END_REF]. Predictions obtained using the steady RANS Rk-ε and RSM were found satisfactory and comparable next to the front and rear faces of the obstacle in case of a smooth floor.
Assessing building external convective heat transfer coefficients using CFD
Convective heat transfer at building external facade is generally expressed using a coefficient h c,w , as follows:
h c,w = q c,w / (T w − T ref )    (1)
Using CFD, two main methods can be used to simulate the effects of a wall on the flow [START_REF] Blocken | Sports and buildings aerodynamics[END_REF]. They differ by the complexity of the physical model and the grid resolution required. The more detailed approach is the LRNM. This method applies for z * ≈ 1 as it solves the whole boundary layer, including the laminar region. By contrast, WFs require the first node to be located in the fully turbulent region of the boundary layer, i.e. 50 ≤ z * ≤ 500. WFs bridge in a single cell the viscosity affected region of the boundary layer and
usually model near wall flow behavior using logarithmic laws. It is important to mention that usual WFs were derived from wall attached near equilibrium flows, which do not correspond to non equilibrium or separated flows [START_REF] Popovac | Compound Wall Treatment for RANS Computation of Complex Turbulent Flows and Heat Transfer[END_REF]. Furthermore, the logarithmic formulation for temperature is even less widely valid than that for the momentum [START_REF] Launder | On the Computation of Convective Heat Transfer in Complex Turbulent Flows[END_REF]. These reasons certainly explain, at least partly, why many studies including [START_REF] Launder | Numerical computation of convective heat transfer in complex turbulent flows: time to abandon wall functions?[END_REF], [31] and [32] highlight the superiority of implementing LRNM or two layer models rather than WFs in predicting flow fields and/or convective heat transfers, although fine grid approaches are less cost effective and often reduce the simulation convergence rates. However, as opposed to fundamental CFD studies, using WFs is generally the only option when dealing with building physics problems because of the very different scales characterizing constructions and thermal boundary layers.
In the commercial CFD software Ansys Fluent [START_REF]Fluent inc[END_REF], LRNM-based simulations can be performed using the so-called "enhanced wall treatment" (EWT). This model can evolve from an LRNM approach to an enhanced WF formulation as z * increases. For z * ≈ 1 this model behaves as a two layer zonal model. The viscosity affected region is solved using the one-equation model of Wolfshtein [START_REF] Wolfshtein | The velocity and temperature distribution in one dimensional flow with turbulence augmentation and pressure gradient[END_REF]. For z * ≈ 1, the dimensionless temperature (T * ) is computed as follows:
T *_lam = Pr [ z⁺ (1 + (α/2) z⁺) + ρ u* u² / (2q) ]    (2)
Considering incompressible flows and smooth walls, standard TWFs compute T * as a linear or logarithmic function of z * depending on the thermal sub-layer thickness z * T , as follows [START_REF]Fluent inc[END_REF]:
• if z * < z *_T : T * = Pr z *    (3)
• if z * > z *_T : T * = Pr_t [ (1/κ) ln(E z * ) + P ]    (4)
with:
P = 9.24 [ (Pr/Pr_t)^(3/4) − 1 ] [ 1 + 0.28 e^(−0.007 Pr/Pr_t) ]    (5)

z *_T corresponds to the z * value at which the linear and logarithmic laws intersect.
Given T * , either T w or q w can be computed depending on the specified boundary condition at walls according to
T * = (T w − T P ) ρ C P C µ^(1/4) k P^(1/2) / q w    (6)
h w can then be deduced according to Eq.1.
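To illustrate how Eqs. 3-6 chain together, the sketch below computes h c,w for a single near-wall cell from the standard logarithmic temperature wall function. It is only a schematic reading of the relations above: the constants, the assumed value of z*_T, and the input cell values are illustrative and do not reproduce a particular Fluent setup.

```python
# Schematic computation of h_c,w from the standard temperature wall function
# (Eqs. 3-6 above); constants, z*_T, and inputs are illustrative assumptions.
import math

Pr, Pr_t, kappa, E, C_mu = 0.71, 0.85, 0.42, 9.793, 0.09
rho, cp = 1.225, 1006.43

def jayatilleke_P(Pr, Pr_t):
    """P function of Eq. 5."""
    return 9.24 * ((Pr / Pr_t) ** 0.75 - 1.0) * (1.0 + 0.28 * math.exp(-0.007 * Pr / Pr_t))

def h_from_twf(z_star, k_P, T_P, T_w, T_ref, z_star_T=11.0):
    # Dimensionless temperature from the linear or logarithmic law (Eqs. 3-4).
    if z_star < z_star_T:
        T_star = Pr * z_star
    else:
        T_star = Pr_t * ((1.0 / kappa) * math.log(E * z_star) + jayatilleke_P(Pr, Pr_t))
    # Invert Eq. 6 for the wall heat flux, then apply Eq. 1.
    q_w = (T_w - T_P) * rho * cp * C_mu ** 0.25 * math.sqrt(k_P) / T_star
    return q_w / (T_w - T_ref)

print(h_from_twf(z_star=150.0, k_P=0.05, T_P=294.5, T_w=348.0, T_ref=294.0))
```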
2. Validation of the LRNM model: effect of the turbulence model
2.1. Reference test case and computational model
Experimental configuration
The case study is a wind-tunnel test involving a H = 1.5 cm high heated cube placed in a developing boundary layer [START_REF] Meinders | Experimental study of the local convection heat transfer from a wall mounted cube in turbulent channel flow[END_REF]. This configuration basically addresses more electronic than building physics problems. However, anisotherm experimental data are scarce and this study was also used to validate the LBM LES aerodynamic model addressed by [START_REF] Obrecht | Toward urban scale flow simulations using the lattice boltzmann method[END_REF][START_REF] Obrecht | Towards aeraulic simulations at urban scale using the lattice Boltzmann method[END_REF] 1 as well as the LRNM model of [START_REF] Defraeye | CFD analysis of convective heat transfer at the surfaces of a cube immersed in a turbulent boundary layer[END_REF].
The test section of the wind-tunnel used is 40 H wide and 3.3 H high. The approach flow has a bulk velocity of 4.47 m • s -1 and a temperature of 21 ˚C. The obstacle is composed of an internal copper core uniformly heated at 75 ˚C covered by a 1.5 × 10 -3 m thick epoxy layer. The leading face of the cube was located 50 H downwind a trip. Measurements of the external surface temperature were taken using infrared thermography and h c,w was derived from the local heat transfers with an accuracy of 5 to 10 %.
The Reynolds number of the test considered is Re = 4.4 × 10 3 and the Richardson number is Ri ≈ 1.4 × 10 -3 . This is a case of predominant forced convection and buoyancy effects can be neglected.
                 epoxy            air              others
ρ [kg·m⁻³]       1191             1.225
C_P [J·kg⁻¹·K⁻¹] 1650             1006.43          adiabatic
λ [W·m⁻¹·K⁻¹]    0.237            0.0242
T [K]            T_w,int = 348    T_inlet = 294

Table 1: Thermal properties of the model used.
Coupled aerodynamic and thermal model
Due to incomplete information about the experimental setup and as done in Ref. [START_REF] Defraeye | CFD analysis of convective heat transfer at the surfaces of a cube immersed in a turbulent boundary layer[END_REF], the approach flow profile was firstly designed by modeling a 66 H long, 11 H wide and 3.3 H high empty domain. The approach flow profile was recorded 45 H from the inlet plane, i.e. 5 H upstream the actual location of the cube front face. This profile was set as inlet conditions for the actual simulations, for which only a 5 H long fetch was kept as recommended in Ref. [START_REF] Franke | Recommendations of the COST action C14 on the use of CFD in predicting pedestrian wind environment[END_REF][START_REF] Tominaga | AIJ guidelines for practical applications of CFD to pedestrian wind environment around buildings[END_REF]. The epoxy layer was explicitly modeled whereas the copper core was only modeled by an internal wall temperature boundary condition of 75 ˚C. The inflow temperature was set to 21 ˚C. The top and bottom boundaries were specified as adiabatic no slip smooth walls. Tab. 1 synthesizes the thermal features of the model.
The cube surfaces were modeled as zero roughness height (smooth) walls. The mesh was refined near the floor and even more next to the fluid/solid interface and within the epoxy layer, down to 3×10 -4 m. More than 4.4×10 6 cells compose the mesh, including more than 4.8 × 10 5 cells in the volume of epoxy and 3.9 × 10 6 tetrahedral cells in the fluid. As such, z * ≈ 1 on the cube surface. Simulations were performed using Ansys Fluent 15 [START_REF]Fluent inc[END_REF] using a GPU-based calculation server running Linux. Steady RANS RSM as well as Rk-ε simulations were performed to assess the differences in h c,w predictions due to the turbulence modeling. Compared to eddy viscosity models, the RSM can account for the history and transport of the flow by considering individually each Reynolds stress and can thus account for anisotropic turbu-
lence effects. However, the implementation of this model is more demanding than that of eddy viscosity models, and fewer critical studies and developments are therefore available.
The EWT takes care of near wall regions. Second order numerical schemes were used. Pressure and velocity were coupled using the SIMPLE algorithm and the pressure strain correlation of the RSM was chosen linear. Simulations were initialized with 2 × 10 3 iterations accounting for standard WFs instead of the EWT to avoid numerical stability problems. The solution iterative convergence was studied by monitoring T w and h c,w on the vertical and horizontal mid lines of the cube. These profiles were then confronted to the reference experimental data of Meinders et al. [START_REF] Meinders | Experimental study of the local convection heat transfer from a wall mounted cube in turbulent channel flow[END_REF] and steady RANS Rk-ε numerical data of Defraeye et al. [START_REF] Defraeye | CFD analysis of convective heat transfer at the surfaces of a cube immersed in a turbulent boundary layer[END_REF].
Results analysis
Fig. 1 compares the different experimental and numerical profiles of h c,w for the two mid-lines circling the cube. The reported numerical h c,w profiles differ either by the turbulence model used or the number of iterations considered.
A first analysis indicates that the EWT/LRNM approach is reasonable. Profiles show symmetric behaviors where expected and the order of magnitude of h c,w corresponds to the experimental one. In addition, profiles obtained using the Rk-ε model correspond well to the reference ones, which suggests a good implementation of the model.
Nonetheless, Fig. 1(b) does not show stabilized RSM-h c,w profiles as functions of the number of iterations, even though the simulation was stopped after 9.5×10 3 iterations. This is especially the case on the lateral faces of the cube, where the experiment reports a strong unsteady behavior of the flow. Such a behavior is observable in simulations as the wake fluctuates from one side of the theoretical symmetry plane to the other. In fact, coherent vortex shedding structures develop around isolated cubes, and vorticity shed from the cube lateral faces may induce a yaw of the arch vortex and wake. These processes may prevent the flow from being statistically stationary. Consequently, the Reynolds average may not correspond to the time averaging of the solution [START_REF] Iaccarino | Reynolds averaged simulation of unsteady separated flow[END_REF]. Note that such an "unsteady" behavior was also observed for other case studies (see [START_REF] Merlier | On the interactions between urban structures and air flows: a numerical study of the effects of urban morphology on the building wind environment and the related building energy loads[END_REF]), again in cases where the experiments report strong flow unsteadiness or strong vortex shedding effects. This behavior does not occur when using k-ε models as they are more dissipative. Such observations can
call into question the ordinary implementation of steady RANS approaches to study complex flows and the related physical processes developing around constructions.
Nonetheless, beyond the fluctuations that occur during RSM computations, Fig. 1(a) shows a satisfactory match between all the simulated h c,w profiles and the experimental measures for the front face of the cube in terms of distribution and averaged value. A slight over-estimation of h c,w by both steady RANS models is however observed. Larger discrepancies occur next to the top edge, where uncertainties on experimental data are also higher.
The accuracy of numerical predictions is also satisfactory for the rear face of the cube with respect to experimental data, but discrepancies between the predictions obtained using the different turbulence models are no more negligible. RSM predictions of h c,w are less than 10 % lower than those we obtained using the Rk-ε model along the horizontal mid-line, which yields predictions that better correspond to experimental data except near the edges of the face. Focusing on the vertical profile, on the one hand, the slope of the RSM profile corresponds well to the experimental one, but the h c,w intensity is under-estimated by about 8 %, which slightly exceeds the experimental uncertainty. On the other hand, the averaged value of h c,w appears well reproduced by the Rk-ε model, but the slope of the profile is steeper than reported by the experiment.
Contrary to the front and rear faces, numerical predictions substantially deviate from experimental data on the top and lateral faces, where separation bubbles develop, involving high h c,w gradients with maximum values in reattachment regions and minimum values next to recirculation cores [START_REF] Meinders | Experimental study of the local convection heat transfer from a wall mounted cube in turbulent channel flow[END_REF]. These recirculation phenomena are generally poorly reproduced by steady RANS models, which certainly explains the deviation observed. Moreover, predictions also differ depending on the turbulence model. As in the experiment, the RSM-based h c,w profile increases streamwise on the top face of the cube and a relatively good, but unstabilized, profile shape is even predicted on the cube lateral face. However, h c,w intensities and gradients are under-estimated compared
to experimental data. Rk-ε predictions completely differ from experimental data in distribution on the top and lateral faces of the cube. Although a decreasing profile streamwise is simulated on the top face, line averaged h c,w values match the experimental one on the top face.
As a conclusion, both the steady RANS Rk-ε and RSM accurately predicts h c,w profiles on the front and rear faces of the cube when used with the LRNM. However, large deviations between the experimental and numerical profiles, as well as between the predictions obtained using different turbulence models, occur on the other faces, where complex separated flows develop. The RSM seems to predict h c,w distribution better than the Rk-ε does on the top and lateral faces of the cube. Nonetheless, h c,w intensities are under-estimated and the simulated h c,w profiles fluctuate where the flow is physically strongly unsteady.
Comparison between LRNM and WFs-based predictions of h c,w :
effect of the near-wall treatment 3.1. Reference test case and computational model
Reference LRNM study
The case study is a 10 m high cube located in a turbulent boundary layer, which represents a building. WFs are generally required for such a configuration because of the model dimensions. Given the model validation of Sec.2 and literature results showing the better accuracy of fine grid approaches (Sec.1), LRNM data are selected to evaluate predictions obtained using WFs. These LRNM data are provided by Defraeye et al. [START_REF] Defraeye | LRNM profiles on the front and rear faces of a 10 m high cibical building -unpublished data[END_REF][START_REF] Defraeye | CFD analysis of convective heat transfer at the surfaces of a cube immersed in a turbulent boundary layer[END_REF][START_REF] Defraeye | An adjusted temperature wall function for turbulent forced convective heat transfer for bluff bodies in the atmospheric boundary layer[END_REF] and were computed using the Rk-εmodel.
As recommended in Ref. [START_REF] Tominaga | AIJ guidelines for practical applications of CFD to pedestrian wind environment around buildings[END_REF], the cube is located 5 H from the inlet, lateral and top boundaries of the domain and 15 H from the outlet plane. Lateral boundary conditions were set periodic and the top boundary condition symmetric. A smooth wall boundary condition was specified for the bottom of the domain. An equilibrium ABL profile (z 0 = 0.03 m) was specified as inlet condition according to the guideline given in Ref. [START_REF] Tominaga | AIJ guidelines for practical applications of CFD to pedestrian wind environment around buildings[END_REF]. Although z 0
is small and because of the smooth bottom boundary condition, streamwise gradients necessarily develop along the domain. Nevertheless, this happens for the simulations performed using both the LRNM and WFs so that comparison is done under similar conditions. The reference study was performed for U 10 = 0.5 m • s -1 . The inflow and building temperature are 10 ˚C and 20 ˚C respectively. This configuration involves Re > 3×10 5 and Ri = 13.46, which means that the flow is turbulent and that buoyancy would effectively influence the flow in reality. However, only forced convection processes were considered to save computational resources and keep a reasonable size for near wall cells, yet implementing a LRNM approach. Furthermore, Defraeye et al. [START_REF] Defraeye | CFD analysis of convective heat transfer at the surfaces of a cube immersed in a turbulent boundary layer[END_REF] showed the relevancy of such an approach to deduce h c,w -U 10 correlations that are relevant for higher wind speeds.
Two different mesh resolutions that only differ next to walls were used. 2.6 × 10 6 cells composed the mesh used to perform the LRNM-based simulation (z * < 3 on the cube edges) and 1.1 × 10 6 cells composed the mesh used to perform the WF-based simulation (10 ≤ z * ≤ 280 on the cube surfaces).
Computational model
The modeling strategy implemented in the current study is very similar to that described in Ref. [START_REF] Defraeye | CFD analysis of convective heat transfer at the surfaces of a cube immersed in a turbulent boundary layer[END_REF], including the domain size, inflow conditions and most of the model settings. Nonetheless, the mesh is unstructured and composed of 2.0 × 10 6 cells, with 1 m cells on the floor and 0.3 m cells on the cube roof. This implies z * ≈ 460 and z * ≈ 230 on average on the cube front and rear faces respectively. The top and lateral boundary conditions were set symmetric. Air was modeled as a constant-density gas (forced convection). Fig. 2 synthesizes the computational model used.
Similarly to Sec. 2, simulations were performed using both the steady RANS RSM and the Rk-ε model. However, standard WFs are used here to model the near-wall regions. Convergence was verified by monitoring h c,w profiles on the mid-lines of the different faces of the cube as well as the overall contours of h c,w on these faces.
Results analysis
Fig. 3 compares the simulated h c,w profiles to the reference ones [START_REF] Defraeye | CFD analysis of convective heat transfer at the surfaces of a cube immersed in a turbulent boundary layer[END_REF] for the horizontal and vertical mid-lines of the cube. Reference results obtained using the LRNM approach or standard WFs are reported in order to distinguish the respective influence of the near wall treatment and the turbulence model. As expected, predictions obtained using the Rk-ε model together with standard WFs correspond to the reference h c,w profiles computed using similar computational settings, which supports a good implementation of the model.
Standard WFs significantly over-estimate LRNM data. This observation is supported by literature studies: a substantial over-prediction of h c,w with differences up to 60 % compared to LRNM predictions is highlighted in Ref. [START_REF] Defraeye | An adjusted temperature wall function for turbulent forced convective heat transfer for bluff bodies in the atmospheric boundary layer[END_REF] and [START_REF] Blocken | High-resolution CFD simulations for forced convective heat transfer coefficients at the facade of a low-rise building[END_REF]. In addition, h c,w intensities differ depending on the turbulence model used. h c,w intensities predicted by the RSM are lower than those predicted using the Rk-ε model except on the rear face of the cube. RSM predictions over-estimate LRNM results by 35 % while the deviation is 60 % with the Rk-ε model. As the shapes of the h c,w profiles predicted by the two turbulence models are relatively comparable on the different faces of the cube, differences in h c,w intensities may be due to a better estimate of k by the RSM.
As a conclusion, the current study verifies that h c,w profiles predicted by standard WFs significantly deviate from LRNM results. Given the validation study of Sec. 2 and literature results, SWFs may not be considered sufficiently accurate to be generally used for building physics problems even if the deviation is reduced when using the steady RANS RSM instead of the Rk-ε model.
Design of an adapted temperature wall-function for the RSM based on LRNM data
Reference customized temperature wall-function
It is possible to improve the accuracy of usual TWFs with respect to LRNM results by modifying the wall turbulent Prandtl number value (P r t,w ) for both interior and exterior problems [START_REF] Zhang | An adjustment to the standard temperature wall function for CFD modeling of indoor convective heat transfer[END_REF][START_REF] Defraeye | An adjusted temperature wall function for turbulent forced convective heat transfer for bluff bodies in the atmospheric boundary layer[END_REF]. Focusing on external convective heat transfers, Defraeye et al. [START_REF] Defraeye | An adjusted temperature wall function for turbulent forced convective heat transfer for bluff bodies in the atmospheric boundary layer[END_REF] proposed a CTWF to be used with
the steady RANS Rk-ε model in case of turbulent forced convection, based on the analysis of the T * -z * profiles computed using the LRNM. Fig. 4(a) shows that these profiles follow a universal behavior characterized by logarithmic correlations, especially for z * up to 4 × 10 3 , at least for wind speeds higher than 0.5 m • s -1 . Fitting these relations using Eq. 4 is possible in Fluent by modifying P r t,w from 0.85 to 1.95 (see Fig. 4(b)). With this modification, WF-based predictions deviate by less than 10 % with respect to LRNM data instead of 40 % with standard WFs in the case of an isolated cube and other cuboids representing buildings. This CTWF was also studied for the cubical building immersed in a turbulent boundary layer in case of mixed convection (0 ≤ Ri ≤ 52) [START_REF] Defraeye | CFD simulation of heat transfer at surfaces of bluff bodies in turbulent boundary layers: Evaluation of a forced-convective temperature wall function for mixed convection[END_REF]. CTWF predictions deviate by less than 16 % compared to LRNM data while the deviation observed with standard wall-functions was 47 %. Still considering mixed convection cases (0.14 ≤ Ri ≤ 13.7) but for a street canyon, the CTWF performs well for mixed convective flows, whereas the standard TWF is more accurate for forced convective flows. As a result, an adaptive TWF was designed to account for both flow regimes in a street canyon, by fitting one or the other TWF depending on the local Richardson number [START_REF] Allegrini | An adaptive temperature wall function for mixed convective flows at exterior surfaces of buildings in street canyons[END_REF]. This adaptive TWF generally deviates by less than 10 % compared to LRNM data over the range of Richardson numbers tested.
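To illustrate how such a CTWF is calibrated in practice, the following minimal Python sketch fits a logarithmic law T* = a ln(z*) + b to a wall profile over the WF applicability range 50 ≤ z* ≤ 500. The arrays zstar and Tstar are assumed to be available from a prior LRNM simulation; they are not provided here.

import numpy as np

def fit_log_law(zstar, Tstar, zmin=50.0, zmax=500.0):
    """Least-squares fit of T* = a*ln(z*) + b over the WF applicability range."""
    zstar = np.asarray(zstar, dtype=float)
    Tstar = np.asarray(Tstar, dtype=float)
    mask = (zstar >= zmin) & (zstar <= zmax)
    a, b = np.polyfit(np.log(zstar[mask]), Tstar[mask], deg=1)
    return a, b

# Hypothetical usage with LRNM data loaded beforehand:
# a, b = fit_log_law(zstar, Tstar)
# print(f"T* ~ {a:.2f} ln(z*) + {b:.2f}")
# The fitted slope then fixes the P r t,w value to prescribe in the TWF of the CFD code.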
The different above-mentioned modified TWFs substantially improve the correspondence between WFs and LRNM predictions. However, their applicability is constrained due to the methodology implemented to determine them as discussed in Ref. [START_REF] Defraeye | CFD analysis of convective heat transfer at the surfaces of a cube immersed in a turbulent boundary layer[END_REF]. In particular, focusing on forced convection problems, Fig. 4(b) shows that the CTWF would better apply for 5 × 10 1 ≤ z * ≤ 5 × 10 2 as the fitting was performed for this range of values, which corresponds to the theoretical range for applicability of WFs.
Adaptation of temperature wall-functions for the RSM
Results of Sec. 3.2 show rather comparable distributions but different intensities of h c,w on the cube faces when using the RSM and Rk-ε turbulence models. Moreover, the expression of standard WFs is the same for both models. Therefore, a modification of P r t,w would also decrease the deviation between the h c,w profiles predicted using WFs and those derived from an LRNM approach when using the RSM. However, T * being a function of k, results may differ depending upon the turbulence model used. Its reliability when used together with other turbulence models deserves further research. Fig. 5 compares the h c,w profiles we obtained accounting for P r t,w = 1.95 together with either the steady RANS RSM or the Rk-ε model to the reference LRNM and CTWF h c,w profiles from Ref. [START_REF] Defraeye | An adjusted temperature wall function for turbulent forced convective heat transfer for bluff bodies in the atmospheric boundary layer[END_REF]. The two mid-lines circling the 10 m high cubic building are still considered. Results show a good match between our steady RANS Rk-ε-CTWF results and the reference profiles. However, h c,w profiles simulated by the RSM under-estimate LRNM results by 20 % and 25 % on average on the front and top faces of the cube. h c,w is also under-estimated on the lateral face, and its distribution is predicted to be more constant than in the LRNM data. As this behavior also occurs when using the Rk-ε model, it may be explained by WF effects rather than turbulence model effects.
According to Eq. 6 and Eq. 4, q w,c , and likewise h c,w , is a decreasing function of P r t,w . Therefore, different P r t,w values lower than 1.95 were tested to fit LRNM results when using WFs and the RSM. Fig. 5 shows a good match between simulation outputs and LRNM h c,w profiles for P r t,w = 1.55, with a correspondence comparable to that of the Rk-ε-CTWF.
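This calibration amounts to a one-parameter sweep, which can be sketched as follows. The function run_wf_simulation is a hypothetical placeholder for a WF-based CFD run returning an h c,w profile, and h_lrnm stands for the reference LRNM profile; neither corresponds to an actual API.

import numpy as np

def mean_relative_deviation(h_sim, h_ref):
    """Mean relative deviation (in %) between a simulated and a reference h_c,w profile."""
    h_sim, h_ref = np.asarray(h_sim, dtype=float), np.asarray(h_ref, dtype=float)
    return 100.0 * np.mean(np.abs(h_sim - h_ref) / h_ref)

def calibrate_prt(candidates, run_wf_simulation, h_lrnm):
    """Return the P r t,w value whose WF prediction is closest to the LRNM reference."""
    deviations = {prt: mean_relative_deviation(run_wf_simulation(prt), h_lrnm)
                  for prt in candidates}
    best = min(deviations, key=deviations.get)
    return best, deviations

# Hypothetical usage, sweeping values below 1.95 as described in the text:
# best_prt, devs = calibrate_prt([1.95, 1.85, 1.75, 1.65, 1.55, 1.45],
#                                run_wf_simulation, h_lrnm)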
To further verify the accuracy of the RSM-CTWFs (P r t,w = 1.55), several simulated h c,w profiles were compared to the LRNM predictions of Defraeye [START_REF] Defraeye | LRNM profiles on the front and rear faces of a 10 m high cibical building -unpublished data[END_REF] 2 for different vertical lines on the front and rear faces of the cube, i.e. where LRNM data are validated (Sec. 2). As the problem is symmetric, Fig. 6 reports the results obtained using both turbulence models and the appropriate P r t,w value for three vertical profiles located 1 m and 3 m from the edge and on the symmetry axis of the faces.
Not considering border locations, customized TWF-based h c,w profiles match very well with each other and with LRNM results on the front face of the cube. However, the simulated h c,w profiles deviate from each other
Figure 6: Comparison between the simulated h c,w profiles using the R-k-ε model with P r t,w = 1.95 or the RSM-CTWF with P r t,w = 1.55 with reference LRNM [START_REF] Defraeye | LRNM profiles on the front and rear faces of a 10 m high cibical building -unpublished data[END_REF] on the front (top) and rear (bottom) faces of the cubical building and for U 10 = 0.5 m • s -1 .
and from LRNM results on the rear face. RSM predictions deviate by up to 20 % at mid height, which is not negligible, but is much less than the 70 % observed in Fig. 3 for standard WFs. Hence, accounting for the RSM-CTWF instead of standard WFs gives satisfactory results on the front and rear faces of cubical buildings with respect to LRNM data.
U 10 = 5 m • s -1 does not change the distribution of z * on the cube front and rear faces but the surface averaged z * equals 4.6 × 10 3 on the front face. This z * value exceeds the usual range for applicability of WFs and CTWFs, although Ref. [START_REF] Defraeye | An adjusted temperature wall function for turbulent forced convective heat transfer for bluff bodies in the atmospheric boundary layer[END_REF] suggests that the latter may also be relevant for high z * .
To evaluate the applicability of both the Rk-ε and RSM-CTWFs, additional simulations were performed for the 10 m high cubical building considering U 10 = 5 m • s -1 . This modification does not alter the distributions of h c,w on the cube faces but multiplies the surface averaged h c,w values by a factor 7.3. In such a configuration, Re ≥ 3 × 10 6 and Ri ≈ 0.13, which means that the flow is turbulent and in a predominantly forced convection regime.
Considering U 10 = 0.5 or 5 m • s -1 , Tab. 2 compares surface averaged h c,w intensities estimated using the h c,w -U 10 correlation of Defraeye et al. [25] or simulated using the Rk-ε and the RSM together with either the standard or the CTWFs. The reference correlation is based on LRNM data and is formulated as follows [25]:

h_c,w = 5.01 × U_10^0.85    (7)

Considering the RSM simulations, the relative deviation is more than a factor 1.5 greater in case of U 10 = 5 m • s -1 than in case of U 10 = 0.5 m • s -1 , and reaches nearly 60 % on the rear face. On the contrary, the accuracy of CTWFs is generally improved in case of U 10 = 5 m • s -1 except on the rear face of the cube when using the RSM. However, the deviation is smaller than 10 % in comparison with the reference data, which remains acceptable considering the uncertainties linked with steady RANS and usual building physics models. Still considering U 10 = 5 m • s -1 , refining the mesh improves the accuracy of the RSM-CTWF predictions on the front face of the cube, but reduces it on the rear face. Nevertheless, the loss of accuracy is 0.38 W • m -2 • K -1 (4.5 points), which is negligible for building physics applications. Fig. 7 compares CTWF h c,w profiles for the same vertical profiles as in Fig. 6 with LRNM data [41], but considers U 10 = 5 m • s -1 . Results show a very good match between LRNM and CTWF-based h c,w profiles on both faces when using the Rk-ε model. RSM-CTWF results match LRNM data on the front face but over-estimate them on the rear face. As expected from Tab. 2, a loss of accuracy of these predictions occurs compared to the configuration with U 10 = 0.5 m • s -1 . Deviation decreases towards the symmetry axis. Predictions typically deviate from LRNM results by 30 % and 15 %, 1 m from the lateral edge of the face and on the symmetry axis respectively. Nevertheless, Sec. 2 shows that the RSM predicts more constant horizontal profiles of h c,w on the rear face of the obstacles than the Rk-ε does, which better corresponds to experimental data. As a consequence, the deviation observed between the RSM-based profiles and the reference Rk-ε LRNM data may be not only due to the TWF used, but also (and even mainly) to the turbulence model.
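For reference, the correlation (7) and the relative deviations listed in Tab. 2 can be reproduced with a few lines of Python; the deviation definition d = 100·|h_sim − h_ref|/h_ref is inferred from the tabulated values and should be read as an assumption.

def h_front(U10):
    """Reference front-face correlation of Eq. (7): h_c,w = 5.01 * U_10**0.85."""
    return 5.01 * U10**0.85

def deviation(h_sim, h_ref):
    """Relative deviation in %, as assumed for the d.(%) columns of Tab. 2."""
    return 100.0 * abs(h_sim - h_ref) / h_ref

print(h_front(0.5), h_front(5.0))     # ~2.78 and ~19.7 W/(m^2.K), cf. Tab. 2
print(deviation(4.0, h_front(0.5)))   # ~44 %, the Rk-eps SWF entry for U_10 = 0.5 m/s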
Hence, the Rk-ε and RSM-CTWFs appear accurate both in terms of distribution and intensity on the front face of the cube for high z * values. Contrary to Rk-ε results, RSM results deviate from Rk-ε/LRNM predictions on the rear face. Nevertheless, the predicted surface averaged h c,w values are satisfactory, and the accuracy of predictions is significantly improved with respect to LRNM data compared to that obtained using standard TWFs.
Discussion and outlooks
Because CFD models can account for any built environment, they are valuable to provide estimates of h c,w suitable for use in building physics. However, their accuracy is very dependent on the CFD method used. Although real atmospheric flows are generally not statistically stationary, steady RANS models are mostly used in environmental wind and building energy engineer-
ing because other methods are still too computationally expensive. As a consequence, the accuracy of the turbulence models used is of challenging concern as it determines the physical accuracy of the computed flow field as well as convective heat transfer predictions (Sec. 2). In particular, second order turbulence models may perform better than usual two equation models in predicting complex flows, where turbulence anisotropy is important [START_REF] Emmel | New external convective heat transfer coefficient correlations for isolated low-rise buildings[END_REF][START_REF] Panagiotou | City breathability as quantified by the exchange velocity and its spatial variation in real inhomogeneous urban geometries: An example from central London urban area[END_REF][START_REF] Merlier | On the interactions between urban structures and air flows: a numerical study of the effects of urban morphology on the building wind environment and the related building energy loads[END_REF].
In addition to the effects of the turbulence model, the determination of h c,w at building outer walls is very dependent on the near wall treatment. Standard WFs substantially deviate from LRNM data for complex separated flows around constructions (Sec. 3). Nonetheless, although more accurate, LRNM simulations are rarely feasible, again because of the computational costs involved. Therefore, improved TWFs were developed, but they mainly address k-ε models [START_REF] Defraeye | An adjusted temperature wall function for turbulent forced convective heat transfer for bluff bodies in the atmospheric boundary layer[END_REF][START_REF] Allegrini | An adaptive temperature wall function for mixed convective flows at exterior surfaces of buildings in street canyons[END_REF].
Extending the studies reported in Ref. [START_REF] Defraeye | CFD analysis of convective heat transfer at the surfaces of a cube immersed in a turbulent boundary layer[END_REF] and [START_REF] Defraeye | An adjusted temperature wall function for turbulent forced convective heat transfer for bluff bodies in the atmospheric boundary layer[END_REF], the RSM-CTWF developed in Sec. 4.2 aims at enlarging the scope for applicability of CTWF to more detailed turbulence models. Due to the implemented design methodology, this RSM-CTWF has almost the same advantages, drawbacks and scope of applicability as the Rk-ε-CTWF proposed in Ref. [START_REF] Defraeye | An adjusted temperature wall function for turbulent forced convective heat transfer for bluff bodies in the atmospheric boundary layer[END_REF] though the applicability of both CTWFs was studied for high z * . In particular, its use is relevant for forced convection problems and the front (and rear faces) of an isolated sharp edged building.
Beyond simply highlighting differences between predictions obtained using the Rk-ε or the RSM models, the LRNM or WFs and proposing a version of the CTWF suitable for use with the RSM, the current study also raises other substantial modeling challenges. Essentially, intermittent convective processes are identified as a significant issue for steady RANS methods, as they greatly impact convective heat transfer. Because of their transient formulation, unsteady RANS approaches and, even more so, large eddy simulation (LES) methods should improve the physical accuracy of simulations [31,[START_REF] Tominaga | Comparison of various revised k eps models and LES applied to flow around a high rise building model with 1:1:2 shape placed wityhin the surface boundary layer[END_REF][START_REF] Moonen | Evaluation of the ventilation potential of courtyards and urban street canyons using RANS and LES[END_REF].
Moreover, this study only considers one wind direction, forced convection and isolated sharp edged obstacles with smooth walls. To come closer to real building physics problems, further studies should extend the approach to more realistic environments by considering different wind directions and speeds [48], buoyancy effects [START_REF] Defraeye | Convective heat transfer coefficients for exterior building surfaces: Existing correlations and CFD modelling[END_REF][START_REF] Allegrini | An adaptive temperature wall function for mixed convective flows at exterior surfaces of buildings in street canyons[END_REF], multi-obstacles configurations [49] and wall roughness effects [50]. Furthermore, this approach might advantageously be extended to lower z * by blending a near wall model for viscosity affected regions, which would be useful when automatic grid generation algorithms place first nodes in such regions [START_REF] Popovac | Compound Wall Treatment for RANS Computation of Complex Turbulent Flows and Heat Transfer[END_REF]. This approach might also advantageously be integrated in a coupling between CFD and building energy models to reciprocally provide appropriate boundary conditions as it has been done for the interior of buildings [51, 1, 2], thus improving our understanding of the interactions between the building and its environment.
Conclusion
This paper first examined the accuracy of steady RANS methods with the RSM and Rk-ε turbulence models in predicting convective heat transfers around a sharp edged obstacle using an LRNM approach. Steady RANS approaches show difficulties in predicting convective heat transfers in complex and unsteady separated flows and the predicted h c,w distribution deviates depending on the turbulence model used. Nonetheless, both turbulence models provide accurate h c,w estimates on the front and rear faces of the isolated obstacle. Switching to a more realistic - but still theoretical - building model, it is verified that standard TWFs substantially over-predict convective heat transfers at building outer walls compared to LRNM results, whether used with the Rk-ε model or the RSM. Assuming LRNM results are accurate given the validation study and literature findings, and given the necessity of using WFs in building physics CFD simulations, this study aimed to provide a TWF able to reproduce LRNM results for the RSM. This model may be more accurate than eddy viscosity models in complex configurations. Based on suggestions of Ref. [START_REF] Defraeye | An adjusted temperature wall function for turbulent forced convective heat transfer for bluff bodies in the atmospheric boundary layer[END_REF], a calibration of the P r t,w was done. The
RSM-CTWF requires setting P r t,w in Ansys Fluent to 1.55, instead of the software default of 0.85 [START_REF]Fluent inc[END_REF] or the value of 1.95 used for the Rk-ε-CTWF [START_REF] Defraeye | An adjusted temperature wall function for turbulent forced convective heat transfer for bluff bodies in the atmospheric boundary layer[END_REF].
To conclude, the use of CTWFs together with the appropriate turbulence model substantially improves h c,w predictions as compared to standard TWFs on the front and rear faces of isolated sharp-edged obstacles lying perpendicular to the wind, at least. Nonetheless, to further improve the accuracy of h c,w estimates, a more detailed CFD approach should be used. Indeed, accurately modeling convective heat transfer implies modeling the aerodynamic field more accurately, in terms of dynamic features as well as the near wall region. Therefore, methods such as the cost effective LBM LES [START_REF] Obrecht | Towards aeraulic simulations at urban scale using the lattice Boltzmann method[END_REF] appear promising as they allow a detailed computation of the different scales of the flow field on a very fine spatial and temporal discretization.
Figure 1: Comparison of numerical and experimental h c,w profiles around the cube. (a) Influence of the turbulence model (steady RANS RSM vs. Rk-ε). (b) Convergence study for the steady RANS RSM configuration. Light grey strips represent regions of higher experimental uncertainties. Experimental data are taken from Ref. [START_REF] Meinders | Experimental study of the local convection heat transfer from a wall mounted cube in turbulent channel flow[END_REF] and reference (Ref.) R-k-ε LRNM data are taken from [25].
Figure 2: Computational model.
Figure 3: Comparison of the h c,w profiles around the cube: effect of the turbulence model and near-wall treatment. Reference (Ref.) R-k-ε LRNM and SWF data are taken from [25].
Figure 4: Bases of the CTWF. (a) T*-z* universal profile [START_REF] Defraeye | Convective heat transfer coefficients for exterior building surfaces: Existing correlations and CFD modelling[END_REF]; the second log law can be deduced as T* = 8.87 ln z* + 2.5. (b) Fitting of the R-k-ε CTWF for 50 ≤ z* ≤ 500 [START_REF] Defraeye | An adjusted temperature wall function for turbulent forced convective heat transfer for bluff bodies in the atmospheric boundary layer[END_REF].
Figure 5: Comparison of the h c,w profiles obtained using customized P r t,w values with reference (Ref.) R-k-ε LRNM data [41].
4.3. Pertinence of the RSM customized temperature wall-function for high z*. Assuming cell sizes of about 0.5 m, the CTWFs relevantly apply for relatively low wind speeds (50 ≤ z * ≤ 500). Considering the case study of Sec. 3, increasing the reference wind speed from e.g. U 10 = 0.5 m • s -1 to
In agreement with literature results and the observations reported in Sec. 3, the use of standard TWFs significantly over-estimates reference h c,w values.
Figure 7: Comparison between the simulated h c,w profiles using the R-k-ε or RSM-CTWF with reference LRNM data [41] on the front (top) and rear (bottom) faces of the cubical building and for U 10 = 5 m • s -1 .
Table 2: Comparison of the surface averaged h c,w [W • m -2 • K -1 ] computed using the reference h c,w -U 10 correlation of [START_REF] Defraeye | CFD analysis of convective heat transfer at the surfaces of a cube immersed in a turbulent boundary layer[END_REF] or estimated using CFD simulations for the cube in cases of U 10 = 0.5 m • s -1 and U 10 = 5 m • s -1 . -: Simulation not performed.

                          Front face                        Rear face
U 10               0.5 m/s          5 m/s            0.5 m/s          5 m/s
                 h c,w   d.(%)   h c,w   d.(%)     h c,w   d.(%)   h c,w   d.(%)
h c,w -U 10       2.78     -     19.68     -        1.28     -      8.63     -
Rk-ε, SWF         4.0    43.9      -       -        1.6    27.3      -       -
Rk-ε, CWF         2.4    12.2    17.8     9.7       1.1    16.4     7.5    12.6
RSM, SWF          3.5    24.8    27.3    38.8       1.7    35.9    13.7    58.5
RSM, CWF          2.5    11      18.4     6.7       1.3     1       9.4     9.2
RSM, CWF M+        -       -     18.6     5.5        -       -      9.8    13.7
In complement to wind-tunnel data, this model was used as reference for the validation of the aerodynamic model[START_REF] Merlier | On the interactions between urban structures and air flows: a numerical study of the effects of urban morphology on the building wind environment and the related building energy loads[END_REF].
These LRNM-based h c,w profiles were kindly provided to us by T. Defraeye.
Acknowledgments
The authors thank T. Defraeye for providing the reference LRNM data.
[48] T. Defraeye, J. Carmeliet, A methodology to assess the influence of local wind conditions and building orientation on the convective heat transfer at building surfaces, Environmental Modelling & Software 25 (12) (2010) 1813-1824.
[49] J. Liu, M. Heidarinejad, S. Gracik, J. Srebric, The impact of exterior surface convective heat transfer coefficients on the building energy consumption in urban neighborhoods with different plan area densities, Energy and Buildings.
[50] K. Suga, T. Craft, H. Iacovides, An analytical wall-function for turbulent flows and heat transfer over rough walls, International Journal of Heat and Fluid Flow 27 (5) (2006) 852-866.
[51] Z. Zhai, Q. (Yan) Chen, Numerical determination and treatment of convective heat transfer coefficient in the coupled building energy and CFD simulation, Building and Environment 39 (8) (2004) 1001-1009. |
04104750 | en | [
"math.math-at"
] | 2024/03/04 16:41:22 | 2023 | https://hal.science/hal-04104750/file/2305.14308.pdf | Haldun Özgür Bayındır
ALGEBRAIC K-THEORY OF THE TWO-PERIODIC FIRST MORAVA K-THEORY
Using the root adjunction formalism developed in an earlier work and logarithmic THH, we obtain a simplified computation of T (2) * K(ku) for p > 3. Our computational methods also provide T (2) * K(ku/p), where ku/p is the 2-periodic Morava K-theory spectrum of height 1.
Introduction
One of the central problems in homotopy theory is the computation of the algebraic K-theory of the sphere spectrum, K(S). This is due to the fact that K(S) contains the smooth Whitehead spectrum of the point as a summand which approximates the concordance spaces of highly connected compact smooth manifolds c.f. [START_REF] Waldhausen | Spaces of PL manifolds and categories of simple maps[END_REF]. A program initiated by Waldhausen [START_REF] Waldhausen | Algebraic K-theory of spaces, localization, and the chromatic filtration of stable homotopy[END_REF] and later carried forward by Ausoni and Rognes [START_REF] Ausoni | Algebraic K-theory of topological Ktheory[END_REF] aims at studying K(S) via étale descent through K(E n ) where E n is the Morava E-theory spectrum of height n. This turns our attention to the computation of K(E n ).
Motivated by this plan, Ausoni and Rognes compute V (1) * K(ℓ p ) in [START_REF] Ausoni | Algebraic K-theory of topological Ktheory[END_REF] for p > 3. Here, ℓ p is the Adams summand of the connective cover ku p of the p-completed complex K-theory spectrum KU p ≃ E 1 . Later, Ausoni improves this to a computation of the V (1)-homotopy of K(ku p ) [START_REF]On the algebraic K-theory of the complex K-theory spectrum[END_REF]. Another interest in K(ku) stems from the fact that it classifies virtual 2-vector bundles, a 2-categorical analogue of ordinary complex vector bundles [START_REF] Baas | Stable bundles over rig categories[END_REF].
As an outcome of his computations, Ausoni observes that the relationship between V (1) * K(ℓ p ) and V (1) * K(ku p ) through the map V (1) * K(ℓ p ) → V (1) * K(ku p ) resembles a height 2 analogue of K * (Z p ; Z/p) → K * (Z p [ζ p ]; Z/p) for the cyclotomic extension Z p → Z p [ζ p ] where ζ p is a primitive pth root of unity; the computation of the former is due to Hesselholt and Madsen [START_REF] Hesselholt | On the K-theory of local fields[END_REF]Theorem D]. For instance, K(Z p [ζ p ]; Z/p) is essentially given by adjoining a p -1-root to v 1 in K(Z p ; Z/p). On the other hand, Ausoni proves the following for ℓ p → ku p .
Theorem 1.1 ([Aus10], Theorem 7.18). Let p > 3 be a prime. There is an isomorphism of graded abelian groups:
T (2) * K(ku) ∼ = T (2) * K(ℓ)[b]/(b p-1 + v 2 ).
where |b| = 2p + 2.
Remark 1.2. Indeed, this isomorphism can be improved to that of F p [b]-algebras. We discuss this in Remark 7.20.
Remark 1.3. Since T (2) * K(ℓ) is known due to Ausoni and Rognes [AR02, Theorem 0.3], the theorem above provides an explicit description of T (2) * K(ku).
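For instance, unwinding the truncated polynomial algebra in Theorem 1.1, one obtains the following additive decomposition of T (2) * K(ku) into p -1 shifted copies of T (2) * K(ℓ):
\[
  T(2)_* K(ku) \;\cong\; \bigoplus_{i=0}^{p-2} T(2)_* K(\ell)\cdot b^{\,i},
  \qquad |b| = 2p+2, \qquad b^{\,p-1} = -v_2 .
\]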
Following this comparison, Ausoni, the author and Moulinos construct a root adjunction method for ring spectra and study the algebraic K-theory, THH and logarithmic THH of ring spectra obtained via root adjunction [START_REF] Ausoni | Adjunction of roots, algebraic K-theory and chromatic redshift[END_REF]. Let A be an E 1 -ring spectrum and let a ∈ π mk A. Under suitable hypothesis, this construction provides another E 1 -ring A( m √ a) for which the homotopy ring of A( m √ a) is precisely given by a root adjunction:
π * A( m √ a) ∼ = π * A[z]/(z m -a).
Furthermore, A( m √ a) is an E 1 -algebra in Fun(Z/(m) ds , Sp) equipped with the Day convolution symmetric monoidal structure; we say A( m √ a) is an m-graded E 1ring. Roughly speaking, this structure may be considered as a splitting A( m √ a) ≃ ∨ i∈Z/m A( m √ a) i , which we call the weight grading on A( m √ a), for which the multiplication on A( m √ a) is given by maps respecting this grading over Z/m:
A( m √ a) i ∧ A( m √ a) j → A( m √ a) i+j .
Furthermore, we have A( m √ a) i = Σ ik A for 0 ≤ i < m. This results in a canonical splitting of THH(A( m √ a)) into a coproduct of m-cofactors as an S 1 -equivariant spectrum. It follows by [ABM22, Theorem 1.9] that at the level of algebraic K-theory, the map
K(A) → K(A( m √ a))
is the inclusion of a wedge summand whenever A is p-local and p ∤ m. The authors prove in [START_REF] Ausoni | Adjunction of roots, algebraic K-theory and chromatic redshift[END_REF] that there is an equivalence of E 1 -rings ku p ≃ ℓ p ( p-1 √ v 1 ). This equips ku p with the structure of a p -1-graded E 1 -ring through ku p ≃ ∨ 0≤i<p-1 Σ 2i ℓ p and one obtains that THH(ku p ) and the logarithmic THH of ku p in the sense of [START_REF] Ausoni | Adjunction of roots, algebraic K-theory and chromatic redshift[END_REF] admit S 1 -equivariant splittings into p -1 summands. In this work, our first objective is to obtain a simplified computation of T (2) * K(ku), i.e. a simplified proof of Theorem 1.1, by showing that these splittings carry over to TC(ku p ) in a way that provides the graded abelian group T (2) * K(ku) as a p -1fold coproduct of shifted copies of T (2) * K(ℓ) as given in Theorem 1.1. For this, we work with the logarithmic THH (in the sense of Rognes [START_REF]Topological logarithmic structures, New topological contexts for Galois theory and algebraic geometry[END_REF]) of ku p computed in [START_REF]Logarithmic topological Hochschild homology of topological Ktheory spectra[END_REF].
In [Aus10, Sections 3 and 4], Ausoni constructs what he calls the higher Bott element b ∈ V (1) 2p+2 K(ku p ) and identifies the image of this element in V (1) * THH(ku p ) under the trace map. These are the only results that we take from Ausoni's work as input. In particular, our computation avoids the low dimensional computations and the infinite spectral sequence argument of [Aus10, Sections 5, 6 and 7]. We provide an outline of our computation in Section 2 below.
Remark 1.4. Currently, Christian Ausoni, the author, Tommy Lundemo and Steffen Sagave are working on generalizing the methods of this work and [START_REF] Ausoni | Adjunction of roots, algebraic K-theory and chromatic redshift[END_REF] to obtain a higher height analogue of Theorem 1.1 that relates the algebraic K-theory of E n to that of the truncated Brown-Peterson spectrum BP n .
In a later work [START_REF]Algebraic K-theory of the first Morava K-theory[END_REF], Ausoni and Rognes compute V (1) * K(ℓ/p) where ℓ/p is the connective Morava K-theory spectrum k(1) of height one. Our computational approach to T (2) * K(ku p ) naturally provides a computation of T (2) * K(ku/p); see Section 2 for an outline. Namely, we obtain the first computation of T (2) * K(ku/p).
Note that, ku/p is also called the connective 2-periodic Morava K-theory of height one.
Theorem 1.5 (Theorem 7.21). Let p > 3 be a prime. There is an isomorphism of graded abelian groups:
T (2) * K(ku/p) ∼ = T (2) * K(ℓ/p) ⊗ Fp[v 2 ] F p [b]
with |b| = 2p + 2 and in the tensor product above, we take v 2 = -b p-1 .
Together with [AR12, Theorem 1.1], the theorem above provides a complete description of T (2) * K(ku/p). In particular, similar to Theorem 1.1, T (2) * K(ku/p) is given by a p -1-fold coproduct of shifted copies of T (2) * K(ℓ/p).
Indeed, Ausoni and Rognes compute V (1) * K(ℓ/p) with the goal of investigating how localization and Galois descent techniques that have been used for (local) number rings or fields can be applied in studying the algebraic K-theory of ring spectra, in particular ℓ p and ku p . For this, they define K(ff (ℓ p )), what they call the algebraic K-theory of the fraction field of ℓ p , as the cofiber of the transfer map K(L/p) → K(L p ), i.e. there is a cofiber sequence
K(L/p) → K(L p ) → K(ff (ℓ p )).
Note that K(ff (ℓ p )) is not claimed to be the algebraic K-theory of an E 1 -ring. Ausoni and Rognes continue their discussion in [START_REF]Algebraic K-theory of the fraction field of topological K-theory[END_REF] where they state a conjectural formula [AR09, Section 3]:
(1.6) T (2) * K(ff (ku p )) ∼ = T (2) * K(ff (ℓ p )) ⊗ Fp[v 2 ] F p [b],
here, K(ff (ku p )) is defined to be the cofiber of the transfer map below.
K(KU/p) → K(KU p ) → K(ff (ku p ))
In Theorem 7.23, we verify the conjectural formula in (1.6) under a suitable hypothesis.
Notation 1.7. For p > 3, we let V (1) denote the spectrum given by S/(p, v 1 ). Due to [START_REF] Oka | Multiplicative structure of finite ring spectra and stable homotopy of spheres[END_REF], this is a homotopy commutative ring spectrum. Inverting the self map v 2 of V (1), we obtain T (2
) := V (1)[v ±1 2 ]. Since L T (2) V (1) ≃ T (2) [Hov97, Section 1.5],
we obtain that T (2) is also a homotopy commutative ring spectrum and V (1) → T (2) is a map of homotopy commutative ring spectra.
We let CycSp denote the ∞-category of cyclotomic spectra as in [AMMN22, Definition 2.1]; this is a slight variation of what is called p-cyclotomic spectra in [START_REF] Nikolaus | On topological cyclic homology[END_REF]. In particular, an object of CycSp is an S 1 -equivariant spectrum E with an S 1 -equivariant map E → E tCp . For a given small symmetric monoidal ∞-category C and a presentably symmetric monoidal ∞-category D, we let Fun(C, D) denote the corresponding functor ∞-category equipped with the symmetric monoidal structure given by Day convolution [START_REF] Glasman | Day convolution for ∞-categories[END_REF][START_REF] Day | On closed categories of functors[END_REF]. For a simplicial set K, we let D K denote the symmetric monoidal ∞-category given by the simplicial set of maps from K to D equipped with the levelwise symmetric monoidal structure [START_REF] Lurie | Higher algebra[END_REF]Remark 2.1.3.4].
For m ≥ 0, when we mention the abelian group Z/m as a symmetric monoidal ∞-category, we mean the corresponding discrete symmetric monoidal ∞-category. This is often denoted by (Z/m) ds in the literature.
Acknowledgements. I would like to thank Christian Ausoni for introducing me to this subject and for the valuable discussions I have had with him that led to this project. I also would like to thank Tasos Moulinos and Maximillien Peroux for answering my various questions regarding this work. Furthermore, I also benefited from discussions with Gabriel Angelini-Knoll, Andrew Baker, Jeremy Hahn, Yonatan Harpaz, Eva Höning, Thomas Nikolaus, Tommy Lundemo, Birgit Richter and Steffen Sagave; I would like to thank them as well. We acknowledge support from the project ANR-16-CE40-0003 ChroK and the Engineering and Physical Sciences Research Council (EPSRC) grant EP/T030771/1.
Outline
Here, we provide an outline of the proofs of Theorems 1.1 and 1.5. In Section 4, we use the Sp-linear Fourier transform developed in [START_REF] Carmeli | Chromatic cyclotomic extensions[END_REF] to show that the action of the p-adic Adams operations on ku p equips ku p with the structure of a p -1-graded E ∞ -ring, i.e. an E ∞ -algebra in Fun(Z/(p -1), Sp), compatible with the splitting
ku p ≃ ∨ 0≤i<p-1 Σ 2i ℓ p .
Furthermore, we show that the resulting p -1-graded E 1 -ring structure on ku p agrees with that provided by the root adjunction methods of [START_REF] Ausoni | Adjunction of roots, algebraic K-theory and chromatic redshift[END_REF], i.e. that provided by the equivalence ku p ≃ ℓ p ( p-1 √ v 1 ).
In Section 5, we discuss the resulting grading on THH(ku p ) and TC(ku p ) with further details provided in Appendix A where we show that TC(-) is a lax symmetric monoidal functor from the ∞-category of p -1-graded E 1 -rings to the ∞-category of p -1-graded spectra. We deduce that THH(ku p ) is an E ∞ -algebra in p -1-graded cyclotomic spectra and that TC(ku p ) and TC -(ku p ) are p -1-graded E ∞ -rings.
Since ku p ≃ ℓ p ( p-1 √ v 1 ), it follows from the results of [START_REF] Ausoni | Adjunction of roots, algebraic K-theory and chromatic redshift[END_REF] that TC(ku p ) 0 ≃ TC(ℓ p ) and TC -(ku p ) 0 ≃ TC -(ℓ p ).
Therefore, to obtain Theorem 1.1, it suffices to show that there is a unit
b ∈ T (2) 2p+2 TC(ku p )
of weight 1. For this, we use the element b ∈ V (1) 2p+2 K(ku p ) constructed in [Aus10, Section 3]. We show in Section 6 that logarithmic THH of ku p (as in [START_REF]Logarithmic topological Hochschild homology of topological Ktheory spectra[END_REF]) also admits an S 1 -equivariant splitting compatible with that on THH(ku p ). After this, we obtain that b represents a unit in T (2) * TC -(ku p ) by using the logarithmic THH computations of Rognes, Sagave and Schlichtkrull [START_REF]Logarithmic topological Hochschild homology of topological Ktheory spectra[END_REF], see Section 7. From this, it follows easily that b is indeed a unit of weight 1 in T (2) * TC(ku p ). This provides T (2) * TC(ku p ), i.e. Theorem 1.1, since
T (2) * TC(ku p ) 0 ∼ = T (2) * TC(ℓ p )
and that T (2) * TC(ku p ) is periodic in the weight direction due to the unit b of weight 1 in T (2) * TC(ku p ).
To compute T (2) * K(ku/p), i.e. to prove Theorem 1.5, we construct ku/p as an algebra over ku p in the ∞-category of p -1-graded spectra in Section 4, i.e. ku/p is a p -1-graded ku p -algebra. Furthermore, we show that ku/p ≃ ℓ/p( p-1 √ v 1 ) as p -1-graded E 1 -rings. As a result, we obtain that TC(ku/p) is a p -1-graded TC(ku p )-module (i.e. a module over TC(ku p ) in Fun(Z/(p -1), Sp)) and that TC(ku/p) 0 ≃ TC(ℓ/p).
After this, Theorem 1.5 follows by noting that T (2) * TC(ku p ) contains a unit of weight 1 and therefore, every p-1-graded module over it, such as T (2) * TC(ku/p), is periodic in the weight direction, see Section 7.
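The periodicity argument used in both computations can be recorded in the following elementary form: if R is a Z/(p-1)-graded ring and b ∈ R_1 is a unit of internal degree 2p + 2, then for every Z/(p-1)-graded R-module M, multiplication by b gives isomorphisms
\[
  b\cdot(-)\colon (M_i)_n \;\xrightarrow{\ \cong\ }\; (M_{i+1})_{n+2p+2},
  \qquad\text{so that}\qquad
  M \;\cong\; \bigoplus_{i=0}^{p-2} b^{\,i}\cdot M_0 .
\]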
Graded ring spectra
Here, we set our conventions for graded objects in a presentably symmetric monoidal stable ∞-category C. We start by noting that there is an equivalence of ∞-categories
Fun(Z/m, C) ≃ ∏_{i∈Z/m} C.
We call an object C of Fun(Z/m, C) an m-graded object of C and let
C i denote C(i). If C is an E k -algebra in Fun(Z/m, C), we say C is an m-graded E k -algebra in C. If C ′ is an E k-1 C-algebra in Fun(Z/m, C), we say C ′ is an m-graded C-algebra.
For an M ∈ Fun(Z/m, Sp), we say M is an m-graded spectrum and an E k -algebra in Fun(Z/m, Sp) is called an m-graded E k -ring. For m = 0, we drop m and talk about graded spectra, graded E k -rings and so on.
The map Z/m → 0 provides a symmetric monoidal left adjoint functor
D : Fun(Z/m, C) → Fun(0, C) ≃ C
given by left Kan extension [Nik16, Corollary 3.8]. We call D(C) the underlying object of C and this is given by the formula
D(M) ≃ ∐_{i∈Z/m} M i .
We often omit D in our notation.
We say an m-graded object C of C is concentrated in weight 0 if C i ≃ 0 whenever i ≠ 0. The inclusion 0 Z/m provides another adjunction:
C ≃ Fun(0, C) Fun(Z/m, C) F 0 G 0
where the left adjoint F 0 is symmetric monoidal and given by left Kan extension and G 0 is given by restriction, i.e. G 0 (C) = C 0 . For C ∈ C, F 0 (C) provides C as an m-graded object concentrated in weight 0. We often omit F 0 and for a given C ∈ Fun(Z/m, C), we denote the m-graded object F 0 (C 0 ) by C 0 .
For an m-graded E k -ring A, the counit of this adjunction provides a map A 0 A of m-graded E k -rings. If A is concentrated in weight 0, the counit of F 0 ⊣ G 0 provides an equivalence
(3.1) F 0 G 0 (A) ≃ A. 4. Complex K-theory spectrum as a p -1-graded E ∞ -ring
Here, we use the results of [START_REF] Carmeli | Chromatic cyclotomic extensions[END_REF] to obtain a p -1-graded E ∞ -ring structure on ku p . Furthermore, we show that the resulting p -1-graded E 1 -ring structure on ku p agrees with that provided by the root adjunction methods of [START_REF] Ausoni | Adjunction of roots, algebraic K-theory and chromatic redshift[END_REF].
Similarly, we construct a 2-periodic p -1-graded Morava K-theory spectrum of height 1, i.e. ku p /p, as a p -1-graded E ∞ ku p -algebra. 4.1. Complex K-theory spectrum. Recall that L p KU p is a Galois extension with Galois group ∆ := Z/(p -1) in the sense of [Rog08, Section 5.5.4]. Taking connective covers, one obtains that ku p is a ∆-equivariant E ∞ ℓ p -algebra and there is an equivalence: ℓ p ≃ ku h∆ p . We consider ∆ as the cyclic subgroup Z/(p -1) of Z × p . Let δ denote a generator of ∆ and α denote the corresponding element in Z p . On π
* ku p ∼ = Z p [u 2 ], we have π * (δ)(u i 2 ) = α i u i 2 .
Note that since |∆| = p -1 and since ku p is p-local, the homotopy fixed points above can be computed by taking fixed points at the level of homotopy groups.
The p -1-graded E ∞ ℓ p -algebra structure on ku p is a consequence of the Fourier transform developed in [CSY21, Section 3]. Due to [CSY21, Corollary 3.9], ℓ p admits a primitive p -1-root of unity in the sense of [CSY21, Definition 3.3] which we can choose to be α ∈ Z p ∼ = π 0 ℓ p above. Let ∆ * denote the Pontryagin dual
∆ * := hom(∆, Z/(p -1))
of ∆ for which we have ∆ * ∼ = Z/(p -1). In this situation, [CSY21, Proposition 3.13] provides a symmetric monoidal functor: from the ∞-category of ∆-equivariant ℓ p -modules to the ∞-category of p -1-graded ℓ p -modules. Indeed, this functor is an equivalence of ∞-categories. For the E ∞algebra ku p in LMod B∆ ℓp , we will show that F(ku p ) provides the desired p -1-graded E ∞ ℓ p -algebra structure on ku p .
First, we describe F(ku p ) as a p-1-graded ℓ p -module. Indeed, F provides the underlying eigenspectrum decomposition as described in [CSY21, Remark 3.14]. Namely, by [CSY21, Definition 3.12] we have
F(ku p ) i ≃ (ℓ p (-ϕ i ) ∧ ℓp ku p ) h∆
where ℓ p (-ϕ i ) is given by ℓ p as an ℓ p -module but δ acts through multiplication by α -i on π * (ℓ p (-ϕ i )) [CSY21, Definition 3.10]; here, ϕ i is the map Z/(p -1) Z/(p -1) that multiplies by i. Again, homotopy fixed points may be computed by taking fixed points at the level of homotopy groups and one observes that
π * F(ku p ) i ∼ = π * (ℓ p (-ϕ i ) ∧ ℓp ku p ) h∆
is precisely given by the eigenspace corresponding to α i in π * ku p . This eigenspace is π * (Σ 2i ℓ p ) ⊆ π * ku p . In particular, π * F(ku p ) i is free of rank 1 as a π * ℓ p -module and therefore, we obtain equivalences of ℓ p -modules
F(ku p ) i ≃ Σ 2i ℓ p .
Therefore, the underlying ℓ p -module of the p -1-graded ℓ p -module F(ku p ) is given by
(4.2) D(F(ku p )) ≃ 0≤i<p-1 Σ 2i ℓ p ≃ ku p
as desired. The following proposition identifies the underlying E ∞ ℓ p -algebra of F(ku p ) with ku p .
Proposition 4.3.
There is an equivalence of E ∞ ℓ p -algebras
D(F(ku p )) ≃ ku p .
Proof. Using the strong monoidality of F, one observes that there is an isomorphism
π * D(F(ku p )) ∼ = π * ku p of π * ℓ p -algebras. Inverting v 1 ∈ π * ℓ p , we obtain the following commuting diagram of E ∞ -rings.
(4.4)
ℓ p ——→ D(F(ku p ))
 |              |
 ↓              ↓
L p ——→ D(F(ku p ))[v 1 ^{-1}]
There is an isomorphism of π * L p -algebras
π * D(F(ku p ))[v -1 1 ] ∼ = π * KU p .
Since π * L p → π * KU p is an étale map of Dirac rings [HP22, Example 4.32], we deduce by [HP22, Theorem 1.10] that the isomorphism above lifts to an equivalence of E ∞ L p -algebras
D(F(ku p ))[v -1 1 ] ≃ KU p . Alternatively, this also follows by [BR07, Proposition 2.2.3].
Since the right hand vertical arrow in Diagram (4.4) is a connective cover, the universal property of connective covers (in E ∞ ℓ p -algebras) provides an equivalence D(F(ku p )) ≃ ku p of E ∞ ℓ p -algebras.

Theorem 4.5. The E ∞ ℓ p -algebra ku p admits the structure of a p -1-graded E ∞ ℓ p -algebra such that (ku p ) i ≃ Σ 2i ℓ p .
Remark 4.6. From this point, when we mention ku p as a p -1-graded E ∞ ℓ p -algebra, we mean F(ku p ).
Remark 4.7. We would like to thank Tommy Lundemo for pointing out that it should also be possible to construct a p -1-graded E ∞ -algebra structure on ku (p) using [Sag14, Proposition 4.15] which states that ku (p) can be obtained from ℓ via base change through the polynomial like E ∞ -algebras of [Sag14, Construction 4.2]. 4.2. Adjoining roots to ring spectra. Here, we summarize the root adjunction method developed in [ABM22, Construction 4.6].
Let k > 0 be even and let S[z k ] denote the free E 1 -ring spectrum on S k (this is denoted by S[σ k ] in [START_REF] Ausoni | Adjunction of roots, algebraic K-theory and chromatic redshift[END_REF]). Taking z k to be of weight 1, S[z k ] admits the structure of a graded E 2 -algebra [ABM22, Construction 3.3]. By left Kan extending S[z k ] through Z Z/m, one obtains an m-graded E 2 -ring that we also call S[z k ] with z k in weight 1.
Let A be an E 1 S[z mk ]-algebra where z mk acts through a ∈ π mk A. Using F 0 , we obtain a map of m-graded E 2 -rings concentrated in weight 0:
S[z mk ] → A.
Furthermore, [ABM22, Proposition 3.9] provides a map
S[z mk ] → S[z k ] of m-graded E 2 -rings carrying the weight 0 class z mk to z k ^m in homotopy. Finally, the m-graded E 1 -ring A( m √ a
) is defined via the following relative smash product in m-graded spectra:
(4.8) A( m √ a) := A ∧ S[z mk ] S[z k ].
This comes equipped with a map A → A( m √ a) of m-graded E 1 -rings given by the counit of the adjunction F 0 ⊣ G 0 . It follows by the Künneth spectral sequence that at the level of homotopy rings, one obtains precisely the desired root adjunction:
(4.9) π * A( m √ a) ∼ = π * (A)[y]/(y m -a).
The class y above comes from z k ∈ π * S[z k ] and therefore it is of weight 1. Furthermore, π * A ⊆ π * A( m √ a) is precisely the weight 0 part of π * A( m √ a).

Proof. This follows as in the proof of [ABM22, Lemma 3.6]. The E 2 -ring S[z k ] admits an even cell decomposition [ABM22, Proposition 3.4], i.e. it is given by a filtered colimit of graded E 2 -rings starting with the free graded E 2 -algebra on S k and the later stages given by attaching an even E 2 -cell to the former. Note that left Kan extension through Z → Z/m is left adjoint and symmetric monoidal. Therefore, it preserves free algebras, even cell attachments and filtered colimits. This provides the m-graded E 2 -ring S[z k ] with an even cell decomposition.

4.3. Complex K-theory spectrum via root adjunction. Here, we show that the p -1-graded E 1 ℓ p -algebra structure on ku p provided by Proposition 4.3 agrees with that obtained by adjoining a root to v 1 ∈ π * ℓ p .
Let S[z 2 ] be the p -1-graded E 2 -algebra with z 2 in weight 1 as mentioned earlier. Since π * ℓ p is concentrated in even degrees, Lemma 4.10 provides a p-1-graded E 2 -ring map
(4.11) S[z 2 ] → ku p that carries z 2 to u 2 in homotopy. The aforementioned p-1-graded E 2 -map S[z 2(p-1) ] → S[z 2 ] carrying the weight 0 class z 2(p-1) to z 2 ^{p-1} induces the second equivalence below. S[z 2(p-1) ] ≃ F 0 G 0 (S[z 2(p-1) ]) ≃ F 0 G 0 (S[z 2 ])
The first equivalence follows by (3.1) and these are equivalences of p -1-graded E 2 -rings.
Similarly, the p -1-graded E ∞ -ring map ℓ p → ku p (with ℓ p concentrated in weight 0) provides an equivalence ℓ p ≃ F 0 G 0 (ku p ) of p -1-graded E ∞ -rings concentrated in weight 0. We obtain the following commuting diagram of p -1-graded E 2 -rings by applying the natural transformation F 0 G 0 → id to (4.11) and using the last two equivalences we mentioned above.
(4.12)
S[z 2(p-1) ] ——→ ℓ p
     |              |
     ↓              ↓
   S[z 2 ]  ——→  ku p
In particular, the map ℓ p → ku p is a map of p -1-graded S[z 2(p-1) ]-algebras. The extension/restriction of scalars adjunction induced by the left hand vertical map provides a map
ℓ p ∧_{S[z 2(p-1) ]} S[z 2 ] → ku p of p -1-graded E 1 S[z 2 ]
-algebras. Note that the left hand side above is a form of ℓ p ( p-1 √ v 1 ) as in (4.8). Considering (4.9), one observes that the map above is an equivalence as desired. This proves the following. Proposition 4.13. Let ku p denote a p -1-graded E 1 -ring provided by Theorem 4.5. Then there is an equivalence of p -1-graded E 1 -rings
ku p ≃ ℓ p ( p-1 √ v 1 )
for the form of ℓ p ( p-1 √ v 1 ) constructed above.
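On homotopy rings, this is the expected identification; applying (4.9) with m = p - 1, k = 2 and a = v 1 gives the following routine check, recorded here for convenience:
\[
  \pi_*\,\ell_p(\sqrt[p-1]{v_1}) \;\cong\; \mathbb{Z}_p[v_1][y]/(y^{\,p-1}-v_1)
  \;\cong\; \mathbb{Z}_p[y] \;\cong\; \pi_* ku_p,
  \qquad y \mapsto u_2 .
\]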
4.4. Two periodic Morava K-theory as a p -1-graded ku p -algebra. Using the even cell decomposition of S[z 0 ], we obtain a map S[z 0 ] → ℓ p of E 2 -rings that carries z 0 to p in homotopy. Through this, we define the connective first Morava K-theory k(1) ≃ ℓ/p as an E 1 ℓ p -algebra as follows:
ℓ/p := S ∧_{S[z 0 ]} ℓ p .
Here, we make use of the E ∞ map S[z 0 ] → S sending z 0 to 0; this is the weight 0-Postnikov truncation of S[z 0 ] ([HW20, Lemma B.0.6]). Using F 0 , we consider ℓ/p as a p -1-graded ℓ p -algebra concentrated in weight 0.
We define the connective two periodic first Morava K-theory ku/p as a p-1-graded ku p -algebra as follows.
ku/p := ℓ/p ∧_{ℓ p} ku p

Proposition 4.14. There is an equivalence of p -1-graded E 1 -rings
ku/p ≃ ℓ/p( p-1 √ v 1 ) for some form of ℓ/p( p-1 √ v 1 ).
Proof. There is a map of p -1-graded ℓ p -algebras ℓ/p → ku/p := ℓ/p ∧_{ℓ p} ku p .
The target carries a p -1-graded ku p -algebra structure compatible with its p -1graded ℓ p -algebra structure. Forgetting through Diagram (4.12), this is a map of p -1-graded S[z 2(p-1) ]-algebras where the target carries a compatible p -1-graded S[z 2 ]-algebra structure. Extending scalars, we obtain a map of p -1-graded E 1 -rings:
ℓ/p ∧_{S[z 2(p-1) ]} S[z 2 ] → ku/p, which can easily be shown to be an equivalence.
Graded THH, TC -and TC
Let X be a p-local m-graded E k -ring for m > 0. In [AMMN22, Appendix A], the authors prove that in this situation, THH(X) is an m-graded E k-1 -algebra in Sp^{BS 1}. In particular, THH(X) admits an S 1 -equivariant splitting into a coproduct of m-cofactors. Since the homotopy fixed points functor and the Tate construction commute with finite coproducts, this splits THH(X)^{hS 1} and (THH(X)^{tCp})^{hS 1} as well. However, these splittings may not carry over to TC(X) in general since the canonical map is given by maps THH(X) i ^{hS 1} → (THH(X) i ^{tCp})^{hS 1} that preserve the m-grading whereas the Frobenius map is given by maps THH(X) i → THH(X) pi ^{tCp}. In particular, the fiber sequence defining TC(X)
TC(X) → ∨_{i∈Z/m} THH(X) i ^{hS 1} --(ϕ p ^{hS 1} - can)--> ∨_{i∈Z/m} (THH(X) i ^{tCp})^{hS 1}
may not split. On the other hand, if p = 1 in Z/m, the Frobenius map also respects the m-grading. This results in a splitting of the fiber sequence defining TC(X) and hence a splitting of TC(X) into m-factors. Since p = 1 in Z/(p -1), this applies to our examples. In this section, we make this precise and deduce that TC(ku p ) is a p -1-graded E ∞ -ring and that TC(ku/p) is a p -1-graded TC(ku p )-module.
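Weight by weight, this splitting can be written out as follows (a restatement of the above in formulas, using that pi = i in Z/m when p ≡ 1 mod m): for each i ∈ Z/m there is a fiber sequence
\[
  \mathrm{TC}(X)_i \longrightarrow \mathrm{THH}(X)_i^{hS^1}
  \xrightarrow{\ \varphi_p^{hS^1}-\mathrm{can}\ }
  \big(\mathrm{THH}(X)_i^{tC_p}\big)^{hS^1},
  \qquad\text{and}\qquad
  \mathrm{TC}(X) \simeq \bigvee_{i\in\mathbb{Z}/m} \mathrm{TC}(X)_i .
\]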
For a given spectrum F , we let F triv denote the cyclotomic spectrum with trivial S 1 -action and the Frobenius map given by the composite F → F^{hCp} → F^{tCp}, where the first map comes from the fact that F has the trivial action and the second map is the canonical map.
Definition 5.1. Since CycSp is a stable and presentably symmetric monoidal ∞-category, it follows by Shipley's theorem that there is a unique cocontinuous symmetric monoidal functor (-) triv : Sp → CycSp given by the trivial cyclotomic structure described above.
The right adjoint to (-) triv is the lax symmetric monoidal functor TC : CycSp → Sp given by TC(-) ≃ Map CycSp (S triv , -).
For the rest of this section, assume that m is a positive integer such that p = 1 in Z/m. Using the results of [AMMN22, Appendix A] we prove in Appendix A below that there is a symmetric monoidal functor
Alg E 1 (Fun(Z/m, Sp)) --THH--> Fun(Z/m, CycSp).
Furthermore, it follows by [Nik16, Corollary 3.7] that the levelwise application of TC provides a lax symmetric monoidal functor TC : Fun(Z/m, CycSp) → Fun(Z/m, Sp), that we also call TC. In Appendix A, we prove that the following diagram of lax symmetric monoidal functors commutes.
(5.2)
Alg E 1 (Fun(Z/m, Sp)) --THH--> Fun(Z/m, CycSp) --TC--> Fun(Z/m, Sp)
          |                            |                       |
          ↓                            ↓                       ↓
   Alg E 1 (Sp)         --THH-->     CycSp         --TC-->     Sp
The vertical maps above are given by left Kan extension along Z/m → 0, i.e. they provide the underlying objects.
Remark 5.3. The composite TC • THH at the bottom row above may not in general give the correct result since we only consider one prime in CycSp. However, this is not an issue since we only work with p-complete objects in our applications.
Construction 5.4. Since ku p is a p -1-graded E ∞ -ring, we obtain that THH(ku p ) is a p -1-graded E ∞ -algebra in cyclotomic spectra and that TC(ku p ) is a p -1-graded E ∞ -ring.
Furthermore, in Section 4.4, we defined ku/p as a p -1-graded ku p -algebra. In particular, this implies that ku/p is a right module over ku p in the ∞-category of p -1-graded E 1 -rings, see [ABM22, Construction 4.11]. Therefore, THH(ku/p) is a right THH(ku p )-module in the ∞-category of p -1-graded cyclotomic spectra and that TC(ku/p) is a right TC(ku p )-module in the ∞-category of p -1-graded spectra.
Remark 5.5. Furthermore, the levelwise application of the symmetric monoidal functor CycSp → Sp^{BS 1} that forgets the Frobenius map shows that THH(X) is an m-graded E k-1 -algebra in Sp^{BS 1} whenever X is an m-graded E k -ring. In particular, THH(X)^{hS 1} and (THH(X)^{tCp})^{hS 1} also admit the structures of m-graded E k-1 -algebras.
5.1. Weight zero splitting of THH for root adjunctions. In Section 4.3, we show that the p -1-graded E 1 -ring structure on ku p agrees with that given by the root adjunction method of [START_REF] Ausoni | Adjunction of roots, algebraic K-theory and chromatic redshift[END_REF]. The reason we do this is so that we can make use of Theorem 4.17 of [START_REF] Ausoni | Adjunction of roots, algebraic K-theory and chromatic redshift[END_REF] which states that for a p-local A, THH(A) THH(A( m √ a)) 0 is an equivalence whenever p ∤ m. Furthermore, this equivalence carries over to topological cyclic homology due to [ABM22, Theorem 5.5]. We obtain the following.
Logarithmic THH of the complex K-theory spectrum
Here, we use the p -1-grading on THH(ku p ) to obtain a splitting of the logarithmic THH of ku (p) as an S 1 -equivariant spectrum. We identify the resulting splitting at the level of V (1)-homotopy by using the logarithmic THH computations of Rognes, Sagave and Schlichtkrull in [START_REF]Logarithmic topological Hochschild homology of topological Ktheory spectra[END_REF]. For the rest of this section, let p > 3.
Remark 6.1. For the rest, we consider V (1) → T (2) as a map of commutative monoids in the homotopy category of p -1-graded cyclotomic spectra with the trivial cyclotomic structure concentrated in weight 0 (using F 0 • (-) triv ).
Remark 6.2. In the following, we move freely between THH(ku (p) ) and THH(ku p ) since ultimately, we are interested in the V (1)-homotopy of these objects for which we have an equivalence V (1) ∧ THH(ku (p) ) ≃ V (1) ∧ THH(ku p ). Similarly, we move freely between V (1) ∧ THH(ℓ) and V (1) ∧ THH(ℓ p ).
Let THH(ku (p) | u 2 ) denote the logarithmic THH of ku (p) with respect to the Bott class u 2 ∈ π * (ku (p) ) ∼ = Z (p) [u 2 ] in the sense of [START_REF]Logarithmic topological Hochschild homology of topological Ktheory spectra[END_REF]. In [START_REF]Logarithmic topological Hochschild homology of topological Ktheory spectra[END_REF], this is denoted by THH(ku (p) , D(u)). This is an S 1 -equivariant E ∞ -algebra and there is a cofiber sequence of S 1 -equivariant spectra:
(6.3) THH(ku (p) ) THH(ku (p) | u 2 ) Σ THH(Z (p) ),
where the first map is a map of E ∞ -algebras in S 1 -equivariant spectra, see the discussion after [RSS15, Definition 4.6].
Here, our goal is to prove the following proposition where THH(ℓ | v 1 ) denotes the logarithmic THH of ℓ with respect to the class v 1 ∈ π * ℓ as defined in [START_REF]Logarithmic topological Hochschild homology of topological Ktheory spectra[END_REF] where it is denoted by THH(ℓ, D(v)). Proposition 6.4. There is an equivalence of S 1 -equivariant spectra:
V (1) ∧ THH(ku (p) | u 2 ) ≃ V (1) ∧ THH(ℓ | v 1 ) ∨ i∈Z/(p-1)|i =0 V (1) ∧ THH(ku p ) i ,
given by the coproduct of the map
V (1) ∧ THH(ℓ | v 1 ) V (1) ∧ THH(ku (p) | u 2 ) with the composite: i∈Z/(p-1)|i =0 V (1) ∧ THH(ku p ) i V (1) ∧ THH(ku p ) V (1) ∧ THH(ku (p) | u 2 ),
where the first map is given by the inclusion of the given summands of the p-1-graded spectrum THH(ku p ) and the second one is the canonical one.
Remark 6.5. Since ku p ≃ ℓ p ( p-1 √ v 1 ), this is an immediate consequence of the results of [ABM22, Section 6] if we assume that the definition of logarithmic THH in [START_REF] Ausoni | Adjunction of roots, algebraic K-theory and chromatic redshift[END_REF] agrees with that used in [START_REF]Logarithmic topological Hochschild homology of topological Ktheory spectra[END_REF]. This compatibility result is not available at the moment, and therefore, we will not assume it. On the other hand, Devalapurkar and Moulinos prove this compatibility result in their upcoming work.
Proof. Due to [RSS18, Theorem 4.4], there is a map of homotopy cofiber sequences of S 1 -equivariant spectra:
V (1) ∧ THH(Z (p) ) V (1) ∧ THH(ℓ) V (1) ∧ THH(ℓ | v 1 ) V (1) ∧ THH(Z (p) ) V (1) ∧ THH(ku p ) V (1) ∧ THH(ku (p) | u 2 )
as mentioned in [RSS18, Equation (8.1)]. Here, the left hand vertical map is an equivalence. Therefore, the bottom left horizontal map factors as
V (1)∧THH(Z (p) ) V (1)∧THH(ℓ) V (1)∧THH(ku p ) ≃ i∈Z/(p-1)
V (1)∧THH(ku p ) i .
The second map above is the inclusion of the weight 0 summand due to Proposition 5.6. In particular, the cofiber sequence given by the bottom row splits through the splitting of THH(ku p ). Namely, this cofiber sequence is given by a coproduct of the cofiber sequence given by the top row and the cofiber sequence
* i∈Z/(p-1)|i =0 V (1) ∧ THH(ku p ) i ≃ - i∈Z/(p-1)|i =0 V (1) ∧ THH(ku p ) i .
This identifies the cofiber, i.e. V (1) ∧ THH(ku (p) | u 2 ) as stated in the proposition.
We will identify the homotopy groups of the summands of V (1) ∧ THH(ku (p) | u 2 ) given by the splitting above. For this, we start by recalling the computations of V (1) * THH(ℓ | v 1 ) and V (1) * THH(ku (p) | u 2 ) from [START_REF]Logarithmic topological Hochschild homology of topological Ktheory spectra[END_REF]. For what follows, E(x, y), P (x) and P k (x) denote the exterior algebra over F p in two variables, the polynomial algebra F p [x] and the truncated polynomial algebra F p [x]/x k respectively. Theorem 6.6. [RSS18, Theorems 7.3 and 8.1] There are ring isomorphisms:
V (1) * THH(ℓ | v 1 ) ∼ =E(λ 1 , d log v 1 ) ⊗ P (κ 1 ) V (1) * THH(ku (p) | u 2 ) ∼ =P p-1 (u 2 ) ⊗ E(λ 1 , d log u 2 ) ⊗ P (κ 1 )
where
|λ 1 | = 2p -1, |κ 1 | = 2p, |d log v 1 | = |d log u 2 | = 1 and |u 2 | = 2. Furthermore, the map V (1) * THH(ℓ | v 1 ) V (1) * THH(ku (p) | u 2 )
is given by the ring map that carries d log v 1 to -d log u 2 , λ 1 to λ 1 and κ 1 to κ 1 .
Recall that there is an action of the group ∆ := Z/(p -1) on ku p through Adams operations. Let δ ∈ ∆ be a chosen generator, and choose β ∈ F × p such that π * (S/p ∧ δ)(u 2 ) = βu 2 ; here,
π * (S/p ∧ δ) : π * (S/p ∧ ku p ) π * (S/p ∧ ku p ) ∼ = F p [u 2 ].
A given x ∈ V (1) * THH(ku p ) is said to have δ-weight i ∈ Z/(p -1) if the automorphism of V (1) * THH(ku p ) induced by δ carries x to β i x [Aus05, Definition 8.2]; the δ-weights of the generators of V (1) * THH(ku p ) are given in [Aus05, Proposition 10.1]. One defines δ-weight in a similar way for V (1) * K(ku p ), V (1) * TC(ku p ) etc. 7.1. Higher Bott element. In [Aus10, Section 3], Ausoni constructs a non-trivial class b ∈ V (1) 2p+2 K(ku p ) of δ-weight 1, that he calls the higher Bott element, by considering the units of ku p . Namely, b is constructed using a map
K(Z, 2) → GL 1 (ku p )
and it originates from K(2) * K(Z, 3) which is known due to Ravenel-Wilson [START_REF] Ravenel | The Morava K-theories of Eilenberg-Mac Lane spaces and the Conner-Floyd conjecture[END_REF]. Let b ∈ V (1) 2p+2 TC(ku p ) also denote the image of this class under the map V (1) * K(ku p ) → V (1) * TC(ku p ); this is also a non-trivial class due to the following.
Proposition 7.2. The classes b mentioned above satisfy the following properties.
(1) The map V (1) * TC(ku p ) → V (1) * THH(ku p ) carries b to a δ-weight 1 class denoted as b 1 in [Aus05], see [Aus10, Lemma 4.4]. Since b 1 is of δ-weight 1, we have b 1 ∈ V (1) * THH(ku p ) 1 .
(2) The map V (1) * THH(ku p ) → V (1) * THH(ku (p) | u 2 ) carries b 1 to u 2 κ 1 [RSS18, Theorem 8.5].
(3) In V (1) * K(ku p ), we have b(b p-1 + v 2 ) = 0 [Aus10, Proposition 2.7].
We prove the following.
Proposition 7.3. The higher Bott element b ∈ V (1) * TC(ku p ) is a homogeneous element of weight 1 in the p -1-grading. In other words, b ∈ V (1) * TC(ku p ) 1 . Similarly, the corresponding element b ∈ V (1) * THH(ku p ) hS 1 is also of weight 1.
Proof. Indeed, we show that all the elements in V (1) * TC(THH(ku p ) i ) are of δ-weight i. Since ∆ = Z/(p -1) is an abelian group, the map δ : ku p ku p induced by the chosen generator δ ∈ ∆ is a map of E ∞ -algebras in the ∞-category of ∆-equivariant ℓ p -modules (not just a map of E ∞ ℓ p -algebras). Therefore, using F in (4.1), δ : ku p ku p can be considered as a map of p -1-graded E ∞ ℓ p -algebras.
As a result, the induced map THH(δ) is a map of p -1-graded cyclotomic objects. In particular, it preserves weight at the level of TC, TC - and TP. Recall that each x ∈ V (1) * THH(ku p ) i is of δ-weight i. Therefore, the map induced by δ at the level of the homotopy fixed point spectral sequence for V (1) * THH(ku p ) hS 1 i is given by multiplication by β i ∈ F × p . Since V (1) * THH(ku p ) i is finite at each degree, this spectral sequence is strongly convergent [Boa99, Theorem 7.1]. We deduce that every class in V (1) * THH(ku p ) hS 1 i with defined δ-weight has δ-weight i. On the other hand, V (1) * THH(δ) hS 1 i is diagonalizable (since its (p -1)-st power is the identity), i.e. V (1) * THH(ku p ) hS 1 i has a basis for which δ-weight is defined for each basis element. Therefore, we deduce that all the classes in V (1) * THH(ku p ) hS 1 i are of δ-weight i. The same argument shows that every class in V (1) * THH(ku p ) tS 1 i is of δ-weight i. The fiber sequence defining TC also shows that each class in V (1) * TC(THH(ku p ) i ) either has δ-weight i or has undefined δ-weight, but since this action is again diagonalizable, we deduce that every class in V (1) * TC(THH(ku p ) i ) is of δ-weight i. Since b is of δ-weight 1, the result follows.
7.2. Topological cyclic homology. As mentioned earlier, we need to show that b ∈ T (2) * TC(ku p ) is a unit. For this, we construct multiplication by b as a self map of the cyclotomic spectrum V (1) ∧ THH(ku p ) and show that it induces a self equivalence of T (2) ∧ TC(ku p ). We first show that b provides a unit in T (2) * THH(ku p ) hS 1 by comparing it with the corresponding multiplication in T (2) * THH(ku p | u 2 ) hS 1 .
Construction 7.4. We start with the map S 2p+2 → TC(V (1) ∧ THH(ku p )) representing b. Using the adjunction (-) triv ⊣ TC mentioned in Definition 5.1, one obtains a map of cyclotomic spectra
b 1 : Σ 2p+2 S triv V (1) ∧ THH(ku p )
representing the class b 1 . We define
m b : Σ 2p+2 V (1) ∧ THH(ku p ) V (1) ∧ THH(ku p )
as the following composite map of cyclotomic spectra.
m b : Σ 2p+2 V (1) ∧ THH(ku p ) ≃ V (1) ∧ THH(ku p ) ∧ Σ 2p+2 S triv id∧b 1 --- V (1) ∧ THH(ku p ) ∧ V (1) ∧ THH(ku p ) V (1) ∧ THH(ku p )
Here, id denotes the identity map of V (1) ∧ THH(ku p ) and the second map above is given by the multiplication maps of THH(ku p ) and V (1).
We construct a similar map for logarithmic THH of ku (p) which is compatible with the one constructed above.
Construction 7.5. The first map below is the underlying S 1 -equivariant map of the map b 1 in Construction 7.4; the second map is the usual one.
u 2 κ 1 : Σ 2p+2 S triv b 1 -V (1) ∧ THH(ku p ) V (1) ∧ THH(ku (p) | u 2 )
This composite is a map of S 1 -equivariant spectra. Furthermore, it represents u 2 κ 1 in homotopy due to Proposition 7.2. As in Construction 7.4, we define an S 1 -equivariant map:
m u 2 κ 1 : Σ 2p+2 V (1) ∧ THH(ku (p) | u 2 ) V (1) ∧ THH(ku (p) | u 2 ), through the following composite. m u 2 κ 1 : Σ 2p+2 V (1) ∧ THH(ku (p) | u 2 ) ≃ V (1) ∧ THH(ku (p) | u 2 ) ∧ Σ 2p+2 S triv id∧u 2 κ 1 ---- V (1) ∧ THH(ku (p) | u 2 ) ∧ V (1) ∧ THH(ku (p) | u 2 ) V (1) ∧ THH(ku (p) | u 2 ) Since V (1)∧THH(ku p ) V (1)∧THH(ku (p) | u 2
) is a map of monoids in the homotopy category of S 1 -equivariant spectra, the following canonical diagram of S 1 -equivariant spectra commutes up to homotopy.
(7.6) Σ 2p+2 V (1) ∧ THH(ku p ) Σ 2p+2 V (1) ∧ THH(ku (p) | u 2 ) V (1) ∧ THH(ku p ) V (1) ∧ THH(ku (p) | u 2 ) m b mu 2 κ 1
Proposition 7.7. The map THH(ℓ) THH(ℓ | v 1 ) induces an equivalence
L T (2) THH(ℓ) hS 1 ≃ -L T (2) THH(ℓ | v 1 ) hS 1 .
Proof. There is an E ∞ -map K(Z (p) ) THH(Z (p) ) hS 1 and we have L T (2) K(Z (p) ) ≃ 0 due to [START_REF] Land | Purity in chromatically localized algebraic K-theory[END_REF]Purity Theorem]. This implies that
L T (2) THH(Z (p) ) hS 1 ≃ 0.
Since the cofiber of the map THH(ℓ) hS 1 THH(ℓ | v 1 ) hS 1 is given by Σ THH(Z (p) ) hS 1 , this provides the desired result.
Remark 7.8. As mentioned earlier, the map V (1) → T (2) is given by the T (2)-localization V (1) → L T (2) V (1) ≃ T (2). For a given spectrum E, T (2) ∧ E is a homotopy T (2)-module and therefore, T (2) ∧ E is T (2)-local. Furthermore, V (1) ∧ E → T (2) ∧ E is a T (2)-equivalence as V (1) → T (2) is. Therefore, V (1) ∧ E → T (2) ∧ E is given by the T (2)-localization:
V (1) ∧ E → L T (2) (V (1) ∧ E) ≃ T (2) ∧ E.
Proposition 7.9. For the composite S 1 -equivariant map:
f : Σ 2p+2 V (1) ∧ THH(ℓ | v 1 ) → Σ 2p+2 V (1) ∧ THH(ku (p) | u 2 ) --m u 2 κ 1 --> V (1) ∧ THH(ku (p) | u 2 ) → V (1) ∧ THH(ku p ) 1 ,
L T (2) (f hS 1 ) is an equivalence. Here, the first and the last maps are those provided by Proposition 6.4; indeed, the last map above is the projection to the factor of V (1) ∧ THH(ku (p) | u 2 ) corresponding to 1 ∈ Z/(p -1).
Proof. Due to Theorem 6.6 and Proposition 6.7, π * f can be given by the composite map
π * f : E(λ 1 , d log v 1 ) ⊗ P (κ 1 ) {1} ⊗ E(λ 1 , d log u 2 ) ⊗ P (κ 1 ) •u 2 κ 1 ---{u 2 } ⊗ E(λ 1 , d log u 2 ) ⊗ P (κ 1 )
where the first map above sends d log v 1 to -d log u 2 and fixes the other generators and the second map multiplies by u 2 κ 1 . The first map is an isomorphism and the second map above is an isomorphism in sufficiently large degrees. We deduce that π * f is an isomorphism in sufficiently large degrees.
In particular, the cofiber of f , let us call it C, is bounded from above in homotopy. Therefore, C hS 1 is also bounded from above in homotopy since (-) hS 1 preserves coconnectivity. In particular, L T (2) (C hS 1 ) ≃ 0. This means that L T (2) (f hS 1 ) is an equivalence as desired.
Remark 7.10. In the construction of the map f above, if we used V (1) ∧ THH(ku p ) (together with its weight splitting and m b ) instead of V (1) ∧ THH(ku (p) | u 2 ), the proof above would fail to go through. This is because the cofiber of f would not be bounded from above. This is precisely the reason why we use logarithmic THH for our computations.
The following is the non-logarithmic analogue of the proposition above.
Proposition 7.11. For the composite S 1 -equivariant map:
g : Σ 2p+2 V (1) ∧ THH(ku p ) 0 → Σ 2p+2 V (1) ∧ THH(ku p ) --m b --> V (1) ∧ THH(ku p ) → V (1) ∧ THH(ku p ) 1 ,
L T (2) (g hS 1 ) is an equivalence. Here, the first and the last maps are given by the p -1-grading on THH(ku p ).
Proof. For this, we consider the following (up to homotopy) commuting diagram of S 1 -equivariant spectra.
Σ 2p+2 V (1) ∧ THH(ku p ) 0 Σ 2p+2 V (1) ∧ THH(ℓ | v 1 ) Σ 2p+2 V (1) ∧ THH(ku p ) Σ 2p+2 V (1) ∧ THH(ku (p) | u 2 ) V (1) ∧ THH(ku p ) V (1) ∧ THH(ku (p) | u 2 ) V (1) ∧ THH(ku p ) 1 V (1) ∧ THH(ku p ) 1 g f m b mu 2 κ 1 id
The sequence of vertical maps on the left hand side is the composite defining g and the sequence of vertical maps on the right hand side is the composite defining the map f in Proposition 7.9. The upper horizontal map is given by the passage from THH to log THH by noting THH(ku p ) 0 ≃ THH(ℓ p ) (see Proposition 5.6). The lower horizontal map is the identity map. The inner square above is given by Diagram (7.6) which commutes up to homotopy. By the definition of the rest of the maps, one observes that the diagram above commutes. Due to Proposition 7.9, L T (2) (f hS 1 ) is an equivalence. Furthermore, the top horizontal map is also an equivalence after applying L T (2) (-hS 1 ) due to Propositions 7.7 and 5.6. Since the lower horizontal map is also an equivalence, we deduce that L T (2) (g hS 1 ) is an equivalence as desired.
For the rest, we also let b ∈ V (1) * THH(ku p ) hS 1 denote the image of the higher Bott element b ∈ V (1) * K(ku p ) under the trace map V (1) * K(ku p )
V (1) * THH(ku p ) hS 1 .
Corollary 7.12. After restricting and corestricting, multiplication by b provides an isomorphism
•b : T (2) * Σ 2p+2 THH(ku p ) hS 1 0 ∼ = -T (2) * THH(ku p ) hS 1 1
between the sets of weight 0 and weight 1 classes in T (2) * THH(ku p ) hS 1 .
Proof. By Remark 7.8 and the lax monoidal structure of the fixed points functor -hS 1 , this map is given by π * L T (2) (g hS 1 ) which is an isomorphism due to Proposition 7.11.
Corollary 7.13. In T (2) * THH(ku p ) hS 1 , we have b p-1 = -v 2 . In particular, b ∈ T (2) * THH(ku p ) hS 1 is a unit.
Proof. By Proposition 7.2, we have b(b p-1 + v 2 ) = 0 in T (2) * K(ku p ). Using the ring map T (2) * K(ku p ) → T (2) * THH(ku p ) hS 1 , we obtain that
(7.14) b(b p-1 + v 2 ) = 0 in T (2) * THH(ku p ) hS 1 . Due to Proposition 7.3, b is of weight 1 in V (1) * THH(ku p ) hS 1 .
In particular, b p-1 + v 2 is of weight 0 as v 2 is of weight 0. However, multiplication by b does not annihilate any non-trivial weight 0 classes in T (2) * THH(ku p ) hS 1 due to Corollary 7.12. This, together with (7.14) implies that b p-1 + v 2 = 0 in T (2) * THH(ku p ) hS 1 as desired.
We are going to use the following two propositions to deduce that L T (2) TC(m b ) is an equivalence; i.e. that b is a unit in T (2) * TC(ku p ).
Proposition 7.15. The map
L T (2) (m hS 1 b ) : Σ 2p+2 T (2) ∧ THH(ku p ) hS 1 ≃ -T (2) ∧ THH(ku p ) hS 1 is an equivalence.
Proof. Using the lax structure of the homotopy fixed points functor (-) hS 1 and Remark 7.8, one observes that the map
π * L T (2) (m hS 1 b ) is precisely the map T (2) * THH(ku p ) hS 1 T (2) * THH(ku p ) hS 1
given by multiplication by b. This is an isomorphism due to Corollary 7.13.
Proposition 7.16. The map L T (2) (m tS 1 b ) is an equivalence. Proof. Since m b is an S 1 -equivariant map, we have the following commuting diagram given by the canonical natural transformation in [NS18, Corollary I.4.3].
Σ 2p+2 T (2) ∧ THH(ku p ) hS 1 Σ 2p+2 T (2) ∧ THH(ku p ) tS 1 T (2) ∧ THH(ku p ) hS 1 T (2) ∧ THH(ku p ) tS 1 can ≃ L T (2) (m hS 1 b ) ≃ L T (2) (m tS 1 b ) can ≃
The maps can above are equivalences since their fibers are given by
T (2) ∧ Σ THH(ku p ) hS 1 ≃ 0 due to [NS18, Corollary I.4.3].
The left hand vertical map is an equivalence due to Proposition 7.15; and therefore, the right hand vertical map is also an equivalence.
Proposition 7.17. The higher Bott element b ∈ T (2) 2p+2 TC(ku p ) is a unit.
Proof. Recall that m b is a map of cyclotomic spectra by construction. Furthermore, the map π * L T (2) TC(m b ) : T (2) * Σ 2p+2 TC(ku p ) → T (2) * TC(ku p ) is given by multiplication by b. Therefore, it is sufficient to show that L T (2) TC(m b ) is an equivalence. Since m b is cyclotomic, this induces a map of fiber sequences as follows, see [NS18, Lemma II.4.2].
Σ 2p+2 T (2) ∧ TC(ku p ) T (2) ∧ TC(ku p ) Σ 2p+2 T (2) ∧ THH(ku p ) hS 1 T (2) ∧ THH(ku p ) hS 1 Σ 2p+2 T (2) ∧ THH(ku p ) tS 1 T (2) ∧ THH(ku p ) tS 1 L T (2) TC(m b ) ϕ hS 1 p -can L T (2) (m hS 1 b ) ≃ ϕ hS 1 p -can L T (2) (m tS 1 b ) ≃
The middle and the bottom horizontal maps are equivalences due to Propositions 7.15 and 7.16. Since this is a map of fiber sequences, we deduce that the top horizontal map is an equivalence as desired.
Theorem 7.18 (Theorem 1.1). Let p > 3 be a prime. There is an isomorphism of graded abelian groups:
T (2) * K(ku) ∼ = T (2) * K(ℓ)[b]/(b p-1 + v 2 ),
where |b| = 2p + 2.
Proof. Due to [LMMT20, Purity Theorem] and the Dundas-Goodwillie-McCarthy theorem, we have
T (2) * K(ku) ∼ = T (2) * TC(ku p ) and T (2) * K(ℓ) ∼ = T (2) * TC(ℓ p ).
Therefore, it is sufficient to prove the corresponding statement:
T (2) * TC(ku p ) ∼ = T (2) * TC(ℓ p )[b]/(b p-1 + v 2 ),
at the level of topological cyclic homology. Since T (2) * TC(ku p ) is a p -1-graded ring with a unit in weight 1 (Propositions 7.3 and 7.17), it is periodic in its weight direction. In other words, multiplication by b i provides an isomorphism
•b i : T (2) * Σ (2p+2)i TC(ku p ) 0 ∼ =
-T (2) * TC(ku p ) i for each 0 < i < p -1. Furthermore, T (2) * TC(ku p ) 0 ∼ = T (2) * TC(ℓ p ) due to Proposition 5.6. This proves the desired isomorphism.
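Spelled out, the weight decomposition underlying this argument reads (this is only a restatement of the isomorphisms established above):
T (2) * TC(ku p ) = ⊕_{i=0}^{p-2} T (2) * TC(ku p ) i ≅ ⊕_{i=0}^{p-2} b^i · T (2) * TC(ℓ p ),
which, as a graded abelian group, is exactly T (2) * TC(ℓ p )[b]/(b^{p-1} + v 2 ) with |b| = 2p + 2.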
Remark 7.19. Blumberg, Gepner and Tabuada prove that there is a lax symmetric monoidal transformation from algebraic K-theory to topological cyclic homology [START_REF] Andrew | Uniqueness of the multiplicative cyclotomic trace[END_REF] given by the cyclotomic trace. On the other hand, we defined the lax symmetric monoidal structure on TC(-) using cyclotomic spectra as in [START_REF] Nikolaus | On topological cyclic homology[END_REF] whereas the older definition of topological cyclic homology is used in [START_REF] Andrew | Uniqueness of the multiplicative cyclotomic trace[END_REF]. On connective E 1 -rings, the two definitions of topological cyclic homology provide the same spectrum [START_REF] Nikolaus | On topological cyclic homology[END_REF]. From this, one obtains a map of spectra K(-) TC(-) on connective E 1 -rings for TC(-) as in Definition 5.1. To our knowledge, a lax symmetric monoidal comparison of the two definitions of topological cyclic homology is not currently available in the literature. Therefore, we do not assume the existence of a lax symmetric monoidal transformation K(-) TC(-) unless we explicitly state otherwise. Note that we only make this assumption in Remarks 7.20 and 7.22 and Theorem 7.23.
On the other hand, it is highly expected that the agreement of the two definitions of TC can be improved to that of lax symmetric monoidal functors and it should be possible to obtain a lax symmetric monoidal transformation K(-) TC(-) for TC as in Definition 5.1.
Remark 7.20. If we assume that the trace map K(-) TC(-) is lax symmetric monoidal (see Remark 7.19), then we have a T (2)-equivalence K(ku) TC(ku p ) of E ∞ -rings. This shows that b is a unit in T (2) * K(ku). Since b(b p-1 + v 2 ) = 0 in T (2) * K(ku) (Proposition 7.2), one obtains that b p-1 = -v 2 in T (2) * K(ku). In particular, the isomorphism in Theorem 1.1 improves to an isomorphism of graded rings.
7.3. Algebraic K-theory of the 2-periodic Morava K-theory. At this point, Theorem 1.5 follows easily from our previous arguments.
Theorem 7.21 (Theorem 1.5). Let p > 3 be a prime. There is an isomorphism of graded abelian groups:
T (2) * K(ku/p) ∼ = T (2) * K(ℓ/p) ⊗ Fp[v 2 ] F p [b]
with |b| = 2p + 2 and in the tensor product above, we take v 2 = -b p-1 .
Proof. As before, it is sufficient to prove the same identity at the level of topological cyclic homology, i.e. we need to show that
T (2) * TC(ku/p) ∼ = T (2) * TC(ℓ/p) ⊗ Fp[v 2 ] F p [b].
Recall from Construction 5.4 that TC(ku/p) is a module over TC(ku p ) in p -1graded spectra. By Proposition 5.7, we have T (2) * TC(ku/p) 0 ∼ = T (2) * TC(ℓ/p). Furthermore, there is a unit b ∈ T (2) * TC(ku p ) of weight 1, (Propositions 7.3 and 7.17). Therefore, multiplying by powers of b induces isomorphisms
•b i : T (2) * Σ (2p+2)i TC(ku/p) 0 ∼ =
-T (2) * TC(ku/p) i for each 0 < i < p -1. This provides the desired result. Theorem 7.23. Let p > 3 be a prime. Assume that the natural transformation K(-) TC(-) on connective E 1 -rings given by the cyclotomic trace is lax symmetric monoidal (see Remark 7.19). There is an isomorphism of F p [b]-modules:
(7.24) T (2) * K(ff (ku p )) ∼ = T (2) * K(ff (ℓ p )) ⊗ Fp[v 2 ] F p [b]
where v 2 = -b p-1 and |b| = 2p + 2.
Proof. The trace map K(-) TC(-) is a T (2)-equivalence for ku p , ku/p, ℓ p and ℓ/p [LMMT20, Corollary E]. Therefore, we obtain that T (2) ∧ K(ku p ) is a monoid in the homotopy category of p -1-graded spectra and T (2) ∧ K(ku/p) is a left module over T (2) ∧ K(ku p ) in the homotopy category of p -1-graded spectra.
Let
τ : T (2) ∧ K(ku/p) T (2) ∧ K(ku p )
denote the map induced by transfer along ku p ku/p. Since
T (2) ∧ K(ku/p) ≃ 0≤i<p-1 T (2) ∧ K(ku/p) i ,
it is sufficient to understand the restriction of τ to T (2) ∧ K(ku/p) i for each i. For i = 0, we consider the commuting diagram of E 1 -rings:
ℓ p ℓ/p ku p ku/p.
We obtain a commuting diagram of spectra:
(7.25) K(ℓ/p) K(ℓ p ) K(ku/p) K(ku p ) by using the following equivalence of the corresponding functors induced at the level of module categories:
ku p ∧ ℓp -≃ (ku p ∧ ℓp ℓ/p) ∧ ℓ/p -≃ ku/p ∧ ℓ/p -.
Let τ ′ : T (2) ∧ K(ℓ/p) T (2) ∧ K(ℓ) denote the map induced by transfer along ℓ ℓ/p. Diagram (7.25) provides that the restriction of τ to T (2) ∧K(ku/p) 0 is given by the following map.
T (2) ∧ K(ku/p) 0 ≃ T (2) ∧ K(ℓ/p) τ ′ - T (2) ∧ K(ℓ p ) ≃ T (2) ∧ K(ku p ) 0 T (2) ∧ K(ku p )
Here, the last map is the inclusion of the weight 0-component and the equivalences above are provided by Propositions 5.6 and 5.7. Let τ 0 denote the map
τ 0 : T (2) ∧ K(ku/p) 0 T (2) ∧ K(ku p ) 0
in the composite above. Let 0 < i < p-1. To describe the restriction of τ to T (2)∧K(ku/p) i , we use the fact that τ is a map of T (2)∧K(ku p )-modules in the stable homotopy category, see [AR09, Section 3]. By Propositions 7.3 and 7.17, there is a unit b i ∈ T (2) * K(ku p ) of weight i. Omitting the suspension functor, let m 1 : T (2) ∧ K(ku/p) 0 T (2) ∧ K(ku/p) i and m 2 : T (2)∧K(ku p ) 0 T (2)∧K(ku p ) i denote the equivalences given by multiplication with b i ∈ T (2) * K(ku p ). Abusing notation, let m 1 and m 2 also denote the respective endomorphisms of T (2) ∧ K(ku/p) and T (2) ∧ K(ku p ). We have the following up-to homotopy commuting diagram of spectra.
T (2) ∧ K(ku p ) 0 T (2) ∧ K(ku/p) 0 T (2) ∧ K(ku/p) T (2) ∧ K(ku p ) T (2) ∧ K(ku p ) i T (2) ∧ K(ku/p) i T (2) ∧ K(ku/p) T (2) ∧ K(ku p ) m 2 ≃ τ 0 m 1 ≃ m 1 τ m 2 τ
Here, the unmarked arrows are the canonical inclusions. The bottom left hand square and the right hand diagram commute since T (2) ∧ K(ku/p) and T (2) ∧ K(ku p ) are modules over T (2) ∧ K(ku p ) in the homotopy category of p -1-graded spectra. The inner square commutes as τ is a map of modules over T (2) ∧ K(ku p ) in the homotopy category of spectra. The top left diagram commutes due to our previous identification of τ 0 . The commuting diagram above shows that the restriction of τ to T (2) ∧ K(ku/p) i is given by the composite:
T (2) ∧ K(ku/p) i m -1 1 -- ≃ T (2) ∧ K(ku/p) 0 τ 0 - T (2) ∧ K(ku p ) 0 m 2 - ≃ T (2) ∧ K(ku p ) i T (2) ∧ K(ku p ).
Letting τ i : T (2)∧K(ku/p) i T (2)∧K(ku p ) i be as in the composite above, we obtain that τ is given by τ ≃ 0≤i<p-1 τ i and that each τ i is equivalent to τ 0 up to a suspension. Since τ 0 is equivalent to τ ′ , this provides the desired splitting of the cofiber of τ as a coproduct of shifted copies of T (2) ∧ K(ff (ℓ p )). This proves (7.24) as an isomorphism of abelian groups. Due to the argument above, the resulting cofactors of T (2) ∧ K(ku/p) are connected through multiplication by b and this shows that (7.24) is an isomorphism of F p [b]-modules.
Appendix A. Graded cyclotomic spectra
For this section, let m be a positive integer such that p = 1 in Z/m. Here, our goal is to construct a symmetric monoidal functor THH : Alg E 1 (Fun(Z/m, Sp))
Fun(Z/m, CycSp) and show that the resulting diagram:
(5.2)
Alg E 1 (Fun(Z/m, Sp)) Fun(Z/m, CycSp) Fun(Z/m, Sp) Alg E 1 (Sp) CycSp Sp THH TC D D ′ THH TC
of lax symmetric monoidal functors commutes. Note that this diagram is also stated as Diagram (5.2) in Section 5. Recall that the vertical functors above are given by left Kan extending through Z/m 0, i.e. they provide the corresponding underlying objects. Furthermore, the upper right hand horizontal arrow TC is given by levelwise application of TC : CycSp Sp. We first prove the following proposition which states that the right hand square in Diagram (5.2) commutes. Proof. Let R and R ′ denote the right adjoints of D and D ′ respectively. The functors R and R ′ are given by restriction along Z/m 0. Here, we denote the top horizontal arrow by TC level to distinguish it from the bottom horizontal arrow TC.
First, we show that there is a lax symmetric monoidal natural transformation φ : D ′ ∘ TC level → TC ∘ D; later, we complete the proof by showing that φ is an equivalence. By adjunction, it is sufficient to obtain a lax symmetric monoidal transformation TC level → R ′ ∘ TC ∘ D.
Since precomposition followed by postcomposition agrees with postcomposition followed by precomposition, we have R ′ ∘ TC ≃ TC level ∘ R. Therefore, it is sufficient to obtain a lax symmetric monoidal transformation TC level → TC level ∘ R ∘ D, and this is given by the unit of the adjunction D ⊣ R. This provides φ above. Indeed, φ is given by the canonical map
φ X : ∨ i∈Z/m TC(X i ) → TC(∨ i∈Z/m X i ).
Since m ≠ 0, the coproducts above are finite and due to [NS18, Corollary II.1.7], colimits of cyclotomic spectra agree with those of the underlying spectra. Furthermore, both CycSp and Sp are stable and therefore these coproducts are the corresponding products. As TC commutes with finite products, we obtain that φ is an equivalence as desired.
What remains is to construct the left hand square in Diagram 5.2 and show that it commutes.
For the rest, we let Gr m (C) denote Fun(Z/m, C) for a given presentably symmetric monoidal ∞-category C. Abusing notation, let (-) tCp : Gr m (Sp BS 1 ) Gr m (Sp BS 1 ) also denote the lax symmetric monoidal functor given by levelwise application of (-) tCp . Slightly diverting from the notation of [START_REF] Nikolaus | On topological cyclic homology[END_REF], we let Leq Gr m (Sp BS 1 ), (-) tCp denote the ∞-category defined as the lax equalizer of the identity endofunctor and the endofunctor (-) tCp on Gr m (Sp BS 1 ) in the sense of [START_REF] Nikolaus | On topological cyclic homology[END_REF]Definition II.1.4]. The ∞-category Leq Gr m (Sp BS 1 ), (-) tCp is defined via the following pullback square.
Leq Gr m (Sp BS 1 ), (-) tCp Gr m (Sp BS 1 ) ∆ 1
Gr m (Sp BS 1 ) Gr m (Sp BS 1 ) × Gr m (Sp BS 1 ) ev 0 ×ev 1 (id,(-) tCp )
In particular, the objects of this pullback ∞-category are given by an object of E ∈ Gr m (Sp BS 1 ) and a morphism E E tCp . In [AMMN22, Appendix A], the authors construct THH as a functor on graded ring spectra and show that it fits into the following commuting diagram of symmetric monoidal functors [AMMN22, Proposition A. Let R and R ′′ denote the right adjoints of D and D ′′ respectively. These are given by the corresponding restriction functors along Z/m 0. Since T is an equivalence, T -1 R ′′ is a right adjoint to D ′′ T and therefore, it is sufficient to show that R ≃ T -1 R ′′ , i.e. the right adjoints of D and D ′′ T agree. For this, it is sufficient to show that T R ≃ R ′′ . This follows by the fact that the functor F and the lax transformation F (-) tCp • F are defined levelwise.
Lemma A.6. Let η : T S be a lax symmetric monoidal transformation of lax symmetric monoidal functors between presentably symmetric monoidal ∞-categories C and D. Then applying η levelwise induces a lax symmetric monoidal transformation between the induced lax symmetric monoidal functors from Gr m (C) to Gr m (D).
/(p -1), LMod ℓp ) is the subring of weight 0 elements. Lemma 4.10. Let k ≥ 0 be even. The m-graded E 2 -ring obtained from the graded E 2 -ring S[z k ] by left Kan extending through Z → Z/m admits an even cell decomposition.
Remark 7.22. As in Remark 7.20, if we assume that the trace map provides a lax symmetric monoidal transformation K(-) → TC(-) (see Remark 7.19), then the isomorphism in Theorem 1.5 improves to an isomorphism of F p [b]-modules. Now we prove Theorem 7.23, verifying the conjectural formula of Ausoni and Rognes [AR09, Section 3] that we stated in (1.6). For this, note that due to [LMMT20, Purity Theorem], T (2) ∧ K(ff (ku p )) and T (2) ∧ K(ff (ℓ p )) are given by the cofibers of the transfer maps T (2) ∧ K(ku/p) → T (2) ∧ K(ku p ) and T (2) ∧ K(ℓ/p) → T (2) ∧ K(ℓ p ) respectively, see [AR09, Diagrams 3.1 and 3.10].
Proposition A.1. Let m > 0 such that p = 1 in Z/m. Then the following diagram of lax symmetric monoidal functors commutes; in other words, the right hand side of Diagram (5.2) commutes.
5 and Corollary A.15]. (A.2) Alg E 1 (Gr m (Sp)) Leq Gr m (Sp BS 1 ), (-) tCp Alg E 1 (Sp) CycSp THH D ′′ THH As usual, the vertical arrows are induced by left Kan extension through Z/m 0. Here, we omit the functor L p (given by left Kan extension through •p : Z/m Z/m) since L p is the identity functor whenever p = 1 in Z/m. Construction A.3. Let Alg E 1 (Gr m (Sp)) Gr m (CycSp) be the composite of the upper horizontal arrow in Diagram (A.2) with the equivalence provided in the proposition below. This provides the left hand upper horizontal arrow in Diagram (5.2) and the commuting diagram in the following proposition, together with Diagram (A.2) ensures that the left hand square in Diagram (5.2) commutes. What remains is to prove the following proposition. Proposition A.4. Let m > 0 such that p = 1 in Z/m. There is an equivalence of symmetric monoidal ∞-categories: (A.5) Leq Gr m (Sp BS 1 ), (-) tCp ≃ -Gr m (CycSp) such that the following diagram commutes. Leq Gr m (Sp BS 1 ), (-) tCp Gr m (CycSp) CycSp ≃ D ′′ D Proof. We construct an equivalence in the opposite direction. Due to [NS18, Construction IV.2.1], giving a symmetric monoidal functor T : Gr m (CycSp) Leq Gr m (Sp BS 1 ), (-) tCp is equivalent to giving a symmetric monoidal functor F : Gr m (CycSp) Gr m (Sp BS 1 ) together with a lax symmetric monoidal transformation F (-) tCp • F . Applying [NS18, Construction IV.2.1] to the identity functor of CycSp, one obtains a symmetric monoidal functor:H : CycSp Sp BS 1 ,together with a lax symmetric monoidal transformation H (-) tCp • H. By [Nik16, Corollary 3.7], this provides the desired symmetric monoidal functor F above. Furthermore, the lax symmetric monoidal transformation H (-) tCp • H applied to the following lemma provides the desired lax symmetric monoidal transformation F (-) tCp •F . This natural transformation and F provides the symmetric monoidal functor T above. Since Fun(Z/m, -) commutes with limits, it commutes with the pullback square defining CycSp as a lax equalizer; therefore, T is an equivalence as desired. The functor claimed in (A.5) is now given by T -1 .For the second statement in the proposition, it is sufficient to show that the following diagram commutes.Gr m (CycSp)Leq Gr m (Sp BS 1 ), (-)
Proof.
We follow closely [Nik16, Section 3]. Since the ∞-category of lax symmetric monoidal functors is a full subcategory of the ∞-category of functors over NFin * , it is sufficient to show that η provides a map ∆ 1Map NFin * (Gr m (C) ⊗ , Gr m (D) ⊗ ) of simplicial sets where the vertices of ∆ 1 correspond to the lax symmetric monoidal functors induced by T and S. Using the universal property defining hom /NFin * (-, -) in [Nik16, Section 3], one obtains the second map below:∆ 1 Map NFin * (C ⊗ , D ⊗ ) Map NFin * (hom /NFin * (Z/m ⊗ , C ⊗ ), hom /NFin * (Z/m ⊗ , D ⊗ )),where the first map represents η. Using the definition of Gr m (-) as a full simplicial subset of hom /NFin * (Z/m ⊗ , -) and [Nik16, Corollary 3.7], we deduce that the 1simplex in the composite above lies in Map NFin * (Gr m (C) ⊗ , Gr m (D) ⊗ ) with vertices corresponding to the lax symmetric monoidal functors induced by T and S as desired.
For this, note that the classes a i , b i and u
are of δ-weight 1 and the classes λ 1 and µ 2 are of δ-weight 0 in V (1) * THH(ku p ).
It follows by [START_REF] Ausoni | Adjunction of roots, algebraic K-theory and chromatic redshift[END_REF]Proposition 8.2] that V (1) * THH(ku p ) i is precisely given by the classes of δ-weight i in V (1) * THH(ku p ). In other words, δ-weight and our weight gradings agree for V (1) * THH(ku p ). Proposition 6.7. For 0 < i < p -1, the image of the inclusion
is given by:
). Here, the maps ψ i are given by Proposition 6.4.
Proof. It follows from Theorem 6.6 that the image of the inclusion
is given by
we say V 0 = im ψ 0 . Also, let
. It follows by inspection on [RSS18, Theorem 8.5] that every F p -module generator of V i given above gets hit by an element of δ-weight i under the map
Since δ-weight i elements of V (1) * THH(ku p ) correspond to the F p -submodule
we deduce that V i ⊆ im ψ i for every i. Since i∈Z/p-1
and since all the vector spaces involved are finite dimensional at each homotopy degree, we deduce that V i = im ψ i as desired. Note that the second isomorphism above follows by Proposition 6.4.
Topological cyclic homology of complex K-theory
Let p > 3 for the rest of this section. Here, we compute T (2) * K(ku p ) and T (2) * K(ku/p).
Remark 7.1. Since V (1) is a finite spectrum, V (1) ∧ - commutes with all constructions involving colimits and limits. For instance, it commutes with homotopy fixed points and one has TC(V (1) ∧ E) ≃ V (1) ∧ TC(E) for every cyclotomic spectrum E.
Mayeul Arminjon
email: [email protected]
Interstellar radiation as a Maxwell field: improved numerical scheme and application to the spectral energy density
The existing models of the interstellar radiation field (ISRF) do not produce a Maxwell field. Here, the recent model of the ISRF as a Maxwell field is improved by considering separately the different frequencies at the stage of the fitting. Using this improved procedure: (i) It is checked in detail that the model does predict extremely high values of the spectral energy density (SED) on the axis of a galaxy, that however decrease very rapidly when ρ, the distance to the axis, is increased from zero. (ii) The difference between the SED values (with ρ = 1 kpc or 8 kpc), as predicted either by this model or by a recent radiation transfer model, is reduced significantly. (iii) The slower decrease of the SED with increasing altitude z, as compared with the radiation transfer model, is confirmed. We also calculate the evolutions of the SED at large ρ. We interpret these evolutions by determining asymptotic expansions of the SED at large z, and also ones at large ρ.
Introduction
The interstellar radiation field (ISRF) in a galaxy is an electromagnetic (EM) field in a very high vacuum, hence it should be a solution of the Maxwell equations. However, the existing models for the ISRF do not take into account the full EM field with its six components coupled through the Maxwell equations. Consider, for example, the model of Chi & Wolfendale [START_REF] Chi | The interstellar radiation field: a datum for cosmic ray physics[END_REF]. It assumes an axisymmetric distribution of the volume emissivities j i (λ, ρ, z) of four stellar components (i) (i = 1, ..., 4): j i decreases exponentially with both the distance ρ to the galactic axis and the altitude z over the galactic central disk. The contribution of component (i) to the energy density of the ISRF at some position (ρ , z ) and wavelength λ is obtained by integrating j i (λ, ρ, z)g/l 2 over the whole galactic volume. Here l is the distance between the studied position and the running point in the galactic volume; g describes the dust absorption and is obtained by integrating the visual extinction per unit path length over the linear path joining the studied position and the running point in the galactic volume. Other models, e.g. by Mathis, Mezger and Panagia [START_REF] Mathis | Interstellar radiation field and dust temperatures in the diffuse interstellar matter and in giant molecular clouds[END_REF], Gordon et al. [START_REF] Gordon | The DIRTY model. I. Monte Carlo radiative transfer through dust[END_REF], Robitaille [START_REF] Robitaille | HYPERION: an open-source parallelized three-dimensional dust continuum radiative transfer code[END_REF], Popescu et al. [START_REF] Popescu | A radiation transfer model for the Milky Way: I. Radiation fields and application to High Energy Astrophysics[END_REF], are based on similar principles: all of these models consider quantities such as the stellar emissivity and luminosity, and the dust opacity, and they evolve the light intensity emitted by the stars by taking into account (in addition to the distance) the radiative transfer, in particular by dust absorption/ reemission. Clearly, those models do not produce an EM field, hence even less one that would be a solution of the Maxwell equations.
In a recent work [START_REF] Arminjon | An analytical model for the Maxwell radiation field in an axially symmetric galaxy[END_REF], we proposed a model applicable to the relevant ideal case of an axisymmetric galaxy, and that provides for the ISRF such an exact solution of the Maxwell equations -a feature which, as discussed above, and to the best of our knowledge, appears to be fully new. This is indeed needed to study the relevance of a possible candidate for dark matter that emerges [START_REF] Arminjon | On the equations of electrodynamics in a flat or curved spacetime and a possible interaction energy[END_REF] from an alternative, scalar theory of gravity. However, it is also of astrophysical interest independently of the latter, since, as we noted, the ISRF must be an exact Maxwell field and this condition is not fulfilled by the existing models. As a step in checking the model proposed in Ref. [START_REF] Arminjon | An analytical model for the Maxwell radiation field in an axially symmetric galaxy[END_REF], its application to predict the variation of the spectral energy density (SED) in our Galaxy has been subjected to a first test [START_REF] Arminjon | Spectral energy density in an axisymmetric galaxy as predicted by an analytical model for the Maxwell field[END_REF]. To this purpose, the model has been adjusted by asking that the SED predicted for our local position in the Galaxy coincide with the SED determined from spatial missions by Henry, Anderson & Fastie [START_REF] Henry | Far-ultraviolet studies. vii. The spectrum and latitude dependence of the local interstellar radiation field[END_REF], Arendt et al. [START_REF] Arendt | The COBE Diffuse Infrared Background Experiment Search for the Cosmic Infrared Background. III. Separation of Galactic Emission from the Infrared Sky Brightness[END_REF], Finkbeiner et al. [START_REF] Finkbeiner | Extrapolation of Galactic Dust Emission at 100 Microns to Cosmic Microwave Background Radiation Frequencies Using FIRAS[END_REF], and Porter & Strong [START_REF] Porter | A new estimate of the galactic interstellar radiation field between 0.1µm and 1000µm[END_REF]. It has been found in that most recent work [START_REF] Arminjon | Spectral energy density in an axisymmetric galaxy as predicted by an analytical model for the Maxwell field[END_REF] that the spatial variation of the SED thus obtained with our model does not differ too much in magnitude from that predicted by the recent radiation transfer model of Ref. [START_REF] Popescu | A radiation transfer model for the Milky Way: I. Radiation fields and application to High Energy Astrophysics[END_REF], but that the SED predicted by our model: (i) is extremely high on the axis of the Galaxy -i.e., on the axis of the axial symmetry that is assumed for the model of the Galaxy; (ii) has rather marked oscillations as function of the wavelength; and (iii) seems to decrease more slowly when the altitude z increases (or rather when |z| increases), as compared with the radiation transfer model.
The aim of this paper is to present an improved numerical scheme to operate that "Maxwell model of the ISRF", and to apply this improved scheme to check the findings (i)-(iii) above. Section 2 provides a summary of the model. Section 3 describes the improvement of the numerical scheme. In Sect. [START_REF] Robitaille | HYPERION: an open-source parallelized three-dimensional dust continuum radiative transfer code[END_REF], we check whether the model really predicts extremely high values of the SED on the axis of the Galaxy. Section 5 studies the spatial variation of the SED and compares it with results of the literature. In Sect. [START_REF] Arminjon | An analytical model for the Maxwell radiation field in an axially symmetric galaxy[END_REF], asymptotic expansions are used to interpret the findings of the foregoing section. The Conclusion section 7 is followed by Appendix A, which discusses the relation between the discrete and continuous descriptions of the SED.
Short presentation of the model
This model has been presented in detail in Ref. [START_REF] Arminjon | An analytical model for the Maxwell radiation field in an axially symmetric galaxy[END_REF]. An axisymmetric galaxy is modelled as a finite set of point-like "stars", the azimuthal distribution of which is uniform. Those points x i (i = 1, ..., i max ) are obtained by pseudorandom generation of their cylindrical coordinates ρ, φ, z with specific probability laws, ensuring that the distribution of ρ and z is approximately that valid for the star distribution in the galaxy considered, and that the set {x i } is approximately invariant under azimuthal rotations of any angle φ [START_REF] Arminjon | An analytical model for the Maxwell radiation field in an axially symmetric galaxy[END_REF]. In the present work, as in Refs. [START_REF] Arminjon | An analytical model for the Maxwell radiation field in an axially symmetric galaxy[END_REF][START_REF] Arminjon | Spectral energy density in an axisymmetric galaxy as predicted by an analytical model for the Maxwell field[END_REF], 16 × 16 × 36 triplets (ρ, z, φ) were thus generated, so that i max = 9216, and the distribution of ρ and z is approximately that valid for the star distribution in the Milky Way.
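As a concrete illustration of this generation step, the following minimal Python sketch draws such a set of i_max = 16 × 16 × 36 = 9216 positions. The actual probability laws for ρ and z are those specified in the reference cited above and are not reproduced here; the exponential and Laplace laws and the scale lengths below are placeholder assumptions made only so that the sketch runs, not values taken from the paper.

import numpy as np

rng = np.random.default_rng(0)
N_rho, N_z, N_phi = 16, 16, 36            # 16 x 16 x 36 triplets, i_max = 9216
rho_scale_kpc, z_scale_kpc = 2.5, 0.3     # assumed disc scale lengths (placeholders)

# Pseudo-random cylindrical coordinates; the cited reference prescribes the actual laws for rho and z.
rho = rng.exponential(rho_scale_kpc, N_rho)      # radial distances of the point-like "stars"
z = rng.laplace(0.0, z_scale_kpc, N_z)           # altitudes, symmetric about the galactic plane
phi = rng.uniform(0.0, 2.0 * np.pi, N_phi)       # uniform azimuths -> approximate axial symmetry

# Cartesian positions x_i of the i_max "stars", built from all (rho, z, phi) combinations.
RHO, Z, PHI = np.meshgrid(rho, z, phi, indexing="ij")
x_i = np.stack([RHO * np.cos(PHI), RHO * np.sin(PHI), Z], axis=-1).reshape(-1, 3)
print(x_i.shape)                                  # (9216, 3)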
The ISRF is also assumed axisymmetric, and thus depends only on ρ and z. Since we want to describe, not the field inside the sources and in their vicinity, but instead the smoothed-out field at the intragalactic scale, we search for a solution of the source-free Maxwell equations. In the axisymmetric case, any time-harmonic source-free Maxwell field is the sum of two Maxwell fields: (i) one deriving from a vector potential having just the axial component A z non-zero, with A z obeying the standard wave equation, and (ii) one deduced from a solution of the form (i) by EM duality [START_REF] Arminjon | An explicit representation for the axisymmetric solutions of the free Maxwell equations[END_REF]. We consider for simplicity a model ISRF that has a finite frequency spectrum (ω j ) j=1,...,Nω , hence we may apply the foregoing result to each among its timeharmonic components (j), and then just sum these components. Moreover, we envisage the ISRF as being indeed an EM radiation field, thus excluding from consideration the purely magnetic part of the interstellar EM field [START_REF] Beck | Magnetic fields in the Milky Way and in galaxies[END_REF]. Hence the ISRF is made of "totally propagating" EM waves, i.e., ones without any "evanescent" component [START_REF] Arminjon | An analytical model for the Maxwell radiation field in an axially symmetric galaxy[END_REF][START_REF] Garay-Avendaño | Exact analytic solutions of Maxwell's equations describing propagating nonparaxial electromagnetic beams[END_REF]. Specifically, we assume that the two scalar potentials A j z and A j z that define the decomposition (i)-(ii) of each time-harmonic component (j), mentioned above, are themselves totally propagating. In that case, both A j z and A j z have the explicit form [START_REF] Zamboni-Rached | Structure of nondiffracting waves and some interesting applications[END_REF][START_REF] Garay-Avendaño | Exact analytic solutions of Maxwell's equations describing propagating nonparaxial electromagnetic beams[END_REF]:
ψ^{S_j}_{ω_j}(t, ρ, z) = e^{-iω_j t} ∫_{-K_j}^{+K_j} J_0(ρ √(K_j^2 - k^2)) e^{ikz} S_j(k) dk, (1)
with ω j the angular frequency, K j := ω j /c, J 0 the first-kind Bessel function of order 0, and where S j is some (generally complex) function of k ∈ [-K j , +K j ]. For a totally propagating, axisymmetric EM field, but otherwise fully general, the two potentials A j z and A ′j z may be different, i.e., may correspond with different "spectra" in Eq. (1), say S j and S ′ j [START_REF] Arminjon | An explicit representation for the axisymmetric solutions of the free Maxwell equations[END_REF].
To determine these potentials, that is, to determine the spectrum functions S j , we use a sum of potentials emitted by the "stars". We assume that every "star", each at some point x i , contributes to the global potential A j z of a given frequency ω j (j = 1, ..., N ω ) by a spherically symmetric scalar wave of the same frequency ω j , whose emission center is its spatial position x i -in order that all the directions starting from the star be equivalent. Thus, consider time-harmonic spherically symmetric solutions of the wave equation that have a given angular frequency ω. It is easy to check by direct calculation that they can be either an outgoing wave, an ingoing wave, or the sum of an ingoing wave and an outgoing one, and that, up to an amplitude factor, the following is the only outgoing wave:
ψ_ω(t, x) = e^{i(Kr - ωt)}/(Kr), K := ω/c, r := |x|. (2)
Clearly, only that outgoing solution is relevant here, given that the point-like "stars" must be indeed sources of radiation. 1 Thus, the contributions of the i-th star to the potentials A j z and A j z can differ only in amplitude, since both must be a multiple of
ψ^{x_i}_{ω_j}(t, x) := ψ_{ω_j}(t, x - x_i) = e^{i(K_j r_i - ω_j t)}/(K_j r_i), (3)
where
K_j := ω_j/c, r_i := |x - x_i|.
But there is no apparent physical reason to assign different amplitudes to the contribution of the i-th star to A j z and to A ′j z , hence we assume both of them to be equal to ψ x i ω j . To determine the global potentials A j z and A ′j z (j = 1, ..., N ω ), that generate the axisymmetric model ISRF with a finite frequency spectrum (ω j ), the sum of the spherical potentials (3) emanating from the point stars is fitted to the form (1). As noted in Ref. [START_REF] Arminjon | An analytical model for the Maxwell radiation field in an axially symmetric galaxy[END_REF], this is not assuming that the ISRF is indeed the sum of the radiation fields emitted by the different stars (which is not correct, due to the radiation transfers) -because (a) the equalities (4), (20) or (21) below are not exact equalities but ones in the sense of the least squares, and (b) nothing is really assumed regarding the EM field of the "star" itself, in particular we actually do not need to assume that it has the form (i)-(ii) above (e.g. the one corresponding with two equal potentials A i j z = A ′i j z = ψ x i ω j ).
In the previous works [START_REF] Arminjon | An analytical model for the Maxwell radiation field in an axially symmetric galaxy[END_REF][START_REF] Arminjon | Spectral energy density in an axisymmetric galaxy as predicted by an analytical model for the Maxwell field[END_REF], this fitting was done for all frequencies at once. That is, the following least-squares problem was considered:
Σ_{j=1}^{N_ω} Σ_{i=1}^{i_max} w_j ψ^{x_i}_{ω_j} ≅ Σ_{j=1}^{N_ω} ψ^{S_j}_{ω_j} on G, (4)
where the sign ∼ = indicates that the equality is in the sense of the least squares (the arguments of the functions varying on some spatio-temporal grid G), and where the numbers w j > 0 are the weights affected to the different frequencies. In view of the axial symmetry, the spatial position x is restricted to the plane φ = 0, so x = x(ρ, z) and
G = {(t_l, ρ_m, z_p), 1 ≤ l ≤ N_t, 1 ≤ m ≤ N_ρ, 1 ≤ p ≤ N_z}. (5)
spherically symmetric solution of the wave equation that satisfies the Sommerfeld radiation condition [START_REF] Sommerfeld | Die Greensche Funktion der Schwingungsgleichung[END_REF]. However, the Sommerfeld condition aims precisely at selecting a boundary condition in order to find only "physical", i.e., outgoing solutions for the Helmholtz equation. The latter equation applies to general time-harmonic solutions of the wave equation.
In the spherically symmetric case, the time-harmonic solutions are easy to find and the outgoing solutions are immediate to recognize.
Since the contributions of the i-th star to A j z and to A ′j z have both been assumed to be equal to ψ x i ω j , there is no possibility to distinguish between A j z and A ′j z , either -whence ψ^{S_j}_{ω_j} = A j z = A ′j z on the r.h.s. of (4). The unknowns of the problem are the spectrum functions S j , j = 1, ..., N ω . We determine S j by the (generally complex) values
S nj := S j (k nj ) (n = 0, ..., N ), (6)
where
k nj = -K j + nδ j (n = 0, ..., N ), (7)
with δ j := 2K j /N , is a regular discretization of the interval [-K j , +K j ] for k in the integral (1). Calculating those integrals with the "Simpson 3/8" rule, the least-squares problem (4) becomes:
Σ_{j=1}^{N_ω} Σ_{i=1}^{i_max} w_j ψ^{x_i}_{ω_j} ≅ Σ_{j=1}^{N_ω} Σ_{n=0}^{N} f_{nj} S_{nj} on G, (8)
with
f_{nj}(t, ρ, z) = a_{nj} J_0(ρ √(K_j^2 - k_{nj}^2)) exp[i(k_{nj} z - ω_j t)]. (9)
The S nj 's are the solved-for parameters in the least-squares problem (8). In Eq. (8), N must be a multiple of 3, and in Eq. (9) we have
a nj = (3/8) δ j (n = 0 or n = N ), (10)
a nj = 2 × (3/8) δ j (mod(n, 3) = 0 and n ≠ 0 and n ≠ N ), (11)
a nj = 3 × (3/8) δ j otherwise. (12)
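Since the Simpson-3/8 weights (10)-(12) fully determine the discretization, they are easy to check numerically. The short Python sketch below builds the weights a_nj for given K_j and N (a multiple of 3) and verifies them on a smooth test integrand; it is only an illustrative check, not part of the actual computation chain of the paper.

import numpy as np

def simpson38_weights(K_j, N):
    # Weights a_nj of Eqs. (10)-(12) for the nodes k_nj = -K_j + n*delta_j, n = 0..N.
    if N % 3 != 0:
        raise ValueError("N must be a multiple of 3")
    delta_j = 2.0 * K_j / N
    a = np.full(N + 1, 3.0 * (3.0 / 8.0) * delta_j)              # generic node
    idx = np.arange(N + 1)
    a[(idx % 3 == 0) & (idx != 0) & (idx != N)] = 2.0 * (3.0 / 8.0) * delta_j
    a[[0, N]] = (3.0 / 8.0) * delta_j                             # end points
    return a

# Check on a test integrand (not a quantity of the paper): integral of cos(k) over [-K, K].
K, N = 1.0, 99
k = -K + np.arange(N + 1) * (2.0 * K / N)
print(np.sum(simpson38_weights(K, N) * np.cos(k)), 2.0 * np.sin(K))   # both ~ 1.68294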
Part (i) of the decomposition of the model ISRF then obtains as follows [START_REF] Arminjon | An analytical model for the Maxwell radiation field in an axially symmetric galaxy[END_REF]:
E φ = B ρ = B z = 0, (13)
B_φ(t, ρ, z) = Σ_{n=0}^{N} Σ_{j=1}^{N_ω} R_n J_1(ρ (ω_j/ω_0) R_n) Re[F_{nj}(t, z)] + O(1/N^4), (14)
E_ρ(t, ρ, z) = Σ_{n=0}^{N} Σ_{j=1}^{N_ω} (c^2/ω_0) k_n R_n J_1(ρ (ω_j/ω_0) R_n) Re[F_{nj}(t, z)] + O(1/N^4), (15)
E_z(t, ρ, z) = Σ_{n=0}^{N} Σ_{j=1}^{N_ω} [(c^2/ω_0) k_n^2 - ω_0] J_0(ρ (ω_j/ω_0) R_n) Im[F_{nj}(t, z)] + O(1/N^4), (16)
with R_n = √(K_0^2 - k_n^2) and F_{nj}(t, z) = (ω_j/ω_0)^2 a_n exp[i((ω_j/ω_0) k_n z - ω_j t)] S_{nj}. (17)
(Here k n and a n (0 ≤ n ≤ N ) are as k nj and a nj in Eqs. (7) and (10), replacing K j by K 0 = ω 0 /c, with ω 0 some (arbitrary) reference frequency.) Since we assume A j z = A ′j z for the global potentials generating the model ISRF, part (ii) of its decomposition is deduced from the first part by the EM duality:
E ′ = cB, B ′ = -E/c. (18)
It follows from this and from (13) that the model ISRF, sum of these two parts, has the components ( 14)-( 16), and that the other components are just
E φ = cB φ , B ρ = -E ρ /c, B z = -E z /c. (19)
3 Frequency-by-frequency fitting of the potentials
Equation (4) may be split into the different frequencies (marked by the index j), simply by removing the sum on j from both sides of either equation. The same is true for Eq. (8). Naturally, also the weight w j may then be removed from the l.h.s., by entering the inverse 1/w j into the unknown spectrum function S j on the r.h.s. Equation (8) thus becomes
Σ_{i=1}^{i_max} ψ^{x_i}_{ω_j} ≅ Σ_{n=0}^{N} f_{nj} S_{nj} on G (j = 1, ..., N_ω). (20)
At this point, one notes that both ψ x i ω j [Eq. (3)] and f nj [Eq. (9)] have the same dependence on time, exp(-iω j t), which we can hence remove also, to obtain a least-squares problem with merely the spatial variables ρ and z:
Σ_{i=1}^{i_max} e^{iK_j r_i}/(K_j r_i) ≅ Σ_{n=0}^{N} g_{nj} S_{nj} on G ′ (j = 1, ..., N_ω), (21)
where
G ′ = {(ρ_m, z_p), 1 ≤ m ≤ N_ρ, 1 ≤ p ≤ N_z}
is the spatial grid, and
g_{nj}(ρ, z) = a_{nj} J_0(ρ √(K_j^2 - k_{nj}^2)) exp(i k_{nj} z). (22)
The separation, into the different frequencies, of the fitting of the sum of the potentials emitted by the "stars", is consistent with the linearity of the wave equation and the Maxwell equations. Moreover, the elimination of the time variable from the fitting represents an appreciable gain in computing time.
We recall that, for the EM field in a galaxy, the arguments of the Bessel function J 0 and the angular exponential, e.g. in Eq. (22), have the huge magnitude |x| /λ ∼ 10^25, which forces us to use a precision better than quadruple precision in the computer programs, thus leading to slow calculations [START_REF] Arminjon | An analytical model for the Maxwell radiation field in an axially symmetric galaxy[END_REF]. Note that the "separate fitting", i.e. the least-squares problem (21), is not exactly equivalent to the "grouped fitting", i.e. the least-squares problem (8) (this will be confirmed by the numerical results below): the two are slightly different ways of adjusting the global potentials (1). 2 However, equations (13)-(19) apply with the separate fitting as well -although the relevant values S nj are different. The separate fitting is more appropriate, because solutions corresponding with different frequencies behave independently in the Maxwell equations, and each frequency can be treated with more precision by considering it alone. Indeed a very important point is that, by switching to the separate fitting, we improve the situation regarding the "overfitting", i.e., we decrease the ratio R of the number of parameters N para to the number of data N data : now, for each value of the frequency index j, we have to solve the least-squares problem (21), with N para = N + 1 unknown parameters and N data = N ρ × N z data (the "data" are the values of the l.h.s. of (21) on the spatial grid G ′). Whereas, with the formerly used grouped fitting, we had to solve just one least-squares problem (8) with
N para = (N + 1) × N ω unknown parameters and N data = N t × N ρ × N z data.
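To make the procedure concrete, here is a minimal sketch of how the per-frequency least-squares problem (21) could be assembled and solved with NumPy. It is not the authors' code: the quadrature weights a_nj, the spatial grid and the set of point-like "stars" are all placeholder choices.

# Illustrative sketch of the "separate fitting" (Eq. 21) for one frequency omega_j.
# Everything below (grid, stars, weights) is a placeholder, not the authors' implementation.
import numpy as np
from scipy.special import j0

c = 1.0                                  # speed of light, arbitrary units
omega_j = 2.0 * np.pi                    # one angular frequency of the finite set
K_j = omega_j / c
N = 12                                   # discretization number for the wavenumber k
k_nj = np.linspace(-K_j, K_j, N + 1)     # discretized wavenumbers
a_nj = np.ones(N + 1)                    # quadrature weights (placeholder)

# Spatial grid G = {(rho_m, z_p)} and point-like "stars" (rho_i, z_i, phi_i)
rho = np.linspace(0.1, 10.0, 10)
z = np.linspace(-1.0, 1.0, 21)
R, Z = np.meshgrid(rho, z, indexing="ij")
rng = np.random.default_rng(0)
stars = rng.uniform([0.1, -1.0, 0.0], [10.0, 1.0, 2 * np.pi], size=(100, 3))

def lhs_on_grid(R, Z, stars, K):
    # l.h.s. of (21): sum over the stars of exp(i K r_i) / (K r_i) at each grid node (plane phi = 0)
    out = np.zeros(R.shape, dtype=complex)
    for rho_i, z_i, phi_i in stars:
        r = np.sqrt(R**2 + rho_i**2 - 2 * R * rho_i * np.cos(phi_i) + (Z - z_i)**2)
        out += np.exp(1j * K * r) / (K * r)
    return out

# r.h.s. basis g_nj(rho, z) = a_nj J0(rho sqrt(K_j^2 - k_nj^2)) exp(i k_nj z), Eq. (22)
A = np.stack([a * j0(R * np.sqrt(max(K_j**2 - k**2, 0.0))) * np.exp(1j * k * Z)
              for a, k in zip(a_nj, k_nj)], axis=-1).reshape(-1, N + 1)
b = lhs_on_grid(R, Z, stars, K_j).ravel()

S_nj, *_ = np.linalg.lstsq(A, b, rcond=None)     # N_para = N + 1, N_data = N_rho * N_z
print("overfitting ratio R =", (N + 1) / b.size)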
On the other hand, through the processes of radiative transfer, there are indeed transfers of radiation intensity from some frequency domains to other ones; e.g. the interaction with dust leads to a transfer from higher to lower frequencies (see e.g. Fig. 3 in Ref. [START_REF] Gordon | The DIRTY model. I. Monte Carlo radiative transfer through dust[END_REF]). But these processes are not directly taken into account by the present model: neither with the grouped fitting nor with the separate fitting. They are indirectly taken into account through the adjustment of the energy density [START_REF] Arminjon | Spectral energy density in an axisymmetric galaxy as predicted by an analytical model for the Maxwell field[END_REF], which we briefly recall now.
The time-averaged volumetric energy density of an EM field having a finite set of frequencies, (ω_j)_{j=1,...,N_ω}, is given by [START_REF] Arminjon | Spectral energy density in an axisymmetric galaxy as predicted by an analytical model for the Maxwell field[END_REF]
$$U(\mathbf{x}) := \frac{\delta W}{\delta V}(\mathbf{x}) = \sum_{j=1}^{N_\omega} u_j(\mathbf{x}), \qquad u_j(\mathbf{x}) := \frac{1}{4} \sum_{q=1}^{6} \alpha_q \left| C^{(q)}_j(\mathbf{x}) \right|^2, \qquad (23)$$
where the complex numbers C^{(q)}_j(x) (q = 1, ..., 6) are the coefficients in the expansion, in time-harmonic functions, of each among the six components of the EM field:
$$F^{(q)}(t, \mathbf{x}) = \mathrm{Re}\left[ \sum_{j=1}^{N_\omega} C^{(q)}_j(\mathbf{x})\, e^{-i\omega_j t} \right] \qquad (q = 1, ..., 6); \qquad (24)$$
and where α_q = ε_0 for an electric field component, whereas α_q = ε_0 c² for a magnetic field component (here ε_0 is the vacuum permittivity, with ε_0 = 1/(4π × 9 × 10^9) in SI units). For an axisymmetric EM field, it is enough to consider the plane φ = 0, thus x = x(ρ, z), and we have
$$C^{(q)}_j = C^{(q)}_j(\rho, z), \qquad u_j = u_j(\rho, z). \qquad (25)$$
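As a minimal numerical illustration of Eq. (23), the sketch below evaluates u_j from six expansion coefficients at one point; the coefficient values and the assumed ordering (three electric components followed by three magnetic ones) are placeholders, not results of the model.

# Hedged sketch of the SED formula (23) for one frequency j and one point (rho, z).
import numpy as np

eps0 = 1.0 / (4 * np.pi * 9e9)    # vacuum permittivity in SI units, as in the text
c = 3e8                            # speed of light (m/s)

# Expansion coefficients C_j^(q), q = 1..6; placeholder values,
# assumed ordered as (E_rho, E_phi, E_z, B_rho, B_phi, B_z)
C = np.array([1.0 + 2.0j, 0.5j, -0.3, 2e-9, 1e-9 + 1e-9j, 0.0])
alpha = np.array([eps0, eps0, eps0, eps0 * c**2, eps0 * c**2, eps0 * c**2])

u_j = 0.25 * np.sum(alpha * np.abs(C)**2)   # one term u_j(x) of the sum (23)
print(u_j, "J/m^3")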
Using in that case the decomposition (i)-(ii), the expressions of three among the C^{(q)}_j coefficients follow directly from the expressions (14)-(16) of the corresponding components of the EM field [START_REF] Arminjon | Spectral energy density in an axisymmetric galaxy as predicted by an analytical model for the Maxwell field[END_REF]. Moreover, in the special subcase (18) considered here, the other components are given by (19), whence in the same way the three remaining C^{(q)}_j coefficients. Now note that, in the least-squares problem (21), which we use to determine the values S_nj allowing us to compute the EM field (14)-(16) and (19), no data relative to the intensity of the fields emitted by the point-like "stars" has been used until now. Hence, we may multiply the l.h.s. of (21) by some number ξ_j > 0, thus obtaining new values S′_nj = ξ_j S_nj (n = 0, ..., N) as the solution of (21).³ Therefore, to adjust the model, we determine the numbers ξ_j (j = 1, ..., N_ω) so that the values u_j(x_loc) of the SED for our local position x_loc in the Galaxy and for the frequencies ω_j, as given by Eq. (23), coincide with the measured values, as determined from space missions. We take the measured local values f_{x_loc}(λ_j) as plotted in Ref. [START_REF] Porter | A new estimate of the galactic interstellar radiation field between 0.1µm and 1000µm[END_REF] (see Appendix A), and we take ρ_loc = 8 kpc and z_loc = 0.02 kpc, see e.g. Ref. [START_REF] Majaess | Characteristics of the Galaxy according to Cepheids[END_REF]. The model thus adjusted then allows us to make predictions: in particular, predictions of the spatial variation of the SED in the Galaxy. Such predictions may then be compared with predictions of the mainstream models of the ISRF, which are very different from the present model.
Results: maximum energy density
In the foregoing work [START_REF] Arminjon | Spectral energy density in an axisymmetric galaxy as predicted by an analytical model for the Maxwell field[END_REF], the same adjustment just described was used in the framework of the "grouped fitting" (i.e. the least-squares problem (8)). A surprising result was that found for the values of the maximum of the energy density u_j(x) in the Galaxy - thus, owing to the axial symmetry (25), for the values of
$$u_{j\,\max} = \mathrm{Max}\{u_j(\rho_m , z_p);\ m = 1, ..., N_\rho ,\ p = 1, ..., N_z \}, \qquad (26)$$
found for the different spatial grids investigated, all having ρ varying regularly from ρ_0 = 0 to ρ_max ≈ 10 kpc and z varying regularly from z_0 = 0 or z_0 = -z_max to z_max ≤ 1 kpc.⁴ These maximum values, which are always found at ρ = 0, thus on the axis of symmetry, are extremely high for lower wavelengths λ_j, with u_{j max} ∼ 10^27 eV/cm^3. Moreover, the value of u_j(ρ = 0, z) depends little on z in the domain investigated. These surprisingly high values occur in a larger or smaller range of wavelengths, depending on the settings of the calculation. Therefore, the question arises whether these extremely high values are a physical effect or a numerical artefact. However, the dependence on the settings is governed by the "amount of overfitting": less overfitting increases the range of the high values [START_REF] Arminjon | Spectral energy density in an axisymmetric galaxy as predicted by an analytical model for the Maxwell field[END_REF]. This makes it plausible that the high values might be a true prediction of the model. We will now try to check whether this is indeed the case.

³ We might even affect different weights ξ_ij to the radiations of frequency ω_j emitted by the different stars (i), in order to account for different luminosities. However, given that our aim is to determine the spectra S_j, each of which characterizes according to Eq. (1) the axisymmetric radiation of frequency ω_j in the galaxy, we feel that this would not likely change the results very significantly.
⁴ Precisely: ρ_0 := ρ_{m=1}; ρ_max := ρ_{m=N_ρ} with, in this subsection, ρ_max = 10 kpc × (N_ρ - 1)/N_ρ; z_0 := z_{p=1} and z_max := z_{p=N_z} with, in this paper, z_max = 1 kpc and z_0 = -z_max.
Robustness of the high values on the axis
In the present work, based on the separate fitting (which, we argued, is more appropriate), we investigated rather systematically the question just asked. Since the influence of the spatial grid was found to be weak in the foregoing work [START_REF] Arminjon | Spectral energy density in an axisymmetric galaxy as predicted by an analytical model for the Maxwell field[END_REF], only two grids were tried: an (N_ρ = 10, ρ_0 = 0) × (N_z = 21, z_0 = -1 kpc, z_max = 1 kpc) grid (hereafter "rough grid"), and an (N_ρ = 20, ρ_0 = 0) × (N_z = 23, z_0 = -1 kpc, z_max = 1 kpc) grid (hereafter "fine grid"). However, we investigated in some detail the influence of the fineness of the frequency mesh (N_ω) and the influence of the discretization number N. [That integer N is used to compute the integrals over the wavenumber k, e.g. the integral (1) approximated by Σ_{n=0}^{N} f_nj S_nj, see Eq. (8).] The effect of choosing N_ω = 23, N_ω = 46, or N_ω = 76 was studied simultaneously with the effect of choosing N = 12, 24, 48, 96, 192, or 384, and this was done for the two different grids.
Figures 1 to 4 show these effects. The most salient result is that the extremely high values of u_{j max} are now found with all calculations and in the whole domain of λ - except that, on some curves, abrupt oscillations toward lower values of the energy density are present for some wavelengths. By looking at the set of these figures, it is manifest that such abrupt oscillations occur when an inappropriate choice of parameters is made: essentially, the discretization number N has to be large enough. (This is certainly expected, and this expectation is confirmed by the validation test in Ref. [START_REF] Arminjon | An analytical model for the Maxwell radiation field in an axially symmetric galaxy[END_REF].) Indeed, for a given value of N_ω, those oscillations are largest for the lowest value of N in the trial (N = 12) and progressively disappear when N is increased. What is a "large enough" value of N does not depend strongly on the fineness of the spatial grid (i.e., on whether the "rough" one or the "fine" one is used) or on that of the frequency mesh (N_ω). However, when using the finest frequency mesh (N_ω = 76) for the "rough" spatial grid (Fig. 3), increasing N does not allow us to eliminate the abrupt oscillations toward lower values: it even happens then that increasing N from 192 to 384 actually deteriorates the u_{j max} = f(λ_j) curve. We interpret this as due to the fact that, when using a rougher spatial grid G for the fitting, fewer data are provided (the values taken on the grid G by the l.h.s. of Eq. (21)) to determine the unknowns S_nj on the r.h.s. of (21) - while, of course, increasing N increases the number of unknowns and thus asks for more data. On the other hand, it is seen that (for the relevant values of N, say N = 192 or N = 384, so that little or no oscillation is present) the levels of u_{j max} depend quite little on N_ω, i.e. on the fineness of the frequency mesh: compare the bottom figures between Figs. 1, 2, and 3, and compare the three figures in Fig. 4. Also, the levels of u_{j max} depend quite little on whether the rough or the fine spatial grid is being used (see e.g. Fig. 5). We also checked that the results depend little on the pseudo-random "draw" of the set of point-like "stars": another draw of 16 × 16 × 36 triplets (ρ, z, φ) gives very similar curves u_{j max} = f(λ_j) (Fig. 6). In summary, we now find that, for the relevant values of N, say N = 192 or N = 384, u_{j max} decreases smoothly from 10^27 to 10^21 eV/cm^3 when λ_j varies in the domain considered, i.e., from λ ≈ 0.11 µm to 830 µm. We note moreover that, for the low values of λ_j, the values of u_{j max} calculated using the present "separate fitting" have the same (extremely high) magnitude as those calculated with the former "grouped fitting" [START_REF] Arminjon | Spectral energy density in an axisymmetric galaxy as predicted by an analytical model for the Maxwell field[END_REF]. These observations lead us to conclude that: (i) the extremely high values of u_{j max} (in the whole domain of λ considered) are really what the "Maxwell model of the ISRF" predicts for this model of the Galaxy. (ii) Somewhat surprisingly, it is the low values of u_{j max} obtained for the higher values of λ when the "grouped fitting" was used [START_REF] Arminjon | Spectral energy density in an axisymmetric galaxy as predicted by an analytical model for the Maxwell field[END_REF] that were a numerical artefact.
Decrease of the energy density away from the axis
Recall that the maxima of the u_j's, which are extremely high, are always obtained for ρ = 0, i.e. on the axis of the (model of the) Galaxy, and that the energy density for ρ = 0 depends little on z in the domain investigated. The next questions are therefore: what is the extension around ρ = 0 of the domain of the very high values? Do such values imply that "too much EM energy" would be contained there? To answer these questions, we calculated the SED with successively lower and lower values of ρ_max (see Note 4), starting from its value for the calculations shown in Figs. 1 to 3, i.e., ρ_max = 9 kpc, and decreasing it to 1, 10^-1, ..., 10^-14 kpc, using the "rough grid" parameters (see above), i.e., in particular, N_ρ = 10, and using the S_nj's obtained with this rough grid with ρ_max = 9 kpc - so that, for ρ_max ≠ 9 kpc, those calculations are not done on the fitting grid. We looked in detail at the values u_j(ρ_{m=2}, z_{p=1}) = u_j(ρ_max/9, z = 0). The main results are shown in Fig. 7: even for very small values of ρ ≠ 0, the values of u_j are much smaller than u_{j max}. That is, u_j(ρ, z) decreases very rapidly when ρ is increased from 0. Actually, we found on the example of the smallest wavelength, corresponding with j = 1, that, from ρ = 1 kpc down to ρ = 10^-15 kpc, we have to a good approximation u_1(ρ, z = 0) ≈ B/ρ, with
$$B = u_1(\rho = 1\ \mathrm{kpc}, z = 0) \simeq 10^{-0.45}\ (\mathrm{eV/cm^3})\,\mathrm{kpc}. \qquad (27)$$
This behaviour is not valid down to ρ = 0, because for ρ → 0, u_1(ρ, z = 0) tends towards u_1(0, 0) < ∞, so we may assume u_1(ρ, 0) ≲ B/ρ. On the other hand, Fig. 7 shows that there is nothing special to j = 1: we have u_j ≲ u_1; moreover, for ρ ≳ 10^-15 kpc, u_j depends only moderately on λ_j. We observed in our calculations that, for ρ ≤ 1 kpc, u_j(ρ, z) depends quite little on z with |z| ≤ z_max = 1 kpc. Thus we may give the following approximation (which is likely an overestimate) to u_j: for all j, and for |z| ≤ z_max = 1 kpc, we have
$$u_j(\rho, z) \lesssim B/\rho \quad \text{for } 10^{-15}\ \mathrm{kpc} \le \rho \le 1\ \mathrm{kpc}, \qquad (28)$$
$$u_j(\rho, z) \simeq u_j(\rho) \lesssim B/\rho \quad \text{for } \rho \le 10^{-15}\ \mathrm{kpc}, \qquad (29)$$
with u_j(ρ) a decreasing function of ρ. According to Eq. (51) of the Appendix, this implies that, for |z| ≤ z_max = 1 kpc, we have also
$$f_{\mathbf{x}(\rho,z)}(\lambda) \lesssim B/\rho \quad \text{for } 10^{-15}\ \mathrm{kpc} \le \rho \le 1\ \mathrm{kpc}, \qquad (30)$$
$$f_{\mathbf{x}(\rho,z)}(\lambda) \lesssim B/\rho \quad \text{for } 0 \le \rho \le 10^{-15}\ \mathrm{kpc}, \qquad (31)$$
independently of λ in the band
$$\lambda^{(1)} := 0.1\ \mu\mathrm{m} \le \lambda \le \lambda^{(2)} := 830\ \mu\mathrm{m}. \qquad (32)$$
With this approximation, we can assess the total EM energy (53) contained in some disk
$$D(\rho_1) :\ (0 \le \rho \le \rho_1 ,\ 0 \le \phi \le 2\pi ,\ |z| \le z_{\max}), \qquad (33)$$
with ρ_1 ≤ 1 kpc, and in the wavelength band [λ^{(1)}, λ^{(2)}]. This energy is bounded, owing to (30)-(31), by
$$W_{1\text{-}2,\,D(\rho_1)} \lesssim \mathrm{Log}\frac{\lambda^{(2)}}{\lambda^{(1)}} \times \int_{D(\rho_1)} \frac{B}{\rho(\mathbf{x})}\, d^3\mathbf{x} = \mathrm{Log}\frac{\lambda^{(2)}}{\lambda^{(1)}} \times \int_{D(\rho_1)} \frac{B}{\rho}\, \rho\, d\rho\, d\phi\, dz, \qquad (34)$$
i.e.
$$W_{1\text{-}2,\,D(\rho_1)} \lesssim \mathrm{Log}\frac{\lambda^{(2)}}{\lambda^{(1)}} \times B \rho_1 \times 2\pi \times 2 z_{\max}. \qquad (35)$$
But consider, instead of the disk D(ρ_1), the ring R(ρ_0, ρ_1) : (ρ_0 ≤ ρ ≤ ρ_1, 0 ≤ φ ≤ 2π, |z| ≤ z_max), with ρ_0 ≥ 10^-15 kpc. (Thus a ring with a very narrow aperture.) Using this time only (30), the same calculation gives
$$W_{1\text{-}2,\,R(\rho_0,\rho_1)} \lesssim \mathrm{Log}\frac{\lambda^{(2)}}{\lambda^{(1)}} \times B (\rho_1 - \rho_0) \times 2\pi \times 2 z_{\max}. \qquad (36)$$
Taking ρ_0 = 10^-15 kpc, the conjunction of (35) and (36) shows that the contribution of the domain with 0 ≤ ρ ≤ ρ_0 is totally negligible, hence we may write
$$W_{1\text{-}2,\,D(\rho_1)} \lesssim \mathrm{Log}\frac{\lambda^{(2)}}{\lambda^{(1)}} \times B \rho_1 \times 2\pi \times 2 z_{\max}. \qquad (37)$$
We can calculate the contribution δU that it gives to the average density of the EM energy in some disk D(ρ_2) of the Galaxy, with ρ_2 ≥ ρ_1, making a volume V_2 = π ρ_2² z_max:
$$\delta U := \frac{W_{1\text{-}2,\,D(\rho_1)}}{V_2} \lesssim 4\, \mathrm{Log}\frac{\lambda^{(2)}}{\lambda^{(1)}} \times \frac{B \rho_1}{\rho_2^2}. \qquad (38)$$
(Note that we may leave B in (eV/cm^3) kpc and ρ_1 and ρ_2 in kpc.) To give figures, let us first take ρ_1 = ρ_2 = 1 kpc, so that the corresponding value of δU is just the average volumetric energy density U_{D(1 kpc)} in the disk D(ρ_1 = 1 kpc). Then (38) with (27) give us
$$U_{D(1\ \mathrm{kpc})} \lesssim 51\ \mathrm{eV/cm^3}. \qquad (39)$$
Note that this value is not very high. Another interesting application of Eq. (38) is to assess the effect, on that average value in the same domain D(1 kpc), of the domain of the "very high" values of the SED, say the domain for which u_j ≥ 10^6 eV/cm^3 - i.e., from (27) and (28), ρ ≤ ρ_vh, with
$$\rho_{\mathrm{vh}} = 10^{-6.45}\ \mathrm{kpc} \simeq 3.55 \times 10^{-7}\ \mathrm{kpc} \simeq 1.1 \times 10^{10}\ \mathrm{km}, \qquad (40)$$
which is almost twice the average Sun-Pluto distance, but still very small on a galactic scale. Taking to this effect ρ_1 = ρ_vh and ρ_2 = 1 kpc in Eq. (38), the numerical values (27), (32) and (40) give us δU ≲ 4.54 × 10^-6 eV/cm^3. In summary, the "very high" values of the SED are confined to the close neighborhood of the Galaxy's axis and contribute negligibly to the average energy density (39) in the disk D(1 kpc).
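These order-of-magnitude figures are easy to reproduce; the short check below assumes that "Log" denotes the natural logarithm and reuses the values quoted in (27), (32) and (40).

# Back-of-the-envelope check of Eqs. (38)-(40); assumes Log = natural logarithm.
import math

B = 10 ** -0.45           # (eV/cm^3).kpc, Eq. (27)
lam1, lam2 = 0.1, 830.0   # micrometres, band (32)
rho2 = 1.0                # kpc, radius of the disk D(rho_2)

rho_vh = B / 1e6          # radius where u_j ~ B/rho reaches 10^6 eV/cm^3, Eq. (40)
kpc_in_km = 3.086e16
print("rho_vh =", rho_vh, "kpc =", rho_vh * kpc_in_km, "km")

delta_U = 4 * math.log(lam2 / lam1) * B * rho_vh / rho2**2   # Eq. (38) with rho_1 = rho_vh
print("delta_U =", delta_U, "eV/cm^3")    # ~ 4.5e-6 eV/cm^3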
Results: spatial variation of the SED & comparison with the literature
This model's prediction for the spatial variation of the SED in the Galaxy was investigated, using again the separate fitting and the adjustment of the local SED on the measured values (both described in Sect. 3). It is presented using two different types of representation.
First, we plotted the SED at four different points in the Galaxy, and we compared the results with those obtained by Popescu et al. [START_REF] Popescu | A radiation transfer model for the Milky Way: I. Radiation fields and application to High Energy Astrophysics[END_REF], who used a radiation transfer model built by them. (Their model also assumes axisymmetry.) Figures 8 to 11 show this comparison, our model being used here with N = 192 and N_ω = 76. (Other choices of parameters that we tried gave similar figures.) It can be seen that the predictions of the present model do not differ very significantly from those of the radiation transfer model of Ref. [START_REF] Popescu | A radiation transfer model for the Milky Way: I. Radiation fields and application to High Energy Astrophysics[END_REF]. The main difference is that our calculations oscillate somewhat strongly with the wavelength. The comparison of Figs. 8-11 here with Figs. 2-5 in Ref. [START_REF] Arminjon | Spectral energy density in an axisymmetric galaxy as predicted by an analytical model for the Maxwell field[END_REF] shows that the difference between the results of the two models is significantly smaller now than it was in our previous work, in which the calculations were based on the grouped fitting [START_REF] Arminjon | Spectral energy density in an axisymmetric galaxy as predicted by an analytical model for the Maxwell field[END_REF]: the difference in log_10(u_j) between the results of our model and Ref. [START_REF] Popescu | A radiation transfer model for the Milky Way: I. Radiation fields and application to High Energy Astrophysics[END_REF] is here ≲ 1, whereas it went beyond 3 and even 4 in the previous calculations. However, the new calculations oscillate with the wavelength also at higher wavelengths, whereas, when the grouped fitting was used, there was virtually no oscillation for λ ≳ 10 µm at the two positions at ρ = 8 kpc. (There were oscillations in the whole range of λ for the two positions at ρ = 1 kpc.) In order to check whether those calculations inside the spatial domain of small values of the SED could be "polluted" by the extremely high values on the galaxy's axis, we investigated the effect of doing the fitting on a "shifted" grid with ρ ≥ 1 kpc. This did not lead to fewer oscillations. The general reason for these oscillations may be simply that this model takes fully into account the wave nature of the EM field.
Second, we plotted the radial and vertical profiles of the radiation fields at three wavelengths close to the ones considered in Fig. 7 of Popescu et al. [START_REF] Popescu | A radiation transfer model for the Milky Way: I. Radiation fields and application to High Energy Astrophysics[END_REF] ("K, B, UV"). Figures 12 and 13 show these profiles as they are calculated at points (ρ, z) belonging to the "logarithmic" grid on which the fitting was done for this calculation (see the legend). Those profiles of the radiation fields obtained with the present model on the fitting grid are relatively similar to those predicted with the very different model of Ref. [START_REF] Popescu | A radiation transfer model for the Milky Way: I. Radiation fields and application to High Energy Astrophysics[END_REF], both in the levels and in the rough shape of the profiles. The most important difference is seen on the vertical profiles of Fig. 13: according to the Maxwell model of the ISRF, the energy density level decreases more slowly when the altitude z increases - or even, for the λ = 2.29 µm radiation at ρ = 0.059 kpc or the λ = 0.113 µm radiation at ρ = 7.5 kpc, the level of the SED does not decrease in the range considered for z. A similar lack of decrease is found on the radial profiles of Fig. 12, for the λ = 2.29 µm radiation, either at z ≈ 0 or at z = 1.25 kpc. Using that same fitting done on a logarithmic grid, we also calculated and plotted the radial and vertical profiles of the same radiations, but this time for regularly spaced values of ρ (respectively z), and in a larger range for ρ (respectively z): Figs. 14 and 15. The radial profiles of Figs. 12 and 14 are consistent, although, in contrast with Fig. 12, Fig. 14 plots the SED at points (ρ, z) which were not involved in the fitting, and which moreover involve an extrapolation to a larger range for ρ as compared with the fitting. The vertical profiles of Fig. 15, which also correspond with points which were not involved in the fitting, and also involve an extrapolation to a larger range for z as compared with the fitting, show an oscillating behaviour without any tendency to a decrease at large z.
Asymptotic behaviour at large ρ and at large z
To help understand the behaviours just noted, we study in this section the asymptotic behaviour of the expressions of the components of the EM field and of the SED, as they are given by the Maxwell model of the radiation field.
The expressions (14)-(16) that are implemented in the numerical model are deduced from the exact integral expressions of the EM field for a given angular frequency ω, after the summation over the frequencies, and after the discretization (7) is done. Hence, we begin with the exact integral expressions of the EM field for a given angular frequency. These expressions, which are valid for any totally propagating, axisymmetric, time-harmonic EM field, are (Eqs. (13)-(15) in Ref. [START_REF] Arminjon | An analytical model for the Maxwell radiation field in an axially symmetric galaxy[END_REF]):
$$B_{\phi\,\omega\,S} = \mathrm{Re}\left[ e^{-i\omega t} \int_{-K}^{+K} \sqrt{K^2 - k^2}\; J_1\!\left(\rho\sqrt{K^2 - k^2}\right) S(k)\, e^{ikz}\, dk \right], \qquad (41)$$
$$E_{\rho\,\omega\,S} = \mathrm{Re}\left[ -\frac{i c^2}{\omega}\, e^{-i\omega t} \int_{-K}^{+K} \sqrt{K^2 - k^2}\; J_1\!\left(\rho\sqrt{K^2 - k^2}\right) i k\, S(k)\, e^{ikz}\, dk \right], \qquad (42)$$
$$E_{z\,\omega\,S} = \mathrm{Re}\left[ i\, e^{-i\omega t} \int_{-K}^{+K} J_0\!\left(\rho\sqrt{K^2 - k^2}\right) \left(\omega - \frac{c^2}{\omega} k^2\right) S(k)\, e^{ikz}\, dk \right], \qquad (43)$$
where K := ω/c - the other components being obtained by the duality (18) from the components (41)-(43), with, in the most general case, another spectrum function S′(k).
The dependence in ρ of the components (41)-(43) is determined by that of the Bessel functions J_0 and J_1, and by the form of the integrals which involve them. At large x we have the asymptotic expansion [START_REF] Dieudonné | Calcul infinitésimal[END_REF]
$$J_\alpha(x) = \sqrt{\frac{2}{\pi x}}\, \cos\!\left(x - \frac{\alpha\pi}{2} - \frac{\pi}{4}\right) + O\!\left(x^{-\frac{3}{2}}\right). \qquad (44)$$
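A quick numerical check of the expansion (44), using SciPy's Bessel function (an illustrative verification only, not part of the original computations):

# Compare J_alpha(x) with its large-x asymptotic form, Eq. (44).
import numpy as np
from scipy.special import jv

alpha, x = 1.0, 50.0
exact = jv(alpha, x)
asympt = np.sqrt(2.0 / (np.pi * x)) * np.cos(x - alpha * np.pi / 2 - np.pi / 4)
print(exact, asympt, abs(exact - asympt))   # the difference is O(x**-1.5)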
However, the argument of the Bessel functions in Eqs. (41)-(43) is x = ρ√(K² - k²). Hence, as ρ → ∞, x does not tend towards ∞ uniformly: this depends on the integration variable k, and we even have x ≡ 0 independently of ρ for k = ±K. Therefore, it is not obvious whether the integrals (41)-(43) have an expansion at fixed z as ρ → ∞.
As to the behaviour at fixed ρ and at large z: up to the real part, and for a fixed value of ρ, the components (41)-(43) are expressions of the form e^{-iωt} I(z), with
$$I(z) = \int_a^b f(k)\, e^{i z g(k)}\, dk, \qquad (45)$$
where, specifically, a = -K, b = +K, and the phase function is simply g(k) ≡ k, which has no stationary point. (The regular function k → f(k) depends on the component being considered, and also on ρ as a parameter.) In that case, we have the asymptotic expansion [START_REF]Méthode de la phase stationnaire[END_REF]
$$I(z) = \frac{f(K)}{iz}\, e^{izK} - \frac{f(-K)}{iz}\, e^{-izK} + O\!\left(\frac{1}{z^2}\right). \qquad (46)$$
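The expansion (46) is the standard endpoint (integration-by-parts) result, since the phase g(k) = k has no stationary point in [-K, +K]. A small numerical illustration, with a placeholder choice of f(k):

# Numerical check of the endpoint expansion (46) for I(z) = int_{-K}^{+K} f(k) e^{ikz} dk.
# f(k) is a placeholder smooth function, not one of the actual field integrands.
import numpy as np
from scipy.integrate import quad

K, z = 1.0, 200.0
f = lambda k: 1.0 / (2.0 + k**2)

re, _ = quad(lambda k: f(k) * np.cos(k * z), -K, K, limit=500)
im, _ = quad(lambda k: f(k) * np.sin(k * z), -K, K, limit=500)
I_num = re + 1j * im

I_asy = (f(K) * np.exp(1j * K * z) - f(-K) * np.exp(-1j * K * z)) / (1j * z)
print(abs(I_num - I_asy))   # decreases like 1/z**2 as z grows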
So at large z and for a fixed value of ρ, all components of any totally propagating, axisymmetric, time-harmonic EM field are of order 1/z (unless the coefficient of 1/z in this expansion is zero, which is true only in particular cases - then the relevant component is of higher order in 1/z). This applies indeed to the part (i) of the decomposition (i)-(ii), which is given by Eqs. (41)-(43), but also to the part (ii), since it is obtained from (41)-(43) by applying the EM duality (18) (with, in the most general case, a different spectrum function S′(k)). Hence the SED (23) is of order 1/z² at large z, for any fixed value of ρ - when the C^{(q)}_j(x) coefficients correspond with the exact expressions (41)-(43). [The explicit expression of the coefficient of 1/z², depending on ρ, K, S(K), S(-K) (and, in the most general case, on the values S′(K), S′(-K) of the spectrum function S′ corresponding to the part (ii) of the decomposition (i)-(ii)), might easily be obtained from (23), (41)-(43), and (46).] The foregoing result applies to a general spectrum function S(k) (and S′(k)). By summation on the frequency index j, it extends to an EM field having a finite set of frequencies.
Let us now investigate the asymptotic behaviour of the EM field and the SED, still in the totally propagating case with axial symmetry, but now after the summation over the frequencies and the discretization (7). After the discretization, each among the C^{(q)}_j coefficients in the expansions (24) of the components of the EM field has the form [START_REF] Arminjon | Spectral energy density in an axisymmetric galaxy as predicted by an analytical model for the Maxwell field[END_REF]:
$$C^{(q)}_j = C^{(q)}_j(\rho, z) = \sum_{n=0}^{N} R^{(q)}_n\, J_\alpha\!\left(\rho\, \frac{\omega_j}{\omega_0}\, R_n\right) G_{nj}(z) \qquad (\alpha = 0 \text{ or } \alpha = 1), \qquad (47)$$
where the R^{(q)}_n > 0 (except for R^{(q)}_0 and R^{(q)}_N, both of which turn out to be zero) are constant numbers, and where G_{nj}(z) = exp(iω_j t) F_{nj}(t, z) is just the function F_{nj} in Eq. (17) hereabove, deprived of its periodic time-dependence (and thus is a periodic function of z). Together with (44), Eq. (47) shows that, at a given value of z, we have C^{(q)}_j = O(1/√ρ) as ρ → ∞.
The SED for an EM field having a finite set of frequencies is given by Eq. (23). For any given frequency (j), u_j is a quadratic form of the C^{(q)}_j coefficients, hence
$$u_j(\rho, z) = O\!\left(\frac{1}{\rho}\right) \qquad (\rho \to \infty). \qquad (48)$$
This is compatible with the curves shown in Fig. 14.
Passing to the behaviour at large z: in Eq. (47), the dependence in z is entirely contained in the functions G_{nj}(z) which, we noted, are periodic. Hence, the coefficients C^{(q)}_j(ρ, z), each of which involves a linear combination of these functions (with coefficients that depend on ρ), are almost-periodic functions of z [START_REF] Ameriosi | Almost-periodic functions and functional equations[END_REF], and the same is true for the components (24) of the EM field. Moreover, for any given value of ρ, each u_j in Eq. (23) is hence the sum of the square moduli of periodic complex functions of z. Therefore [START_REF] Ameriosi | Almost-periodic functions and functional equations[END_REF], the SED is an almost-periodic function of z, too. This result allows us to understand the lack of a decrease with z, observed on the vertical profiles of Fig. 15, which involve an extrapolation to a larger range for z as compared with the domain used for the fitting: an almost-periodic function f does not tend towards zero at infinity, unless f ≡ 0.⁵ As to Figs. 12 and 13, they involve no extrapolation; thus the relevant coefficients result from the fitting done on the very domain to which the curves belong. Hence the asymptotic behaviour of u_j (whether at large z or at large ρ) is not relevant to them.
Discussion and conclusion
In this paper, we developed an improved numerical scheme to adjust the Maxwell model of the ISRF in a galaxy, which was proposed in a foregoing work [START_REF] Arminjon | An analytical model for the Maxwell radiation field in an axially symmetric galaxy[END_REF]. Namely, at the stage of fitting the radiations emitted by the many different point-like "stars" which make up the model galaxy, we now consider each time-harmonic component separately, which is more precise. As a bonus, this allows us to eliminate the time variable at this stage, Eq. (21), thus reducing the computer time.
We used that "separate fitting" procedure, first, to check if the extremely high values of the spectral energy density (SED), which were predicted by this model on the axis of our Galaxy with the former "grouped fitting" [START_REF] Arminjon | Spectral energy density in an axisymmetric galaxy as predicted by an analytical model for the Maxwell field[END_REF], are a physical prediction or a numerical artefact. A rather detailed investigation led us to conclude that these extremely high values are indeed what the model predicts -see Sect. 4.1. However, we find also that the SED decreases very rapidly when one departs from the galaxy's axis, see Fig. 7. Moreover, the average energy density of the EM field in, for example, a disk of diameter 1 kpc and thickness 2 kpc, is not very high, Eq. ( 39). The extremely high values of the SED on the axis of our Galaxy (and likely also in many other galaxies) are a new and surprising prediction for the ISRF. Recall that our model is adjusted so that the SED predicted for our local position in the Galaxy coincide with the SED determined from spatial missions, and thus is fully compatible with what we see of the ISRF from where we are. The prediction of the present model may be interpreted as a kind of self-focusing of the EM field in an axisymmetric galaxy. On the other hand, as we mentioned in the Introduction, the existing (radiation-transfer) models for the ISRF do not consider the EM field with its six components coupled through the Maxwell equations. These models consider paths of photons or rays and do not take into account the nature of the EM radiation as a field over space and time, subjected to specific PDE's. So a self-focusing cannot be seen with those models. It is difficult to assess the degree to which this prediction depends on the specific assumptions of the model, in particular the axial symmetry. If this prediction is at least partly confirmed, this will have important implications in the study of galaxies.
Second, we studied the spatial variation of the SED predicted by our model with the new procedure, and compared it with the predictions of a recent radiation transfer model [START_REF] Popescu | A radiation transfer model for the Milky Way: I. Radiation fields and application to High Energy Astrophysics[END_REF]. The difference between the results of the two models is much smaller now than it was [START_REF] Arminjon | Spectral energy density in an axisymmetric galaxy as predicted by an analytical model for the Maxwell field[END_REF] with the older procedure. However, the SED predicted by our model still oscillates as a function of the wavelength (or the frequency) with the new, "separate fitting" procedure, although the different frequencies are then fully uncoupled. We also plotted the radial and vertical profiles of the radiation fields at three wavelengths. We confirm the slower decrease at increasing altitude z as compared with the radiation transfer model of Ref. [START_REF] Popescu | A radiation transfer model for the Milky Way: I. Radiation fields and application to High Energy Astrophysics[END_REF], indicated by the previous work [START_REF] Arminjon | Spectral energy density in an axisymmetric galaxy as predicted by an analytical model for the Maxwell field[END_REF]. Actually, when the vertical profiles of the radiation fields are calculated and plotted in a domain that involves an extrapolation to a (three times) larger domain of z, a slightly oscillating behaviour without a decrease at large z is observed. This is explained by our study of the asymptotic behaviour of the analytical expressions of the EM field and the corresponding SED: we show that the SED calculated by the implemented model, which involves a discretization of the wave number, is an almost-periodic function of z - although the exact SED obtained from the integral expressions (41)-(43) is of order 1/z² at large z. Thus, extrapolation on the domain of z should be used sparingly with the current numerical implementation based on a discretization of the wave number.
A Appendix: Discrete vs. continuous descriptions of the spectral energy density
The SED, u or rather u_x (it depends on the spatial position x), is normally a continuous density with respect to the wavelength or the frequency: the time-averaged volumetric energy density of the EM field at some point x and in some wavelength band [λ^{(1)}, λ^{(2)}] is given by
$$U_{1\,2}(\mathbf{x}) := \frac{\delta W_{1\,2}}{\delta V}(\mathbf{x}) = \int_{\lambda^{(1)}}^{\lambda^{(2)}} u_{\mathbf{x}}(\lambda)\, d\lambda. \qquad (49)$$
However, in many instances, including the present work, one is led to consider a discrete spectrum, thus a finite set of frequencies (ω_j) (j = 1, ..., N_ω), hence a finite set of wavelengths. This leads also to a discrete energy density, Eq. (23). This raises the question of how to relate these discrete and continuous descriptions of the SED to each other. To answer this question, we note first that the u_j's in Eq. (23) are indeed volumetric energy densities, whereas u_x in Eq. (49) has physical dimension [U]/[L]; i.e., it is
$$f_{\mathbf{x}}(\lambda) := \lambda\, u_{\mathbf{x}}(\lambda) \qquad (50)$$
which is a volumetric energy density. And it is indeed f_x that is being considered by Popescu et al. [START_REF] Popescu | A radiation transfer model for the Milky Way: I. Radiation fields and application to High Energy Astrophysics[END_REF], when plotting the SEDs at different places in the Galaxy, or when plotting the radial or axial profiles of the radiation field at some selected wavelengths.
As is more apparent with the "separate fitting" used now (see Sect. 3), the discrete set of frequencies ω_j considered in the Maxwell model of the ISRF represents just a finite sampling of the actual continuous distribution. The link between the two descriptions is hence given simply by the following relation:
$$u_j(\mathbf{x}) = f_{\mathbf{x}}(\lambda_j) = \lambda_j\, u_{\mathbf{x}}(\lambda_j). \qquad (51)$$
Consider a bounded spatial domain D. The total EM energy contained in the domain D and in the wavelength band [λ^{(1)}, λ^{(2)}] is given, according to Eq. (49), by
$$W_{1\text{-}2,\,D} := \int_D \frac{\delta W_{1\,2}}{\delta V}\, d^3\mathbf{x} = \int_D \int_{\lambda^{(1)}}^{\lambda^{(2)}} u_{\mathbf{x}}(\lambda)\, d\lambda\, d^3\mathbf{x}, \qquad (52)$$
i.e., using (50):
$$W_{1\text{-}2,\,D} = \int_D \int_{\lambda^{(1)}}^{\lambda^{(2)}} f_{\mathbf{x}}(\lambda)\, d(\mathrm{Log}\,\lambda)\, d^3\mathbf{x}. \qquad (53)$$
If we are using a model considering a fine-enough finite set of wavelengths (λ_j), j = 1, ..., N_ω, with λ_1 = λ^{(1)} and λ_{N_ω} = λ^{(2)}, we may use (51) to estimate the integral over λ in Eq. (53) as a finite sum, e.g. a Riemann sum:
$$\int_{\lambda^{(1)}}^{\lambda^{(2)}} f_{\mathbf{x}}(\lambda)\, d(\mathrm{Log}\,\lambda) \simeq \sum_{j=1}^{N_\omega - 1} f_{\mathbf{x}}(\lambda_j)\,(\mathrm{Log}\,\lambda_{j+1} - \mathrm{Log}\,\lambda_j) \simeq \sum_{j=1}^{N_\omega - 1} u_j(\mathbf{x})\,(\mathrm{Log}\,\lambda_{j+1} - \mathrm{Log}\,\lambda_j), \qquad (54)$$
or a better approximation (trapezoidal, Simpson, ...).
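As an illustration of (54), the short sketch below converts a sampled SED into the band-integrated energy density; the sampled values are placeholders.

# Hedged sketch of the Riemann sum (54): band-integrated energy density from sampled u_j(x).
import numpy as np

lam = np.geomspace(0.1, 830.0, 76)    # wavelengths lambda_j (micrometres), placeholders
u = 1.0 / lam                          # u_j(x) = f_x(lambda_j), placeholder values (eV/cm^3)

dlog = np.diff(np.log(lam))            # Log(lambda_{j+1}) - Log(lambda_j)
U_left = np.sum(u[:-1] * dlog)                    # left Riemann sum, Eq. (54)
U_trap = np.sum(0.5 * (u[:-1] + u[1:]) * dlog)    # trapezoidal alternative
print(U_left, U_trap)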
[Figure captions]
Figure 1: Effect of discretization number N for N_ω = 23, rough grid (maximum of SED on the (N_ρ = 10, ρ_0 = 0) × (N_z = 21, z_0 = -1 kpc, z_max = 1 kpc) grid).
Figure 2: Effect of discretization number N for N_ω = 46, rough grid.
Figure 3: Effect of discretization number N for N_ω = 76, rough grid.
Figure 4: Effects of discretization number N and number of frequencies N_ω, fine grid ((N_ρ = 20, ρ_0 = 0) × (N_z = 23, z_0 = -1 kpc, z_max = 1 kpc)).
Figure 5: Comparison of two different grids (rough grid, N = 192; fine grid, N = 192 and N = 384).
Figure 6: Comparison of two different draws of the set of "stars".
Figure 7: [caption not recovered]
Figure 8: SED at (ρ = 1 kpc, z = 0), compared with Ref. [START_REF] Popescu | A radiation transfer model for the Milky Way: I. Radiation fields and application to High Energy Astrophysics[END_REF] (rough grid, N_ω = 76, N = 192).
Figure 9: SED at (ρ = 1 kpc, z = 1 kpc).
Figure 10: SED at (ρ = 8 kpc, z = 0).
Figure 11: SED at (ρ = 8 kpc, z = 1 kpc).
Figure 12: Radial profiles of radiation fields. Fitting done on a logarithmic grid: ρ_1 = ρ_max, ρ_m = ρ_{m-1} × q (m = 2, ..., N_ρ); z_1 = z_max, z_k = z_{k-1} × q (k = 2, ..., N_z); q = 0.5. SED values at (ρ_m, z_{N_z}) (m = 1, ..., 10), then at (ρ_m, z_4) (m = 1, ..., 10), are plotted.
Figure 13: Vertical profiles of radiation fields. Fitting done on the same logarithmic grid as for Fig. 12. SED values at (ρ_10, z_k) (k = 1, ..., N_z), then at (ρ_3, z_k) (k = 1, ..., N_z), are plotted.
Figure 14: Radial profiles of radiation fields. Fitting done on the same logarithmic grid as for Fig. 12. SED values at regularly spaced values of ρ, starting at 0.1 kpc, and for z = 0, then z = 1.25 kpc, are plotted.
Figure 15: Vertical profiles of radiation fields. Fitting done on the same logarithmic grid as for Fig. 12. SED values at regularly spaced values of z, starting at 0, and for ρ = 0.1 kpc, then ρ = 7.5 kpc, are plotted.
Footnotes:
¹ The outgoing wave (2) can also be characterized [START_REF]Sommerfeld radiation condition. Wikipedia, The Free Encyclopedia[END_REF] as being the only time-harmonic [...]
² If Eq. (20), or equivalently Eq. (21), were an exact equality instead of being an equality in the sense of the least squares, then of course it would imply Eq. (8) (with w_j ≡ 1) as an exact equality.
⁵ This results from the most common definition of an almost-periodic function [START_REF] Ameriosi | Almost-periodic functions and functional equations[END_REF]: the existence, for any ε > 0, of a relatively dense set of almost-periods. I.e., for any ε > 0, there exists a length l such that, for any x ∈ ℝ, there is at least one number T ∈ [x, x + l[ such that, for any t ∈ ℝ, |f(t + T) - f(t)| ≤ ε. If f is not identically zero, let a ∈ ℝ be such that f(a) = α ≠ 0. Taking ε = |α|/2 in the definition above, we thus have |f(a + T) - f(a)| ≤ |α|/2, hence |f(a + T)| ≥ |α|/2. Since x can be taken arbitrarily large and since T ≥ x, this proves that f does not tend towards zero at infinity.
Matthieu Labat
Monika Woloszyn
Géraldine Garnier
Jean Jacques Roux
Assessment of the air change rate of airtight buildings under natural conditions using the tracer gas technique. Comparison with numerical modelling
Keywords: air change rate, single-room building, wind and thermal effects, tracer gas measurements, numerical modelling
Under natural conditions, air change rates are very sensitive to specific building elements and to climate conditions, even more so in the case of airtight buildings. Consequently, applying general correlations to such cases may lead to inaccurate predicted air change rates. Still, this approach remains valuable because of its simplicity compared to other methods such as wind tunnelling and CFD simulations. In this paper, the tracer gas concentration decay technique was selected to contribute additional information to classical air-tightness measurements. The measurements were used to fit the coefficients of a general single-node pressure model. Simulation results were found to be consistent with tracer gas measurements most of the time. However, the closeness of the fit is strongly related to the average pressure coefficients from the literature, which were estimated more precisely using the other techniques mentioned above. From a general point of view however, it would seem promising to extend this method to other buildings.
Introduction
As saving energy is so important in buildings nowadays, building envelopes need to be increasingly efficient in order to meet current regulation codes. The use of efficient thermal insulation materials has made it possible to reduce energy consumption by reducing heat conduction through the envelope. As this type of heat loss has progressively decreased over the years, the focus has shifted to other heat transfer modes, in particular, convective heat transfers. This type of transfer can be reduced by making the envelope airtight in order to limit air transfers between indoor and outdoor areas. While energy losses are reduced, however, other considerations arise such as moisture accumulation and indoor air quality (Blondel and Plaisance 2011 [START_REF] Blondel | Screening of formaldehyde indoor sources and quantification of their emission using a passive sampler[END_REF]; Karlsson and Moshfegh 2007 [START_REF] Karlsson | A comprehensive investigation of a low energy building in Sweden[END_REF]). One way to deal with these issues is to improve ventilation systems [START_REF] Woloszyn | The effect of combining a relative-humidity-sensitive ventilation system with the moisture buffering capacity of materials on indoor climate and energy efficiency of buildings[END_REF][START_REF] Koffi | Analyse multicritère des stratégies de ventilation en maisons individuelles[END_REF]. On the other hand, natural ventilation is also useful given that it uses no energy at all. Natural ventilation can be easily achieved by considering openings [START_REF] Santamouris | Experimental investigation of the air flow and indoor carbon dioxide concentration in classrooms with intermittent natural ventilation[END_REF] or dedicated building elements such as a solar chimney [START_REF] Afonso | Solar Chimneys: simulation and experiment[END_REF].
Under natural conditions, air change rates vary greatly because of varying climatic conditions. Indeed, the driving forces are wind and thermal effects, which are both time-dependent. Using constant air change rates would lead to a high level of inaccuracy, and detailed approaches are therefore preferable. Numerous studies have been conducted on such cases and have shown that naturally driven air change rates are difficult to assess, even when considering simple buildings (Larsen and Heiselberg 2008 [START_REF] Larsen | Single-sided natural ventilation driven by wind pressure and temperature difference[END_REF]; Li et al. 2001). As a consequence, it would seem difficult to extend the results reported in the literature to existing buildings.
The input of additional information on air transfers would make it possible to select appropriate results from the literature and to enhance modelling. The main objective of this paper is to propose and to test a simple experimental technique which will help improve a classic single-node pressure model. The results from both the measurements and the simulation will be presented and discussed.
This study is based on an existing airtight building with a single opening. The air tightness of existing buildings is commonly measured using depressurizing methods, which provide preliminary information. However, these methods do not provide information on the driving forces of natural ventilation. Wind tunnel measurements provide useful additional information about the wind effect, which is highly dependent on building geometry, envelope permeability and the surroundings [START_REF] Koinakis | The effect of the use of openings on interzonal air flows in buildings: an experimental and simulation approach[END_REF]. Nevertheless, this method is expensive and time-consuming and in many cases, therefore, other solutions are preferable, such as computational fluid dynamics (CFD) modelling [START_REF] Van Hoof | On the effect of wind direction and urban surroundings on natural ventilation of a large semi-enclosed stadium[END_REF]. Still, natural ventilation is time-dependent because it is strictly dependent on climate conditions, which can vary greatly within a single day. Numerous climate conditions should be simulated so that accurate results can be assessed. Another experimental method called the tracer gas technique is described by [START_REF] Cheong | Airflow measurements for balancing of air distribution systemtracer gas technique as an alternative?[END_REF]. This is an interesting alternative, as it works properly under natural pressure differences. It has already been applied to measure the air change rate in existing buildings by [START_REF] Santamouris | Experimental investigation of the air flow and indoor carbon dioxide concentration in classrooms with intermittent natural ventilation[END_REF] and Karlsson and Moshfegh (2007).
The tracer gas technique
Various tracer gas techniques have been examined. SF6 is the most commonly used tracer gas as it is detectable at very low concentrations [START_REF] Afonso | Solar Chimneys: simulation and experiment[END_REF][START_REF] Chao | Ventilation performance measurement using constant concentration dosing strategy[END_REF][START_REF] Janssens | Effects of wind on the transmission heat loss in duo-pitched insulated roofs: a field study[END_REF] (Laporthe et al. 2001, Santamouris et al. 2008). However, the technique is not limited to a single gas, and other gases such as CO2 [START_REF] Baker | A method for estimating carbon dioxide leakage rates in controlled-environment chamber using nitrous oxide[END_REF][START_REF] Blondel | Screening of formaldehyde indoor sources and quantification of their emission using a passive sampler[END_REF][START_REF] Collignan | Use of metabolic-related carbon dioxide as tracer gas for assessing air renewal in dwellings[END_REF] and N2O [START_REF] Baker | A method for estimating carbon dioxide leakage rates in controlled-environment chamber using nitrous oxide[END_REF][START_REF] Koinakis | The effect of the use of openings on interzonal air flows in buildings: an experimental and simulation approach[END_REF] (Karlsson and Moshfegh 2007, Laporthe et al. 2001) have also been found suitable in many cases.
The criteria defined by [START_REF] Cheong | Airflow measurements for balancing of air distribution systemtracer gas technique as an alternative?[END_REF] are used to help compare the advantages and drawbacks of these three gases. The results are presented in Table 1 below. Laporthe et al. (2001) experimentally compared SF6 and N2O in order to investigate the effects of the higher density of SF6. However, this did not seem to affect the results significantly. In fact, one conclusion drawn was that SF6 may be preferable because it was more detectable. [START_REF] Baker | A method for estimating carbon dioxide leakage rates in controlled-environment chamber using nitrous oxide[END_REF] experimentally compared N2O and CO2, concluding that both were as accurate when experimental solutions were set to limit the drawbacks of using a tracer gas which is already present in the atmosphere. In the end, CO2 was selected here to obtain the tracer gas measurements as it has less impact on the environment (greenhouse effect) and it is also the least expensive solution.
Concerning the gas injection, several techniques can be found in the literature. [START_REF] Cheong | Airflow measurements for balancing of air distribution systemtracer gas technique as an alternative?[END_REF] defines four of them precisely:
• Constant injection: since the flow release rate is known, it is possible to achieve a tracer gas balance so that air-leakage rates are shown. It is a high gas consumption method, but it is well adapted to long-term observations [START_REF] Afonso | Solar Chimneys: simulation and experiment[END_REF]. • Constant concentration: the principle is very similar, but the system is far more complex (Koinakis 2005, Janssens and[START_REF] Janssens | Effects of wind on the transmission heat loss in duo-pitched insulated roofs: a field study[END_REF]. When used correctly, such systems can lead to a high level of accuracy. It is also the most expensive of the methods presented here. • Pulse injection: a small quantity of gas is injected into a duct and its concentration is measured further along [START_REF] Tanner | Use of carbon dioxide as a tracer gas for determining in package airflow distribution[END_REF]. Consequently, this method is of no interest here. • Concentration decay: a high quantity of gas is released and mixed into the system. Since the tracer gas concentration is measured along with the decrease, air leakages can be calculated [START_REF] Blondel | Screening of formaldehyde indoor sources and quantification of their emission using a passive sampler[END_REF].
Concentration decay is the most widespread method used to study the air change rate in a building because it is the easiest to set up and gives accurate results. This method was compared experimentally with the constant concentration method by [START_REF] Chao | Ventilation performance measurement using constant concentration dosing strategy[END_REF]. Both were found to achieve the aim, but the author warned about the importance of achieving a widespread sampling in the building, particularly if the measurements take place in a multizonal building. Similar conclusions were also reached by [START_REF] Van Buggenhout | Influence of sampling positions on accuracy of tracer gas measurements in ventilated spaces[END_REF]. In order to measure the air change rate in our experimental house, it was finally decided to use the concentration decay method.
The air change rate is deduced from the general tracer gas mass balance equation, given in [START_REF] Cheong | Airflow measurements for balancing of air distribution systemtracer gas technique as an alternative?[END_REF] and presented in (1). The calculation of the air change rate will be discussed further in 3.2.
$$V \frac{dC_{In}(t)}{dt} = S(t) + Q_V(t)\left[ C_{Out}(t) - C_{In}(t) \right] \qquad (1)$$
where C_In and C_Out are the CO2 concentrations of indoor and outdoor air, respectively (ppm), V is the indoor air volume (m^3), Q_V the air leakage rate (m^3.s^-1) and S is the indoor tracer gas release rate.
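To illustrate the balance (1), the following sketch integrates it numerically for a constant leakage rate and no gas release; the numerical values are the ones quoted later in the paper (49 m^3 volume, 9,500 ppm initial, 2,000 ppm final, 500 ppm outdoors) together with an assumed constant Q_V of 6 m^3/h.

# Hedged sketch: integrating the tracer gas balance (1) with S = 0 and constant Q_V.
V = 49.0          # indoor air volume (m^3)
Q_V = 6.0         # air leakage rate (m^3/h), assumed constant
C_out = 500.0     # outdoor CO2 concentration (ppm)
C_in = 9500.0     # initial indoor CO2 concentration (ppm)

dt = 0.01                      # time step (h)
t = 0.0
while C_in > 2000.0:           # final concentration used in the experiments
    C_in += (Q_V / V) * (C_out - C_in) * dt   # Eq. (1) with S(t) = 0
    t += dt

print("elapsed time ~", round(t, 1), "h")      # about 15 h with these values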
3 Experimental set-up

3.1 A brief description of the experimental set-up
The experimental house is situated in Grenoble, France (latitude: 45.2°E, longitude: 5.77°N) and is exposed to a natural climate. It consists of a single-room building, the dimensions of which are designed to be representative of a living room (4.56×4.55×2.36 m 3 interior dimensions). The indoor air volume is approximately equal to 49 m 3 (this value will be used further to compute the air change rate of the whole test house, as presented in (1)).
The structure is made of spruce studs (section: 0.07×0.165 m 2 ), positioned every 0.60 m. The walls are composed of gypsum board on the indoor side, cellulose wadding used as an insulation material between the spruce studs, particle board on the exterior side, a rain screen, a ventilated air gap and wooden cladding. The ceiling and the floor are heavily insulated and vapour transfer is prevented by two layers of vapour barrier. Further details about the test house can be found in [START_REF] Piot | Experimental wooden frame house for the validation of whole building heat and moisture transfer numerical models[END_REF].
The roof is a typical tiled French roof with two 30° slopes facing north and south. The floor has been elevated to 0.57 m above the ground in order to simplify the boundary conditions for the numerical models. The door is located in the middle of the northern face and has been insulated with polystyrene in order to achieve a homogeneous level of insulation.
The test house is also equipped with a ventilation system that is located in the attic: the indoor opening is located in the ceiling while the outdoor opening is located close to the eaves on the western face, 3.43 m up the wall of the experimental house, as shown in Fig. 1(a and b). Another opening is located on the door but has been sealed to increase the air tightness of the building. The indoor temperature and relative humidity are also measured, and a weather station, located close to the test house, records the climatic conditions every 10 min. In particular, wind speed and direction are measured 8 m above the ground.
Tracer gas equipment and setting the initial and final indoor concentrations
The outdoor CO2 concentration varies throughout the day, even though the experimental house is not set too close to any human activity area. In order to monitor this, a first gas analyser was set up outside at the weather station. Preliminary measurements showed that the outdoor CO2 concentration remained between 400 and 600 ppm for several months, as shown in Fig. 2. A second analyser was set indoors, while the outdoor CO2 concentration remained monitored. Both devices measure the infrared absorption of gases and are calibrated to detect CO2. The indoor analyser is equipped with a pump which continuously samples indoor air. To enhance the sampling efficiency, four ventilators were installed to mix indoor air and four sampling tubes were added to the analyser. Carbon dioxide is released from a bottle set inside the experimental house.
In order to minimize the uncertainty related to the outdoor CO2 concentration, the indoor concentration should be set far higher than the outdoor concentration. At the same time, it is better to reach the gas analyser's saturation rate in order to take advantage of its sensitivity. These two limits are used to define the initial and final indoor concentrations, respectively C_In(t = 0) and C_In(t = t_Stop). To first order, they can be used as boundary conditions to solve (1), considering the average outdoor concentration C̄_Out, a constant air change rate and no gas release. As a consequence, the elapsed time t_Elapsed while the indoor concentration decreases from C_In(t = 0) to C_In(t = t_Stop) depends on these two limits, as presented in (2).
$$t_{Elapsed} = \frac{V}{Q_V}\, \ln\!\left( \frac{C_{In}(t = 0) - \bar{C}_{Out}}{C_{In}(t = t_{Stop}) - \bar{C}_{Out}} \right) \qquad (2)$$
It is preferable to conduct a single experiment over several hours, so that variations in the outdoor conditions have a detectable influence and are easier to identify. Based on preliminary experiments, the average air change rate and outdoor CO2 concentration were estimated at 6 m^3.h^-1 and 500 ppm, respectively. The indoor gas analyser range is [0; 10,000 ppm]. The initial concentration was set at 9,500 ppm (this value is preferred to 10,000 ppm to avoid extreme scale values). The final CO2 concentration was set at 2,000 ppm, so that the indoor concentration is at least four times higher than outside. As a consequence, the average elapsed time for a single experiment should reach 15 h. Similarly, the average air change rate can be deduced from (2) by measuring the elapsed time only. This is a very convenient way to discuss the general influence of outdoor conditions, yet a detailed analysis would still contribute more information. This approach will be discussed further in 5.1.
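The same figures follow directly from (2); conversely, a measured decay duration gives the average leakage rate. A short sketch with the values quoted above:

# Eq. (2) both ways: predict the elapsed time, or deduce Q_V from a measured duration.
import math

V = 49.0                                     # m^3
C0, C_stop, C_out = 9500.0, 2000.0, 500.0    # ppm

def elapsed_time(Q_V):
    return (V / Q_V) * math.log((C0 - C_out) / (C_stop - C_out))

def leakage_rate(t_elapsed):
    return (V / t_elapsed) * math.log((C0 - C_out) / (C_stop - C_out))

print(elapsed_time(6.0))        # ~14.6 h for Q_V = 6 m^3/h
print(leakage_rate(15.0))       # ~5.9 m^3/h for a 15 h decay
print(leakage_rate(15.0) / V)   # corresponding average air change rate (h^-1)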
Before carrying out air change rate measurements, several preliminary tests were conducted and aimed at checking:
• The efficiency of the sampling system (ventilators and sampling tubes);
• The low impact of indoor air mixing on the global air change rate;
• A good match with estimated extracted rates when the ventilation system was active (three air changes per hour);
• Low reactivity between carbon dioxide and the indoor surfaces and with water vapour.
Air change rate modelling
General equations
Air can move into and out of the building by several means: through openings (here, the ventilation opening located on the western face), through walls because of their air permeability (here, the four vertical walls) and through cracks or at junctures (here, at the door frame).
Once the N air paths have been identified, it is necessary to consider the pressure field all around the building and to compute a pressure balance in order to correctly model air transfers throughout the whole building [START_REF] Koffi | Analyse multicritère des stratégies de ventilation en maisons individuelles[END_REF]. For each single air path i (with i from 1 to N), both wind and thermal effects have to be estimated. Next, the pressure difference between indoor and outdoor air, dP_i (Pa), can be evaluated using (3). As wind and thermal effects vary quickly within a day, each pressure value is assumed to be time-dependent.
$$dP_i(t) + P_{Int}(t) = P_{Dyn,i}(t) + dP_{Th,i}(t) \qquad (3)$$
where PInt is the indoor air pressure (Pa), PDyn is the dynamic pressure induced by wind (Pa) and dPTh is the pressure difference induced by the thermal effect (Pa).
Next, related airflow rates can be estimated using the general relationship (4).
$$Q_{V,i}(t) = C_i \left[ dP_i(t) \right]^{n_i} \qquad (4)$$
where Q_V,i is the air flow rate (m^3.h^-1), C is the discharge coefficient (m^3.s^-1.Pa^-n) and n is a non-dimensional coefficient which characterizes the flow (1 for laminar flows, 0.5 for turbulent flows). Every value is related to the air path i being considered.
Finally, the N airflows are used to compute a mass balance which leads to the calculation of the indoor pressure PInt used in (3). To avoid numerical problems, (3) is first computed using the indoor pressure from the previous time step.
As already mentioned, wind tunnel measurements would provide accurate and useful information, but these take too long to set up. General results from the literature have therefore been used to describe the wind effect. This can be estimated very simply by computing the dynamic wind pressure as presented in (5):
$$P_{Dyn,i}(t) = \frac{1}{2}\, \rho_{Out}(t)\, C_{P,i}(\theta)\, V_Z(t, z)^2 \qquad (5)$$
where ρOut is the density of outdoor air (kg.m -3 ), CP is a non-dimensional coefficient depending on wind incidence (θ) and VZ (m.s -1 ) is the wind speed at wall altitude z (m) away from the test house.
In order to evaluate the pressure difference induced by thermal effects (the so-called stack effect), the general relationship (6) was used. This requires the definition of a single parameter: the altitude zi (m) of the openings and leakages through which air can move.
$dP_{Th,i}(t) = -g\,z_i\,\left(\rho_{Out}(t) - \rho_{In}(t)\right)$ (6)

where g is the gravitational acceleration (= 9.81 m.s-2) and ρIn is the density of indoor air (kg.m-3).
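The two driving terms (5) and (6) can be written as small helper functions. In the sketch below the air densities are obtained from the ideal gas law, which is an approximation we introduce for illustration; the values passed in the last two lines are arbitrary.

```python
R_AIR = 287.06  # J/(kg.K), specific gas constant of dry air

def air_density(temp_celsius, pressure_pa=101325.0):
    # Ideal-gas approximation for (dry) air.
    return pressure_pa / (R_AIR * (temp_celsius + 273.15))

def dynamic_pressure(rho_out, cp, wind_speed):
    # Eq. (5): wind-induced dynamic pressure on a facade element.
    return 0.5 * rho_out * cp * wind_speed ** 2

def stack_pressure(z_i, temp_out_c, temp_in_c):
    # Eq. (6): pressure difference induced by the stack effect at altitude z_i.
    g = 9.81
    return -g * z_i * (air_density(temp_out_c) - air_density(temp_in_c))

print(dynamic_pressure(air_density(5.0), 0.54, 3.0))  # Pa, arbitrary inputs
print(stack_pressure(1.0, 5.0, 25.0))                 # Pa, arbitrary inputs
```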
Using air tightness measurements to estimate air flow rates
Air tightness tests (referred to below) rely on creating significant pressure gradients between indoor and outdoor climates. Here, this has been achieved using a device called a Permeascope®, developed by the French company ALDES. This device was selected over the well-known Blowerdoor® technique because its design is much better suited to small airtight rooms. Indoor air is extracted by means of the ventilation duct (see Fig. 3(a)) and the extracted air flow rate is measured. Even if these kinds of tests take place under greater pressure differences (from 20 to 100 Pa) than those occurring under natural conditions (from 2 to 20 Pa), Walker et al. have shown that (4) is still valid.
As pressure and air flow rates are both measured, Ci and ni values such as the ones used in (4) can be estimated by performing linear regressions on the measurements. This is not immediate however, because a single test includes the air flows coming from several air paths (here, from the four vertical walls and from the leakages at the door frame).
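Since log Q = log C + n log dP, the coefficients of (4) can be recovered from the test data by a linear regression in log-log space. The sketch below illustrates this on made-up pressure/flow pairs (the numerical values are not measurements from the study):

```python
import numpy as np

def fit_power_law(dp_pa, q_flow):
    """Fit Q = C * dP**n by linear regression on log-transformed data."""
    n, log_c = np.polyfit(np.log(dp_pa), np.log(q_flow), 1)
    return np.exp(log_c), n

# Hypothetical pressurization-test readings.
dp = np.array([20.0, 40.0, 60.0, 80.0, 100.0])          # Pa
q = 2.9 * dp ** 0.73 * (1 + 0.01 * np.random.default_rng(0).standard_normal(5))
C, n = fit_power_law(dp, q)
print(f"C = {C:.2f}, n = {n:.2f}")
```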
A vapour barrier is another material commonly used in wood-framed houses to prevent moisture damage; it also strongly reduces air transfers and contributes to achieving airtight buildings. Here, a vapour barrier whose impact on moisture transfers is equivalent to that of an 18-m-thick air layer (Sd = 18 m) was placed in the vertical walls. A second air tightness test was carried out and the results are presented in Table 2. To allow a quick comparison, the I4 value (from the French standard) was included. This is the ratio of the airflow rate occurring under 4 Pa to the building surface (21 m2). Indeed, 4 Pa is roughly representative of pressure differences occurring under natural conditions. Even if lower I4 values can now be achieved with passive buildings, the envelope air tightness level is high compared to most existing buildings, as presented in (Sfakianaki et al. 2008). When assuming that the leakages at the walls are reduced to almost nothing when the vapour barrier is in place, it becomes apparent that half the leakages are located at the door frame. A similar conclusion was also reached by Piot on the basis of other measurements. When assuming that the remaining leakages (the second half) are equally distributed over the four vertical walls, (4) can be rewritten for the door as (7a) and for a single vertical wall as (7b) (the pressure difference is bound to be different for every single vertical wall).
$Q_{V,Door}(t) = \frac{C}{2}\,dP_{Door}(t)^{n}$ (7a)

$Q_{V,Wall}(t) = \frac{C}{8}\,dP_{Wall}(t)^{n}$ (7b)
The air flow rate which occurs through the ventilation opening remains to be defined. A third test (see Fig. 3(b)), referred to later as the global airflow test, was therefore carried out, during which the air was extracted through the opening located at the door (the seal was removed for this test only). Leakages at the door frame and through the walls were also taken into account, as shown in Fig. 3(a). The results are presented in Table 3. When assuming that the leakages at the wall and the door frame are the same in both experiments, the airflow rate crossing the ventilation opening can be deduced by subtracting the air tightness measurements from the global airflow measurements. This difference can then be rewritten as (8) in order to model the air flow through the ventilation opening.
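A possible sketch of that subtraction step is shown below. It reuses the power-law fits reported in Table 3 (vapour barrier in place), evaluates the difference over an arbitrary pressure range, and refits a power law for the opening alone, which is the idea behind (8); the resulting coefficients are illustrative only.

```python
import numpy as np

def power_law(dp, C, n):
    return C * dp ** n

# Fits reported in Table 3 (vapour barrier in place).
C_leak, n_leak = 2.92, 0.73      # door frame + wall leakages
C_glob, n_glob = 13.03, 0.60     # leakages + ventilation opening

dp = np.linspace(4.0, 100.0, 50)                     # Pa, illustrative range
q_opening = power_law(dp, C_glob, n_glob) - power_law(dp, C_leak, n_leak)

# Refit a power law for the ventilation opening alone.
n_open, log_c_open = np.polyfit(np.log(dp), np.log(q_opening), 1)
print(f"opening: C = {np.exp(log_c_open):.2f} (fit), n = {n_open:.2f} (fit)")
```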
Uncertainties on wind and thermal effects
Modelling wind and thermal effects may be difficult to achieve for a real building. Firstly, the thermal effect can be easily computed at the ventilation opening given that its height hopening is clearly identified. This is not the case when evaluating the thermal effect for leakages spread over the entire height of the wall, as illustrated in Fig. 4(a).
Secondly, the pressure field all around the building is non-homogeneous and depends on wind direction: this is mathematically represented by the use of the CP values in (5). However, these values are also strongly dependent on the building geometry, as shown by Uematsu & Isyumov (1999). The dependency is greatest when looking at specific locations, such as the roof and its edges. Unfortunately, this is where the ventilation opening is located. Moreover, the wind effect is very significant on an opening, which results in higher uncertainties on the airflow rate modelled at the ventilation opening, as illustrated in Fig. 4(b). At the same time, the vertical walls are covered with an open jointed cladding, which may also affect the pressure field acting on air transfers. An alternative way of modelling thermal effects is to set two equivalent heights hWall and hDoor which represent the global thermal effect at the walls and at the door, respectively. Concerning the wind effect, it was decided to use general CP values (see Table 4), which were already proposed in the HAM-Tools model for vertical walls. It is well known that pressure coefficients vary greatly with the building geometry, as presented in Uematsu & Isyumov (1999) and Larsen & Heiselberg (2008). Costola et al. (2010) advise conducting detailed studies to assess these pressure coefficients accurately, even when the building geometry studied is simple, as here. Indeed, the authors showed that using average pressure coefficient values can lead to substantial errors. Moreover, errors were found to be the greatest when the openings were the smallest; the ratio exceeded 5 in their study. However, the authors also indicated that few methods are able to achieve this goal (wind tunnel experiments and detailed CFD studies, for example). Most of the time, these means are both expensive and time-consuming. As a consequence, the results reported in the literature can still be used and should fit with average trends.
Local wind speed in (5) is simply estimated, as presented in (Hagentoft 2001): it is based on formula (9), which considers a logarithmic wind speed profile. It requires the use of a wind speed measurement V10 obtained 10 m above the ground. In order to model the influence of the surroundings, two constants are introduced, called "terrain coefficients" (k; a). The values given by the author are reported in Table 5.
$V_Z = k\,V_{10}\,z^{a}$ (9)
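A one-line implementation of (9); the terrain coefficient values below are placeholders, since the figures of Table 5 are not reproduced here.

```python
def local_wind_speed(v10, z, k, a):
    # Eq. (9): wind speed at altitude z from the 10 m reference measurement.
    return v10 * k * z ** a

# Placeholder terrain coefficients (k, a); see Table 5 (Hagentoft 2001).
print(local_wind_speed(v10=5.0, z=2.5, k=0.85, a=0.20))
```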
Results and discussion
Tracer gas measurements and general analysis
Twelve experiments, conducted from October 2010 to January 2011, are listed in Table 6. The last two columns indicate average values so as to compare experiments, yet further calculations use time-dependent values. The average airflow rate (Qv) calculation is based on (2) and uses the measured elapsed time. The very last column indicates the temperature difference between indoors and outdoors, representative of the thermal effect. The moisture content difference is also influential and will be taken into account in further calculations. In all cases the electric heater was active and maintained indoor air at a constant temperature. Finally, two distinct regimes were observed during experiment no. 1, separated 3 hours 15 minutes after the beginning of the experiment, so the results were split into two lines in Table 6. Results for this specific case will be discussed further.
The results clearly show that the elapsed time can be very different from one experiment to another: it is more than 30 h for experiment no. 2 and drops to almost 5 h for experiment no. 10. As the average airflow rate is computed directly from the decay of the carbon dioxide concentration, it closely follows the elapsed time values and varies from 3 to 18 m3.h-1 (average air change rates ranging from 0.06 to 0.37 h-1). Given the rather low indoor air volume and the low air permeability of the test house, results will be discussed further by considering the average airflow rather than the air change rate.
It should be noted that considering average weather conditions may lead to misinterpretation. Indeed, the temperature difference can vary strongly during a single experiment. For example, it increased from almost 0 to 15°C during experiment no. 2 (see Fig. 5(b)). This results in an increase of the air change rate, accelerating the CO2 concentration decrease, as can be seen in Fig. 5(a). When the temperature difference remains the same and under low wind speed conditions, the air change rate does not vary much (see experiments no. 6 and no. 8). This results in a constant slope of the CO2 concentration over time (when plotted with half-logarithmic scales). The highest average wind speed value was recorded during experiment no. 10, which corresponds to the highest measured airflow rate. However, since wind speed did not remain constant during a single experiment, but indeed could vary substantially, more information was obtained by subdividing some experimental runs into short periods. Experiment no. 1 was split into two parts to resolve one period lasting 3 hours where the mean wind speed was recorded at 6.6 m.s-1 and a longer second period during which the wind speed had decreased to 1.2 m.s-1 (see Fig. 6(b)). This resulted in a significantly slower decrease rate of the indoor concentration, as presented in Fig. 6(a).
The air change rate for experiment no. 10 was observed to be greater than for experiment no. 1a, despite comparable and high wind speeds for both periods. The main difference was the wind direction: for no. 1a the wind blew mainly towards the east, whereas for no. 10 it blew towards the west. From a general point of view, it is very convenient to consider average weather conditions and the total elapsed time to give a quick overview of the experimental conditions and results. To explain the observed air change rate variations, however, one should consider weather conditions in greater detail. In the following sections, instantaneous weather conditions will be used to model air flows.
Air transfer model
Equations presented in Section 4 were implemented in a Matlab / Simulink environment and a variable time-step solver was used to solve the mass balance at the building scale. Because of the uncertainties related to wind and thermal effects, the model was tested repeatedly with different values for hWall, hDoor and {k; a}. In order to compare the solutions, computed air change rates were used to recalculate the tracer gas decrease and the computed elapsed times were set against measurements. In a way, this represents the model's mean accuracy.
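The fitting procedure described here can be sketched as a simple grid search over the uncertain parameters, scoring each combination by the relative error on the recomputed elapsed time. The candidate values and the `simulate_elapsed_time` callable below are placeholders standing in for the full Matlab/Simulink model, which is not reproduced.

```python
from itertools import product

def grid_search(experiments, simulate_elapsed_time):
    """Return the (score, terrain, h_door, h_wall) combination that minimises
    the mean relative error on the elapsed time over all experiments.

    `simulate_elapsed_time(exp, terrain, h_door, h_wall)` is assumed to run
    the air-transfer model for one experiment and return a duration in hours;
    each experiment dict carries the measured duration under "elapsed_h"."""
    terrains = ["open", "scattered", "city"]      # placeholder labels
    h_doors = [0.01, 0.5, 1.0, 2.0]               # m, candidate values
    h_walls = [0.25, 0.5, 0.75, 1.0]              # m, candidate values
    best = None
    for terrain, h_door, h_wall in product(terrains, h_doors, h_walls):
        errors = [abs(simulate_elapsed_time(e, terrain, h_door, h_wall)
                      - e["elapsed_h"]) / e["elapsed_h"] for e in experiments]
        score = sum(errors) / len(errors)
        if best is None or score < best[0]:
            best = (score, terrain, h_door, h_wall)
    return best
```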
A more detailed approach was used to complement the average performance. Indeed, in some cases comparisons based only on the elapsed time are not relevant. For example, an important error occurring at a single time step may result in a permanent bias on the elapsed time, as illustrated in Fig. 7(a). Equation (2) was therefore changed to (10) to compute the hourly averaged air change rate from the tracer gas measurements. The results are plotted in Fig. 7(b) as "Measurements".
$Q_V = \frac{V}{\Delta t}\,\ln\!\left(\frac{C_{In}(t) - C_{Out}}{C_{In}(t+\Delta t) - C_{Out}}\right)$ (10)
where ∆t (h) is the time step, here equal to 1 h.
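Applied to a measured indoor concentration series, (10) yields an hourly airflow profile. A possible implementation (the variable names and the sample values are ours):

```python
import math

def hourly_airflow(times_h, c_in_ppm, c_out_ppm, volume_m3):
    """Hourly averaged airflow rates (m3/h) from tracer measurements, Eq. (10)."""
    rates = []
    for t0, t1, c0, c1 in zip(times_h, times_h[1:], c_in_ppm, c_in_ppm[1:]):
        rates.append((volume_m3 / (t1 - t0))
                     * math.log((c0 - c_out_ppm) / (c1 - c_out_ppm)))
    return rates

# Illustrative values only: a 50 m3 zone with 500 ppm outside.
print(hourly_airflow([0, 1, 2], [9500, 8200, 7100], 500, 50.0))
```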
Preliminary simulations showed that the model was very sensitive to the equivalent door height hDoor, weakly sensitive to hWall, and behaved differently depending on the terrain coefficient values. This could be a consequence of the rather complex interaction between wind and thermal effects, as shown by Li et al. in their examples of solution multiplicity in natural ventilation. When considering both the elapsed time and the averaged error on the air change rate, the best solution used a "scattered" terrain coefficient, hDoor = 2 m and hWall = 0.75 m. Detailed errors are presented in Table 7 for this best coefficient combination. A negative error on the elapsed time means that the model overestimates the airflow rate. The average error is 15 % on the elapsed time and 23 % on the air change rate. Experiment no. 7 shows the necessity of studying errors on both the elapsed time and the airflow rates: the computed airflow rates are accurate to within 25 % and can be either overestimated or underestimated. The errors at different time steps compensate each other, so that the computed elapsed time almost matches the measured value.
Still, these results are not homogeneous, meaning that the model has possibly reached its limits. Indeed, the airflow rate error exceeds 2.2 m3.h-1 for experiment no. 9, and the airflow rate may have been overestimated by 82 % for experiment no. 1.a, for which the measured airflow rate was 8.4 m3.h-1.
When looking at weather conditions, it becomes apparent that the wind speed V10 is greater than 5 m.s-1 when the highest errors are computed. However, computed airflow rates were correctly estimated for experiment no. 10, which also occurred under high wind speed. This could be a consequence of the use of the CP values (see Table 4) to model the influence of wind incidence, which is consistent with earlier observations (see Section 4.1). In order to illustrate this statement, CP values from (Hagentoft 2001) were used to replace the previous values. Both references are compared in Fig. 8 and the elapsed times are presented in Table 8. The influence of the CP values is not homogeneous, which attests to how sensitive the model is. However, the trend is not very clear: better results can be achieved using the (Hagentoft 2001) CP values (experiments no. 1.a, 1.b, 6, 11) as well as less accurate results (experiments no. 3, 9, 10). Moreover, the influence can be sizeable under lower wind speed conditions if the thermal effect is less dominant (experiments no. 3 and no. 6). With high wind speed conditions, even if an improvement is observed for experiment no. 1, the results are less accurate for experiments no. 9 and no. 10. From a general point of view, better results were obtained with CP values coming from (Sharag-Eldin 2007).
Outlook
Depending on the desired level of accuracy, one could investigate the numerical fitting further by setting the CP values in greater detail. This could be achieved by studying the air flow around the tested building, which seems reasonable with respect to recent numerical improvements, as presented by Obrecht et al. for urban scale flow simulations using the Lattice Boltzmann method.
In this case however, the air transfer model developed will be implemented in a more global model called HAM-Tools (Kalagasidis et al. 2007) to study the interactions between heat, air and moisture transfers. Weather data were analysed over a 6-month period, revealing that 95 % of the measured V10 values are below the 5 m.s-1 limit. This indicates that the 12 experiments were carried out under a representative climate. The model is thought to be accurate enough to further investigate the coupling with such transfers, for example to estimate the potential of natural ventilation to limit indoor vapour excess. In the same vein, this approach could be extended to other pollutants.
Using tracer gas measurements has been found to be effective when fitting a general model, and its applicability to other buildings seems promising. However, a few limitations must be mentioned. Firstly, the use of tracer gas measurements with multizonal buildings needs to be validated further. Secondly, tracer gas measurements must be achieved under representative climates; another experimental campaign would need to be carried out if different average weather conditions were to be considered. The domain of validity of the fitted model is thus the second limitation of this approach.
Conclusions
The tracer gas method was used to obtain air change rate measurements in a 50 m3 naturally ventilated building. Twelve experiments were conducted from October to January under varying wind and thermal conditions, with the airflow rate ranging from 3 to 18 m3.h-1. As these rates are rather low, the experiments lasted from 5 to 30 h. This is long enough to smooth out short-term weather variations and allows us to consider general weather and wind data.
A classical single-node pressure model was used to estimate the global air change rate. General correlations were used to represent both wind and thermal effects. The experimental results were used to numerically fit the wind and thermal coefficient values. The results are 15 % accurate when considering the elapsed time and 23 % when considering the global air change rate. As the uncertainties are related to the use of general pressure coefficients from the literature, better results could probably be obtained using wind-tunnel experiments or CFD simulations. Still, this method should remain valuable because of its simplicity and low cost.
Fig. 1: Scheme of the experimental house with the door on the northern side and ventilation opening on the western side (a) and picture of the ventilation opening (b)
Fig. 2: Outdoor carbon dioxide concentration measured from 22/09/2010 to 20/10/2010
Fig. 3: Scheme of the air tightness test principle (a) and the global airflow test principle (b)
Fig. 4: Difficulties assessing the thermal effect on a widespread area (a) and wind effect at a specific location (b)
Table 4: CP values depending on the wind incidence θ on the walls according to (Sharag-Eldin 2007): 0.69, 0.54, 0.35, 0.08, -0.23, -0.61, -0.88, -0.92, -0.77, -0.58, -0.50, -0.42
Fig. 5: Indoor concentration decay for a few experiments with half-logarithmic scales (a) and temperature difference (b) occurring during experiments no. 2, 4, 6 and 8
Fig. 6: Indoor concentration decay for a few experiments with half-logarithmic scales (a) and wind speed at 10 m (b) occurring during experiments no. 1, 5, 10 and 12
Fig. 7: Comparison of measured tracer gas decay (a) and airflow rate (b) ("scattered" terrain coefficient; hDoor = 0.01 m; hWall = 0.75 m) for experiments no. 1 and no. 2
Fig. 8: Comparison of CP values according to (Sharag-Eldin 2007) and (Hagentoft 2001)
Table 1: Comparison of gases used as tracer gas based on the criteria defined by Cheong (2001)
Criteria | SF6 | N2O | CO2
No hazard | Greenhouse effect | < 300 ppm | < 30,000 ppm
No chemical reactivity | ✓ | ✓ | ✓
Density (kg.m-3) | 5.11 | 1.53 | 1.87
Distinctive from atmospheric gases | ✓ | ✓ | no - [300; 600] ppm
Affordable and available | ✓ | ✓ | Cheapest solution
Table 2: Air tightness measurements obtained with and without the vapour barrier
Test type | Vapour barrier | C (m3.s-1.Pa-n) | n (-) | I4 (m3.h-1.m-2)
Air tightness | No | 5.64 | 0.69 | 0.62
Air tightness | Yes | 2.92 | 0.73 | 0.27
Table 3: Comparison of the air tightness measurements with the global airflow measurement
Test type | Vapour barrier | C (m3.s-1.Pa-n) | n (-) | I4 (m3.h-1.m-2)
Air tightness | Yes | 2.92 | 0.73 | 0.38
Global airflow | Yes | 13.03 | 0.60 | 1.43
Table 5: Terrain coefficients (k; a) used in (9) (Hagentoft 2001)
Terrain coefficient | k | a
Table 6: Tracer gas measurements from October 2010 to January 2011, elapsed time while the indoor concentration decreased from 9,500 ppm to 2,000 ppm and average weather conditions
no. | Date | Elapsed time (h) | Average airflow rate (m3.h-1) | Average wind speed (m.s-1) | Average temp. difference (°C)
1a | 04/10/2010 | 3h15 (1) | 8.4 | 6.6 | 2.9
1b | 04/10/2010 | 24h04 (2) | 2.6 | 1.2 | 7.4
2 | 06/10/2010 | 30h22 | 3.0 | 1.0 | 7.5
3 | 08/10/2010 | 26h39 | 3.4 | 1.3 | 10.5
4 | 11/10/2010 | 26h25 | 3.4 | 1.2 | 10.8
5 | 12/10/2010 | 17h52 | 5.0 | 1.3 | 18.3
6 | 15/10/2010 | 17h13 | 5.2 | 1.5 | 21.2
7 | 03/01/2011 | 14h59 | 6.0 | 2.2 | 28.2
8 | 04/01/2011 | 15h54 | 5.6 | 1.7 | 26.4
9 | 06/01/2011 | 13h48 (3) | 4.6 | 2.9 | 19.8
10 | 07/01/2011 | 4h58 | 18.0 | 5.1 | 10.5
11 | 10/01/2011 | 19h14 | 4.7 | 1.6 | 21.5
12 | 25/01/2011 | 13h39 | 6.6 | 1.8 | 27.4
(1) Based on the indoor CO2 decrease from 9,500 to 5,720 ppm
(2) Based on the indoor CO2 decrease from 5,720 to 2,000 ppm
(3) This experiment was stopped when the indoor CO2 concentration reached 3,100 ppm
Table 7: Error computed on the time duration and the air flow rate for each experimental sequence with the best parameter combination
N° | 1.a | 1.b | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12
Measured Qv (m3.h-1) | 8.4 | 2.6 | 3.0 | 3.4 | 3.4 | 5.0 | 5.2 | 6.0 | 5.6 | 4.6 | 18.0 | 4.7 | 6.6
Error on elapsed time | -54 % | -22 % | -14 % | -7 % | -18 % | -7 % | 1 % | 2 % | 8 % | -31 % | 3 % | 5 % | 1 %
Table 8: Error on the elapsed time using CP values according to (Sharag-Eldin 2007) and to (Hagentoft 2001)
N° | 1.a | 1.b | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12
Measured Qv (m3.h-1) | 8.4 | 2.6 | 3.0 | 3.4 | 3.4 | 5.0 | 5.2 | 6.0 | 5.6 | 4.6 | 18.0 | 4.7 | 6.6
Sharag-Eldin 2007 | 82 % | 27 % | 24 % | 11 % | 24 % | 14 % | 12 % | 25 % | 19 % | 47 % | 15 % | 20 % | 18 %
Hagentoft 2001 | 63 % | 12 % | 19 % | 32 % | 26 % | 19 % | 6 % | 26 % | 16 % | 99 % | 25 % | 15 % | 16 %
Acknowledgements
The authors would like to thank ADEME (the French Environment and Energy Management Agency, OPTI-MOB project, n°0704C0099) for its financial support. |
04104780 | en | [
"shs",
"scco.ling"
] | 2024/03/04 16:41:22 | 2022 | https://hal.science/hal-04104780/file/Chirkova%20%26%20Skalbzang%20Tshering%20Baima%20Verb%20Stem%20Alternations%20HAL.pdf | Katia Chirkova
Skalbzang Tshering
Verb Stem Alternations in Pingwu Baima
Introduction
This paper focuses on verb stem alternations in the Pingwu variety of Baima (ISO-639 bqh), a little-studied Tibetic (Tibeto-Burman) language spoken at the border of Sichuan 四川 and Gansu 甘肃 provinces in the People's Republic of China. Baima has approximately 10,000 speakers, who reside in the counties of Pingwu 平武, Songpan 松潘 / Zung chu, Jiuzhaigou 九寨沟 / Gzi rtsa sde dgu in Sichuan Province and in the counties of Wenxian 文县 and Zhouqu 舟曲 / 'Brug chu in Gansu Province (see H. Sun et al. 2007: 1; Suzuki 2015: 120). The area of the distribution of the Baima language lies at the historical Sino-Tibetan border, which is home to diverse varieties of Tibetic, 1 Qiang (or Rma), 2 Chinese (Mandarin), and Rgyalrong languages. 3 Map 1 shows the area of the distribution of Baima and some of its neighboring languages. (Transcriptions for language names follow conventions in their respective descriptions. Tibetan county names are in Wylie's 1959 transcription.)

Map 1. Distribution of Baima and some of the neighboring languages

Most previous work on Baima has centered on only one variety (that of Pingwu county) with a focus on the disputed status of Baima as either a dialect of Tibetan (or a member of the Tibetic language family) (see Awang Cuocheng & Wang 1988; Zhang 1994a-b, 1997; Huang & Zhang 1995) or an independent Tibeto-Burman language (see H. Sun 1980a-b, 2003; Nishida & H. Sun 1990; H. Sun et al. 2007). Verb stem alternation in Pingwu Baima (hereafter Baima) has been made part of the argument in some more in-depth descriptions. These include, on the one hand, Huang & Zhang (1995) and, on the other hand, Nishida & H. Sun (1990) and H. Sun et al. (2007). Of the latter two studies, H. Sun et al. (2007) is an updated and expanded version of the analysis in Nishida & H. Sun (1990: 230-232). For that reason, H. Sun et al. (2007) is used hereafter as a collective reference to both studies.
Huang & Zhang (1995) and H. Sun et al. (2007) agree in describing verbs in Baima as having up to three different stems associated with TAM categories, specifically:
(1) one stem that bears a non-past function and may denote prospective, factual, habitual, and present progressive meanings (hereafter non-past);
(2) one stem that has past-time reference and is predominantly used in the perfective, terminative, and past imperfective (hereafter past);
(3) an imperative stem. 4At the same time, Huang & Zhang (1997) and H. Sun et al. (2007) differ in their presentation of some basic facts (such as the frequency of alternating verbs in Baima) and in their methodological approaches (involving or not comparison with etymologically related WT forms). Remarkably, this results in diametrically opposed interpretations of data from one and the same variety of Baima.
Huang & Zhang (1995: 107) report that only a few Baima verbs are alternating in their data (a list of 2,343 words). They examine alternating verbs in relation to their Tibetan etymologies to conclude that verb paradigms in Baima are similar to those known from Written Tibetan (hereafter WT). They report only two verbs with three stems in their data ('go' and 'eat'). In verbs with two stems (e.g. 'dig', 'look', 'sell'), the past stem is said to commonly combine the functions of the past and imperative stems in the corresponding WT paradigm. However, other configurations also appear possible, as in 'sit, stay', where one stem is said to combine the functions of the present, future, and past stems in the corresponding WT paradigm, while the other stem is imperative. Based on examples of alternating verbs in their data, Huang & Zhang suggest that the development of the WT patterns of stem formation in Baima is closest to that in Kham Tibetan dialects (ibid., p. 115).
H. Sun et al. (2007: 80-84, 199-201) report a higher frequency of alternating verbs in their data (a list of ca. 3,000 words). In their estimation, approximately one third of 100 randomly selected verbs are alternating. Their analysis distinguishes two types of stem alternation, which they discuss separately, namely: (1) aspectual (with one stem used in the prospective and progressive aspect and the other stem used in the perfect and perfective aspect), and (2) mood (with one base stem and one imperative stem). Table 2 reproduces the original examples in H. Sun et al. (2007: 81-83).
Table 2. Alternating verbs in H. Sun et al. (2007: 81-83)

H. Sun et al. restrict their analysis to synchronic data to conclude that (i) both types of stem alternation are unpredictable (H. Sun et al. 2007: 82-84) and (ii) their underlying morphology is distinct from that in Tibetic languages. The main argument is that imperative verb stem formation in Baima may be variously expressed through (1) onset alternation (as in 'wash (clothes)', 'burn (firewood)' in Table 2), (2) vowel alternation (as in 'weep', 'look'), or (3) a combination of onset and vowel alternation (as in 'patch (a garment)'); whereas in Tibetic languages, the main form of imperative stem formation is o ablaut (H. Sun et al. 2007: 199-200).

6 This verb etymologically reflects the WT verb 'do; make' (see Table B.1 in the appendix).
7 One of the anonymous reviewers suggests that the suppletion pattern in the verb 'come' alone may be enough to prove that Baima is a Tibetic language. That suppletion pattern in Tibetic languages (WT 'ongs -shog) is similar to the suppletive pattern in the adjective 'good' in Germanic languages (e.g. Fulk 2018: 220-221), which is often cited in textbooks on historical linguistics as a way to prove that a language is Germanic (e.g. Hock 1991: 563-564). We note, however, that Baima /uɛ / does not regularly correspond to WT 'ongs (see verb 6 in Table B.1) and, moreover, the etymology of that Baima form is uncertain. For that reason, we prefer to err on the side of caution and do not consider that specific suppletion pattern as diagnostic evidence to prove that Baima is a Tibetic language.
As can be glimpsed from the summaries above, with their limited number of mostly non-overlapping examples, differences in methodological approaches, and contradictory findings, the previous in-depth descriptions of Pingwu Baima provide an at-best incomplete picture of stem alternation in that variety. Differences in methodological approaches also inevitably affect some of the conclusions. Most importantly, that H. Sun et al. do not link Baima verbs to etymologically related WT forms obscures the fact that in verbs with two stems (that is, all verbs in Table 2 but 'eat', 'go', and 'come'), one single stem serves both past (perfect/perfective) and imperative functions and regularly corresponds with the past stem of classical WT paradigms. This can be illustrated with the verb 'wash (clothes)'. The past (perfect/perfective) and imperative stems of that verb are manifestly identical, /tɕy⁵³/. Based on regular correspondences between Baima and WT (as in Zhang 1994a-b), the form /tɕy⁵³/ regularly corresponds with the past stem of the classical paradigm for 'wash', that is, bkrus, rather than with the imperative stem, viz., khrus. That being the case, the observed difference between imperative forms in Baima and Tibetic languages is likely not the result of differences in the underlying morphology of imperative stem formation, but rather follows from the elimination in many Baima alternating verbs of separate imperative stems of classical paradigms.
The present study takes a new look at verb stem alternation in Pingwu Baima by examining a larger corpus of data than the corpora in previous works, and by consistently applying to the corpus one uniform methodological approach-systematically relating alternating verbs to corresponding classical WT paradigms. Our goals are:
(1) to systematically elicit all possible verb stems of high frequency Baima verbs in order to independently assess the relative frequency of alternating verbs in this language;
(2) on that basis, to get a better understanding of the development of the classical patterns of stem formation in Baima.
Our data reveal a relatively high number of alternating verbs in Baima. A systematic comparison of alternating verbs to corresponding classical paradigms confirms that stem alternations in Baima regularly reflect both affixal and ablaut Old Tibetan (hereafter OT) verb morphology. 8 Such a comparison also confirms a clear tendency toward elimination in Baima of a separate imperative stem of classical paradigms, as can already be glimpsed from Huang & [START_REF] Bufan | Baima zhishu wenti yanjiu 白马话支 属问题研究 [A study of the genetic affiliation of Baima[END_REF] and H. Sun et al. (2007) data. Such a tendency has not been commonly reported in modern Tibetic languages. An examination of Baima verb stem alternation in the context of its neighboring Tibetic and non-Tibetic varieties, such as Rgyalrong, where verb-stem differentiation is a highly visible feature (J. Sun 2014b: 633), suggests language contact as a possible trigger for this development.
The remainder of the paper is organized as follows. Section 2 provides information on the data and methodology of this study. Section 3 analyses alternating verbs in relation to their corresponding WT paradigms. Section 4 discusses the development of verb alternation in Baima in the context of the neighboring languages. Section 5 sums up the major findings and rounds off the paper. Appendix A provides an overview of synchronic and historical phonology of Baima. Appendix B provides a complete list of verbs in our study.
Data and methodology
For this study we have selected 250 verbs from the list of the Tibeto-Burman core vocabulary in Dai Qingxia et al., Zang-Mian Yuzu Yuyan Cihui 《藏缅语族语言词汇》 [A Tibeto-Burman lexicon]. The verbs on the list were first recorded in isolation (in citation form) and then as placed in a set of contexts representing major TAM distinctions, modeled on Dahl's (1985) TAM questionnaire, and also including an explicit imperative context. 9 The data were recorded in July 2019 with one male speaker, Mr. Li Degui 李德贵, the main language consultant of the first author. 10 The elicited verbs include 15 (non-alternating) loan verbs from Chinese and 6 periphrastic constructions consisting of a native Baima word followed by the (alternating) auxiliary verb 'do; make' (see Table B.6 in Appendix B). Among the remaining 229 native Baima verbs, 168 (or 73% of all native Baima verbs) are non-alternating (that is, with only one stem), while 61 verbs (or 27% of all native Baima verbs) are alternating. Among the alternating verbs, 6 verbs have three stems (non-past, past, imperative), and 55 verbs have two stems (non-past and past/imperative). In contrast to Huang & Zhang (1995) and more in line with H. Sun et al. (2007), our data hence show a relatively high number of alternating verbs (representing over a quarter of all native Baima verbs). Verbs with two stems, of which one bears a non-past function and the other stem serves both past and imperative functions, constitute the absolute majority (90%) of all alternating verbs. Observed stem alternation strategies include, on the one hand, onset and/or vowel alternation and, on the other hand, more marginally, suppletion. Table 3 details the distribution of these strategies in our verb list.

8 Old Tibetan refers to the phonological system underlying traditional Tibetan orthography, which for the most part can be recovered through a comparison of the modern dialectal reflexes of the orthographic (Written Tibetan) forms.

This and other stories in the corpus do not contain examples of the use of that verb in the imperative. Overall, of the 39 verbs that occur in that story, only two verbs occur in the imperative: 'come' (/ʃuɛ̂/) and 'put, place' (/ʒɑ̂/).
Onset and/or vowel alternation is hence the dominant strategy in verbs with two stems, while suppletion is the dominant stem derivation strategy in verbs with three stems. The citation stem in verbs with three stems is the imperative stem (e.g. /dzʊ ̀/ for the verb 'speak; say', see verb 5 in Table B.1), and it is the past/imperative stem in verbs with two stems (e.g. /tsŷ/ for the verb 'prick, thrust', see verb 29 in Table B.2).
All stems of alternating verbs in our data were supplied with Tibetan etymologies. Given that our list of common verbs overlaps with the list of verbs in Nishida & H. Sun (1990), it was possible in the majority of cases to rely on the Tibetan etymologies proposed by Zhang Jichuan (1994a-b) for that latter word list. Note that the word list in Nishida & H. Sun does not systematically provide different stems of alternating verbs and indicates in most cases only one (citation) stem of a given alternating verb. Accordingly, Zhang (1994a-b) only provides Tibetan etymologies for the listed (citation) forms. Table 4 gives examples of alternating verbs, as elicited in our data, compared to the forms of those verbs in Nishida & H. Sun (1990), and accompanied by the Tibetan etymologies in Zhang (1994a-b). All Tibetan etymologies were double-checked by the second author, a native Tibetan linguist.
In a few cases, new or missing etymologies were provided (e.g. 'bark (dog)', 'be drunk', see below).
As the next step, the Tibetan etymologies from Zhang (1994a-b) were each supplied with a complete list of all attested stems of that verb, as reported in Tibetan dictionaries. The consulted sources include Skal bzang ye shes et al. (1958), Mkhyen rab 'od gsal (1976), Dpal khang lo tsa ba et al. (1991), Hill (2010), Pad ma dbang grags et al. (2015), and Bielmeier et al. (2018). The attested stems were listed next to the alternating verbs, as elicited in our data, for comparison. Consider examples in Table 5. A comparison of the alternating verbs in our data with the corresponding WT paradigms suggests that in the vast majority of cases, Baima alternating verbs can be related to the attested stems of WT verbs through regular sound correspondences between Baima and WT (see Appendix A). Furthermore, such a comparison reveals the following tendencies in the simplification of the originally complex OT verb paradigms in Baima.
First, as already noted in Huang & Zhang (1995), Baima reduces the opposition between the present and the future stems of classical paradigms to a single stem. This is a common development in modern Tibetic languages, where the non-past or imperfective stem generally corresponds with the present stem of Written Tibetan.
In 45 verbs (or 74%) out of the total of 61 alternating verbs in our corpus, all stems can be related to the attested stems of corresponding classical paradigms through regular sound correspondences between Baima and WT. In 13 verbs (or 21% of all alternating verbs), one of the stems shows irregular sound correspondences with the attested stems (onset, rhyme or tone), thereby suggesting an underlying form that is at variance with the attested forms. For example, for one of the verbs in Table 5, the would-be source of the non-past stem is *'bya, whereas the would-be source of the past/imperative stem is *phya, both unattested forms for this verb in WT. Appendix B provides complete lists of verbs in our corpus together with their suggested etymologies. The following section discusses the morphology of verb stem alternation in Baima.
Morphology of verb alternation in Baima
In contrast to the conclusions in H. Sun et al. (2007), our data suggest that the underlying morphology of verb stem alternation in Baima is distinctly that of Old Tibetan.
Recall that classical WT verb paradigms consist of four stems at the most, traditionally labeled present, past, future, and imperative. The four verb stems show a variety of prefixes, suffixes, and ablaut, with sound changes obscuring the system in some cases (see Coblin 1976; Beyer 1992; Hill 2010; Hill 2019: 9-21, 42-44). An overview of Tibetan verb morphology in Hill (2010: xv-xxi) classifies WT alternating verbs into 12 paradigms, as reproduced in Table 6.

13 The prefixes g- and b- disappeared irrespective of voicing of the root initial. Before being elided, they acted as a buffer against devoicing of the following root initials, yielding modern voiced obstruents and sonorants. On the other hand, in clusters with the root initial l-, the preinitials g- and b- have likely been lost early in the history of Baima, as their reflexes are the same as that of a simplex l- (that is, Baima j-). Examples include WT lag pa 'hand', gla ba 'musk deer', yielding both Baima /jɑ̄/; WT (yar) lang 'get up', blangs, the past stem of the verb 'take, hold', yielding both Baima /jō/.
paradigm | present | past | future | imperative
1 '-_ _-s
2 '-_
3 '-_-d b-_
4 _-d b-_-s
5 g-_ _-s
6 b-_
7 '-_-d d-_
8 b-_-s
9 '-_ bX-_-s b-_
10 X-_-s
11 '-_-d bX-_ d-_
12 bX-_-s
(WT dras); to compare, the non-past stem of that verb is /ⁿdʐâ̠/ (WT dra). However, when added to closed rhymes, the suffix -s is generally lost without compensation, as in 'pick, pluck': the non-past stem /ⁿdʢuɛ̂/ (WT 'thog); to compare, the past/imperative stem of that verb is /tuɛ̂/ (WT btogs). 14

Finally, our Baima data provides no consistent evidence for da-drag or a postfinal -d that could follow the codas -n, -r, -l before the standardization of the Tibetan script in the 9th century (e.g. Beyer 1992: 169, 175; Zhang, 'The puzzle of da-drag in Tibetan'). The tones of Baima words which historically ended in -d are for the most part the same as in words which historically did not end in -d. For example, the Baima reflex of the past stem of the verb 'sleep', viz., nyald, is homophonous with the reflex of nyal 'fish', both /ɲɛ̄/. 15

The regular developments above lead to three distinct patterns of stem formation in verbs with two stems, as already identified in H. Sun et al. (2007) (see Table 2) and as also observed in our data (see Table 3).

The marginal productivity of the vowel alternation pattern of stem formation is likely observed in the case of the verb 'ache; be painful' (see verb 11 in Table B.3). This verb is alternating in Baima (its non-past stem is /tsʰâ̠/, and its past stem is /tsʰê/), whereas the

17 We are grateful to the anonymous reviewer for this suggestion.
(V)1 CV(V)1 CV(V)2 2-3 '-_(-d) b-_C-s 7-8 '-_-d b-_C(s) 9 '-_ bX-_C-s 10-12 '-_(-d) bX-_C(-s) 4 _-d b-_-s 5 g-_ b-_-s C1V(V)1 C1V(V)2 6 g-_ b-_
corresponding WT verb is non-alternating (tsha). It can be tentatively proposed that the past stem of this verb is derived from the non-past stem through innovative ablaut (a > e).
Finally, the reasons for the irregular relationship between both stems of the Baima verbs 'open (door)' (verb 22 in Table B.2), 'sweep' (verb 15 in Table B.2), and 'close, shut (door)'
(verb 13 in Table B.3) and the attested stems of classical paradigms are more obscure. These Baima verbs cannot be identified with any known Tibetan roots and may have yet unidentified sources.
Baima verb stem alternation in the context of the neighboring non-Tibetic varieties
As discussed in the previous section, the patterns of stem formation in Baima unmistakably reflect OT verb morphology. At the same time, they evidence one relatively uncommon tendency in the development of the originally complex OT patterns: that of elimination of a separate imperative stem of classical WT paradigms, whereby the past stem comes to serve both past and imperative functions. Such a tendency has not been commonly reported for modern Tibetic languages and it likely represents an innovative development. We further note that this development may not be limited to Baima, but may be shared with its neighboring Tibetic varieties (see Map 1). A preliminary comparison is possible with Zhongu (a Tibetic variety of Songpan county), for which an extensive vocabulary list of 1,500 common lexical items with their suggested Tibetan etymologies is available (see J. Sun 2003). That list yields a total of 35 verbs that overlap with verbs on our Baima list. The majority of those overlapping verbs (20 in total) have two stems that distinguish, similar to Baima, between one non-past stem, which corresponds with the present stem of classical paradigms, and one stem serving both past and imperative functions, which corresponds with the past stem of classical paradigms. Among the remaining 15 verbs, 6 verbs have three stems in both Zhongu and Baima, including a separate imperative stem (see verbs in Table B.1), while 9 verbs have two stems in Baima and one or three stems in Zhongu. The relatively uncommon tendency in Baima and possibly its neighboring Tibetic varieties to eliminate a separate imperative stem of classical paradigms can be further considered in the context of local non-Tibetic languages, some of which, most importantly
Rgyalrong, also have complex patterns of stem alternation.
Rgyalrong verbs have a maximum of three stems, conventionally labeled 1, 2, 3, following J. Sun (2004, 2014b). Stem 1 occurs in non-past contexts; stem 2 occurs in past contexts; stem 3 occurs only in some highly specific transitive contexts constrained by number (singular), tense-aspect (non-past, non-progressive), and direction (direct) (e.g. J. Sun 2014b: 633-634, 2016; Jacques 2021: 493-499). Most Rgyalrong varieties are reported to have only two stems (stem 1 and stem 2), commonly differentiated through (partially productive) ablaut. Put differently, unlike Tibetic languages, Rgyalrong verbs lack a separate imperative stem and commonly distinguish between one stem used in non-past contexts and one stem used in past contexts. In this context, the tendency to eliminate a separate imperative stem in Baima and its neighboring Tibetic varieties can potentially be interpreted as a type of remodeling of the originally complex OT paradigms to better reflect the stem differentiation in the nearby non-Tibetic (Rgyalrong) languages.
At the same time we note that while such an interpretation is conceivable, it stands in contrast with some reported cases of language contact between Tibetic and Rgyalrong languages in the area. Gserpa and Khalong (see Map 1), two Tibetic varieties in close contact with Showu Rgyalrong (J. Sun 2006: 114-116, 2007, 2016), are a case in point. Both Gserpa and Khalong have been argued to evidence a Showu Rgyalrong substratum that manifests itself in numerous lexical loans and, most importantly in the context of the present discussion, also in verbal morphology. Non-past (or imperfective) stems of alternating verbs in these varieties mostly trace back to WT perfective stems, whereas new past (or perfective) stems have been argued to be created from innovative imperfective stems by means of ablaut, which has clear similarities in form and function with the stem-building ablaut in Showu Rgyalrong. Remarkably, while borrowing morphologically-based phonological alternation from the neighboring Rgyalrong variety, both Gserpa and Khalong appear to retain Tibetic-style verbal paradigms with a distinct imperative stem, which in many cases reflects the imperative stem of the corresponding WT paradigm. Consider the following examples. The imperfective stem of the verb 'laugh' in both Gserpa and Khalong corresponds with the perfective stem of the classical paradigm, viz., bgad (Gserpa /vga/, Khalong /vgat/). The perfective stem of this verb, on the other hand, is innovated from the imperfective stem via the Rgyalrong-style ablaut (viz., Gserpa /vge/, Khalong /vget/). Finally, the imperative stem corresponds with the WT imperative stem dgod (Gserpa /rgot/, Khalong /rgot/). Unlike Gserpa and Khalong, both Baima and Zhongu contain very few recognizable loanwords from local non-Tibetic languages (e.g. Huang & Zhang 1995: 116-117; Chirkova 2008; J. Sun 2003: 782-783). Yet they evidence an innovative strategy of verb-stem differentiation which is at variance with commonly observed developments of classical Tibetan patterns in Tibetic languages. More detailed work on local Tibetic and non-Tibetic varieties in the historical multilingual Sino-Tibetan borderlands is clearly needed to shed light on the precise developments in the Baima case. 19

19 One of the anonymous reviewers notes that, although the levelling of verb stem alternations in favor of the etymological past has not often been discussed in the literature, it is by no means rare in Tibetic languages. The reviewer cites Lhasa Tibetan as an example, noting that the most recent sources on that variety, in particular the Bod rgya tshig mdzod chen mo [The Great Tibetan-Chinese Dictionary] (1993), offer numerous examples of the levelling of the inflection of some originally complex paradigms in favor of the etymological past. We note that while such a tendency may not be uncommon, no other reported variety shows it to such an extreme extent as Baima. For that reason, in this particular case we propose to provisionally keep language contact as a possible explanation for the observed developments until more compelling, internal explanations have been found.
Summary and conclusion
In this study we presented a new analysis of verb stem alternation in the Pingwu variety of Baima, based on a small corpus of 250 common verbs. The main findings include:
(1) Baima has a relatively high number of alternating verbs and complex verbal morphology in the sense of stem alternation (affixal, ablaut). In contrast to the conclusion in Huang & Zhang (1995: 115), this is quite unlike Kham Tibetan varieties and more in line with Amdo varieties.
In fact, Baima shares several diagnostic developments with its neighboring Amdo varieties, including the semantic extension of 'chew, gnaw' to 'eat' (see section 3).
(2) In contrast to the conclusions in H. Sun et al. (2007), the morphology of stem alternation in Baima is clearly that of OT, consistently reflecting both regular and irregular OT morphology.
Observed irregularities may be largely ascribed to analogical changes.
(3) One potentially innovative development in Baima is a tendency to eliminate a separate imperative stem of classical WT paradigms. That development is likely shared by Baima with its neighboring Tibetic varieties, such as Zhongu. That development may be tentatively attributed to language contact with local non-Tibetic languages. However, the identity of those languages and related mechanisms of borrowing require further investigation, as they appear to be distinct from those in more transparent cases of language contact between local Tibetic and Rgyalrong varieties (Gserpa, Khalong).
Naturally, further work and additional data from more speakers are required to test the robustness of the present conclusions. There is also a need for a comparative study of various Baima varieties (including also those spoken in Songpan, Jiuzhaigou, Wenxian, and Zhouqu counties) so as to allow for a more comprehensive understanding of the evolution of classical patterns of stem formation in this language. Such a comprehensive study has the potential to further our understanding of less common developments in Pingwu Baima. An added bonus is that such a study may shed further light on the dynamics of language contact in small indigenous communities of the historical Sino-Tibetan border areas, which has indubitably played a significant role in the formation of local Tibetic and non-Tibetic varieties alike.
a by-product of historically breathy voice, followed by tense voice (laryngealized or creaky phonation of the vowel) (see below). The three contrastive tonal categories in Baima likely arose through an overlay of the phonation difference in consonants (historically breathy vs. plain) and the phonation difference in vowels (tense vs. lax). Syllables with tense vowels acquired a high falling, short tone, the relative pitch height of which is predictable on the phonation type of the syllable onset-higher on syllables with non-epiglottalized onsets, and lower on syllables with epiglottalized onsets, and therefore non-contrastive. Syllables with non-tense vowels, on the other hand, acquired a lower, long tone. That lower long tone subsequently split into two contrastive tones, depending on the historical onset type. Syllables with plain onsets yielded a long level tone with modal voice quality (mid tone), whereas syllables with historically breathy and voiceless aspirated onsets yielded a long low falling-rising tone with breathy-like voice quality (low tone) (see Chirkova under review for detailed discussion). As a result of these complex developments, depending on the historical onset type, Baima reflexes of OT rhymes may vary from voice quality (tense vs. modal vs. lax) to mixed voice and vowel quality contrasts (in particular for long vowels from OT coalesced syllables and rhymes with the codas -n, -ng, -m, -l, -s). Table A.4 summarizes common reflexes of OT rhymes.
A slash ("/") separates reflexes after historically plain initials from those after historically breathy initials, in that order. Baima verbs are listed next to their suggested etymologies (attested WT stems).
Ø (or -r) -b(s) -d -g(s) -n -ng(s) -m(s) -l -s a
Uncertain origins are indicated by question marks in the corresponding syllable slot. Question marks beside suggested etymons signal that the etyma in question are tentative. Baima forms that are at variance with the attested stems are highlighted in gray. Corresponding WT slots of such forms include the closest matching form among the attested stems, followed by the wouldbe source of the irregular Baima form, as proposed on the basis of major sound correspondences between Baima and WT. In the tables, verbs are sorted in the order of the Tibetan consonants followed by the order of the vowels.
No. Gloss Baima WT We tentatively propose that this verb may be etymologically related to the WT verb 'block, obstruct'. That is based on the observation that in a number of Kham dialects (such as the Chaya 察雅 / brag gyab variety of chab mdo Tibetan or the Zhongxinrong 中心绒 / gtsang chen rong variety of 'ba' thang Tibetan), a combination of the verb 'block, obstruct' with the noun khyi 'dog' stands for both 'hold back a dog' and 'dog barks' and is used as an equivalent to the standard expression (khyi) zug 'bark (dog)'. We tentatively suggest that a similar semantic development may have taken place in Baima. 22 We tentatively propose that this verb is etymologically related to the verb 'pher or 'phel with the meaning 'raise', 'increase'. 23 The verb 'be drunk' is listed in Nishida & H. Sun (1990: 356) as /tʃʰo⁵³ pʰɛ¹³ nbɔ¹³ ʃɿ¹³/, where /tʃʰo⁵³/ is 'beer' (WT chang) and /nbɔ¹³ ʃɿ¹³/ are verbal enclitics (completive and non-egophoric) (see [START_REF] Chirkova | Evidentials in Pingwu Baima[END_REF]. No etymology for the verb /pʰɛ¹³/ is provided in Zhang (1994a-b). We tentatively propose that this Baima verb may be related to the form ban, as found in some Tibetan dialects. For example, in 'ba' thang Tibetan, ban is used both transitively and intransitively (respectively, 'moisten' and 'be moist'), and its intransitive use is often extended to stand for 'be drunk'. WT does not have the form ban in the meaning 'be moist' or 'moisten', while it is possible that this form is related to the WT verb b(r)an 'saturate with water, moisten' (with the inflectional paradigm 'bran, bran (d), bran, bran(d)/bron, e.g. Hill 2010: 196).
Table A.1. Major trends in the development of Baima consonants. T = voiceless unaspirated obstruent, Tʰ = voiceless aspirated obstruent, D = voiced obstruent, ⁿD = prenasalized stop or affricate, S = plain sonorant. The original complex OT rhyme structure has been drastically simplified in Baima. All original consonantal codas were lost, transforming all closed syllables into open syllables. The disappearance of consonantal codas led to the development of the phonation difference in vowels: tense vs. non-tense or lax, with vowel duration and pitch as co-articulated cues. Tense vowels (with shorter duration and higher pitch) evolved from OT open syllables and syllables with codas -b,-d, -g, 20 whereas lax vowels (with longer duration and lower pitch) evolved from OT coalesced syllables and syllables with codas -m, -n, -ng, -s, -l.
Table 1 reproduces the original examples in Huang & Zhang (1995: 107) together with their suggested etymologies.

Table 1. Alternating verbs in Huang & Zhang (1995: 107)
Gloss non-past past imperative
Baima WT Baima WT Baima WT
go ndʑi⁵³ mchi tɕʰae³⁵ chas ndʑuɐ⁵³ 'gro
eat ndʒa⁵³ 'cha' ndʒø³⁵ 'chos ndʒuɐ⁵³ 'cho~'chos
rob, plunder ndʐuɐ⁵³ 'phrog tʂʰuɐ⁵³ phrogs
dig ko⁵³ rko, brko kø³⁵ brkos, rkos
chase nde⁵³ 'ded te⁵³ ded
look ta⁵³ lta, blta tø³⁵ bltas, ltos
sell ndzu³⁵ 'tshong, btsong tsʊ³⁵ btsongs, tshong
make ndʑu⁵³ 'grub tʂu⁵³ grub
sit; stay dy⁵³ sdod, bsdad, bsdad de³⁵ [sic.] 5 sdod
Table 3. Stem alternation strategies in the verb list in our study
Stem alternation strategy | No. of stems | No. of verbs
onset and/or vowel alternation: onset alternation | two | 29
onset and/or vowel alternation: onset and vowel alternation | two | 6
onset and/or vowel alternation: vowel alternation | three | 2
onset and/or vowel alternation: vowel alternation | two | 19
suppletion | three | 4
suppletion | two | 1
Total | | 61
9 Sample contexts include the following:
(13) [A: When you visited your brother yesterday, what he DO after you had dinner? ANSWER:] He WRITE
letters.
(16) [Q: What your brother DO when we arrive, do you think? (= What activity will he be engaged in?)] He
WRITE letters.
(18) [Q: What your brother usually DO after breakfast? A:] He WRITE letters.
(22) [Q: What are you planning to do right now? A:] I WRITE letters.
(25) [A: My brother works at an office. B: What kind of work he DO?] He WRITE letters.
(137) When I COME home (yesterday), he WRITE two letters (=he finished writing them just before I came).
Imperative: WRITE letters!
10 The resulting paradigms were cross-checked for consistency with different verb stems as they occur in a corpus
of traditional stories recorded with four elderly Baima speakers between 2003 and 2017. (That corpus is in part
available in the Pangloss Collection, see https://pangloss.cnrs.fr/corpus/Baima?lang=en). Our main reason to rely
on elicitation rather than on the analysis of the verb stems as they occur in that corpus is that traditional narratives
tend to rely more heavily on the use of past stems, so that instances of non-past and, in particular, imperative stems
are infrequent. As a result, narratives rarely contain complete verbal paradigms. Hence, due to the lack of
appropriate contexts, for individual verbs it is often impossible to establish whether they are alternating or not.
These points can be illustrated with the verb 'herd' in the story "An orphan and a fox" (see Chirkova 2005; H. Sun
Table 3. Stem alternation strategies in the verb list in our study
et al. 2007: 365-374; the sound file is available at https://pangloss.cnrs.fr/corpus/Baima?lang=en). Of the six
occurrences of the verb 'herd' in that story, three make use of the present stem (/ⁿdzɞ̂/), as in the following example:
(Abbreviations used in interlinear glossing follow the Leipzig Glossing Rules (LGR,
http://www.eva.mpg.de/lingua/resources/glossing-rules.php). Non-standard abbreviations include: EGO =
egophoric.)
ɲə̂=rɛ̂ tsɑ̄ rɐ̂-ŷ ⁿdzɞ̂-ɲə̂ ɕɛ̄ ʃə
person=COM vicinity goat-sheep herd.IPFV-person do.PFV PFV.N-EGO
'[The orphan girl] served as a herding girl to other people.'
The remaining three occurrences make use of the past stem (/ⁿdzʊ ̀/), as in the following example:
ɲə̂-rɛ̂ tsɑ̄ rɐ̂-ŷ ⁿdzʊ ̀ ʃə
person=COM vicinity goat-sheep herd.PFV/IMP PFV.N-EGO
'[She] herded goats and sheep for other people.'
Table 4. Examples of alternating verbs, as elicited in our study, compared to the forms in
Gloss Elicited Nishida & H. Sun Zhang (1994b)
N-PST PST IMP (1990)
speak; say dzɞ̂ dzɛ ̀ dzʊ ̀ dzø³⁵ zlos
prick, thrust ⁿdzŷ tsŷ tsu⁵³ btsugs
split open ŋ ɡa̠ ̂ kē ke³⁵ bkas
open (door) ⁿdʑa̠ ̂ ɕʰa̠ ̂ ɕʰa⁵³ phye
Nishida & H.
[START_REF] Tatsuo | Hakuba yakugo no kenkyū: Hakuba no kōzō to keitō 《白馬譯語の研究: 白馬語の構造と系统》 [A study of the Baima-Chinese vocabulary 'Baima yiyu': the structure and affiliation of the Baima language[END_REF]
, and accompanied by Tibetan etymologies suggested in Zhang
Table 5. Baima alternating verbs next to the attested stems of the corresponding WT verbs. Tibetan etymologies for Baima forms are highlighted in bold. Baima forms that show irregular sound correspondences with the attested stems are highlighted in gray.
Only 8 verbs in our corpus appear to retain a separate imperative stem of WT paradigms. Of these, 6 verbs have three stems, that is, including a distinct imperative stem that corresponds with the imperative stem of the classical paradigm. For example, the imperative stem /dzʊ ̀/ of the verb 'speak; say' in Table5regularly corresponds with the imperative stem zlos of that verb in WT. The remaining 2 verbs, namely 'look' and 'drink', have two stems. The past/imperative stem of the verb 'look' exceptionally corresponds with the imperative stem of the corresponding WT paradigm (Baima /tyʉ̄/, WT ltos), rather than with the past stem (that is, WT bltas), as in most verbs with two stems.12 The imperative stem of the verb 'drink' (/ⁿdù/) is identical with the present stem (also /ⁿdù/) due to regular sound change (the two stems are distinct in WT, respectively, 'thungs and 'thung). The past stem of the verb
Zeisler 2004: 876-888)
.
11
In our corpus, the non-past stem of Baima alternating verbs regularly corresponds with the present stem of classical paradigms. Examples include: 'speak; say': Baima /dzɞ̂/, WT zlo; 'prick, thrust': Baima /ⁿdzŷ/, WT 'dzugs or 'dzug (note that in Baima, the developments in consonantal clusters in the coda position, i.e. -Cs, are identical to those of a simple consonantal coda, see
Table A.4 in Appendix A).
Second, alternating verbs in Baima attest to a consistent elimination of a separate imperative stem of classical paradigms. In the vast majority of alternating verbs with two stems (that is, 53), the originally distinct functions of the past and imperative stems are expressed by a single stem in Baima. That single stem regularly corresponds with the past stem of classical paradigms. Examples include: 'prick, thrust': Baima /tsŷ/, WT btsug or btsugs; 'split open': Baima /kē/, WT bkas.
'drink' (/ⁿdỳ/) does not regularly reflect the attested past stems of that verb ('thungs, btungs),
In spite of the drastic simplification of the original complex OT rhyme structure in Baima (see Tables A.2 and A.4), Baima alternating verbs unmistakably reflect both regular and irregular OT ablaut. Table 7 provides some examples.
Table 6. Regular affixal paradigms in OT verbs (adapted from Hill 2010: xix)
As shown in Table 6, the present stem may have the prefixes '- and g-, and the suffix -d. The past stem has the prefix b- in all but one paradigm (1, "weak verbs") and the suffix -s in most paradigms. The future stem has the prefixes b- and d-, and no suffixes. Finally, the imperative stem has the suffix -s in most paradigms. ("X-" in the past and imperative stems in paradigms 9-12 stands for a devoicing prefix, cf. Beckwith 1996, as in 'block, obstruct': the past stem *bXgag > bkag, the imperative stem *Xgogs > khog.) In addition, some verbs with four stems also show ablaut. The regular patterns include a-a-a-o (as in 'look': lta, bltas, blta, ltos); e-a-a-o (as in 'do': byed, byas, byas, byos); o-a-a-o (as in 'put, place': 'jog, bzhag, gzhag, zhog).
The irregular ablaut pattern, viz., a-o-a-o, is limited to just four verbs, including 'eat' (za, zos, bza', zo) and 'chew, gnaw' ('cha', 'chos, 'cha', 'cho ~ 'chos) (Hill 2014: 622).
Table 7. Examples of regular and irregular OT ablaut in Baima alternating verbs. Tibetan etymologies of Baima forms are highlighted in bold.
As shown in Table 7, Baima 'look' reflects the a-a-a-o ablaut; 'put, place' reflects the o-a-a-o ablaut; 'take out, dredge' reflects the e-a-a-o ablaut; while Baima 'eat', which represents a semantic extension from the OT verb 'chew, gnaw', reflects the irregular ablaut pattern a-o-a-o.
On the other hand, most affixal OT morphology is lost in Baima. Regular sound change in Baima leads to the loss of all verbal prefixes but the prefix '-in the present stem, which is retained in Baima as prenasalization of the root initial (see Table
A
.1).
13
The suffixes -d and -s in the present and past stems are lost, as are all OT consonantal codas in general, transforming all closed syllables into open syllables and generating innovative vowels and diphthongs (see
Table A.4
). In particular, the suffix -s in the past and imperative stems, when added to an open rhyme, leads to the vowel quality and tone change. Examples include: (1) /ɲʉ̄/, the past/imperative stem of the verb 'buy' (corresponding with WT nyos); to compare, the non-past stem of that verb is /ɲɞ̂/ (WT nyo); (2) /tyʉ̄/ the past/imperative stem of the verb 'look' (corresponding with WT ltos); (3) /tʂē/, the past/imperative stem of the verb 'cut with scissors'
A few exceptions can be noted, for instance, verbs 7-8 in TableB.2 (that is, 'grind' and 'weave, knit'). Tone change in the past/imperative stems of these verbs, both /tɑ̄/, rather than the regularly expected form /tɑ̂/ to correspond with WT btags, suggests that -s in the past/imperative forms may be analysed as a distinct suffixal element, which conditioned the mid level tone on that form. Additional examples include verbs 'push down', 'press', 'build, pile up', 'carry on the shoulder' in TableB.5. 15 Two possible exceptions are 'quake (earth)': Baima /sʰa̠ ̂ ŋɡî/, corresponding with WT sa 'guld; and /ʑʊ̂/ 'ride (a horse)', corresponding with WT bzhond (respectively, verbs 20 and 110 in Table B.5).
present past non-past past/imperative
1 '-_ _C-s
NCV
Paradigm OT Baima
) The onset alternation type encompasses those verbs that have, on the one hand, the prefix '-in the present stem and, on the other hand, identical (closed) rhymes in both stems (see verbs in Table
B
.2). In that paradigm, the non-past stem of the verb is prenasalized in Baima, and the past/imperative stem is non-nasalized, while the vowel remains unchanged. (The tones in the two stems may be different depending on the historical onset type, see Table
A
.3.)
(2) The vowel alternation type encompasses those OT paradigms that do not have the prefix '-in the present stem or where the addition of the prefix '-is impossible for phonological reasons (for instance, in verbs with complex onsets, such as 'look': *'-lta > lta) (see verbs in Table
B
.3). Vowel alternation in the two stems results from the addition of the suffix -s to an open rhyme in the past stem or from the original OT ablaut.
(3) The third alternation type, which combines onset and vowel alternation, includes verbs that have the prefix '-in the present stem, OT ablaut and/or an open rhyme in the past stem (see verbs in Table
B
.3).
16
Table
8
schematically presents the related developments.
14 16 The forth possible pattern, that is, C1V(V)1/C1V(V)1, where the OT onsets and rhymes in both stems are identical or become identical through regular sound change, obviously leads to the merger of the two stems, hence resulting in verbs with one stem (or non-alternating verbs). Examples include: /ɲɛ̄/ 'sleep
', cf. WT nyal, nyal(d), nyal, nyol(d)
; /nɑ̄ ɲɛ̄/ 'listen', cf. WT rna (m)nyan, (m)nyan(d), (m)nyan, nyon(d) / mnyand; /sê/ 'kill', cf. WT gsod, bsad, gsad, sod. The fifth possible pattern with two different non-nasalized onsets and/or two different vowels in two stems, that is, C1V(V)1/C2V(V)2, is marginal and irregular, as in as in 'give', Baima non-past stem /ʒə̂/, past/imperative stem /ɕī/, corresponding with
WT sbyin, byin, sbyin, sbyin (see verb 5 in Table B.4
).
Table 8. Regular changes to the OT patterns of stem formation in Baima verbs with two stems. Brackets indicate optional constituents. Table 9 presents some illustrative examples.
Pattern of stem Gloss OT Baima
formation present past N-PST PST/IMP
die 'chi shi n dʒ ʢ ə̂ ʃə̂
NCV(V)1/CV(V)1 sell 'tshong btsongs ⁿdzù tsū
pick, pluck 'thog btogs ⁿd ʢ uɛ̂ tuɛ̂
exchange rje brjes ʒɛ̂ ʒè
C1V1/C1V2 buy nyo nyos ɲɞ̂ ɲʉ̄
take out, dredge 'dren drang ⁿdʐē tʂō
NCV(V)1/CV(V)2 put, place 'jog bzhag n dʒuɛ̂ ʒɑ̂
cut with scissors 'dra dras n dʐa̠ ̂ tʂē
Table 9 .
9 Examples of changes to the regular OT affixal paradigms in Baima verbs with two past/imperative stem. The latter stem, /kē/, regularly corresponds with the WT past stem bkas, whereas the former stem, / ŋ ɡa̠ ̂/, would be the expected outcome if the underlying OT form had been *'ga rather than the attested present stem of this verb, 'gas. It is conceivable that the original WT pattern with the coda -s in both stems (that is, 'gas, bkas) has been reinterpreted in Baima as one, in which the coda -s in the past stem is reanalyzed as the past suffix -s (bka-s), leading to the back formation of the present stem without -s (viz., *'ga). Another example is the verb 'weigh with a steelyard' (see verb 4 in TableB.2) with the o-a-a-o ablaut in the classical paradigm('gyog, bkyags, bkyag, khyog). The non-past form of that verb in Baima, / n dʑɑ̂/, is at variance with the present stem 'gyog. It is conceivable that the non-past stem has been reinterpreted on the basis of the past/imperative stem of this verb, Baima /tɕɑ̂/, corresponding with WT bkyags. An analogical explanation may also account for the loss of the original suffix -d in the present stem of the verbs 'wash', 'divide', and 'plough', reproduced in Table 10.
stems
Among the three main patterns of stem formation that result from regular sound change, the
first (that is, NCV(V)1/CV(V)1) and the second (that is, C1V1/C1V2) are numerically more
common, accounting for 29 and 19 verbs respectively (see Tables B.2-3). The third pattern
(viz., NCV(V)1/CV(V)2) is more marginal, accounting for only 6 verbs (see Table B.4).
The few alternating verbs, in which one of the stems is at variance with the attested WT
stems, are open to interpretation in terms of analogical changes. Consider the following examples. The verb 'split open' (see verb 2 in Table
B
.4) has / ŋ ɡa̠ ̂/ as its non-past stem and /kē/ as its
Table 10. Examples of possible loss of the suffix -d in the present stem. "No." indicates the serial number of the verb in Tables B.3-4 in Appendix B. Baima forms that show irregular sound correspondences with attested WT stems are highlighted in gray. The would-be sources of irregular Baima forms are preceded by an asterisk and provided in the corresponding WT slot, separated from the attested stems by a semicolon.
For example, the paradigm of the verb 'divide' may have been changed by analogy with the high frequency verb 'speak', that is /dzʊ ̀/ 'speak' (IMP) : /dzɞ̂/ 'speak' (N-PST) :: /gʊ ̀/ 'divide' (IMP) : /gɞ̂/ 'divide' (N-PST), that is, replacing the form that the attested present stem bgod would have yielded regularly. 17
Table 11 provides some examples.
Table 11. Examples of alternating verbs in Zhongu together with their Tibetan etymologies in J.[START_REF] Hongkai | Is Baima a dialect or vernacular of Tibetan?[END_REF], compared to corresponding verbs in Baima
We note that in addition to the tendency to eliminate a separate imperative stem of classical
Gloss Zhongu WT Baima
N-PST PST IMP present past imperative N-PST PST IMP
eat < gnaw ⁿtʃʰɐ ⁿtʃʰi ⁿtʃʰo 'cha 'chas 'chos 18 n dʒ ʢ a̠ ̂ n dʒʊ ̀ n dʒ ʢ ɞ̂
stay; sit ⁿdə dɛ di 'dug bsdad sdod n dy dê dyʉ̂
paradigms, Zhongu and Baima also share some other developments. These include the semantic extension from 'chew, gnaw' to 'eat'. Overall, that extension appears to be an areal development in several Tibetic varieties spoken in the neighborhood of Baima, including in addition to Zhongu, also Kun sngon, Chos.rje, and Zhangla or lCang.la (see Hua [START_REF] Kan | Zangyu Songpanhua de yinxi he yuyin de lishi yanbian 藏語松潘話的音系和語音的歷史演變 [Sound system of Songpan Tibetan and its historical development[END_REF], J. Sun 2003: 834, footnote 41).
Baima and Zhongu also share the suppletive paradigm of the verb 'sit; stay', where the past stem bsdad and the imperative stem sdod are suppletive for the verb 'dug. The same suppletive paradigm has also been reported for a few other Amdo varieties of the historical Sino-Tibetan borderlands, including Ndzorge (Sichuan, J. Sun 1993: 974, note 34) and Nangchenpa (Qinghai, Causemann 1989: 93, cited from Zeisler 2004: 564).
Table A.1 summarizes major trends in the development of Baima onsets.
Historical OT OT initial Baima Example
onset type preinitial Gloss WT Baima
breathy oral voiced stop, affricate, fricative D ̤ > D stone rdo dɞ̂
nasal voiceless aspirated stop, affricate ⁿD ̤ > ⁿD ʢ fly (v.) 'phur m b ʢ û
oral, nasal nasal, liquid, approximant S̤ > S ʢ pus rnags n ʢ ɑ̂
plain ∅- voiceless aspirated stop, affricate Tʰ hempen cloth thags tʰɑ̂
or voiceless unaspirated fricative tooth so sʰɞ̂
non-breathy voiceless unaspirated stop, pillar ka ba kɑ̄
affricate
voiced stop, affricate, fricative T poison dug tŷ
oral voiceless unaspirated stop, mark, sign rtags tɑ̂
affricate
nasal voiced stop, affricate ⁿD insect 'bu m bû
∅- nasal, liquid, approximant S forest nags nɑ̂
Table A.2 provides some illustrative examples.
non-tense (lax) snow kha ba kʰɑ ̀
lower pitch codas -m, -n, -ng to be dry skam kō
longer duration bottom part gdan dɛ ̀
lap; bosom pang pō
codas -l, -s to sleep nyal ɲɛ̄
rice 'bras ɳ ɖɽē
Table A.2. Major trends in the development of Baima vowels
Phonation type
Vowel duration OT rhyme Example
F0 Gloss WT Baima
open mouth kha kʰa̠ ̂
tense coda -r four bzhi ʒə̂
shorter duration year lo jɞ̂
higher pitch codas -b, -d, -g needle khab kʰʉ̂
voice; language skad kê
to be hungry ltogs tuɛ̂
coalesced syllables food za ma sō
20 OT rhymes with the coda -r have the same reflexes as syllables with zero coda, for which reason it is likely that -r was lost early in the history of Baima (Huang & Zhang 1995: 99).
Table A.3 summarizes tonal developments in Baima.
Stage 1 Stage 2 Example
OT rhyme Phonation Baima onset Tone Gloss WT Baima
-b, -d, -g (historically breathy) village sde dɛ̂
open (-r) tense epiglottalized lips; beak mchu ⁿdʒ ʢ û
shorter duration (historically plain) H mouth kha kʰa̠ ̂
higher pitch voiceless aspirated tooth so sʰɞ̂
voiceless non-aspirated to dig (IPFV) rko kɞ̂
voiceless non-aspirated M pillar ka ba kɑ̄
-m, -n, -ng, -s, -l lax (historically breathy) bottom part gdan dɛ ̀
coalesced longer duration voiced obstruent, sonorant L to jump mchong ⁿdʒù
syllables lower pitch (historically plain) snow kha ba kʰɑ ̀
voiceless aspirated charcoal sol sʰɛ ̀
Table A.3. Tonal developments in Baima
Baima verbs with their suggested etymologies
This section regroups all verbs in our data. The alternating verbs are organized by the number of stems. Verbs with three stems are listed first (TableB.1), followed by verbs with two stems (Tables B.2-4). Verbs with two stems are further organized by alternation type: onset alternation first (TableB.2), followed by vowel alternation (TableB.3), and by a combination of onset and vowel alternation after that (TableB.4). Non-alternating verbs are listed in TableB.5. TableB.6 lists loan verbs from Chinese and periphrastic constructions with the auxiliary verb 'do; make' in our data.
a̠ ʉ e ɑ ɛ o o iɛ e / ɛ
i ə i i i e e i ə ə / i
u u y, u i, y y e u u ə y
e ɛ e e e e e e e / ɛ e
o ɞ ɞ e (after labials), ʉ uɛ ʊ u o i e (after labials), yʉ / ʊ
Table A.4. Baima reflexes of OT rhymes
Appendix B.
Table B .
B 3. Verbs with two stems differentiated by vowel alternationOne more verb with two stems, 'drink', has identical non-past and imperative forms, both / n dù/, corresponding, respectively to WT 'thung and 'thungs. The past stem of this verb, Baima / n dỳ/ does not regularly correspond with the attested past stem of this verb 'thungs.Finally, one verb with two stems, 'flow (water'), has a suppletive paradigm. The non-past stem of this verb is / n dʑɞ̂/, likely reflecting WT 'gro 'go'. The past stem of this verb is /ɕʰɑ ̀/ (homophonous with the past stem of the verb 'sweep', see B.2.15). It may be etymologically related, albeit in an irregular fashion, to WT phyin, the past stem of the verb gshar 'go successively, again and again, move one after the other, follow in succession'. TableB.5 lists non-alternating verbs in our data. As can be seen from that table, most non-alternating verbs correspond with verbs that are alternating in OT. In the majority of cases, the originally distinct stems merge in Baima due to regular sound change (see footnote 16). In the table, attested stems forms that match sound correspondences between Baima and WT are highlighted in bold. In one case, 'cry, weep' (B.5.37), a non-alternating verb results from an ongoing elimination of stems. That verb is recorded with two stems in H.Sun et al. (2007) (see
No. Gloss Baima WT
N-PST PST/IMP present past imperative
1 dig, scoop kɞ̂ kyʉ̄ rko brkos
2 kick (dɞ̂tɞ̂) dʑɑ̂ dʑʉ̂ (rdogtho) rgyag brgyab(s)
3 mend (ɑ ̀mba̠ ̂) dʑɑ̂ dʑʉ̂ (lhanba) rgyag brgyab(s)
4 divide ɡɞ̂ gʊ ̀ bgod; *bgo bgos
5 exchange ʒɛ̂ ʒè rje brjes
6 buy ɲɞ̂ ɲʉ̄ nyo nyos
7 throw, cast ( n dɐ̂) m bè m bò (mda) 'phen(d) 'phangs
8 cut; mow tʂa̠ ̂ tʂē 'breg/breg; *bra? bregs
9 plough mɞ̂ mē rmod; *rmo rmos
10 jump tsɛ̂ tsē rtse brtses
11 ache, be painful tsʰa̠ ̂ tsʰê tsha ?
12 herd n dz ʢ ɞ̂ n dzʊ ̀ 'tsho 'tshos
13 close, shut (door) 24 zɞ̂ zʊ ̀ bzo? bzos?
14 take, hold jē jō len blangs; *langs
15 look ta̠ ̂ tyʉ̄ lta ltos
16 split firewood sɛ̂ sē gse gses
17 give birth sɞ̂ syʉ̄ gso gsos
18 rest; dry in the sun ʃɞ̂ ʃyʉ̄ sro bsros
No. Gloss Baima WT
N-PST PST/IMP present past
1 wash n dʑ ʢ û tɕȳ 'khru bkrus
2 split open ŋ ɡa̠ ̂ kē 'gas; *'ga bkas
3 cut with scissors n dʐa̠ ̂ tʂē 'dra dras
4 take out; dredge ⁿdʐē tʂō 'dren drang
5 give ʒə̂ ɕī sbyin; *sbyi? 25 byin
6 put; place n dʒuɛ̂ ʒɑ̂ 'jog bzhag
Table B.4. Verbs with two stems differentiated by a combination of onset and vowel alternation
24
In terms of regular sound correspondences, this verb matches the WT verb 'do'.
25
The non-past form of this Baima verb, viz., /ʒə̂/, is homophonous with the numeral 'four', WT bzhi.
Table 2), but it has only one stem in our data (viz., /ŋû/). Table B.5. Non-alternating Baima verbs with their suggested etymologies. Table B.6 lists the 15 non-alternating verbs that are borrowed from Chinese and the 6 periphrastic expressions that consist of a noun or a verb followed by the (alternating) auxiliary verb 'do; make'.
No. Gloss Baima WT
N-PST / PST/ IMP present past imperative
1 borrow (money) ɕī skyi bskyis skyis
2 spit ɕŷ skyug skyugs skyugs
3 vomit ɕŷ skyug skyugs skyugs
4 grow (in height) ɕê skyed bskyed skyed
5 stretch out, extend tɕō rkyong brkyangs rkyongs
6 fear, be afraid ɕa̠ ̂ or tɕa̠ ̂ skrag; *skra skrag(s); *skra skrags; *skra
7 swell up ɕō or tɕō skrang skrangs skrangs
8 stir tɕŷ dkrug dkrugs dkrugs
9 dry up, dry out kō skam bskams, skam skams
10 smear kŷ skud bskus skus
11 apply ointment kŷ skud bskus skus
12 make noodles < boil, cook kî skol bskol skol
13 lose (a battle) < shrink, recoil? kʰô 'khum? 'khums? 'khums?
14 carry on the back kʰû 'khur khur(d), bkurd khur(d)
15 fall ill < be subdued, kʰì 'khul khul khul
overwhelmed?
16 run tɕŷ 'khyus 'khyus, khyus khyus
17 pull, lead kʰî khrid khrid khrid
18 nod (head) ( ŋ ɡɞ̂) ŋuɛ̂ (mgo) 'gug(s)? 26 bkug khug
19 squat kū dgum bkums khum(s)
20 quake (earth) sʰa̠ ̂ ŋgî sa 'gul sa 'gul(d) sa 'gul
21 crack between the teeth kuɛ̂ 'gog bkog gog, khog
(seeds)
22 shell, peel kuɛ̂ 'gog bkog gog, khog
23 laugh ga̠ ̂ dgod, rgod b(r)gad, rgod; dgod, rgod
*b(r)ga
24 cross (e.g. a river) gɛ ̀ rgal brgal rgol
25 straddle kô dʑò (gom) rgyang brgyang-s? rgyangs
26 throw, cast ʑʉ̂ rgyab brgyab rgyob
27 transform, change ɲ dʑỳ 'gyur 'gyur 'gyurd
28 lift, raise (head) ( ŋ ɡɞ̂) ʑa̠ ̂tʃa̠ ̂ 'gyogs/'gyogs (mgo) yar bkyags khyogs
26 We tentatively propose that this verb may reflect WT 'gugs 'to bend, to cause to bend'. This is based on the observation that the expression mgo 'gugs is used in the sense of 'nod head' in several Tibetic varieties spoken in the vicinity of Baima. These include the Zhongxinrong / gtsang chen rong variety of 'ba' thang Tibetan, Thebo, and the Bola 博拉话 / 'bo ra variety of bla brang Tibetan. Table B.6. Loan verbs and periphrastic expressions with the auxiliary verb 'do; make' in our data
Note that similar to Old Tibetan and modern Tibetic languages, prohibitive imperative in Baima is formed by combining the negator /ma̠ / with the imperfective stem of the verb. Examples include: /ma̠ ̂ n dʒ ʢ a̠ ̂/ 'do not eat', /ma̠ ̂ ŋû/ 'do not cry', /ma̠ ̂ tɕa̠ ̂/ 'don't be afraid'.
The imperative form of 'sit; stay' should read /de⁴²/, that is with tone 42, as elsewhere in[START_REF] Bufan | Baima zhishu wenti yanjiu 白马话支 属问题研究 [A study of the genetic affiliation of Baima[END_REF].
Interestingly, some Tibetic varieties spoken in the border areas between Sichuan, Qinghai, and Gansu provinces exceptionally maintain the WT distinction between a separate present stem and a separate future stem. This is the case in Gserpa (J.Sun 2006: 115). Khalong (J.Sun 2007: 327), on the other hand, evidences WT future stems as sources of imperfective stems in some verbs.
We note that the loss of the past stem of that verb and the usurpation of that stem by the imperative stem is likely a recent development in Baima. Nishida & H.[START_REF] Tatsuo | Hakuba yakugo no kenkyū: Hakuba no kōzō to keitō 《白馬譯語の研究: 白馬語の構造と系统》 [A study of the Baima-Chinese vocabulary 'Baima yiyu': the structure and affiliation of the Baima language[END_REF] and H.Sun et al. (2007) record the verb 'look' as having three stems, namely, (i) the non-past stem /ta¹³/⁵³/ (which regularly corresponds with WT lta), (ii) the past stem /tɛ³⁵/ (which regularly corresponds with WT bltas), and (iii) the imperative stem /tø³⁵/ (which regularly corresponds with WT ltos). Conversely, this verb has only two stems in Huang & Zhang's data (see Table1) and in our data, namely, (i) the non-past stem /ta̠ ̂/ (corresponding with WT lta) and (ii) the past/imperative stem /tyʉ̄/ (corresponding with WT ltos).
Note that the major sound correspondences between Zhongu and OT, as described in J.[START_REF] Hongkai | Is Baima a dialect or vernacular of Tibetan?[END_REF], rather suggest 'cho as the underlying form for the imperative stem of this verb, as is also the case in Baima. Specifically, according to J.Sun (2003: 790), OT os corresponds to Zhongu /i/, while OT o corresponds to Zhongu /o/.
This is based on a similar shift from grang 'be cold; become cold' to 'blow (wind)' in the Tibetic varieties neighbouring Baima, including 'bo ra (/lang cca/ is 'the wind blows; to blow (wind)'), Thebo (/cca/ or /tɕa/ 'to blow (wind)'), and Cone (/lʉː H tɕɑː L / 'the wind blows; to blow (wind)').
Appendix A. Baima synchronic and historical phonology
This study adopts the phonological analysis of Pingwu Baima (hereafter Baima), as detailed in [START_REF] Chirkova | Tonal developments in Baima[END_REF] and Chirkova (2021, under review). Compared to the previous phonological analyses of Baima in Nishida & H. [START_REF] Tatsuo | Hakuba yakugo no kenkyū: Hakuba no kōzō to keitō 《白馬譯語の研究: 白馬語の構造と系统》 [A study of the Baima-Chinese vocabulary 'Baima yiyu': the structure and affiliation of the Baima language[END_REF], [START_REF] Bufan | Baima zhishu wenti yanjiu 白马话支 属问题研究 [A study of the genetic affiliation of Baima[END_REF], and H. Sun et al. (2007); [START_REF] Chirkova | Tonal developments in Baima[END_REF] and Chirkova (2021, under review) (i) additionally recognize phonation contrasts in consonants (non-epiglottalized, epiglottalized) and vowels (tense, modal, lax), and (ii) propose a new analysis of the tonal system, whereby contrastive tonal categories in Baima are described as typified not only by fundamental frequency (pitch height, slope), but also by contrastive phonation.
The structure of the Baima syllable is (C)V(V), where C (which is optional) can be any phonemic consonant, and V stands for vowel. The segmental inventory includes:
(i) 57 consonant phonemes, of which 35 are non-epiglottalized (/p pʰ t tʰ k kʰ ts tsʰ tʃ tʃʰ tɕ tɕʰ ʈɽ ʈɽʰ s sʰ ʃ ʃʰ ɕ ɕʰ x m b ⁿd ŋ ɡ ⁿdz ɲ dʑ ɳ ɖɽ m n ɲ ŋ r j l w/) and 22 are epiglottalized (/b d ɡ dz dʒ dʑ ɖɽ z ʒ ʑ ɣ m b ʢ ŋ ɡ ʢ ⁿdz ʢ ⁿdʒ ʢ ɲ dʑ ʢ m ʢ n ʢ ɲ ʢ ŋ ʢ j ʢ l ʢ/ ) (note that in non-nasalized voiced obstruents, epiglottalization is contingent on voicing and for that reason, not marked in transcriptions) (ii) 15 vowel phonemes, including 11 monophthongs (/i e ɛ ʊ ə ʉ u ɞ o a̠ ɑ/) and 4 diphthongs (iɛ yʉ uɛ ua̠ ).
Baima has three contrastive tonal categories: high falling, mid, and level, as in / ŋ ɡuɛ̂/ 'bark (dog)', / ŋ ɡuɛ̄/ 'pan', / ŋ ɡuɛ / 'downstairs'.
The simple (C)V(V) syllable structure in Baima represents considerable reduction and simplification of the complex OT syllable structure, that is, (b)(C)C(M)V(C)(s/d), where C stands for consonant, M for medial and V for vowel. The main developments of Baima onsets are conditioned by the presence or absence of OT prefixes. Unprefixed voiceless stops and affricates remain unchanged and non-epiglottalized, whereas unprefixed voiceless fricatives become aspirated. Unprefixed voiced obstruents become devoiced and non-epiglottalized. The major trend for oral prefixal consonants ( g-, d-, b-, r-, l-, s |
04099889 | en | [
"phys",
"spi"
] | 2024/03/04 16:41:22 | 2022 | https://hal.science/hal-04099889/file/LAVOIE-2022_IBM_MultiStepIceAccretion_LevelSet_JoA.pdf | Pierre Lavoie
email: [email protected]
Emmanuel Radenac
Ghislain Blanchard
Eric Laurendeau
Philippe Villedieu
PhD Candidate
Immersed boundary methodology for multistep ice accretion using a level set
I. Introduction
Numerical tools for the prediction of in-flight ice accretion are typically based on a quasi-steady assumption where modules are called sequentially and solved to steady state within a time-iterative scheme (e.g., LEWICE [START_REF] Wright | User Manual for the NASA Glenn Ice Accretion Code LEWICE Version 2.2.2[END_REF], FENSAP-ICE [START_REF] Beaugendre | Development of a Second Generation In-Flight Icing Simulation Code[END_REF]). The process is illustrated in Fig. 1 where the modules are: (1) a mesh generation tool, (2) a solver for the aerodynamics, (3) a solver to obtain the droplet trajectories and impingement rates, (4) a solver to obtain the wall convective heat transfer (in the boundary layer), [START_REF] Wright | Validation Methods and Results for a Two-Dimensional Ice Accretion Code[END_REF] a solver to perform a heat and mass balance applied to the deposited water to obtain the ice accretion rate and finally (6) a tool to update the geometry based on the ice thickness evolution. Modules [START_REF] Wright | User Manual for the NASA Glenn Ice Accretion Code LEWICE Version 2.2.2[END_REF] to [START_REF] Petrosino | Ice Accretion Model on Multi-Element Airfoil[END_REF] are embedded in a time loop for which the total ice accretion time is divided in time steps (multi-step) generating successive layers of ice (multi-layer). When using Body-Fitted (BF) meshes, a mesh update is required with each new ice layer generated by the multi-step process, and it can be repeated several times in order to obtain the final ice shape prediction. This leads to additional costs related to the mesh update and additional difficulty in updating the ice shape which can exhibit unphysical surface overlaps in concave regions when using a Lagrangian approach (displacement of surface mesh nodes, a method which is usually employed in ice accretion codes [START_REF] Saeed | Modified Canice for Improved Prediction of Airfoil Ice Accretion[END_REF][START_REF] Bourgault-Côté | Multi-Layer Icing Methodologies for Conservative Ice Growth[END_REF]). Tools for the numerical prediction of ice accretion are constantly evolving. Originally, icing tools were limited to 2D simulations and mostly used potential flow solvers (e.g., panel method) coupled with an integral boundary layer code (e.g., LEWICE [START_REF] Wright | User Manual for the NASA Glenn Ice Accretion Code LEWICE Version 2.2.2[END_REF][START_REF] Wright | Validation Methods and Results for a Two-Dimensional Ice Accretion Code[END_REF] and CANICE [START_REF] Saeed | Modified Canice for Improved Prediction of Airfoil Ice Accretion[END_REF]). While these tools offer fast computation times, they are not well suited for configurations involving separated flows and recirculations (e.g., multi-element airfoil [START_REF] Petrosino | Ice Accretion Model on Multi-Element Airfoil[END_REF]). They are also not easy to generalize for 3D applications and more complex configurations (e.g., full aircraft). 
Nowadays, most icing software use either a Euler [START_REF] Trontin | Description and assessment of the new ONERA 2D icing suite IGLOO2D[END_REF] or Reynolds Averaged Navier-Stokes (RANS) flow solver [START_REF] Beaugendre | Development of a Second Generation In-Flight Icing Simulation Code[END_REF][START_REF] Bourgault-Côté | Multi-Layer Icing Methodologies for Conservative Ice Growth[END_REF][START_REF] Gori | PoliMIce: A simulation framework for threedimensional ice accretion[END_REF][START_REF] Radenac | Validation of a 3D ice accretion tool on swept wings of the SUNSET2 program[END_REF][START_REF] Pena | A single step ice accretion model using Level-Set method[END_REF]. When using a Euler flow solver, it must be coupled with a boundary layer code in order to model the viscous effects and retrieve the convective heat transfer at the wall. This task is not trivial for 3D simulations but is the subject of recent research activities at the ONERA [START_REF] Radenac | Use of a Two-Dimensional Finite Volume Integral Boundary-Layer Method for Ice-Accretion Calculations[END_REF], where a PDE-based formulation is developed for the purpose of icing applications. The modeling of surface roughness is also important as it directly influences the convective heat transfer at the wall. The roughness is often modeled as a constant over the entire wall, but new models with spatially varying roughness heights have been developed recently [START_REF] Fortin | Heat and mass transfer during ice accretion on aircraft wings with an improved roughness model[END_REF][START_REF] Mcclain | Ice accretion roughness measurements and modeling[END_REF][START_REF] Han | Surface Roughness and Heat Transfer Improved Predictions for Aircraft Ice-Accretion Modeling[END_REF]. These models can potentially improve the prediction of glaze ice shapes. Concerning the evaluation of the droplet trajectories and impingement rates, Lagrangian methods were originally used and are still common in current icing tools [START_REF] Trontin | Description and assessment of the new ONERA 2D icing suite IGLOO2D[END_REF][START_REF] Gori | PoliMIce: A simulation framework for threedimensional ice accretion[END_REF][START_REF] Bidwell | Icing Analysis of a Swept NACA0012 Wing Using LEWICE3D Version 3.48[END_REF][START_REF] Bellosta | A robust 3D particle tracking solver for in-flight ice accretion using arbitrary precision arithmetic[END_REF]. On the other hand, Eulerian formulations for the droplets have gained interest in the recent years [START_REF] Bourgault-Côté | Multi-Layer Icing Methodologies for Conservative Ice Growth[END_REF][START_REF] Pena | A single step ice accretion model using Level-Set method[END_REF][START_REF] Bourgault | A finite element method study of Eulerian droplets impingement models[END_REF][START_REF] Radenac | IGLOO3D Computations of the Ice Accretion on Swept-Wings of the SUNSET2 Database[END_REF] as they are easy to generalize for complex 3D configurations. Both Lagrangian and Eulerian formulations are also the subject of ongoing research for the modeling of Supercooled Large Droplets (SLD) [START_REF] Honsek | Eulerian Modeling of In-Flight Icing Due to Supercooled Large Droplets[END_REF][START_REF] Trontin | Revisited Model for Supercooled Large Droplet Impact onto a Solid Surface[END_REF] which requires the consideration of additional physics such as droplet breaking, splashing and bouncing. 
The thermodynamic module is responsible for determining the ice accretion rate (or ice thickness) which is then used to update the geometry (the ice shape). The mass and energy balances usually follow the work of Messinger [START_REF] Messinger | Equilibrium Temperature of an Unheated Icing Surface as a Function of Air Speed[END_REF] but the methods have seen some improvements to be more suitable for 3D simulations or to model more physics.
For instance, iterative algebraic methods (e.g., [START_REF] Zhu | 3D Ice Accretion Simulation for Complex Configuration Basing on Improved Messinger Model[END_REF]) and PDE-based methods (e.g., Shallow Water Icing Model (SWIM) [START_REF] Bourgault | Development of a Shallow-Water Icing Model in FENSAP-ICE[END_REF]) can be used for 3D simulations. Some methods also include more physics such as the conductive heat transfer through the ice thickness (e.g., [START_REF] Myers | Extension to the Messinger Model for Aircraft Icing[END_REF][START_REF] Gori | Local Solution to the Unsteady Stefan Problem for In-Flight Ice Accretion Modeling[END_REF]). These methods assume the ice is continuous, but alternative approaches are also developed where the ice is made of discrete elements and the ice evolution is based on probabilistic arguments.
For instance, the morphogenetic model of [START_REF] Szilder | Comparing Experimental Ice Accretions on a Swept Wing with 3D Morphogenetic Simulations[END_REF] follows this approach and provides an interesting step forward in the prediction of ice scallops and other complex ice shapes specific to 3D simulations. Although significant improvements have been made over the years, some ice shapes are still very difficult to predict even to this day (e.g., glaze ice on swept wings [START_REF] Fujiwara | Comparison of Computational and Experimental Ice Accretions of Large Swept Wings[END_REF]). Thus, the numerical prediction of ice accretion still remains a very active field of research. In addition to improving individual modules constituting the ice accretion process, research must also address the automation and robustness of the framework as a whole. The robustness of ice accretion tools is often limited by the difficulty of generating meshes on complex ice shapes and also by the geometry update which can exhibit overlaps in concave regions if not treated properly. Instead of performing a surface evolution and mesh deformation (e.g., [START_REF] Tong | Three-Dimensional Surface Evolution and Mesh Deformation for Aircraft Icing Applications[END_REF]), another type of strategy is investigated : the use of an Immersed Boundary Method (IBM) combined with a level-set method. This methodology has the potential to alleviate the issues related to the mesh and geometry update. The objective of this paper is thus twofold: confirm the potential of this methodology and assess its accuracy against the usual body-fitted approach.
Immersed Boundary Methods have been applied with success in many fields but have rare applications to the prediction of ice accretion. The CIRA used a discrete IBM to solve compressible inviscid flows on 3D Cartesian grids in [START_REF] Capizzano | A Compressible Flow Simulation System Based on Cartesian Grids with Anisotropic Refinements[END_REF] which was later extended for compressible viscous flows [START_REF] Capizzano | Turbulent Wall Model for Immersed Boundary Methods[END_REF][START_REF] Capizzano | Coupling a Wall Diffusion Model with an Immersed Boundary Technique[END_REF]. A discrete IBM was also applied to an Eulerian droplet solver [START_REF] Capizzano | A Eulerian Method for Water Droplet Impingement by Means of an Immersed Boundary Technique[END_REF] with the intent of performing ice accretion simulations. However, no ice prediction results using these IBMs have been published yet.
Another research team from the University of Strasbourg applied IBMs to a 3D ice accretion code (NSMB-ICE) with a level-set approach [START_REF] Al-Kebsi | Multi-Step Ice Accretion Simulation Using the Level-Set Method[END_REF]. The compressible viscous flow solver employs a penalty method (an IBM) and the Eulerian droplet solver uses a discrete approach similar to the one of [START_REF] Capizzano | A Eulerian Method for Water Droplet Impingement by Means of an Immersed Boundary Technique[END_REF]. The level-set approach initially proposed in [START_REF] Pena | A single step ice accretion model using Level-Set method[END_REF] is used to update the iced geometry, but is applied to multi-step ice accretion (up to 5 steps on a single test case). According to the authors, the implementation is currently limited to laminar flow and rime ice. Furthermore, no comparison is made against a more classical body-fitted approach nor experimental results. The idea of using a level-set for the prediction of ice accretion is also reused in [START_REF] Bourgault-Côté | Multilayer Airfoil Ice Accretion Simulations Using a Level-Set Method with B-Spline Representation[END_REF] where multi-step simulations are performed in 2D combined with the use of NURBS. An explicit tracking of the air/ice interface is also performed in order to enforce mass conservation.
Our initial attempt at applying IBMs to an ice accretion suite is presented in [START_REF] Lavoie | A Penalization Method for 2D Ice Accretion Simulations[END_REF], where a penalization method is applied to the aerodynamic (Euler equations) and droplet solvers (Eulerian formulation). The 1 st step of the multi-step process is performed using a Body-Fitted mesh while for the subsequent steps, the ice shape is immersed on the initial mesh. A geometric approach was used to evaluate the signed distance field required by the penalization method. Furthermore, a Lagrangian node displacement approach was used to update the geometry. This paper extends the contribution of [START_REF] Lavoie | A Penalization Method for 2D Ice Accretion Simulations[END_REF] with two key features. First, an improved penalization method suitable for ice horn accretion is applied to the Euler equations [START_REF] Lavoie | An Improved Characteristic Based Volume Penalization Method for the Euler Equations Towards Icing Applications[END_REF]. Second, a level-set approach [START_REF] Pena | A single step ice accretion model using Level-Set method[END_REF] is implemented in the IBM multi-step ice accretion process to solve the issues related to unphysical geometry update, replacing the Lagrangian geometry update.
The paper first presents the ice accretion suite used as the development platform, IGLOO2D [START_REF] Trontin | Description and assessment of the new ONERA 2D icing suite IGLOO2D[END_REF]. Then, implementation details for the penalization and level-set methods are covered: preprocessing, penalization of the Euler equations, penalization of the droplet solver, extraction of the surface data and implementation of the level-set method. A section discusses the benefits of using the level-set method in the IBM ice accretion framework on a manufactured case. Then, rime and glaze ice cases from the AIAA Ice Prediction Workshop (IPW) [START_REF] Broeren | 1st AIAA Ice Prediction Workshop[END_REF] are used for verification where the Body-Fitted and penalized solutions are compared using the multi-step process. Additional verifications are performed for the ice accretion around a NACA0012 airfoil. Finally, ice accretion simulations are performed on a three-element airfoil before conclusions are drawn.
II. Methodology
The 2D ice accretion suite IGLOO2D [START_REF] Trontin | Description and assessment of the new ONERA 2D icing suite IGLOO2D[END_REF] is used as the development environment. In IGLOO2D, different types of solvers are available for each module but only the ones used in this paper are discussed. The unstructured mesh generation is handled by GMSH [START_REF] Geuzaine | Gmsh: A 3-D finite element mesh generator with built-in pre-and post-processing facilities[END_REF]. The aerodynamic field is evaluated using the Euler equations and the convective heat transfer is evaluated using a simplified integral boundary layer method (SIM2D) [START_REF] Trontin | Description and assessment of the new ONERA 2D icing suite IGLOO2D[END_REF]. For the droplet trajectories and impingement evaluation, the Eulerian solver is selected. The ice accretion solver is based on a Messinger-type mass and energy balance to obtain the ice thickness. Finally, the ice geometry is generated by a Lagrangian displacement of the surface nodes. In this paper, a level-set method is also integrated in the ice accretion framework which corresponds to an Eulerian displacement of the geometry.
The modules can be classified either as volume or surface solvers. The aerodynamics (EULER2D) and the Eulerian droplet trajectories (TRAJE2D) are solved on 2D volume meshes. On the other hand, the simplified integral boundary layer method (SIM2D) and the ice accretion (MESSINGER2D) are solved on 1D surface grids.
For the application of the IBM, the suggested approach is to start the multi-step ice accretion process from a standard BF mesh, thus keeping the original BF solution for the 1 st ice layer (as well as for the clean areas of the surface for the following steps). Usually, the BF mesh is updated to match the new ice geometry for each subsequent step. With our IBM, the volume mesh update is avoided and a penalization method is applied to the volume solvers (airflow and droplets trajectory) to impose the correct boundary conditions on the Immersed Boundary (IB) which arbitrarily cuts through the mesh. The surface mesh representing the ice shape (the IB) is however re-meshed in order to retain an adequate discretization of the ice shape (e.g. more refined near high-curvature features). The use of the penalization method requires some modifications to the ice accretion suite : the addition of a preprocessing step, the modification of the volume solvers and the extraction of surface data, as highlighted in red in Fig. 2. These modifications are discussed in the following sections along with the integration of the level-set method in the multi-step process.
A. Immersed Boundary Pre-Processing
In this paper, both an explicit and implicit definition of the IB are required. The multi-step ice accretion process starts from a BF volume mesh, thus initially providing a surface mesh which represents the solid-air interface. It can be interpreted as an Immersed Boundary which correspond to the BF surface for the first step. This explicit definition must be conserved throughout the multi-step ice accretion process in order to use the surface solvers (i.e., ice accretion, boundary layer). On the other hand, the penalization methods (a type of IBM) implemented in IGLOO2D use a signed distance field (implicit definition) to obtain information about the interface at any point in the volume mesh.
The IB preprocessor evaluates the signed distance field (φ) by first detecting the inside (solid) and outside (fluid) cells. Knowing the list of edges defining a closed IB, a ray casting algorithm can be used for this matter [START_REF] Schneider | Chapter 13 -computational geometry topics[END_REF]. Once this information is known, it can be used to determine the sign of the signed distance field where φ > 0 in the fluid and φ < 0 in the solid. The distance is evaluated by taking advantage of the available explicit definition of the interface.
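For illustration, the inside/outside detection can be sketched with a standard even-odd ray-casting test. The following Python fragment is only a schematic 2D version (brute force, degenerate ray/vertex hits ignored) and not the IGLOO2D implementation; the edge list and cell-center array are assumed to come from the immersed surface mesh and the volume mesh, respectively.

```python
import numpy as np

def is_inside(point, segments):
    """Even-odd ray casting: count crossings of a horizontal ray cast
    from `point` towards +x against the closed polygon `segments`
    (list of ((x1, y1), (x2, y2)) edges). An odd count means solid."""
    x, y = point
    crossings = 0
    for (x1, y1), (x2, y2) in segments:
        # Does the edge straddle the horizontal line through the point?
        if (y1 > y) != (y2 > y):
            t = (y - y1) / (y2 - y1)          # intersection parameter on the edge
            x_cross = x1 + t * (x2 - x1)
            if x_cross > x:                   # intersection lies on the +x ray
                crossings += 1
    return crossings % 2 == 1

# Sign convention of the paper: phi > 0 in the fluid, phi < 0 in the solid.
def sign_of_phi(cell_centers, segments):
    return np.array([-1.0 if is_inside(p, segments) else 1.0
                     for p in cell_centers])
```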
For each cell center, a geometric approach determines the minimum projected distance to the list of edges (or faces in 3D) defining the IB. It comes back to evaluating the minimum distance between a point and a segment in 2D [START_REF] Schneider | Chapter 13 -computational geometry topics[END_REF] or between a point and a triangle in 3D. Then, the normals to the IB (n φ ) and its curvature (κ) can be evaluated from the signed distance field (φ) as:
n_\phi = -\frac{\nabla\phi}{\|\nabla\phi\|} \quad (1)
\kappa = \nabla \cdot n_\phi \quad (2)
Notice that n φ is defined to point towards the solid, contrary to the usual definition, which is useful in the implementation of the penalization methods. An example of signed distance field around a clean NACA23012 airfoil is illustrated in Fig. 3 along with the normals to the wall (n φ ). Here, the signed distance field is strictly positive because it is evaluated on a BF mesh where the contour φ = 0 is the surface of the airfoil.
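A corresponding sketch of the geometric distance evaluation is given below (again a brute-force illustration without any spatial search structure; the helper names are hypothetical). The normals n_φ and the curvature κ of Eqs. (1)-(2) would then be obtained by applying the solver's own gradient and divergence operators to φ, which is not shown here.

```python
import numpy as np

def point_segment_distance(p, a, b):
    """Minimum distance between point p and segment [a, b] in 2D."""
    p, a, b = map(np.asarray, (p, a, b))
    ab = b - a
    denom = float(np.dot(ab, ab))
    t = 0.0 if denom == 0.0 else float(np.clip(np.dot(p - a, ab) / denom, 0.0, 1.0))
    return float(np.linalg.norm(p - (a + t * ab)))

def signed_distance(cell_centers, segments, signs):
    """phi at each cell center: sign (+1 fluid / -1 solid, from the ray-casting
    step) times the minimum projected distance to the immersed-boundary edges."""
    phi = np.empty(len(cell_centers))
    for i, p in enumerate(cell_centers):
        d = min(point_segment_distance(p, a, b) for a, b in segments)
        phi[i] = signs[i] * d
    return phi
```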
Fig. 3 Signed distance contours (φ) and surface normals (n φ ) for a clean NACA23012 airfoil
Although the volume mesh update is avoided when using the IBM, the IB is re-meshed at the pre-processing phase for the 2 nd ice accretion step and further. This is possible because the IB discretization (surface mesh) is independent of the volume mesh. The surface re-meshing is done using GMSH where a B-spline is fitted through the discrete list of nodes defining the ice shape. The nodes are then redistributed according to a user-specified characteristic mesh size.
This provides a surface mesh discretization which is very close to what is obtained with the BF ice accretion process.
B. Volume Penalization Method
With a volume penalization method, the boundary conditions are applied by the addition of source terms in the continuous form of the governing equations in order to enforce the desired condition at the IB. The source terms are turned on if the computational volume (a cell) is located inside the solid zone and turned off if in the fluid zone. Hence the governing equations are solved as usual in the fluid but penalized in the solid. The source terms are turned on and off using a mask function (χ) which takes the form of a sharp Heaviside function (Eq. ( 3)).
\chi = \begin{cases} 0, & \phi \ge 0 \ \text{(fluid)} \\ 1, & \phi < 0 \ \text{(solid)} \end{cases} \quad (3)
For the aerodynamics, the penalization of the Euler equations is performed using the CBVP-Hs method of [START_REF] Lavoie | An Improved Characteristic Based Volume Penalization Method for the Euler Equations Towards Icing Applications[END_REF]. This approach enforces the no-penetration velocity (slip wall, Eq. ( 4)) and uses the normal momentum relation to account
for the wall curvature in the pressure extrapolation (Eq. ( 5)). The conservation of total enthalpy (Eq. ( 6)) and entropy (Eq. ( 7)) are also enforced across the IB to close the system.
(v • n φ )n φ = 0 (4)
n φ • ∇P = κρ||v|| 2 (5)
n φ • ∇H = 0 (6)
n φ • ∇s = 0 (7)
This method was found to provide better flow attachment and a better conservation of entropy and total enthalpy for ice shapes exhibiting high curvature such as ice horns [START_REF] Lavoie | An Improved Characteristic Based Volume Penalization Method for the Euler Equations Towards Icing Applications[END_REF]. The set of penalized Euler equations is given by Eq. ( 8) where the penalization terms enforcing Eqs. ( 4)-( 7) are gathered on the right-hand side (RHS).
\begin{aligned}
\frac{\partial\rho}{\partial t} + (1-\chi)\,\nabla\cdot(\rho v) &= -\frac{\chi}{\eta_c}\left[n_\phi\cdot\nabla\rho - \kappa\,\frac{\rho^2}{\gamma P}\,\|v\|^2\right] \\
\frac{\partial(\rho v)}{\partial t} + (1-\chi)\,\nabla\cdot(\rho v\otimes v + P I) &= -\frac{\chi}{\eta_c}\left[n_\phi\cdot\nabla(\rho v) + \kappa\,\rho v\left(1-\frac{\rho}{\gamma P}\,\|v\|^2\right)\right] - \frac{\chi}{\eta}\,\rho\,(v\cdot n_\phi)\,n_\phi \\
\frac{\partial(\rho E)}{\partial t} + (1-\chi)\,\nabla\cdot\big((\rho E + P)\,v\big) &= -\frac{\chi}{\eta_c}\,\rho\, n_\phi\cdot\nabla H - \frac{\chi}{\eta}\,\rho\,(v\cdot n_\phi)^2
\end{aligned} \quad (8)
In Eq. ( 8), v is the air velocity, I is the identity tensor, E is the total energy, η and 1/η c are penalization parameters that can be respectively interpreted as a characteristic time and a characteristic velocity. For more details about the derivation of Eq. ( 8), the interested reader is invited to read reference [START_REF] Lavoie | An Improved Characteristic Based Volume Penalization Method for the Euler Equations Towards Icing Applications[END_REF].
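To make the structure of Eq. (8) more concrete, the sketch below assembles the semi-discrete right-hand side of the penalized continuity equation (first line of Eq. (8)) for a single cell. It is purely illustrative: the divergence and gradient values are assumed to be supplied by the finite-volume operators of the flow solver, and the implicit treatment of the stiff terms discussed later is not shown.

```python
import numpy as np

def continuity_rhs(chi, eta_c, kappa, gamma,
                   div_rho_v, grad_rho, n_phi, rho, p, v):
    """Semi-discrete RHS of the penalized continuity equation for one cell.
    chi = 0 gives the plain Euler residual, chi = 1 the penalization source.
    div_rho_v and grad_rho are assumed to come from the FV operators."""
    physical = -div_rho_v                              # -(div of rho*v)
    v2 = float(np.dot(v, v))                           # ||v||^2
    penalty = -(1.0 / eta_c) * (np.dot(n_phi, grad_rho)
                                - kappa * rho**2 / (gamma * p) * v2)
    return (1.0 - chi) * physical + chi * penalty
```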
For the Eulerian droplet equations, the penalization method of [START_REF] Lavoie | A Penalization Method for Eulerian Droplet Impingement Simulations towards Icing Applications[END_REF] is used. When droplets impinge the body
(v d • n φ > 0)
, no penalization is applied and the physical equations are solved in the solid zone (φ < 0). The droplets are thus allowed to cross the IB and enter the body. However, when the droplets enter the computational domain from the solid zone (v d • n φ ≤ 0) a boundary condition is applied on the primitive variables (Eq. ( 9)), enforcing a null flux and avoiding re-injection of the droplets.
\alpha = 0, \quad v_d = 0 \quad \text{if } v_d\cdot n_\phi \le 0 \quad (9)
To translate this behavior to the droplet equations using penalization terms, the usual mask function (χ, Eq. ( 3)) is multiplied by a droplet mask function (χ d , Eq. ( 10)), ensuring the penalization term is only active in the solid zone if the droplets are reinjected in the fluid zone.
\chi_d = \begin{cases} 0, & \alpha\, v_d\cdot n_\phi \ge 0 \ \text{(impingement)} \\ 1, & \alpha\, v_d\cdot n_\phi < 0 \ \text{(re-injection)} \end{cases} \quad (10)
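For illustration, the combined mask χ·χ_d of Eqs. (3) and (10) can be evaluated cell by cell as in the following sketch (per-cell quantities assumed; this is not the actual solver code):

```python
import numpy as np

def droplet_mask(phi, alpha, v_d, n_phi):
    """chi * chi_d for one cell: active only in the solid zone (phi < 0) and
    only when the droplets would be re-injected into the fluid
    (alpha * v_d . n_phi < 0); impinging droplets are left untouched."""
    chi = 1.0 if phi < 0.0 else 0.0                              # Eq. (3)
    chi_d = 1.0 if alpha * float(np.dot(v_d, n_phi)) < 0.0 else 0.0   # Eq. (10)
    return chi * chi_d
```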
The set of penalized droplet equations is given by Eq. ( 11), where the influence of gravity is neglected and the penalization terms are highlighted in red.
\begin{aligned}
\frac{\partial\alpha}{\partial t} + \nabla\cdot(\alpha v_d) &= -\frac{\chi\chi_d}{\eta}\,\alpha \\
\frac{\partial(\alpha v_d)}{\partial t} + \nabla\cdot(\alpha v_d\otimes v_d) &= \frac{C_D\,\mathrm{Re}_d}{24\,\mathrm{Stk}}\,\alpha\,(v_a - v_d) - 2\,\frac{\chi\chi_d}{\eta}\,\alpha v_d
\end{aligned} \quad (11)
In Eq. ( 11), α is the non-dimensional volume fraction of water, v d is the non-dimensional droplets velocity, v a is the non-dimensional air velocity and C D is the droplets drag coefficient. The droplets Reynolds number (Re d ) and the Stokes number (Stk ) are:
\mathrm{Re}_d = \frac{\rho_a\,\|v_a - v_d\|\,D_d}{\mu} \quad (12)
\mathrm{Stk} = \frac{\rho_d\, D_d^2\, U_\infty}{18\, L\, \mu} \quad (13)
where D d is the droplet diameter, µ the dynamic viscosity of air and L a characteristic dimension (e.g., the chord length for an airfoil). The drag model of Schiller and Naumann [START_REF] Schiller | A drag coefficient correlation[END_REF] is used for the droplets which are assumed to remain spherical:
C_D = \begin{cases} \frac{24}{\mathrm{Re}_d}\left(1 + 0.15\,\mathrm{Re}_d^{0.687}\right), & \mathrm{Re}_d \le 1000 \\ 0.4, & \mathrm{Re}_d > 1000 \end{cases} \quad (14)
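The drag law of Eq. (14) translates directly into a small helper function, for example (the zero-Reynolds guard is an implementation convenience, not part of the model):

```python
def drag_coefficient(re_d):
    """Schiller-Naumann drag coefficient for a spherical droplet, Eq. (14)."""
    if re_d <= 0.0:
        return 0.0   # guard: no relative motion, the drag force vanishes anyway
    if re_d <= 1000.0:
        return 24.0 / re_d * (1.0 + 0.15 * re_d**0.687)
    return 0.4
```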
The penalization parameters must be small (η ≪ 1, η_c ≪ 1) to accurately enforce the boundary conditions, which leads to a stiff system of equations. The penalization terms are thus treated implicitly when solving the system of equations for both the aerodynamics and droplet trajectories.
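As an illustration of this implicit treatment, the penalty part of Eq. (11) can be integrated with a backward Euler step once the transport and drag contributions have been applied; the following operator-split sketch is schematic and not the actual IGLOO2D time integration:

```python
def apply_droplet_penalty_implicit(alpha_star, alpha_v_star, mask, dt, eta):
    """Implicit (backward Euler) update of the penalty terms of Eq. (11),
    applied after the transport/drag part of the step.
    mask = chi * chi_d; alpha_star and alpha_v_star are the provisional
    values of alpha and alpha * v_d for one cell."""
    # d(alpha)/dt = -(mask/eta) * alpha
    alpha_new = alpha_star / (1.0 + dt * mask / eta)
    # d(alpha v_d)/dt = -2 (mask/eta) * alpha v_d
    alpha_v_new = alpha_v_star / (1.0 + 2.0 * dt * mask / eta)
    return alpha_new, alpha_v_new
```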
C. Surface Data Extraction
Relevant surface information from the volume solvers (aerodynamics and droplet trajectory) must be communicated to the surface solvers (boundary layer, ice accretion) at each step of the multi-step loop (e.g., pressure, velocity, droplet impingement rate). However, the penalization method does not explicitly provide the data on the Immersed Boundary (IB). Instead, the variables are known in the surrounding cells and an additional extraction step is thus required to recover the surface data.
In this paper, the data is interpolated at the node defining the IB (the surface points) using a weighted least square approach. The nearest cell to the interpolation point is first detected. Then, all the cells sharing a node with the identified cell are flagged as neighbors and included in the interpolation stencil. The penalization methods used in this paper are designed to fill the solid cell with valid data. The solid cells are included in the interpolation stencil, hence the need for methods ensuring a controlled continuity of the solution across the solid-fluid interface, as described in [START_REF] Lavoie | An Improved Characteristic Based Volume Penalization Method for the Euler Equations Towards Icing Applications[END_REF][START_REF] Lavoie | A Penalization Method for Eulerian Droplet Impingement Simulations towards Icing Applications[END_REF].
The interpolation uses an inverse distance weight with a smoothing parameter to avoid dividing by zero when the interpolation point and stencil points are too close. The weight between a cell center J (part of the stencil) and the interpolation point P is evaluated as:
w_J = \frac{1}{\|r_{PJ}\|^2 + \epsilon^2} \quad (15)
where ||r_PJ|| is the distance between P and J. The smoothing parameter is selected as ε = 0.5∆x_J, with ∆x_J the characteristic size of cell J.
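A minimal sketch of this weighted extraction for one surface point is given below; the stencil is assumed to have been gathered beforehand from the nearest cell and its node-sharing neighbors, as described above.

```python
import numpy as np

def extract_at_surface_point(p, stencil_centers, stencil_values, cell_sizes):
    """Inverse-distance-weighted interpolation (Eq. 15) of a cell-centered
    field at the immersed-boundary point p.
    stencil_centers: (n, 2) array of cell centers, stencil_values: (n,) field
    values, cell_sizes: (n,) characteristic sizes for the smoothing parameter."""
    r = np.linalg.norm(np.asarray(stencil_centers) - np.asarray(p), axis=1)
    eps = 0.5 * np.asarray(cell_sizes)          # epsilon = 0.5 * dx_J
    w = 1.0 / (r**2 + eps**2)                   # Eq. (15)
    return float(np.sum(w * stencil_values) / np.sum(w))
```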
D. Geometry Update via the Level-Set method
A Lagrangian approach can be used to update the geometry according to the normals to the wall (n, pointing towards the fluid) and the ice thickness (h ice ) provided by the ice accretion solver. A simple node update can be performed as:
x new = x old + h ice n (16)
where x new and x old are respectively the new and old node locations. This type of approach does not naturally handle the overlaps that can occur near concave region and requires methods for collision detection and front merging to obtain a usable surface mesh. A simple fix can be implemented in 2D as described in [START_REF] Ruff | Users Manual for the NASA Lewis Ice Accretion Prediction Code (LEWICE)[END_REF]. However, it does not directly translate to a 3D implementation which involves more complex geometric operations on a 2D surface mesh.
Alternatively, the level-set method can be used to update the geometry. This was done for instance in [START_REF] Pena | A single step ice accretion model using Level-Set method[END_REF] where the level-set equation (Eq. ( 17), [START_REF] Osher | Level Set Methods and Dynamic Implicit Surfaces[END_REF]) is used with an icing velocity field (V ice ) and solved on the volume mesh.
∂φ ∂t + V ice • ∇φ = 0 (17)
This approach has the benefit of being valid for both 2D and 3D simulations. It also naturally handles the issues related to the geometry update such as geometry overlaps. Here, the level-set method reuses the signed distance field (φ) computed at the IB pre-processing step. The interface (IB or BF) is represented by the contour φ = 0 and is advanced in time (Eq. ( 17)) to generate the ice shape, following the icing velocity vector field V ice . In this paper, the level-set is discretized using a 2 nd order scheme in time (Heun's method) and space (upwind with MUSCL extrapolation). The following sections describe a method to retrieve the icing velocity field and discuss the need for a re-initialization step in the advection of the level-set.
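For illustration, the advection step can be sketched on a uniform Cartesian grid with first-order upwind gradients and Heun time integration as follows; this is a simplified stand-in for the unstructured, MUSCL-based discretization actually used (periodic boundaries are assumed for brevity):

```python
import numpy as np

def upwind_advection(phi, vx, vy, dx):
    """First-order upwind evaluation of -(V . grad(phi)) on a uniform grid."""
    dpx_m = (phi - np.roll(phi, 1, axis=0)) / dx     # backward d(phi)/dx
    dpx_p = (np.roll(phi, -1, axis=0) - phi) / dx    # forward  d(phi)/dx
    dpy_m = (phi - np.roll(phi, 1, axis=1)) / dx
    dpy_p = (np.roll(phi, -1, axis=1) - phi) / dx
    # Pick the upwind side according to the sign of the local velocity
    dphidx = np.where(vx > 0.0, dpx_m, dpx_p)
    dphidy = np.where(vy > 0.0, dpy_m, dpy_p)
    return -(vx * dphidx + vy * dphidy)

def advect_level_set(phi, vx, vy, dx, dt, n_steps):
    """Heun (second-order Runge-Kutta) time integration of Eq. (17)."""
    for _ in range(n_steps):
        k1 = upwind_advection(phi, vx, vy, dx)
        k2 = upwind_advection(phi + dt * k1, vx, vy, dx)
        phi = phi + 0.5 * dt * (k1 + k2)
    return phi
```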
Velocity Propagation
The icing velocity magnitude (V ice,surf ) can be computed from the ice accretion time (∆t ice ) and the ice thickness (h ice ) provided on the surface mesh by the thermodynamics solver.
V_{ice,surf} = \frac{h_{ice}}{\Delta t_{ice}} \quad (18)
However, V ice,surf must be propagated in the volume mesh in order to perform the level-set advection (Eq. ( 17)). To obtain a behavior similar to the Lagrangian node displacement approach (Eq. ( 16)), the icing velocity is propagated from the surface mesh in the normal direction, producing constant velocity bands. A PDE-based approach (Eq. ( 19)) is used to propagate the information from the surface (V ice,surf ) to the field (V ice ) following the normal direction to the surface.
∂V_ice/∂t = sign(φ) n_φ • ∇V_ice    (19)
When the surface mesh corresponds to the body-fitted mesh boundary, V ice,surf is imposed as a Dirichlet boundary condition using ghost cells and is propagated in the fluid zone to obtain (V ice ). When the ice shape is immersed in the mesh (IB), the surface no longer corresponds to the mesh boundaries. For instance, this occurs from the 2 nd ice layer onward in the multi-step icing process. In this situation, a band of cells in the vicinity of the interface is initialized by a nearest neighbor search, taking advantage of the explicit definition of the interface. These cells are then frozen (no update) so they can act as ghost cells when solving Eq. ( 19) on both sides of the IB. The update is prevented by setting the Right Hand Side (RHS) of Eq. ( 19) to zero for the frozen cells. The propagation Eq. ( 19) accounts for the sign of φ in order to propagate V ice,surf from the band of initialized cells towards the fluid (φ > 0) and solid zones (φ < 0).
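The following sketch illustrates this propagation on a uniform Cartesian grid, assuming the normal components and the frozen-cell mask are already available; it is an illustrative pseudo-time relaxation of Eq. (19), not the actual unstructured implementation, and the names are illustrative.

import numpy as np

def propagate_icing_velocity(v, phi, nx, ny, frozen, dx, n_iters=100, cfl=0.5):
    # Rewriting Eq. (19) as advection with speed a = -sign(phi)*n, the gradient is
    # upwinded against a; frozen cells near the interface keep the value
    # initialized from V_ice,surf by setting their RHS to zero.
    s = np.sign(phi)
    ax, ay = -s * nx, -s * ny
    dtau = cfl * dx
    for _ in range(n_iters):
        dvx_m = (v - np.roll(v, 1, axis=0)) / dx
        dvx_p = (np.roll(v, -1, axis=0) - v) / dx
        dvy_m = (v - np.roll(v, 1, axis=1)) / dx
        dvy_p = (np.roll(v, -1, axis=1) - v) / dx
        gx = np.where(ax > 0.0, dvx_m, dvx_p)
        gy = np.where(ay > 0.0, dvy_m, dvy_p)
        rhs = s * (nx * gx + ny * gy)
        rhs[frozen] = 0.0             # do not update the initialized band
        v = v + dtau * rhs
    return v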
Once the icing velocity magnitude is known in the volume mesh, the vector field is set as:
V_ice = -V_ice n_φ,0    (20)
where n φ,0 represents the normal to the initial contour φ = 0 (before the advection process begins). In other words, the icing velocity field remains fixed during the advection of the level-set. An example of propagated icing velocity field is illustrated in Fig. 4, showing the constant velocity bands in the normal direction to the interface.
Level-Set Advection and Re-Initialization
While the contour φ = 0 is advected using the level-set equation ( 17), φ does not conserve the properties of a signed distance field [START_REF] Sussman | A Level Set Approach for Computing Solutions to Incompressible Two-Phase Flow[END_REF]. A re-initialization of the level-set is thus performed (i.e., the signed distance field is re-evaluated).
This could be done by reusing the geometric approach from the pre-processing step. Because the new location of the IB is only known via its implicit definition at this stage, this would imply the application of a contour extraction technique to obtain an explicit definition of the interface (new surface mesh). The signed distance field is instead updated using the re-initialization equation [START_REF] Osher | Level Set Methods and Dynamic Implicit Surfaces[END_REF], as follows:
∂φ/∂t = S(φ_0) (n_φ • ∇φ + 1)    (21)
S(φ_0) = φ_0 / √(φ_0² + ε²)    (22)
This equation incorporates a smoothed sign function S(φ_0), which is based on the signed distance before re-initialization (φ_0). According to [START_REF] Sussman | A Level Set Approach for Computing Solutions to Incompressible Two-Phase Flow[END_REF], it ensures that φ remains unchanged at the interface during the re-initialization process. In practice, numerical experiments showed the introduction of wiggles in the contour φ = 0 when using this approach, an undesirable behavior as a surface mesh is to be constructed from this extracted interface. To ensure the interface remains exactly at the same location, the idea used for the velocity propagation is repurposed here: freezing the update of a band of cells in the vicinity of the interface. Again, it is done by setting the RHS to zero in Eq. (21) for the frozen cells. In this way, the band of cells is not updated and the signed distance remains φ_0. This approach follows the assumption that φ remains close to a signed distance field in the vicinity of the interface. In this paper, two iterations of Eq. (21) are performed at every time step of the level-set advection process (Eq. (17)). An example of level-set advection is shown in Fig. 5, where the φ contours are displayed inside the ice shape only. Without re-initialization (Fig. 5a), the signed distance field is distorted inside the solid, while activating the re-initialization (Fig. 5b) provides a more regular and sensible solution.
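A minimal sketch of one re-initialization iteration with the frozen band is given below; it is written in the common form dφ/dτ = S(φ0)(1 - |∇φ|) on a Cartesian grid with central differences for brevity, which differs in detail from the discretization of Eq. (21) used in the solver; names are illustrative.

import numpy as np

def reinit_step(phi, phi0, frozen, dx, eps, dtau):
    # Smoothed sign of Eq. (22), based on the signed distance before re-initialization.
    s = phi0 / np.sqrt(phi0 ** 2 + eps ** 2)
    gx = (np.roll(phi, -1, axis=0) - np.roll(phi, 1, axis=0)) / (2.0 * dx)
    gy = (np.roll(phi, -1, axis=1) - np.roll(phi, 1, axis=1)) / (2.0 * dx)
    rhs = s * (1.0 - np.sqrt(gx ** 2 + gy ** 2))
    rhs[frozen] = 0.0                 # keep the band of cells near the interface untouched
    return phi + dtau * rhs

In the paper, two such iterations are performed at every time step of the level-set advection.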
E. Surface Mesh Extraction
Once the level-set advection is completed and the signed distance field is re-initialized, the surface mesh extraction can be performed. It consists of two parts: (1) the contour extraction providing an explicit definition of the IB from the level-set and (2) the meshing of the surface (i.e., using GMSH). The first part is performed in the level-set module (geometry evolution solver) while the second part is performed when preprocessing the IB.
A surface discretization can be obtained by performing the extraction of the contour φ = 0. Note that the contour extraction is not performed before the re-initialization because of our general method which combines a Body-Fitted surface for the clean geometry and an Immersed Boundary for the ice shape. It is desired to extract a contour which matches the BF surface where there is no ice while the IB is normally extracted for the iced zones. To do so, a check based on φ is performed at the BF surface during the extraction process. If the BF surface is in the solid zone and far from the contour φ = 0 (e.g., leading edge in Fig. 5a), the signed distance field can be so distorted that the contour may be falsely detected at the BF surface. The re-initialization of the level-set solves this issue and thus helps the contour extraction.
For the contour extraction, well-known methods such as the marching cubes [START_REF] Lorensen | Marching Cubes: A High Resolution 3D Surface Construction Algorithm[END_REF], marching tetrahedra [START_REF] Treece | Regularised marching tetrahedra: improved iso-surface extraction[END_REF] and marching squares (a 2D equivalent of the marching cubes) can be used. In this paper, 2D unstructured meshes made of triangles are used which allows for a simple contour extraction method. We consider four possible configurations for the contour intersection with a triangular cell: edge to edge, edge to vertex, vertex to vertex or vertex only (Fig. 6).
These cases are all handled automatically by performing an edge-based interpolation assuming a single intersection point per edge. The process marches from cell to cell and adds consecutive intersection points to a linked list, forming a surface discretization. Tested edges are tagged along the way to avoid adding duplicates to the list. Once the marching process can no longer find any intersection on untested edges, the contour is completed. An edge is intersected by the contour if there is a sign change in φ between its two vertices. If the magnitude of φ at one vertex is below a specified threshold, the intersection is assumed to occur at the vertex and no interpolation is made. In this case, all the edges sharing the vertex are tagged as tested. This approach retains the discretization of the body-fitted surface where there is no ice accretion (φ ≈ 0) and performs a more classical contour extraction for the IB. It also directly provides an ordered list of points (surface mesh) for each body when dealing with a multi-element configuration. Note that in IGLOO2D, φ is reconstructed at the vertices from a weighted least squares interpolation using the cell-center solution. An example of the marching process is illustrated in Fig. 7, where the vertices are identified as positive, negative, or zero and the extracted contour is illustrated in red.
Fig. 7 Example for the contour extraction marching process
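A sketch of the edge intersection test at the core of this marching process is given below, assuming the vertex values of φ and a list of mesh edges are available; duplicate handling, edge tagging, and the cell-to-cell ordering into a linked list are omitted, and the names are illustrative.

import numpy as np

def edge_intersections(vertices, edges, phi, tol=1e-12):
    # For each mesh edge (i, j), detect a sign change of phi and place the
    # intersection point by linear interpolation; values of |phi| below 'tol'
    # snap the intersection onto the vertex itself.
    points = []
    for i, j in edges:
        pi, pj = phi[i], phi[j]
        if abs(pi) < tol:
            points.append(vertices[i])
        elif abs(pj) < tol:
            points.append(vertices[j])
        elif pi * pj < 0.0:
            t = pi / (pi - pj)                       # phi((1-t)*xi + t*xj) = 0
            points.append((1.0 - t) * vertices[i] + t * vertices[j])
    return np.array(points)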
The extraction process usually produces an irregular discretization where nodes can be very close to each other when the edge intersection is detected near a vertex. To help retrieve a more uniform and smoother surface mesh, nodes are merged if they are too close and inserted if they are too far apart. The merge and insertion process is performed by an arithmetic average followed by a correction to bring the node back on the level-set. Following the idea presented in [START_REF] Enright | A Hybrid Particle Level Set Method for Improved Interface Capturing[END_REF], the correction takes the form:
x_corrected = x_merged/inserted + (φ - φ_target) ψ n_φ    (23)
where φ_target = 0 and ψ is a relaxation parameter set to unity, but which can be reduced to avoid erroneous corrections (e.g., a point near the wrong contour if multiple contours are involved). An example of contour extraction is provided in Fig. 8a near the leading edge of an iced NACA23012. The effect of node merging and insertion is illustrated in Figs. 8b-8c. Note that this node correction process is not mandatory, as the surface is later re-meshed using GMSH. However, it helps in avoiding surface wiggles due to an irregular initial node distribution when fitting a spline.
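The merge/insert pass with the correction of Eq. (23) can be sketched as follows, assuming callbacks that interpolate φ and n_φ at an arbitrary point; the sign convention follows Eq. (23) with φ_target = 0, and the names are illustrative.

import numpy as np

def regularize_contour(points, phi_at, normal_at, d_min, d_max, psi=1.0):
    # Merge nodes closer than d_min, insert a mid-node where spacing exceeds d_max,
    # and project new nodes back onto phi = 0 with Eq. (23).
    def project(x):
        return x + phi_at(x) * psi * normal_at(x)    # Eq. (23) with phi_target = 0
    out = [points[0]]
    for p in points[1:]:
        d = np.linalg.norm(p - out[-1])
        if d < d_min:
            out[-1] = project(0.5 * (out[-1] + p))   # merge by arithmetic average
        elif d > d_max:
            out.append(project(0.5 * (out[-1] + p))) # insert a corrected mid-node
            out.append(p)
        else:
            out.append(p)
    return np.array(out)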
III. Ice Accretion Results
In this section, the new ice accretion framework using the IB and level-set methods is assessed. The objective is to reproduce the ice accretion results obtained with a classical BF approach while improving the robustness of the numerical tool (e.g., no negative volumes in the mesh, no overlaps in the surface mesh). In order to demonstrate the benefits of using the level-set approach, ice accretion over a manufactured ice shape is first performed using the level-set method and compared to the Lagrangian node displacement method. Then, rime ice case 241 and glaze ice case 242 from the 1 st AIAA Ice Prediction Workshop (IPW, [START_REF] Broeren | 1st AIAA Ice Prediction Workshop[END_REF]) are tested. These cases are respectively run ED1977 and run ED1978 from [START_REF] Lee | Implementation and Validation of 3-D Ice Accretion Measurement Methodology[END_REF], with slightly corrected icing conditions. Additional ice accretion cases from [START_REF] Trontin | Description and assessment of the new ONERA 2D icing suite IGLOO2D[END_REF] (cases 001, 003 and 004) are also tested to further demonstrate the behavior of the IBM. Finally, the new framework is tested on the multi-element McDonnell-Douglas LB606b Airfoil (MDA) [START_REF] Petrosino | Ice Accretion Model on Multi-Element Airfoil[END_REF] to illustrate the flexibility of the method on complex high-lift systems. The simulation parameters are summarized in Table 1. In this section, two methods are available for the representation of the ice shape (IBM or BF) and two for the geometry update (node displacement or Level-Set). This makes four possible combinations. When simply referring to the Immersed Boundary Method, the use of the level-set method is implied. Similarly, when referring to the Body-Fitted method, the use of the Lagrangian node displacement approach is implied (the standard approach in the icing community). In addition, the calculations are carried out with IGLOO2D. The default options described in [START_REF] Trontin | Description and assessment of the new ONERA 2D icing suite IGLOO2D[END_REF] were used for the body-fitted approach, in particular for the MESSINGER2D solver and the boundary-layer solver SIM2D. Regarding the meshes, unstructured grids generated by GMSH were systematically used. The wall mesh size is in the range of 1e-3 to 5e-3 chords (with a refinement in the range of 5e-4 chords for blunt trailing edges). These mesh sizes are fairly representative of default mesh sizes used in IGLOO2D. They generally allow obtaining a good trade-off between solution accuracy (predicted ice shape) and computational time.
For all the calculations, the wall mesh size is kept constant near the leading edge and extended over 0.75 chords, as shown in Fig. 9. This is required when using the IBM in order to avoid re-meshing during the multi-step process while maintaining an equivalent wall cell size compared to the BF method (with re-meshing).
A multi-step approach is adopted, using 2 to 10 steps. For the IBM approach, the calculations are performed by changing the calculation strategy for the volume solvers EULER2D and TRAJE2D (penalization) and for the ice shape transportation (level-set), all other parameters remaining the same.
Fig. 9 Example mesh around a NACA23012 with an extended refinement zone near the leading edge
In the following sections, ice shapes are compared using the criteria presented in [START_REF] Wright | Validation Methods and Results for a Two-Dimensional Ice Accretion Code[END_REF]. Namely, two ice shapes are considered in good agreement if they have similar icing limits, ice thickness distribution, horn angle (or maximum thickness angle), maximum ice thickness and overall ice area. Contrary to [START_REF] Wright | Validation Methods and Results for a Two-Dimensional Ice Accretion Code[END_REF], only a qualitative comparison is made.
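Although only a qualitative comparison is made here, one of these quantities, the overall ice area, can be estimated directly from the contours; the sketch below assumes the iced and clean surfaces are given as closed polygons stored as (N, 2) arrays, and the names are illustrative.

import numpy as np

def polygon_area(poly):
    # Shoelace formula for the area of a closed polygon given as an (N, 2) array.
    x, y = poly[:, 0], poly[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def ice_area(iced_contour, clean_contour):
    # Overall ice area as the difference between the iced and the clean cross sections.
    return polygon_area(iced_contour) - polygon_area(clean_contour)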
A. Manufactured Ice Shape
In order to clearly show the behavior of the level-set method against the usual Lagrangian node displacement approach, a manufactured ice shape is used with a fixed ice accretion rate (thickness and time). The ice accretion time is set to 400s and the ice thickness is enforced to 0.02m for every surface node with coordinate x < 0.05m.
This manufactured geometry was first presented in [START_REF] Lavoie | Comparison of thermodynamic models for ice accretion on airfoils[END_REF] and is generated from a NACA0012 airfoil with added artificial ice near the leading edge. The three-horn configuration was selected to obtain multiple flow recirculation zones and create a difficult situation for the ice growth solver because of the presence of highly concave and convex features.
On Fig. 10, the ice accreted three-horn geometry is illustrated with the enforced ice accretion thickness. The level-set solution is represented in blue, indicating the zone with φ < 0. The ice shape generated by the Lagrangian node displacement is shown as a solid black line, where the geometry overlaps can be seen near concave regions of the geometry. The contour φ = 0 is extracted by our edge marching method and represented by the red line with markers.
As observed in Fig. 10, the level-set method automatically handles the geometry overlaps, and the extracted contour provides an explicit surface mesh discretization that can be used in the multi-step ice accretion process.
B. Rime Ice Case 241
For the results presented in this paper, only the ice shape is used as an Immersed Boundary and the clean geometry is still treated using a Body-Fitted approach. As an illustration, the aerodynamic and droplet fields for the rime ice case are shown in Fig. 11, where the IB (the ice shape) is represented by the red line and the solid body is white. In this section, two-step and 10-step ice accretion simulations are performed for the rime ice case 241. The results are compared between the Body-Fitted and IB methods. The wall mesh size is about 2e-3 chords with a refinement to 5e-4 chords at the trailing edge.
In Fig. 12, the pressure coefficients (Cp) and collection efficiency (β) are compared for the BF and IB methods on the 1 st ice layer of a two-step simulation. Ideally, the IB method should reproduce the results obtained with a BF approach. Fig. 12a illustrates a slight mismatch in Cp near the point of maximum suction. Nonetheless, the collection efficiency is very close between the two methods (Fig. 12b). As rime ice accretion is mostly governed by the collection efficiency, it generates very similar ice shapes for the BF and IB methods despite the difference in pressure coefficients (Fig. 13a). The ice shapes are also in good agreement with the experimental results. The experimental ice shape is the so-called MCCS (Maximum-Combined-Cross-Section) [START_REF] Broeren | Ice-Accretion Test Results for Three Large-Scale Swept-Wing Models in the NASA Icing Research Tunnel[END_REF] derived by the experimentalists from the ice scans (it is more or less the envelope of the ice shape). Since the simulation starts from a BF mesh, the 1 st step is not affected by the IBM and thus, the 1 st ice layer should be the same for both methods. However, a difference might be introduced by the geometry evolution solver which can use either a level-set with contour extraction or the Lagrangian node displacement approach. On Fig. 13a, a two-step ice accretion prediction is made where the 1 st ice layer is illustrated with a dashed line. As the generation of the 1 st ice layer is not influenced by the IBM, the Lagrangian and Eulerian (level-set) geometry updates can be compared, showing negligible difference. Thus, discrepancies observed in Fig. 12 for the surface data can be attributed to the IB method
and not to the level-set approach.
Fig. 13 Rime ice case 241 -multi-step ice shape predictions
A close-up of the leading edge Cp distribution is given in Fig. 14a to highlight the discrepancy observed near
x/c = 0.06. The difference between the two solutions can be explained by the interaction between the body-fitted wall and the IBM. Figure 14b provides a view of the airfoil's upper surface near the location of interest. For the rime ice case 241, the ice (illustrated in red) gets thinner as we approach the impingement limits and eventually merges with the body-fitted surface. The issue comes from the volume penalization method which, in our case, uses a 1st-order implementation of the boundary condition. The cells are penalized if φ < 0 at the cell center, and the usual physical equations are solved otherwise. The boundary conditions are applied at the cell center, regardless of the location of the IB within the cell. Thus, when the ice shape is thin enough (Fig. 14b), the cells are no longer penalized. In turn, a slip velocity is applied relative to the body-fitted wall instead of a velocity tangential to the ice shape. This premature switch to a BF wall boundary condition explains the difference in Cp near x/c = 0.06 in Fig. 14a. The issue could be solved by implementing a 2nd-order discretization of the penalization terms or by performing a local mesh refinement to obtain a better representation of the ice shape near the impingement limits. However, the current implementation is still able to provide a good prediction of the ice shape in comparison with the BF results (Fig. 13).
For a 10-layer ice shape prediction (Fig. 13b), the solution is still in good agreement with the experimental data for both methods. Increasing the number of steps reduces the thickness of each ice layer, and this might affect the behavior of the penalization method for the same reason described earlier (1st-order discretization of the penalization terms). For instance, the penalization method might effectively see the same geometry for two consecutive ice layers even though the ice shape has actually moved. This typically occurs if the ice layer is too thin relative to the mesh cells. For the 10-step simulation presented here, the mesh cell size is about the same as the thickness of a single layer, providing good results.
C. Glaze Ice Case 242
For the glaze ice case, the mesh characteristics are the same as for the rime ice case 241. The wall mesh size is about 2e-3 chords with a refinement to 5e-4 chords at the trailing edge.
For this case, there is again a slight mismatch in the Cp distribution (Fig. 15a), but a very similar collection efficiency for both methods (Fig. 15b). As glaze ice accretion is sensitive to the heat transfer coefficient, which is in turn driven by the aerodynamics, the mismatch in Cp might explain the slight difference observed on the ice shape (Fig. 16a).
The effect of using the Lagrangian node displacement vs. the level-set approach can again be estimated by analyzing the 1st ice layer in Fig. 16a, where a negligible difference is observed. It suggests that the difference in Cp and β is due to the IBM. By observing Fig. 15a, the Cp distribution corresponds well between the BF and IB methods for x/c < 0.02 and x/c > 0.06. The zone where the discrepancy occurs is located near the ice accretion limits, where the ice shape stops sharply. For the IBM, this results in a detached flow with a recirculation zone (Fig. 16b), while it is not the case for the BF method, explaining the difference. Here, the comparison is made between the two methods with equivalent mesh size. However, this result suggests that the penalization method requires a finer mesh near curved features to be equivalent to the BF approach. Although the ice shape prediction for the 2nd layer is similar for both methods, it does not reproduce the experimental measurements (Fig. 16a). Note that icing experiments carry large uncertainties as well as spanwise variations [START_REF] Wright | Validation Methods and Results for a Two-Dimensional Ice Accretion Code[END_REF][START_REF] Ruff | Quantification of ice accretions for icing scaling evaluations[END_REF].
However, the divergence from the experimental ice shape seems too large to be attributed only to these uncertainties.
Some tests were performed by refining the mesh and manually increasing the wall roughness (by a factor of 2), without significant improvement. Here, the use of a droplet size distribution might help in obtaining a prediction closer to the experimental ice shape. This option was, however, not tested as it is not yet available for use with our penalization method.
By increasing the number of ice layers to 10 (Fig. 17a), the ice shape prediction is still far from the experimental results. Moreover, an ice horn is created, but not in the same location. When comparing the BF and IB methods, the ice shape is similar for the most part, but with a larger difference near the ice horn (where the effect of the aerodynamics becomes more dominant on the ice accretion). The difference in ice shape is due to the combined effect of the penalization and level-set methods compared to the BF and Lagrangian approach (standard approach). In Fig. 17b, all four combinations of methods are shown for the 10th ice layer only. The figure illustrates that the use of the level-set method has only a limited impact, while the IB method has a larger effect on the difference in ice shape. This is similar to the observation made for the two-step ice accretion simulation, where the penalization method requires a finer mesh near curved features to be equivalent to the BF solution. Using an equivalent cell size, the IBM nevertheless provides a good approximation, with only a slightly longer ice horn and a slightly different thickness distribution on the upper side.
D. Additional Cases on a NACA0012
In this section, 10-step ice accretion calculations are performed on cases 001, 003 and 004 from [START_REF] Trontin | Description and assessment of the new ONERA 2D icing suite IGLOO2D[END_REF] to further assess the behavior of the IBM. The calculations are performed on a coarser mesh (wall mesh size of 5e-3), but it is still representative of typical ice accretion simulations with IGLOO2D.
When comparing the ice shape predictions obtained from the IB and the BF methods, a good match is observed for the rime ice case 001 (Fig. 18a), but a larger difference is seen for the glaze ice cases 003 and 004 (Figs. 18b-18c). This is in line with the observations made in the previous sections. Glaze ice shapes are more sensitive to the airflow solution, and a perfect correspondence is not obtained for the wall data between the two methods (e.g., Cp distribution, Fig. 15a). Numerical experiments performed outside the scope of this paper showed that, in general, a mesh refinement can help the IBM in retrieving the BF solution. Nonetheless, using the IB and the level-set methods still provides a good estimation of the ice shapes when compared to the experimental data (Fig. 18) and the body-fitted solution, even on coarser meshes.
E. Multi-Element McDonnell-Douglas Airfoil
In this section, ice accretion is performed on the three-element McDonnell-Douglas Airfoil (MDA, Fig. 19) using the icing conditions provided in [START_REF] Petrosino | Ice Accretion Model on Multi-Element Airfoil[END_REF]. This test case is selected to show the flexibility of the IB and level-set methods. A two-step ice accretion simulation is performed with both the BF and IB methods. Here, the objective is to see if the IB method can reproduce the BF solution on this more challenging configuration. To do so, a mesh refinement was performed to obtain similar pressure coefficients and collection efficiency on the 1st ice layer (2nd time step), as shown in Fig. 20. The wall cell sizes of the resulting mesh are summarized in Table 2.
Table 2 Wall mesh characteristics
A finer mesh is required on this test case due to the flow separation downwind of the flap. Because an Euler flow solver is used (inviscid), this flow detachment is very sensitive to the mesh size. The current mesh set-up allowed the IB and BF methods to behave in a similar way (e.g., similar onset of the flow detachment). Although a better ice shape prediction can be achieved using a RANS solver and a droplet size distribution, it was shown in [START_REF] Petrosino | Ice Accretion Model on Multi-Element Airfoil[END_REF] that a fair estimation of the ice shape can be achieved using an Euler flow solver and a single size of droplets for this specific test case. The ice accretion results obtained with IGLOO2D for both methods are shown for the flap, slat and main element in Fig. 21. The predicted ice shapes are not so far from the experiment for the flap and slat, but are quite different from the expected solution for the main element. Perhaps a simulation involving more ice layers would improve the results. For instance, six steps are used in [START_REF] Petrosino | Ice Accretion Model on Multi-Element Airfoil[END_REF]. In this paper, we are concerned with reproducing the ice shapes from the BF method with the IBM, and in this regard, the ice shapes (Fig. 21) are in fact similar for both methods. The comparison still exhibits the usual discrepancy due to the accuracy of the IBM for the aerodynamics.
IV. Conclusion
This paper investigates the potential of using an Immersed Boundary Method (IBM) combined with a level-set to alleviate the issues related to the mesh and geometry update in the numerical prediction of ice accretion. It also aims at assessing the accuracy of such a methodology in comparison with the usual body-fitted approach.
A penalization method is applied to the aerodynamics and droplet trajectories. The surface data is extracted using a weighted least square approach in order to use the boundary layer and ice accretion modules. The geometry (the ice shape) is updated using either a Lagrangian or an Eulerian (level-set) approach. A contour extraction process is also described for 2D meshes made of triangles in order to retrieve the explicit definition of the ice-air interface.
Using a manufactured test case, the level-set is shown to automatically handle geometry folding during the ice shape update, while the Lagrangian approach fails to provide a usable surface discretization unless a correction is added.
For ice shape predictions, a BF mesh is used for the clean geometry and only the ice shape is treated as an IB. Following this approach, rime and glaze ice test cases from the Ice Prediction Workshop are performed using up to 10 ice layers.
The IBM predicted an ice shape equivalent to the body-fitted approach on the rime ice case. For the glaze ice case, the predicted ice shape is close to the body-fitted solution but exhibits a larger difference where ice accretion is most dependent on the aerodynamics (e.g., near ice horns). The difference is mostly attributed to the accuracy of the IBM and not to the use of the level-set. Additional rime and glaze ice cases on a NACA0012 showed that the current approach (IBM + level-set) provides a fair estimation of the ice shape when compared to both the BF method and the experimental results, even on coarser meshes. Moreover, a 2-step ice shape prediction on the McDonnell-Douglas multi-element airfoil showed that with proper mesh refinement, the IBM combined with the level-set method can reproduce the BF solution on a more challenging configuration.
Although some improvements can be made in terms of efficiency and accuracy, this paper shows the potential of the proposed methodology for automatic multi-step ice shape predictions. Also, the extension to 3D ice accretion is, in theory, straightforward except for the contour extraction process which will require some adaptation to deal with a 2D surface mesh.
Fig. 1 Sequential call to modules in multi-step icing simulations.
Fig. 2 Sequential call to modules in a multi-step icing simulation using an IBM.
Fig. 4 Example of a propagated icing velocity field for a clean NACA23012 airfoil, body-fitted surface. Coordinates and velocity non-dimensionalized by the chord (c).
Fig. 5 Example of level-set advection from a clean NACA23012 airfoil, single step ice accretion on a BF mesh
Fig. 6 Example of contour intersection with a triangle cell: edge and vertex cases
Fig. 8 Example of a level-set extraction on an iced NACA23012 airfoil. red: surface extraction; blue: ice
Fig. 10 Comparison between Lagrangian node displacement and Level-Set approach with contour extraction
Fig. 11 Rime ice case 241: solution from the volume solvers around the 1st ice layer of a two-layer simulation.
Fig. 12 Rime ice case 241 - comparison of wall surface data on the 1st ice layer of a two-layer simulation for the BF and IB methods.
Fig. 14 Rime ice case 241 - illustration of the Cp discrepancy near x/c = 0.06, two-step ice accretion
Fig. 15 Glaze ice case 242 - comparison of wall surface data on the 1st ice layer for the BF and IB simulations.
Fig. 16 Glaze ice case 242 - two-step ice accretion
Fig. 17 Glaze ice case 242 - 10-step ice accretion
Fig. 18 Additional cases on a NACA0012 airfoil, 10-step ice accretion
Fig. 19 Global view of the McDonnell-Douglas LB606b airfoil and its experimental ice shape
Fig. 20 Wall data on the 1st ice layer (2nd step) for the McDonnell-Douglas multi-element airfoil (MDA LB606b)
Fig. 21 Two-step ice accretion on the McDonnell-Douglas multi-element airfoil (MDA LB606b)
Table 1 Simulation Parameters
                      Rime 241    Glaze 242   Case 001   Case 003   Case 004   Multi-Element
Geometry              NACA23012   NACA23012   NACA0012   NACA0012   NACA0012   MDA
Chord [m]             0.4572      0.4572      0.5334     0.5334     0.5334     0.9144
AoA [deg]             2.0         2.0         4.0        4.0        4.0        8.0
Mach                  0.325       0.315       0.325      0.317      0.317      0.27
P static [kPa]        92.528      92.941      101.325    101.325    101.325    101.325
T static [K]          250.15      266.05      250.7      262.3      262.3      268.2
LWC [g/m3]            0.42        0.81        0.55       1.0        0.6        0.6
MVD [µm]              30.0        15.0        20.0       20.0       15.0       20.0
Icing Time [s]        300         300         420        231        384        360
Roughness (ks) [mm]   0.4572      0.4572      0.5334     0.5334     0.5334     0.9144
Fig. 21 panels: (a) Slat, (b) Main Element, (c) Flap. Legend: Experimental Ice; BF Lagrangian Layer 1; BF Lagrangian Layer 2; IB Level-Set Layer 1; IB Level-Set Layer 2; Clean Geometry.
Acknowledgments
This work is financed by the Natural Sciences and Engineering Research Council of Canada (NSERC), the Canada Research Chair program and the ONERA. The authors would like to thank Francesco Petrosino for kindly providing the MDA geometry and the experimental ice accretion data. |
04105066 | en | [
"spi.other"
] | 2024/03/04 16:41:22 | 2023 | https://theses.hal.science/tel-04105066/file/These_Ali_ALKHANSA.pdf
Ali Al Khansa
Stefan Cerovic
Raphaël Visoz
Yezekael Hayel
Samson Lasaulce
First and foremost, I would like to express my sincere gratitude to my supervisors: Professor Samson Lasaulce, Associate Professor Yezekael Hayel, and Doctor Raphaël Visoz. My first contact was with Professor Samson Lasaulce back in my masters at CentraleSupelec, where I followed some of the courses he gave. Being impressed by his work back then, I was lucky and grateful to continue my Ph.D. under his supervision. I am also thankful to him for introducing me to Associate Professor Yezekael Hayel and Doctor Raphaël Visoz, giving me the opportunity to benefit from their guidance, which helped me throughout my research. Special thanks go to Doctor Raphaël Visoz, a major contributor to my work; I am thankful for his almost daily discussions and exchanges of ideas, which led to the progress of my Ph.D. I am deeply grateful to all the jury members for taking interest in my work. I was honoured to have Professors Didier Le Ruyet and Jean-Pierre Cances as reviewers, whom I especially thank for their valuable review of my manuscript, and Professors Tijani Chahed and Veronica Belmega as examiners.
I am also thankful to all who influenced me personally and academically over the three years at Orange Labs, where I have spent the majority of my time. The kind environment and the supportive colleagues in the RAP team provide the atmosphere one needs to work in. I have been warmly welcomed since day one, and I would like to thank all its current and former members for their support and for the pleasant discussions that we have had, both technical and less formal ones. Despite the hard times of Covid, when we had to spend a lot of time teleworking, the continuous support of the team was always present in different forms, adapted to the health regulations of the moment. I am truly grateful to the team manager Pierre Dubois for his continuous support during the whole duration of my Ph.D. Also, special thanks go to Stefan Cerovic for helping me at the beginning of my Ph.D., sharing his codes with me, and contributing to some of the work of this thesis. His help made it easier for me to understand the new and challenging subject I was considering. Particular thanks go to Rita and Meriem and to all my Postdoc/Ph.D. colleagues Nour, Yi, Imene, Romain, Youssef, Amel, Yibo, and Shanglin. I am grateful for your kind company and for all the fun moments that we have spent together. Finally, it would not have been possible to complete my Ph.D. journey without the selfless help and huge support of my family and my friends, here and in Lebanon. Despite the difficult situation Lebanon is facing, you were always there for me. Without your love, support, belief, and continuous encouragement, I would not have finished this thesis. Particular patience was needed from my fiancée Walaa to handle the stressful nature of my work and how workaholic I occasionally am. I am eternally grateful for your love and support, which helped me throughout this journey. Special thanks to my friends Hassan Fakih and Ali Wehbe, who listened to my mid-thesis presentation and suggested different ideas for the possible directions I could take in the remaining part of my Ph.D. One suggestion of Ali Wehbe was further discussed, leading to the contributions seen in Chapter 5. A big thank you goes to my sister Rasha, my co-author and my technical reviewer. Your collaboration helped me accelerate and improve my work. It was really a pleasure to collaborate and publish together.
Abstract
One of the main objectives for 5G and 5G-beyond cellular networks is to allow heterogeneous services to coexist within the same network architecture. Some of these services need very high peak data rates and a fast adaptation to the channel state, as in enhanced Mobile Broadband (eMBB). In order to meet those needs, we aim at improving the spectral efficiency. Cooperative communication represents one of the key physical layer technologies which aim to optimize the spectral efficiency. The concept is to use the shared resources and information of the users to improve the transmission and reception processes. The cooperation process is performed using relaying nodes. A relaying node can be a dedicated relay node or a source node that performs user cooperation. The difference between a relay node and a source node which implements user cooperation is the fact that the latter has its own message whereas the relay node does not.
An orthogonal Multiple Access Multiple Relay Network (MAMRN) is considered, where at least two sources communicate with a single destination with the help of at least two relaying nodes. Orthogonality is achieved using Time Division Multiplexing (TDM), all the relaying nodes are assumed half-duplex, and all the links experience slow fading. The destination is the central node where the different allocations are performed. In an initialization phase, a Link Adaptation (LA) algorithm is performed where different resources are allocated to the sources. Following this step, the transmission of a frame is divided into two phases. In the first phase, sources transmit in turns in consecutive time slots. In the second phase, the destination schedules relaying node(s) to transmit redundancies. The second phase consists of a limited number of retransmission time slots. Bidirectional limited control channels are available from sources and relays towards the destination to exchange the needed information so that the destination is capable of performing its selection/allocation strategies.
In the first part of this thesis, the LA problem in the initialization phase is considered. Specifically, rate allocation algorithms are studied. Due to the complexity of the exhaustive search approach, sequential algorithms are proposed. The presented algorithms aim at maximizing the Average Spectral Efficiency (ASE) under individual Quality of Service (QoS) targets for a given Modulation and Coding Scheme (MCS) family. The algorithms are applicable in both slow and fast changing radio condition scenarios. The rates are first initialized and then an iterative rate correction step is applied. Furthermore, and in sharp contrast with existing cooperative transmission schemes, a time-varying packet size at the transmission phase is considered. In other words, the LA process is extended to determine both the rate and the time slot duration for each source. The resulting scheduling and LA algorithms approach the performance of the upper bound as demonstrated by Monte-Carlo (MC) simulations. In addition, the MC results validate the improvements of using relaying nodes and introducing a time-varying packet size.
In the second part of this thesis, the focus is on designing relaying node selection strategies that determine which relaying nodes are activated at each time slot in the retransmission phase. One key element in this chapter is to exploit the multi-path diversity of the different relaying nodes. Rather than selecting a single relaying node to help one source node at a given retransmission time slot, the proposed scheduling strategies allocate one source to be helped by multiple relaying nodes. Such a retransmission scheme is called Parallel Retransmission (PR), as the different relaying nodes retransmit in parallel with each other. Furthermore, adopting the PR scheme, novel selection strategies are proposed aiming at reducing the overhead of the control exchange process between the destination and the relaying nodes. Numerical results show that these strategies outperform the strategies seen in the prior art, which are based on Single Retransmission (SR) and ignore the effect of the control exchange overhead.
In the third part of this thesis, a joint allocation algorithm is presented, assuming the presence of full Channel State Information (CSI) at the destination side. Rather than solving the LA problem and the relaying node scheduling problem separately, an optimal joint allocation method is proposed. The presented joint strategy builds on the fact that, for a given scheduling decision, an optimal rate allocation exists. Accordingly, the proposed algorithm selects the scheduling that leads to the optimal rate allocation. Furthermore, due to the complexity of the proposed algorithm, a sequential algorithm is presented. The MC simulations validate the importance of the joint allocation strategy, which outperforms the non-joint allocations. Another advantage of the joint allocation method is its practicality compared to the non-joint rate allocation, which needs to search exhaustively through all the possible rate values. In addition, the MC simulations validate the performance of the sequential joint allocation as a practical solution that achieves the upper bound.
In the last part of this thesis, future work and other possible directions are presented. Specifically, using online learning algorithms for rate allocation is considered. One possible way to tackle this problem is to adopt the methods of the Multi-Armed Bandit (MAB) framework. Another direction presented is the generalization of the considered network to other orthogonality scenarios where Frequency Domain Multiplexing (FDM) is considered. Following the FDM regime, novel selection strategies are proposed. Further open challenges for the MAMRN are to consider multiple antennas at the destination node, to study non-centralized systems where games arise, and to investigate the methods needed in the non-orthogonal MAMRN.
Résumé
L'un des principaux objectifs des réseaux cellulaires 5G et 5G-beyond est de permettre la coexistence de services hétérogènes au sein d'une même architecture de réseau. Certains de ces services nécessitent des débits crêtes de données très élevés et une adaptation rapide de l'état du canal, notamment dans le cas de l'eMBB (Enhanced Mobile Broadband). Afin de répondre à ces besoins, nous cherchons à améliorer l'efficacité spectrale. La communication coopérative est l'une des principales technologies de la couche physique qui vise à optimiser l'efficacité spectrale. Le concept consiste à utiliser les ressources partagées, et les informations des utilisateurs pour améliorer les processus de transmission et de réception. Le processus de coopération est réalisé à l'aide de noeuds relais. Un noeud relais peut être un noeud relais dédié ou un noeud source qui réalise la coopération entre utilisateurs. La différence entre un noeud relais et un noeud source qui met en oeuvre la coopération entre utilisateurs est le fait que ce dernier a son propre message alors que le noeud relais n'en a pas.
On considère un réseau orthogonal à accès multiple et à relais multiples (Multiple Access Multiple Relay Network (MAMRN)), dans lequel au moins deux sources communiquent avec une seule destination à l'aide d'au moins deux noeuds relais. L'orthogonalité est obtenue en utilisant le multiplexage par répartition dans le temps (Time Division Multiplexing (TDM)) et tous les noeuds de relais sont supposés être en semi-duplex alors que toutes les liaisons subissent un affaiblissement lent du signal (slow fading). La destination est le noeud central où les différentes allocations sont effectuées. Dans une phase d'initialisation, un algorithme d'adaptation de lien (Link Adaptation (LA)) est exécuté où différentes ressources sont allouées aux sources. Après cette étape, la transmission d'une trame est divisée en deux phases. Dans la première phase, les sources transmettent à tour de rôle dans des tranches de temps consécutives. Dans la deuxième phase, la destination programme le(s) noeud(s) relais pour transmettre les redondances. La deuxième phase consiste en un nombre limité de tranches de temps de retransmission. Des canaux de contrôle limités bidirectionnels sont disponibles à partir des sources et des relais vers la destination pour échanger les informations nécessaires afin que la destination soit capable d'exécuter ses stratégies de sélection/allocation.
Dans la première partie de cette thèse, le problème de LA dans la phase d'initialisation est considéré. Plus précisément, les algorithmes d'allocation de débits de données sont étudiés. En raison de la complexité de l'approche de recherche exhaustive, des algorithmes séquentiels sont proposés. Les algorithmes présentés visent à maximiser l'efficacité spectrale moyenne (Average Spectral Efficiency (ASE)) sous des objectifs individuels de qualité de service (Quality of Service (QoS)) pour une famille de schémas de modulation et de codage (Modulation and Coding Scheme (MCS)) donnée. Les algorithmes sont applicables dans des scénarios de conditions radio changeant lentement ou rapidement. Les débits sont d'abord initialisés, puis une étape de correction itérative du débit est appliquée. En outre, contrairement aux systèmes de transmission coopératifs existants, la taille des paquets lors de la phase de transmission est considérée variable dans le temps. En d'autres termes, le processus LA est étendu pour déterminer à la fois le taux et la durée de l'intervalle de temps pour chaque source. L'ordonnancement et les algorithmes qui en résultent se rapprochent des performances de la limite supérieure, comme le démontrent les simulations de Monte-Carlo (MC). De plus, les résultats MC valident les améliorations apportées par l'utilisation de noeuds relais et par l'introduction d'une taille de paquet variable dans le temps.
Dans la deuxième partie de cette thèse, l'accent est mis sur la conception de stratégies de sélection de noeuds relais qui déterminent quels noeuds relais sont activés à chaque slot temporel dans la phase de retransmission. Un élément clé dans ce chapitre est d'exploiter la diversité des chemins multiples des différents noeuds de relais. Plutôt que de sélectionner un seul noeud relais pour aider un noeud source à un intervalle de temps de retransmission donné, les stratégies d'ordonnancement proposées attribuent une source à plusieurs noeuds relais. Un tel schéma de retransmission est appelé retransmission parallèle (Parallel Retransmission (PR)), car les différents noeuds relais retransmettent en parallèle. De plus, en adoptant le schéma PR, de nouvelles stratégies de sélection sont proposées afin de réduire la surcharge du processus d'échange de contrôle entre la destination et les noeuds relais. Les résultats numériques montrent que ces stratégies sont plus performantes que les stratégies de l'art antérieur qui sont basées sur la retransmission unique (Single Retransmission (SR)) et qui ignorent l'effet de la surcharge de l'échange de contrôle.
Dans la troisième partie de cette thèse, un algorithme d'allocation conjointe est présenté en supposant la présence de l'information complète sur l'état du canal (Channel State Information (CSI)) du côté de la destination. Plutôt que de résoudre séparément le problème de LA et le problème d'ordonnancement des noeuds relais, une méthode d'allocation conjointe optimale est proposée. La stratégie conjointe présentée s'appuie sur le fait que pour une décision d'ordonnancement donnée, il existe une allocation de taux optimale. En conséquence, l'algorithme proposé sélectionne l'ordonnancement qui conduit à l'allocation optimale des débits de données. En outre, en raison de la complexité de l'algorithme proposé, un algorithme séquentiel est présenté. Les simulations MC valident l'importance de la stratégie d'allocation conjointe qui surpasse les allocations non conjointes. Une autre importance de la méthode d'allocation conjointe est sa praticité par rapport à l'allocation de débits non conjointe qui nécessite de passer exhaustivement par toutes les valeurs de débits possibles. En outre, les simulations MC valident la performance de l'allocation séquentielle conjointe, qui est une solution pratique permettant d'atteindre la limite supérieure.
Dans la dernière partie, les futurs travaux et d'autres directions d'études possibles sont présentés. Plus précisément, l'utilisation d'algorithmes d'apprentissage en ligne pour l'allocation de débits est envisagée. On voit qu'une façon possible d'aborder ce problème est d'adopter les méthodes de type bandit à bras multiples (Multi-Armed Bandit (MAB)). Une autre direction présentée est la généralisation du réseau considéré à d'autres scénarios d'orthogonalité où le multiplexage dans le domaine des fréquences (Frequency Domain Multiplexing (FDM)) est envisagé. En suivant le régime FDM, de nouvelles stratégies de sélection sont proposées. D'autres défis ouverts pour le MAMRN sont de considérer des antennes multiples au niveau du noeud de destination, d'étudier des systèmes non centralisés où des jeux sont observés, et d'étudier les méthodes nécessaires dans le MAMRN non orthogonal.
List of Figures
List of Acronyms
The binary Galois field
E{.} The expected value
[q] The Iverson bracket, having the value 1 if q is satisfied, and 0 otherwise
⌈q⌉ The ceiling function, which takes the least integer greater than or equal to q
|S| Cardinality of set S
s ∈ S  s is an element of set S
S ∪ R  The union of sets S and R
S ∩ R  The intersection of sets S and R
S ⊂ R  Set S is a subset of set R
S ⊆ R  Set S is a subset of or equal to set R
S \ R  The minus operation between the sets S and R in set theory
S̄  The complement of set S in set theory
Pow(S)  The power set of S, i.e., the set of all possible subsets of S
Pr{.}  The probability of an event
log  Logarithmic function
exp  Exponential function
A^T  Transpose of A
A^-1  Inverse of A
∧  The logical and
Introduction
Cooperative communication is known to be one of the most effective techniques to improve the coverage and capacity of wireless networks. The research on cooperative systems is widespread and still a hot topic today. Users demand access to all wireless services no matter the setting they are in (time, location). As wireless networks are now an integral part of our modern society, and as the demand for better quality and availability of wireless services is increasing, cooperative communication is seen as a promising solution to answer the increasing demands. Although wireless technologies are constantly evolving towards novel strategies, the increasing number of users in the networks imposes quite big challenges when it comes to their design (environment, scarce frequency spectrum, etc.). Accordingly, we investigate the concept of cooperative communication as one possible avenue for overcoming those challenges. The main idea is to allow devices to share their available resources in power and/or bandwidth, as well as their antennas, in order to mutually enhance their transmission and reception. When talking about cooperative networks, different factors/metrics are considered. The first factor to take into consideration is the number of sources, relays, and destinations included in the network. Depending on the number of each of these nodes, different types of networks are formed, and thus, different cooperative scenarios are seen. The second factor to take into consideration is the relaying protocol being used. A relaying protocol represents the rule that the relaying nodes follow in their retransmissions. The third factor to take into consideration is whether the system is centralized or decentralized. A centralized system, which is adopted in this thesis, means that there is a central node which performs the different allocations needed in the transmission and retransmission. Related to this, the fourth factor to take into consideration in a cooperative system is the selection strategy used. The scheduling problem consists in the organization of the retransmissions of the relaying nodes. In other words, the scheduling strategy decides which relaying nodes are going to be active and send redundancies and which relaying nodes are not. The fifth factor is resource allocation. The resource allocation problem poses the question of how we should use the available scarce resources in an optimized way depending on the different scenarios of the network. These factors are tackled in this thesis with the goal of proposing suitable/practical solutions for the related problems leading to optimal performance.
Different cooperative networks
From a historical point of view, cooperative communication goes back to the year 1970, when van der Meulen derived the upper and lower bounds of the channel capacity of a Three-terminal Relay Network (TRN) [START_REF] Van Der | Three-terminal communication channels[END_REF]. In that reference, and in other works of van der Meulen, we see some fundamental principles and general problems related to cooperative communication. Then, other relay channels and further cooperative networks were investigated, as in [START_REF] Cover | Capacity theorems for the relay channel[END_REF]. Remarkable work on relaying was done by Cover and El Gamal in several publications such as [START_REF] Cover | Capacity theorems for the relay channel[END_REF][START_REF] El | Multiple user information theory[END_REF][START_REF] El | On information flow in relay networks[END_REF]. Using the min-cut max-flow capacity upper bound, the capacity is established for degraded and reversely degraded feedback relay channels. Until today, the main results of these works have not been surpassed.
Basically, the system model of cooperative communication channels is composed of three main components: source, relay, and destination nodes. Depending on the number of each of these elements, the nature of the cooperative channel is determined. With multiple relays, the TRN can be extended to the Multiple Relay Network (MRN), consisting of a single source, a single destination, and multiple relay nodes. Such channels were investigated in [START_REF] El | On information flow in relay networks[END_REF][START_REF] Reza | Information flow in relay networks[END_REF][START_REF] Yue | Orthogonal df cooperative relay networks with multiple-snr thresholds and multiple hard-decision detections[END_REF]. Similarly, a Relay Broadcast Network (RBN) is composed of a single source, a single relay, and multiple destination nodes [START_REF] Kramer | Cooperative strategies and capacity theorems for relay networks[END_REF]. As a natural counterpart of the RBN, the Multiple Access Relay Network (MARN) is composed of multiple source nodes, a single relay, and a single destination [START_REF] Kramer | On the white gaussian multipleaccess relay channel[END_REF][START_REF] Sankaranarayanan | Capacity theorems for the multiple-access relay channel[END_REF][START_REF] Sankaranarayanan | Hierarchical sensor networks: capacity bounds and cooperative strategies using the multiple-access relay channel model[END_REF]. Several problems related to these networks were investigated in the prior art.
In MRN, the references [START_REF] Reza | Information flow in relay networks[END_REF][START_REF] Kramer | Cooperative strategies and capacity theorems for relay networks[END_REF] derive novel achievable rates and capacity upper bounds along with corresponding information-theoretic coding schemes. There, the authors present different types of discrete memory-less and fading channels. In [START_REF] Gastpar | On the capacity of large gaussian relay networks[END_REF], the large Gaussian MRN is investigated, where the number of relays is assumed very large. It is seen that for an infinite number of relays, the upper and lower bounds on the capacity coincide. In [START_REF] Amir | On the power efficiency of sensory and ad hoc wireless networks[END_REF], the authors studied the power efficiency of sensory and ad-hoc MRN. The RBN was introduced in [START_REF] Kramer | Cooperative strategies and capacity theorems for relay networks[END_REF], where two destinations are considered. In other works, this network is referred to as the Dumb Relay Broadcast channel [START_REF] Liang | The impact of relaying on the capacity of broadcast channels[END_REF]. In the latter reference, the capacity region is derived for fully and partially cooperative RBC. In [START_REF] Reznik | Broadcast-relay channel: capacity region bounds[END_REF][START_REF] Shraga | On the discrete memoryless partially cooperative relay broadcast channel and the broadcast channel with cooperating decoders[END_REF], the case of multiple receivers is considered, where new coding schemes and the corresponding achievable rate regions are proposed. Several works targeted the MARN, which is seen as an important class of relay networks. Such a network is of interest in situations where some sources are too weak to cooperate but can still send their data to more powerful nodes. This network was introduced in [START_REF] Kramer | On the white gaussian multipleaccess relay channel[END_REF], and the capacity upper bound was derived for Gaussian and discrete memory-less MARN. In [START_REF] Sankaranarayanan | Hierarchical sensor networks: capacity bounds and cooperative strategies using the multiple-access relay channel model[END_REF], the three-tier hierarchical wireless sensor MARN was considered, where the capacity bounds were given for both scenarios with half-duplex and full-duplex relays.
In this thesis, we consider the Multiple Access Multiple Relay Network (MAMRN), composed of multiple source nodes, multiple relay nodes, and a single destination. This system can be seen as a generalization of the previously mentioned systems, except for the RBN, which includes multiple destination nodes. The considered system is found in many of today's applications. In [START_REF] Xue | Complex field network coding for multisource multi-relay single-destination uav cooperative surveillance networks[END_REF] for example, it is stated that the considered structure (i.e., the MAMRN) is the main topology structure for UAV cooperative surveillance networks. However, it is seen in [START_REF] Kramer | Cooperative strategies and capacity theorems for relay networks[END_REF] that the capacity region of the general MAMRN is still unknown. In the MAMRN, the multiple access can be either orthogonal (as considered in this thesis and in other works: check [START_REF] Cerovic | Cooperative wireless communications in the presence of limited feedback[END_REF]) or non-orthogonal (check [START_REF] Mohamad | Cooperative relaying protocols and distributed coding schemes for wireless multiterminal networks[END_REF]), where orthogonality may be achieved through time, frequency, or code division multiplexing.
Table 1.1: Different cooperative networks.
Cooperative Network | Reference
Three-terminal Relay Network (TRN) | [START_REF] Van Der | Three-terminal communication channels[END_REF][START_REF] Cover | Capacity theorems for the relay channel[END_REF][START_REF] El | Multiple user information theory[END_REF][START_REF] El | On information flow in relay networks[END_REF]
Multiple Relay Network (MRN) | [START_REF] El | On information flow in relay networks[END_REF][START_REF] Reza | Information flow in relay networks[END_REF][START_REF] Yue | Orthogonal df cooperative relay networks with multiple-snr thresholds and multiple hard-decision detections[END_REF][START_REF] Kramer | Cooperative strategies and capacity theorems for relay networks[END_REF][START_REF] Gastpar | On the capacity of large gaussian relay networks[END_REF][START_REF] Amir | On the power efficiency of sensory and ad hoc wireless networks[END_REF]
Relay Broadcast Network (RBN) | [START_REF] Kramer | Cooperative strategies and capacity theorems for relay networks[END_REF][START_REF] Liang | The impact of relaying on the capacity of broadcast channels[END_REF][START_REF] Reznik | Broadcast-relay channel: capacity region bounds[END_REF][START_REF] Shraga | On the discrete memoryless partially cooperative relay broadcast channel and the broadcast channel with cooperating decoders[END_REF]
Multiple Access Relay Network (MARN) | [START_REF] Kramer | On the white gaussian multipleaccess relay channel[END_REF][START_REF] Sankaranarayanan | Capacity theorems for the multiple-access relay channel[END_REF][START_REF] Sankaranarayanan | Hierarchical sensor networks: capacity bounds and cooperative strategies using the multiple-access relay channel model[END_REF]
Multiple Access Multiple Relay Network (MAMRN) | [START_REF] Kramer | Cooperative strategies and capacity theorems for relay networks[END_REF][START_REF] Cerovic | Cooperative wireless communications in the presence of limited feedback[END_REF][START_REF] Mohamad | Cooperative relaying protocols and distributed coding schemes for wireless multiterminal networks[END_REF], This Work
Among the prior works targeting the MAMRN, two major recent contributions are the theses [START_REF] Cerovic | Cooperative wireless communications in the presence of limited feedback[END_REF] and [START_REF] Mohamad | Cooperative relaying protocols and distributed coding schemes for wireless multiterminal networks[END_REF]. In [START_REF] Mohamad | Cooperative relaying protocols and distributed coding schemes for wireless multiterminal networks[END_REF], the outage analysis of different examples of MAMRN is presented (see [START_REF] Mohamad | Outage achievable rate analysis for the non orthogonal multiple access multiple relay channel[END_REF][START_REF] Mohamad | Outage analysis of various cooperative strategies for the multiple access multiple relay channel[END_REF][START_REF] Mohamad | Outage analysis of dynamic selective decode-and-forward in slow fading wireless relay networks[END_REF]). The analysis covers different coding and decoding schemes as well as different transmission scenarios (orthogonal and non-orthogonal). Also in [START_REF] Mohamad | Cooperative relaying protocols and distributed coding schemes for wireless multiterminal networks[END_REF], several selection strategies are proposed for the MAMRN (see [START_REF] Mohamad | Dynamic selective decode and forward in wireless relay networks[END_REF]). In [START_REF] Cerovic | Cooperative wireless communications in the presence of limited feedback[END_REF], further contributions are presented, focusing on the orthogonal MAMRN. Three main problems were investigated: resource allocation, relaying node selection strategies (see [START_REF] Cerovic | Centralized scheduling strategies for cooperative harq retransmissions in multi-source multirelay wireless networks[END_REF]), and the control exchange process (see [START_REF] Cerović | Efficient cooperative harq for multi-source multi-relay wireless networks[END_REF]).
In this thesis, user cooperation is considered: a user that does not have a message to send acts as a relay node in order to improve the performance [START_REF] Cerović | Efficient cooperative harq for multi-source multi-relay wireless networks[END_REF]. The relaying nodes (i.e., the sources and the relays) are assumed half-duplex; they can listen to each other while not transmitting. The concept of user cooperation was introduced in the work of Sendonaris et al. [START_REF] Sendonaris | User cooperation diversity. part i. system description[END_REF][START_REF] Sendonaris | User cooperation diversity. part ii. implementation aspects and performance analysis[END_REF]; it is sometimes referred to as "cooperative diversity" [START_REF] Nicholas Laneman | Cooperative diversity in wireless networks: Efficient protocols and outage behavior[END_REF]. In these works, user cooperation is seen as a promising method that provides significant gains in various performance metrics (e.g., outage probability, diversity gain, multiplexing gain, and the diversity-multiplexing trade-off). We summarize the mentioned cooperative networks in Table 1.1.
Different relaying protocols
In our work, the relaying nodes apply the Selective Decode-and-Forward (SDF) relaying protocol, which means that they can forward only a signal representative of successfully decoded source messages. The error detection is based on Cyclic Redundancy Check (CRC) bits that are appended to each source message. The SDF relaying protocol is an advanced version of the Decode-and-Forward (DF) relaying protocol. The principle of the DF protocol is introduced in [START_REF] Cover | Capacity theorems for the relay channel[END_REF]; unlike in SDF, cooperating nodes are obliged to wait until they have successfully decoded all the source messages before starting to cooperate. In [START_REF] Yue | Orthogonal df cooperative relay networks with multiple-snr thresholds and multiple hard-decision detections[END_REF], an orthogonal multiple DF relaying network was presented, in which a diversity analysis and an error probability derivation were carried out. In [START_REF] Mohammad R Javan | Resource allocation in decode-and-forward cooperative communication networks with limited rate feedback channel[END_REF], the problem of resource allocation in DF cooperative communication networks was investigated, considering a limited rate feedback channel. SDF belongs to the category of non-linear (regenerative) relaying protocols.
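To illustrate the difference between SDF and DF at a relaying node, the following minimal Python sketch (hypothetical helper names, CRC-32 as a stand-in for the actual CRC used) shows how a node could build its decoding set from the CRC checks, and how plain DF would additionally wait until every source is decoded:

```python
import zlib

def crc_ok(payload: bytes, crc: int) -> bool:
    """Check a CRC-32 appended to a decoded source message (illustrative stand-in)."""
    return zlib.crc32(payload) == crc

def sdf_decoding_set(decoded_messages):
    """Selective Decode-and-Forward: keep only the sources whose CRC checks.

    decoded_messages: dict mapping source id -> (payload, crc) after channel decoding.
    Returns the decoding set, i.e., the sources this node may help in the
    retransmission phase.
    """
    return {s for s, (payload, crc) in decoded_messages.items() if crc_ok(payload, crc)}

def df_ready_to_cooperate(decoded_messages, all_sources):
    """Plain DF: the node cooperates only once every source message is decoded correctly."""
    return sdf_decoding_set(decoded_messages) == set(all_sources)
```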
Other commonly used protocols in the literature that belong to the same category are Compress-and-Forward (CF) [START_REF] Kramer | Cooperative strategies and capacity theorems for relay networks[END_REF][START_REF] Lim | Noisy network coding[END_REF], Compute-and-Forward (CoF) [START_REF] Nazer | Compute-and-forward: Harnessing interference through structured codes[END_REF][START_REF] Wei | Compute-and-forward network coding design over multisource multi-relay channels[END_REF], and Quantize-Map-and-Forward (QMF) [START_REF] Avestimehr | Wireless network information flow: A deterministic approach[END_REF]. In the CF relaying protocol, the relay transmits an estimated version of its observation of the received signal; the relay node uses source coding to exploit the side information available at the destination. In the CoF relaying protocol, the relay decodes linear equations of the transmitted messages using the noisy linear combinations provided by the channel; such a protocol is suitable for multi-source networks. The QMF relaying protocol is another generalization of the CF relaying protocol, where the estimated version of the signal is obtained by quantizing the received signal at the noise level and mapping it to a random Gaussian codeword for forwarding. The final destination decodes the source's message based on the received signal.
The best-known example of the linear relaying protocols is certainly Amplify-and-Forward (AF) [START_REF] Laneman | Cooperative Diversity in Wireless Networks: Algorithms and Architectures[END_REF][START_REF] Hao | Amplify-and-forward relay identification using joint tx/rx i/q imbalance-based device fingerprinting[END_REF], while many other interesting RP exist, such as Coded Cooperation (CC) [START_REF] Janani | Coded cooperation in wireless communications: space-time transmission and iterative decoding[END_REF]. In the AF relaying protocol, the relay transmits an amplified version of its received message; it can be seen as a repetition code, where the relay simply forwards a scaled version of its received signal. For CC, the principle is to partition the codewords of each transmitting node and transmit each part through an independent channel. Other types of RP are Non-Orthogonal Amplify-and-Forward (NOAF) [START_REF] Yu | Amplify-and-forward relay selection in cooperative non-orthogonal multiple access[END_REF] and Dynamic Decode-and-Forward (DDF). These protocols are evaluated in terms of the Diversity-Multiplexing Trade-off (DMT) [START_REF] Kambiz Azarian | On the achievable diversity-multiplexing tradeoff in half-duplex cooperative channels[END_REF]. In DDF, an approach is needed to choose the point where the relay switches from listening to transmitting. As this point is not fixed, fountain codes [START_REF] Xu | Fountain codes-based two-way multiply-andforward relaying in rayleigh fading channels[END_REF] are considered, as they do not have a predetermined rate at the transmitter.
Performance evaluations and comparisons of the mentioned protocols have been carried out in the literature (e.g., in [START_REF] Laneman | Distributed space-time-coded protocols for exploiting cooperative diversity in wireless networks[END_REF][START_REF] Laneman | Cooperative diversity in wireless networks: Efficient protocols and outage behavior[END_REF][START_REF] Trung | On the performance gain of hybrid decode-amplify-forward cooperative communications[END_REF]), showing that there exist scenarios where the SDF relaying protocol is outperformed by other protocols. Nevertheless, there is no decisive conclusion in this research area, and rigorous fair comparisons still have to be made for the slow fading half-duplex MAMRN. We summarize the mentioned RP in Tables 1.2 and 1.3.
Relaying Protocol | Type | Reference
Decode-and-Forward (DF) | Regenerative | [START_REF] Cover | Capacity theorems for the relay channel[END_REF][START_REF] Yue | Orthogonal df cooperative relay networks with multiple-snr thresholds and multiple hard-decision detections[END_REF][START_REF] Mohammad R Javan | Resource allocation in decode-and-forward cooperative communication networks with limited rate feedback channel[END_REF]

Method | Reference
Obliged to wait to decode all sources before starting relaying | [START_REF] Cover | Capacity theorems for the relay channel[END_REF][START_REF] Yue | Orthogonal df cooperative relay networks with multiple-snr thresholds and multiple hard-decision detections[END_REF][START_REF] Mohammad R Javan | Resource allocation in decode-and-forward cooperative communication networks with limited rate feedback channel[END_REF]
Can switch to relaying before decoding all sources | [START_REF] Cerović | Efficient cooperative harq for multi-source multi-relay wireless networks[END_REF], This Work
Switching point between listening and relaying is not fixed | [START_REF] Kambiz Azarian | On the achievable diversity-multiplexing tradeoff in half-duplex cooperative channels[END_REF]
An estimated version of the received signal is transmitted | [START_REF] Kramer | Cooperative strategies and capacity theorems for relay networks[END_REF][START_REF] Lim | Noisy network coding[END_REF]
Uses noisy linear combination to decode transmitted messages | [START_REF] Nazer | Compute-and-forward: Harnessing interference through structured codes[END_REF][START_REF] Wei | Compute-and-forward network coding design over multisource multi-relay channels[END_REF]
A quantized version of the signal is transmitted | [START_REF] Avestimehr | Wireless network information flow: A deterministic approach[END_REF]
Repetition code, transmitting amplified version of the signal | [START_REF] Laneman | Cooperative Diversity in Wireless Networks: Algorithms and Architectures[END_REF][START_REF] Hao | Amplify-and-forward relay identification using joint tx/rx i/q imbalance-based device fingerprinting[END_REF]
Uses partition of the codewords of each transmitting node | [START_REF] Janani | Coded cooperation in wireless communications: space-time transmission and iterative decoding[END_REF]
Uses AF for NOMA to serve primary and secondary users | [START_REF] Yu | Amplify-and-forward relay selection in cooperative non-orthogonal multiple access[END_REF]
Table 1.3: Relaying protocols summary: retransmission method.
Link adaptation
Link Adaptation (LA) problems play a major role in the performance of wireless networks. LA refers to how the nodes in the network adapt to the channel gains and the radio conditions; based on the state of the network, it covers the process of allocating the resource elements to the different nodes, including power allocation, scheduling, sub-carrier allocation, and any other scarce resource allocation. In cooperative networks, a new Degree of Freedom (DoF) appears, corresponding to the retransmission phase. In other words, the allocations presented in the state of the art for non-cooperative networks are not (always) applicable to cooperative systems. In this thesis, we focus on LA problems for the MAMRN, where Time Division Multiplexing (TDM) is used to achieve orthogonality. Specifically, we focus on the rate allocation problem and, furthermore, on the joint rate and channel use allocation. Note that the generalization to Frequency Division Multiplexing (FDM) is straightforward and is investigated in the last chapter of this thesis.
Different link adaptation problems
In the prior art, several works tackled the problem of resource allocation [START_REF] Zhao | Resource allocation for multiple access channel with conferencing links and shared renewable energy sources[END_REF][START_REF] Abuajwa | Resource allocation for throughput versus fairness trade-offs under user data rate fairness in noma systems in 5g networks[END_REF]. In [START_REF] Lozano | Optimum power allocation for parallel gaussian channels with arbitrary input distributions[END_REF][START_REF] Papandreou | Bit and power allocation in constrained multicarrier systems: The single-user case[END_REF], the power allocation problem is investigated. In [START_REF] Lozano | Optimum power allocation for parallel gaussian channels with arbitrary input distributions[END_REF], the authors give the power allocation that maximizes the mutual information over parallel channels; the presented policy generalizes the water-filling solution and retains some of its intuition. In [START_REF] Papandreou | Bit and power allocation in constrained multicarrier systems: The single-user case[END_REF], the bit and power loading problem is addressed as a constrained multi-variable non-linear optimization problem, and the authors present the main classes of loading problems, such as rate maximization and margin maximization. Other works target the power allocation problem in cooperative networks [START_REF] Dai | Distributed power allocation for cooperative wireless network localization[END_REF][START_REF] Festus Kehinde Ojo | Optimal power allocation in cooperative networks with energy-saving protocols[END_REF]. The reference [START_REF] Dai | Distributed power allocation for cooperative wireless network localization[END_REF] targets Device-to-Device (D2D) communication in cellular networks and aims at optimizing the power allocation for cooperative wireless network localization. The authors decompose the power allocation problem into an infrastructure phase and a cooperation phase, develop several power allocation algorithms, and show through numerical results a significant improvement in localization accuracy compared to uniform strategies. On the other hand, the reference [START_REF] Festus Kehinde Ojo | Optimal power allocation in cooperative networks with energy-saving protocols[END_REF] targets the power allocation problem from the energy-saving perspective.
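For concreteness, the classical water-filling allocation over parallel Gaussian channels, which the policy of the first reference above generalizes, can be sketched as follows; the bisection search on the water level is an illustrative implementation choice, not the algorithm of that reference:

```python
import numpy as np

def water_filling(gains, total_power, tol=1e-9):
    """Classical water-filling over parallel Gaussian channels.

    Maximizes sum_i log2(1 + p_i * g_i) subject to sum_i p_i = total_power, p_i >= 0,
    for strictly positive channel gains g_i.  The optimal allocation is
    p_i = max(0, mu - 1/g_i), where the water level mu is found here by bisection.
    """
    gains = np.asarray(gains, dtype=float)
    inv = 1.0 / gains
    lo, hi = inv.min(), inv.max() + total_power   # mu lies in this interval
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        power = np.maximum(0.0, mu - inv)
        if power.sum() > total_power:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.0, 0.5 * (lo + hi) - inv)

# Example: three sub-channels with unequal gains and a unit power budget.
p = water_filling([2.0, 1.0, 0.25], total_power=1.0)
```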
Concerning the sub-carrier allocation problem, several papers targeted it in non-cooperative networks [START_REF] Iqbal | Channel allocation in multi-radio multi-channel wireless mesh networks: a categorized survey[END_REF][START_REF] Kumar | Channel allocation in cognitive radio networks using evolutionary technique[END_REF][START_REF]Survey on channel allocation techniques for wireless mesh network to reduce contention with energy requirement[END_REF][START_REF] Yilmazel | A novel approach for channel allocation in ofdm based cognitive radio technology[END_REF] and in cooperative networks [START_REF] Zhao | Channel allocation for cooperative relays in cognitive radio networks[END_REF][START_REF] Deng | Cooperative channel allocation and scheduling in multi-interface wireless mesh networks[END_REF]. The reference [START_REF] Iqbal | Channel allocation in multi-radio multi-channel wireless mesh networks: a categorized survey[END_REF] is a survey of the different heuristic channel allocation methods. Another survey is presented in [START_REF]Survey on channel allocation techniques for wireless mesh network to reduce contention with energy requirement[END_REF], where the presented channel allocation techniques aim at reducing contention under energy requirements. In [START_REF] Kumar | Channel allocation in cognitive radio networks using evolutionary technique[END_REF], a channel allocation technique is presented based on non-dominated sets of solutions obtained under different objectives. In a recent publication [START_REF] Yilmazel | A novel approach for channel allocation in ofdm based cognitive radio technology[END_REF], the authors propose a channel allocation technique based on artificial intelligence, specifically a genetic algorithm, and show that the result optimized with the genetic algorithm outperforms the result obtained without it.
On the other hand, a lot of research interest is seen in the rate allocation problem. In [START_REF] Toumpis | Capacity regions for wireless ad hoc networks[END_REF][START_REF] Kodialam | Characterizing the capacity region in multi-radio multi-channel wireless mesh networks[END_REF][START_REF] Akbar | Multi-cell multiuser massive mimo networks: User capacity analysis and pilot design[END_REF][START_REF] Sridharan | On the capacity of a class of mimo cognitive radios[END_REF][START_REF] Liu | Capacity and rate regions of a class of broadcast interference channels[END_REF], the authors present the capacity region of different networks. The capacity region represents the set of source rates of a network that can possibly be decoded by the destination. Following this definition, we see the link between rate allocation and capacity region analysis. On the other hand, for the networks where the capacity region is not well known, the rate allocation problem answers the question of which rate each source should use to improve the transmission and reception. In [START_REF] Hao Qiu | Energy-efficient rate allocation for noma-mec offloading under outage constraints[END_REF], the rate allocation problem is addressed from the perspective of energy efficiency for Non-Orthogonal Multiple Access (NOMA) networks. In this letter, statistical state information is assumed, and the energy minimization problem under latency and outage constraints is formulated; the authors propose an iterative water-filling-based rate allocation algorithm that reduces energy consumption. In [START_REF] Jung | Optimized data rate allocation for dynamic sensor fusion over resource constrained communication networks[END_REF], on the other hand, a dynamic sensor fusion communication network is considered, and the rate allocation is optimized based on a heuristic approach that minimizes a weighted sum of communication costs subject to a constraint on the state estimation error at the fusion center. Furthermore, several works tackled LA for cooperative networks [START_REF] Yin | Resource allocation in cooperative networks with wireless information and power transfer[END_REF]. The authors of [START_REF] Kaiser | Neuro-fuzzy based relay selection and resource allocation for cooperative networks[END_REF] propose a Neuro-Fuzzy (NF) algorithm to perform the relay selection and resource allocation process. More recent works (see [START_REF] Zeng | Toward ul-dl rate balancing: Joint resource allocation and hybrid-mode multiple access for uav-bs-assisted communication systems[END_REF][START_REF] Dayarathna | Maximizing sumrate via relay selection and power control in dual-hop networks[END_REF]) tackled the LA problems for multi-source cooperative systems. In [START_REF] Naeem | Resource allocation techniques in cooperative cognitive radio networks[END_REF], a survey is presented on the LA problem in cooperative cognitive radio networks.

Most of the papers that deal with an efficient rate allocation problem in combination with a control exchange process consider a single source, a single or possibly multiple relays, and a single destination. In [START_REF] Alves | Throughput performance of incremental decode-and-forward using infra-structured relays and rate allocation[END_REF], the presence of a fixed infrastructure relay is considered, and it is shown that large gains in throughput and coverage area can be obtained when the source and the relay are allowed to use different spectral efficiencies; the simple selection combining technique is used there.
A scenario with a single ad-hoc relay and no direct link from the source to the destination, where repetition coding is adopted, i.e., Chase Combining (CC)-type Hybrid Automatic Repeat Request (HARQ), is studied in [START_REF] Seong Hwan Kim | Optimal rate selection scheme in a two-hop relay network adopting chase combining harq in rayleigh block-fading channels[END_REF]. There, a rate allocation strategy for both the source and the relay is proposed under an outage probability constraint, for both finite and infinite numbers of allowed retransmissions. In [START_REF] Saeed R Khosravirad | Rate adaptation for cooperative harq[END_REF], a rate adaptation problem is formulated as a Markov decision process where dynamic programming is employed for optimization, in a single relay scenario with feedback available from both the relay and the destination to the source. The Incremental Redundancy (IR)-type HARQ technique is adopted in that work, and its advantage in terms of average throughput and outage probability over non-adaptive HARQ is demonstrated; the analysis is also extended to multiple relay scenarios.
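As a generic illustration of rate selection under an outage constraint (not the specific schemes of the works above), one can pick, from a discrete MCS set, the rate maximizing the expected goodput among the rates satisfying the constraint; the outage-probability function below is a hypothetical input, e.g., derived from the channel distribution:

```python
def select_rate(mcs_rates, outage_prob, max_outage):
    """Pick the MCS rate maximizing the long-term goodput R * (1 - P_out(R))
    among the rates whose outage probability respects the constraint."""
    feasible = [r for r in mcs_rates if outage_prob(r) <= max_outage]
    if not feasible:
        return min(mcs_rates)  # fall back to the most robust rate
    return max(feasible, key=lambda r: r * (1.0 - outage_prob(r)))
```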
Other interesting works concerning link adaptation are those that target joint allocation strategies, in which the authors optimize more than one resource element. In [START_REF] Maric | Bandwidth and power allocation for cooperative strategies in gaussian relay networks[END_REF][START_REF] Cheng | Joint relay ordering and linear finite field network coding for multiple-source multiple-relay wireless sensor networks[END_REF], the authors target the joint allocation of channel and power for different wireless networks. In [START_REF] Combes | Dynamic rate and channel selection in cognitive radio systems[END_REF], on the other hand, a joint allocation of channel and rate is presented. Joint power and rate optimization for a multi-hop relay network (where multiple relays are serially connected from the source to the destination) using the IR-HARQ technique is investigated in [START_REF] Hwan | On the optimal link adaptation in linear relay networks with incremental redundancy harq[END_REF]. It is shown that the proposed scheme outperforms both the IR-HARQ scheme with fixed power and rate and the CC-HARQ scheme in terms of the long-term average transmission rate. We summarize the mentioned LA works in Table 1.4.
Link adaptation problems in learning and decentralized networks
In most of the previously mentioned works, a central node is responsible for the different resource allocations, and the allocation is based on some knowledge of the channel states. Nevertheless, LA can also be addressed in other frameworks, such as game theory (decentralized systems) or learning (no knowledge of the channel states). In the learning framework, some of the LA problems can be formulated as a Multi-Armed Bandit (MAB) problem. The main issue the MAB framework tackles is the exploration-exploitation dilemma. In scenarios where multiple choices are possible (multiple arms), each with an unknown average reward, MAB algorithms provide sequential decision rules for choosing between learning more (exploration) and staying with the option that gave the best rewards in the past (exploitation).
There are different types of MAB problems, distinguished by their underlying assumptions. In the survey [START_REF] Bubeck | Regret analysis of stochastic and nonstochastic multi-armed bandit problems[END_REF], three fundamental types of MAB problems are identified: stochastic, adversarial, and Markovian. In this manuscript, we are mainly interested in the stochastic MAB problem, as it aligns with the rate allocation problem (the reward is stochastic), as we will see next. From a historical point of view, Lai and Robbins [START_REF] Leung | Asymptotically efficient adaptive allocation rules[END_REF] introduced the first analysis of stochastic bandits with an asymptotic analysis of regret. There, the principle of optimism in the face of uncertainty (being optimistic about poorly explored choices) was used, and the Upper Confidence Bound (UCB) algorithm was proposed; this concept is widely used throughout the MAB literature.
In UCB-like algorithms, we favor the exploration of actions with a strong potential to have an optimal value [START_REF] Weng | The multi-armed bandit problem and its solutions[END_REF], and UCB measures this potential by an upper confidence bound on the reward value. Building on this line of work, many further algorithms have been proposed (see [START_REF] Bubeck | Bandits games and clustering foundations[END_REF], section 2.2, and [START_REF] Garivier | The kl-ucb algorithm for bounded stochastic bandits and beyond[END_REF]). In [START_REF] Garivier | The kl-ucb algorithm for bounded stochastic bandits and beyond[END_REF], the authors prove that the proposed KL-UCB algorithm attains the optimal regret rate in finite time and, in addition, that it is optimal for problems with Bernoulli-distributed rewards.
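A minimal UCB1-style sketch, assuming i.i.d. rewards in [0, 1] for each arm (the arm and reward abstractions are hypothetical, e.g., candidate MCS rates and decoding successes), could look as follows:

```python
import math
import random

def ucb1(pull, n_arms, horizon):
    """UCB1: play each arm once, then pick the arm with the highest
    empirical mean plus an exploration bonus sqrt(2 ln t / n_k)."""
    counts = [0] * n_arms
    sums = [0.0] * n_arms
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1  # initialization: pull every arm once
        else:
            arm = max(range(n_arms),
                      key=lambda k: sums[k] / counts[k]
                      + math.sqrt(2.0 * math.log(t) / counts[k]))
        reward = pull(arm)          # observed reward in [0, 1]
        counts[arm] += 1
        sums[arm] += reward
    return [s / c for s, c in zip(sums, counts)]

# Illustrative Bernoulli arms: arm k succeeds with (unknown) probability means[k].
means = [0.2, 0.5, 0.8]
estimates = ucb1(lambda k: 1.0 if random.random() < means[k] else 0.0,
                 n_arms=len(means), horizon=5000)
```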
Another type of algorithm widely used is based on Thompson Sampling (TS), also known as posterior sampling or probability matching [START_REF] William R Thompson | On the likelihood that one unknown probability exceeds another in view of the evidence of two samples[END_REF]. Contrary to UCB-type algorithms, TS algorithms maintain a posterior distribution over the unknown quantity to be learned. At each step, the algorithm samples from the current posterior and chooses the arm that maximizes the expected reward under this sample; after each iteration, the posterior distribution is updated. Although this type of algorithm was largely ignored in the academic literature until recently, many present-day applications rely on these strategies [START_REF] Chapelle | An empirical evaluation of thompson sampling[END_REF]. For interested readers, [START_REF] Russo | A tutorial on thompson sampling[END_REF] gives a detailed discussion on when, why, and how to apply TS.
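For Bernoulli rewards, Thompson Sampling with a Beta posterior admits an equally short sketch (again with a hypothetical reward model):

```python
import random

def thompson_sampling(pull, n_arms, horizon):
    """Beta-Bernoulli Thompson Sampling: keep a Beta(a_k, b_k) posterior per arm,
    sample one value per arm, play the arm with the largest sample, then update."""
    a = [1.0] * n_arms  # prior successes + 1
    b = [1.0] * n_arms  # prior failures + 1
    for _ in range(horizon):
        samples = [random.betavariate(a[k], b[k]) for k in range(n_arms)]
        arm = samples.index(max(samples))
        reward = pull(arm)  # 0 or 1
        a[arm] += reward
        b[arm] += 1 - reward
    return [a[k] / (a[k] + b[k]) for k in range(n_arms)]
```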
Besides UCB-type and TS-type algorithms, there are other approaches to tackling the MAB problem. In [START_REF] Combes | Minimal exploration in structured stochastic bandits[END_REF], rather than using the concept of optimism in the face of uncertainty, a new general algorithm is proposed, aiming at matching the minimal exploration rates of sub-optimal arms as characterized in the derivation of the regret lower bound. In this algorithm, rather than only performing exploration and exploitation, a third process is taken into consideration as well: estimation. Among simpler algorithms, ϵ-greedy is well known: a fixed value ϵ ∈ [0, 1] determines the fraction of time spent on exploration versus exploitation. Since a fixed ϵ leads to linear (rather than logarithmic) regret, decreasing ϵ-greedy variants are used, where ϵ decreases over time (usually as a constant divided by time). Several papers in the literature compare the previously mentioned algorithms, as in [START_REF] Ben Ameur | Autonomous power decision for the grant free access musa scheme in the mmtc scenario[END_REF], where the power allocation problem is solved using several algorithms, such as UCB, TS, and ϵ-greedy.
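The decreasing ϵ-greedy variant mentioned above can be sketched as follows, with ϵ_t = c/t as one common illustrative choice:

```python
import random

def decreasing_epsilon_greedy(pull, n_arms, horizon, eps_scale=5.0):
    """epsilon-greedy with epsilon_t = min(1, eps_scale / t): explore a random arm
    with probability epsilon_t, otherwise exploit the best empirical mean so far."""
    counts = [0] * n_arms
    sums = [0.0] * n_arms
    for t in range(1, horizon + 1):
        eps = min(1.0, eps_scale / t)
        if 0 in counts or random.random() < eps:
            arm = random.randrange(n_arms)                               # explore
        else:
            arm = max(range(n_arms), key=lambda k: sums[k] / counts[k])  # exploit
        reward = pull(arm)
        counts[arm] += 1
        sums[arm] += reward
    return [s / c if c else 0.0 for s, c in zip(sums, counts)]
```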
The MAB selection strategy can be generalized to the case where multiple arms are selected jointly at each decision instance. Such MAB problems are known as Combinatorial MAB (CMAB), where a subset of arms, forming a super arm, is selected at each step. In the literature, CMAB has been investigated in several applications. In [START_REF] Nasim | Learning-based beamforming for multi-user vehicular communications: A combinatorial multi-armed bandit approach[END_REF], the problem of beam selection in a vehicular network is solved using CMAB algorithms based on TS. In [START_REF] Chen | Combinatorial multi-armed bandit: General framework and applications[END_REF], CMAB is also presented, but this time using UCB-type algorithms; there, two applications are considered, online advertising and social influence maximization for viral marketing. In [START_REF] Kuchibhotla | Combinatorial sleeping bandits with fairness constraints and long-term non-availability of arms[END_REF], the Combinatorial Sleeping MAB model with Fairness constraints (CSMAB-F) is presented; an arm is said to be sleeping when it is not always available.
In the literature, the LA problem is also seen in decentralized networks, where the notion of games (and game theory) arises. In LA games, multiple players compete for scarce resources with the goal of obtaining the optimal reward; the reward of each player is a function of its own decision and of the decisions of the other competing players. In addition, game theory is used in cooperative networks such as interference relay networks [START_REF] Zappone | Energyaware competitive power control in relay-assisted interference wireless networks[END_REF][START_REF] Veronica Belmega | Power allocation games in interference relay channels: Existence analysis of nash equilibria[END_REF]. In [START_REF] Zhang | Sub-channel and power allocation for non-orthogonal multiple access relay networks with amplify-andforward protocol[END_REF], the problem of joint allocation of power and sub-channel is investigated for cooperative networks. There, an AF relay assigns the sub-channels and allocates different power levels to a set of source-destination pairs. Each pair consists of a source node and a destination node, where the source node transmits through the relay to its paired destination node. The joint sub-channel and power allocation is then formulated as a non-convex optimization problem maximizing the total sum-rate. Finding the optimal solution is NP-hard and requires an exhaustive search; therefore, an efficient low-complexity resource allocation algorithm is required. In [START_REF] Zhang | Sub-channel and power allocation for non-orthogonal multiple access relay networks with amplify-andforward protocol[END_REF], this problem was tackled using matching theory, and two low-complexity algorithms were proposed. Another approach is to analyse the system from a game-theoretic perspective, studying its equilibrium points (similar to the analysis presented in [START_REF] Shams | Energy-efficient power control for multiple-relay cooperative networks using q-learning[END_REF]). The previous problem can be further separated into two sub-problems, a sub-channel allocation problem and a power allocation problem. Such a separation can lead to a bi-form problem (see for example [START_REF] Driouech | D2d mobile relaying meets noma-part i: A biform game analysis[END_REF][START_REF] Driouech | D2d mobile relaying meets noma-part ii: A reinforcement learning perspective[END_REF]), i.e., a problem decomposed into a competitive sub-problem (sub-channel allocation) and a cooperative sub-problem (relay power allocation). In [START_REF] Eirini | Joint utility-based uplink power and rate allocation in wireless networks: A noncooperative game theoretic framework[END_REF], the joint power and rate allocation problem is presented in the game-theoretic framework, followed by an analysis of convergence and uniqueness. Finally, [START_REF] Sharma | Implementing game theory in cognitive radio network for channel allocation: An overview[END_REF] surveys the implementation of the game-theoretic framework for channel allocation in cognitive radio networks. We summarize the mentioned LA learning and game-theory works in Table 1.5.
[START_REF] Zappone | Energyaware competitive power control in relay-assisted interference wireless networks[END_REF][START_REF] Veronica Belmega | Power allocation games in interference relay channels: Existence analysis of nash equilibria[END_REF][START_REF] Shams | Energy-efficient power control for multiple-relay cooperative networks using q-learning[END_REF][START_REF] Eirini | Joint utility-based uplink power and rate allocation in wireless networks: A noncooperative game theoretic framework[END_REF][START_REF] Sharma | Implementing game theory in cognitive radio network for channel allocation: An overview[END_REF]
Bi-form problem | [START_REF] Driouech | D2d mobile relaying meets noma-part i: A biform game analysis[END_REF][START_REF] Driouech | D2d mobile relaying meets noma-part ii: A reinforcement learning perspective[END_REF]
Table 1.5: Different link adaptation problems in learning and decentralized networks.
Relaying nodes selection strategies
The relaying node selection strategy, or retransmission scheduling problem, consists in organizing the retransmissions of the relaying nodes. It describes the algorithm the destination uses to decide which relaying nodes are going to be active and send redundancies and which are not. In the state of the art, several works targeted the selection strategy problem. In [START_REF] Ai | Performance analysis of hybrid-arq with chase combining over cooperative relay network with asymmetric fading channels[END_REF], the analysis of the HARQ mechanism in single relay cooperative networks is presented. In [START_REF] Wang | User scheduling and relay selection with fairness concerns in multi-source cooperative networks[END_REF], user scheduling for cooperative communication systems is studied. There, a perfect-link assumption is adopted between the sources and the relays, which might be unrealistic from a practical point of view. The same assumption is considered in reference [START_REF] Koç | Relay selection in two-way full-duplex relay networks over nakagami-m fading channels[END_REF], where a Two-Way Full-Duplex Amplify-and-Forward (TWFD-AF) paradigm is studied. The authors propose a max-min relay selection strategy which selects the relay that maximizes the minimum Signal-to-Interference-and-Noise Ratio (SINR) at the respective sources. Besides the unrealistic assumption that the nodes have full knowledge of the channel fading coefficients, another drawback of this work is that the analysis is limited to a scenario with only two sources.
The references [START_REF] Al Khansa | Parallel retransmissions in orthogonal multiple access multiple relay networks[END_REF][START_REF] Cerović | Efficient cooperative harq for multi-source multi-relay wireless networks[END_REF][START_REF] Cheng | Joint relay ordering and linear finite field network coding for multiple-source multiple-relay wireless sensor networks[END_REF][START_REF] Mohamad | Cooperative incremental redundancy hybrid automatic repeat request strategies for multi-source multi-relay wireless networks[END_REF] consider the relay selection problem for the MAMRN. In [START_REF] Cheng | Joint relay ordering and linear finite field network coding for multiple-source multiple-relay wireless sensor networks[END_REF], a relay ordering algorithm is proposed aiming at exploiting the communication between the relays. In [START_REF] Mohamad | Cooperative incremental redundancy hybrid automatic repeat request strategies for multi-source multi-relay wireless networks[END_REF], a relaying node selection strategy is proposed which aims at minimizing the common outage while performing Multi-User (MU) encoding. It is meant by MU encoding that a relaying node helps multiple source nodes at the same time. In [START_REF] Cerović | Efficient cooperative harq for multi-source multi-relay wireless networks[END_REF], a novel strategy is used. Rather than minimizing the common outage as in [START_REF] Mohamad | Cooperative incremental redundancy hybrid automatic repeat request strategies for multi-source multi-relay wireless networks[END_REF], the strategy of [START_REF] Cerović | Efficient cooperative harq for multi-source multi-relay wireless networks[END_REF] selects the relay having the highest mutual information with the destination. It is seen that this strategy outperforms the one proposed in [START_REF] Mohamad | Cooperative incremental redundancy hybrid automatic repeat request strategies for multi-source multi-relay wireless networks[END_REF]. This can be justified by the fact that although minimizing the common outage might increase the spectral efficiency, it does not necessarily maximize it. A similar method is seen in [START_REF] Yu | Efficient relay selection scheme utilizing superposition modulation in cooperative communication[END_REF], with the limitation of considering a single source system, with multiple relays. The authors of [START_REF] Cerović | Efficient cooperative harq for multi-source multi-relay wireless networks[END_REF] investigated Single-User (SU) encoding where a selected relaying node only helps a single source node. Although the MU encoding used in [START_REF] Mohamad | Cooperative incremental redundancy hybrid automatic repeat request strategies for multi-source multi-relay wireless networks[END_REF] is promising, it lacks practicality. SU encoding, on the other hand, is practical as it is a protocol based on existing Low-Density Parity-Check (LDPC) and Turbo codes which are used in the 3GPP LTE standards.
To our interest, the publication [START_REF] Cerović | Efficient cooperative harq for multi-source multi-relay wireless networks[END_REF] shows that the performance of MU and SU encoding is quite similar. Following this result, we propose a novel selection scheme based on the practical SU encoding [START_REF] Al Khansa | Parallel retransmissions in orthogonal multiple access multiple relay networks[END_REF] (investigated in detail in Chapter 4, Section 1). In all the previously mentioned works [START_REF] Cerović | Efficient cooperative harq for multi-source multi-relay wireless networks[END_REF][START_REF] Ai | Performance analysis of hybrid-arq with chase combining over cooperative relay network with asymmetric fading channels[END_REF][START_REF] Wang | User scheduling and relay selection with fairness concerns in multi-source cooperative networks[END_REF][START_REF] Mohamad | Cooperative incremental redundancy hybrid automatic repeat request strategies for multi-source multi-relay wireless networks[END_REF], a single relay is selected at each time slot of the retransmission phase. As SU encoding is being used, the strategy proposed in [START_REF] Al Khansa | Parallel retransmissions in orthogonal multiple access multiple relay networks[END_REF] aims at exploiting the multi-path diversity of the relaying nodes. The idea is based on the fact that each relaying node has its own power budget, and thus, several relaying nodes can be activated at the same time. Thus, [START_REF] Al Khansa | Parallel retransmissions in orthogonal multiple access multiple relay networks[END_REF] proposes a strategy where multiple relaying nodes are activated to help one selected source node. The idea of this strategy, also called Parallel Retransmission (PR), is that rather than choosing the best relaying node to send redundancies, the destination chooses the best source node, which can then be helped by multiple relaying nodes. Upon activating several relaying nodes, a better redundancy version of the considered source can be received at the destination.
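The source-centric selection underlying PR can be illustrated by the following sketch; the inputs (per-node mutual information towards the destination and per-node decoding sets) are hypothetical, and summing the per-node terms is a simplified stand-in for the equivalent mutual information metric used in the cited work:

```python
def parallel_retransmission_choice(undecoded_sources, decoding_sets, mi_to_dest):
    """Illustrative PR selection: for each source not yet decoded at the destination,
    gather every relaying node whose decoding set contains that source; score the
    source by the total mutual information those nodes offer towards the destination,
    and pick the best (source, set of helpers) pair."""
    best = None
    for s in undecoded_sources:
        helpers = [n for n, dec in decoding_sets.items() if s in dec]
        if not helpers:
            continue
        score = sum(mi_to_dest[n] for n in helpers)  # simplified aggregate metric
        if best is None or score > best[0]:
            best = (score, s, helpers)
    return best  # (score, selected source, relaying nodes activated in parallel)
```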
It is seen that PR introduced in [START_REF] Al Khansa | Parallel retransmissions in orthogonal multiple access multiple relay networks[END_REF] outperforms Single Retransmission (SR) used in [START_REF] Cerović | Efficient cooperative harq for multi-source multi-relay wireless networks[END_REF][START_REF] Mohamad | Cooperative incremental redundancy hybrid automatic repeat request strategies for multi-source multi-relay wireless networks[END_REF]. In [START_REF] Zahedi | Effective capacity and outage analysis using momentgenerating function over nakagami-m and rayleigh fading channels in cooperative communication system[END_REF], a similar concept is adopted for a two-way communication network. There, the authors consider a one-source multiple-relay network, where the Maximum Ratio Combining (MRC) technique is used at the receiver (i.e., the destination).
In the prior art [START_REF] Al Khansa | Parallel retransmissions in orthogonal multiple access multiple relay networks[END_REF][START_REF] Cerović | Efficient cooperative harq for multi-source multi-relay wireless networks[END_REF][START_REF] Mohamad | Cooperative incremental redundancy hybrid automatic repeat request strategies for multi-source multi-relay wireless networks[END_REF], the authors first design a control exchange strategy to give the destination useful information about the state of the relaying nodes and their decoding sets (a decoding set is the set of source nodes that a relaying node has decoded correctly at a given time instant). Then, they present some relaying node selection strategies. The drawback of the prior art is that the control exchange design is heavy (it leads to a heavy overhead): at each selection, a control exchange process is performed, even if it is not needed at all. In [START_REF] Al Khansa | Centralized scheduling for mamrn with optimized control channel design[END_REF] (investigated in detail in Chapter 4, Section 2), we tackle the relaying node selection problem, aiming at maximizing the Average Spectral Efficiency (ASE) while optimizing the control exchange design in the system. Our intuition is that, by wisely using the information available at the scheduler, a lighter control exchange design can be used while maintaining good performance. In Tables 1.6 and 1.7, we present a summary of comparisons between the prior art and the current work.
Ref | Selection Strategy
[START_REF] Koç | Relay selection in two-way full-duplex relay networks over nakagami-m fading channels[END_REF] | Max-min of the SINR
[START_REF] Cheng | Joint relay ordering and linear finite field network coding for multiple-source multiple-relay wireless sensor networks[END_REF] | No selection, all relays transmit successively
[START_REF] Mohamad | Cooperative incremental redundancy hybrid automatic repeat request strategies for multi-source multi-relay wireless networks[END_REF] | Relaying node that minimizes the common OP
[START_REF] Cerović | Efficient cooperative harq for multi-source multi-relay wireless networks[END_REF][START_REF] Yu | Efficient relay selection scheme utilizing superposition modulation in cooperative communication[END_REF] | Relaying node with the highest mutual information
[START_REF] Al Khansa | Parallel retransmissions in orthogonal multiple access multiple relay networks[END_REF][START_REF] Zahedi | Effective capacity and outage analysis using momentgenerating function over nakagami-m and rayleigh fading channels in cooperative communication system[END_REF] | Relaying nodes with the highest equivalent mutual information
[START_REF] Al Khansa | Centralized scheduling for mamrn with optimized control channel design[END_REF] | Using [START_REF] Al Khansa | Parallel retransmissions in orthogonal multiple access multiple relay networks[END_REF][START_REF] Zahedi | Effective capacity and outage analysis using momentgenerating function over nakagami-m and rayleigh fading channels in cooperative communication system[END_REF], select a vector of sources which maximizes the sum-rate
Table 1.7: Different selection strategies of the current and the prior arts.
Motivation and scope of the thesis
The global aim of this work is to optimize cooperative communication in the orthogonal MAMRN, which leads to solving the related problems for such networks. The motivation behind adopting a cooperative scheme lies in its potential to meet Quality of Service (QoS) constraints and in its ability to improve the communication. On the other hand, the motivation behind targeting the MAMRN is that it can be found in several typical wireless communication scenarios, such as:
• Wireless Sensor Networks (WSN)
• Cellular Networks (CN)
• Integrated Access and Backhaul (IAB) in New Radio (NR)
• Relay assisted D2D communications
In WSN (e.g., [START_REF] Sankaranarayanan | Hierarchical sensor networks: capacity bounds and cooperative strategies using the multiple-access relay channel model[END_REF]), a central node receives data from the cooperative sensors, possibly through intermediate nodes with better capabilities (i.e., computation and communication capabilities). In CN, mobile terminals of users in good radio conditions can act as relays, helping the base station (the destination) decode the transmitted messages of users in bad radio conditions. A similar scenario can be obtained using dedicated relays, which can be fixed at suitable locations or even mobile (e.g., a relay placed on a moving vehicle). In IAB scenarios [START_REF] Gpp | Study on integrated access and backhaul[END_REF], the resources are shared between access and backhaul links. For example, we can have scenarios with a deployment of NR cells without the need to deploy the transport network proportionally; in such scenarios, the IAB-nodes act as relays.
We follow a slow fading assumption, meaning that channel gains are assumed constant during a frame. The transmissions of the sources are divided into frames consisting of time slots. Orthogonality between the sources is achieved via TDM; the simplest way to reduce the effect of interference is to avoid it using orthogonality. From an information-theoretic point of view, NOMA is known to perform better over the slow fading channel. Nevertheless, the complexity required by the NOMA receiver design means it is still not used in practice. We define a relaying node as a node which is able to help other nodes. Thus, a relaying node can either be a dedicated relay node or a source which performs user cooperation; such a source acts as a relay when it does not have its own message to transmit. User cooperation acts as an additional DoF in the RP and in the cooperative communication.
In a MAMRN, we have M > 1 sources communicating with a single destination with the help of M + L > 1 relaying nodes. The first M relaying nodes are the sources, which perform user cooperation, and the L remaining relaying nodes are dedicated relays used only for retransmission (they have no own message to send). The relaying nodes, being half-duplex, are only capable of either receiving or transmitting information flows on the same channel resource. In the literature, the majority of the works that tackle cooperative networks assume Orthogonal Multiple Access (OMA) and half-duplex relaying nodes. Nevertheless, some approaches have been proposed to combat the half-duplex constraint, as seen in [START_REF] Ding | On combating the half-duplex constraint in modern cooperative networks: protocols and techniques[END_REF]. We assume that the Channel State Information (CSI) is available at the receiver of each direct link.
Another motivation for considering the MAMRN is that we believe many contributions remain to be investigated. Due to the many different factors that might be optimized, we seek in this manuscript to propose novel solutions and strategies for problems that have not been targeted before. In other words, we suspect that there are many DoF in the considered network left to explore. This is reflected in the different patents we filed throughout this thesis, ensuring the novelty and variety of the different directions taken for the considered network.
In [START_REF] Mohamad | Dynamic selective decode and forward in wireless relay networks[END_REF], we see that even advanced RP are limited by the half-duplex constraint. In other words, the cooperation process using a half-duplex relay depends on the listening/retransmitting process, which leads to the so-called "switching problem". The switching problem tackles the issue of when to stop listening and decoding more sources and when to start helping the successfully decoded source messages. In fact, a relaying node might prefer to cooperate with a limited number of successfully decoded sources for a longer duration instead of waiting too long to cooperate with a larger number of sources. One way to solve this problem is to use full-duplex relays. Another method is to use limited feedback control channels from the relaying nodes to the destination and from the destination to the relaying nodes. We assume the availability of a unicast forward coordination control channel from each relaying node towards the destination. On the other side, we assume the availability of a broadcast control channel from the destination to the relaying nodes. These control channels are of limited rates. Using the information exchanged on these control channels, the destination can decide, for each time slot, which node(s) transmit and which node(s) listen. Thus, we follow a two-phase frame transmission, where the sources transmit successively in their dedicated time slots during the transmission phase (first phase). Then, in the second phase (retransmission phase), the destination, being the central node, organizes the scheduling of the relaying nodes, choosing which nodes are activated in each time slot. An initialization phase occurs before these two phases, in which the LA process is performed and the different resources are allocated to the different source nodes.
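The resulting three-phase organization (initialization, transmission, retransmission) can be summarized by the hedged sketch below, seen from the destination side; all helper functions are hypothetical placeholders for the LA, decoding, scheduling, and control-channel operations described above:

```python
def run_frame(sources, n_retx_slots, allocate_rates, transmit_source,
              destination_decoded, select_active_nodes, broadcast_decision,
              transmit_redundancy):
    """Two-phase frame at the destination side: sources transmit in turn during the
    transmission phase (one TDM slot each), then, in the retransmission phase, the
    destination selects which relaying nodes are active in each slot, based on the
    feedback received over the limited-rate control channels."""
    rates = allocate_rates(sources)            # initialization phase (link adaptation)
    for s in sources:                          # transmission phase
        transmit_source(s, rates[s])
    for _ in range(n_retx_slots):              # retransmission phase
        pending = [s for s in sources if not destination_decoded(s)]
        if not pending:
            break                              # every source is decoded: stop early
        active = select_active_nodes(pending)  # scheduling strategy (e.g., PR or SR)
        broadcast_decision(active)             # broadcast control channel
        for node in active:
            transmit_redundancy(node)          # redundancy for the selected source(s)
    return [s for s in sources if destination_decoded(s)]
```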
Recalling that the goal of this thesis is to propose effective cooperation schemes for the considered MAMRN, we focus in this work on two main problems: 1) link adaptation, and 2) scheduling strategies. Each of these two problems branches into different sub-problems. LA, as its name indicates, is the process of adapting to the channel states of the different links of the network. Specifically, it is the process of allocating the different available resources to the different nodes based on the channel conditions. First, we target the problem of rate allocation for the sources, which occurs in the initialization phase. We assume the presence of a predefined set of Modulation and Coding Scheme (MCS) rate values. The central node, i.e., the destination, has to allocate to each source a rate from the predefined set, with the aim of optimizing the spectral efficiency while respecting the QoS constraints. The complexity of the optimal rate allocation comes from the fact that the destination has to allocate the rates of the sources jointly, which makes the exhaustive search algorithm impractical. In other words, the choice for one source affects the choice for the other sources, as we will see in the expression of the outage events, which depends not only on the rates of the sources but also on the retransmission protocol used in the retransmission phase. The other main problem in the cooperative MAMRN is the problem of scheduling strategies. This problem tackles the methodology/strategy/protocol followed in the retransmission phase: the destination, being the central node, has to organize the retransmissions occurring in the second phase. In other words, the destination selects which relaying nodes are activated and which remain silent. This scheduling process plays a major role in cooperative networks and has a strong effect on the performance, as we will see next.
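To see why exhaustive search quickly becomes impractical, note that with n_mcs candidate rates and M sources the joint search space has n_mcs^M points, each requiring the evaluation of coupled outage events; the sketch below makes this explicit, with the utility function left as a hypothetical callback:

```python
from itertools import product

def exhaustive_rate_search(mcs_rates, n_sources, utility):
    """Brute-force joint rate allocation: evaluate a utility (e.g., expected
    spectral efficiency under the outage events induced by the retransmission
    protocol) for every combination of per-source rates.  The search space size
    is len(mcs_rates) ** n_sources, which explodes with the number of sources."""
    best_alloc, best_value = None, float("-inf")
    for alloc in product(mcs_rates, repeat=n_sources):
        value = utility(alloc)          # coupled across sources in general
        if value > best_value:
            best_alloc, best_value = alloc, value
    return best_alloc, best_value
```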
Upon tackling these two main problems, other sub-problems arise. For example, the LA problem can be taken to a higher level by including a novel DoF composed of a variable number of channel uses in the transmission phase. This means that the different sources will have different time slot durations in the transmission phase. According to the channel gains, the time slot durations can be optimized jointly with the rate allocation, leading to better performance. Another sub-problem related to LA concerns the information used while performing the LA algorithms. To this end, we propose different LA algorithms which can be used depending on the availability of the CSI and the Channel Distribution Information (CDI) of the direct and the indirect links. Specifically, we propose an intermediate LA which performs its allocation using the CSI of the direct links and the CDI of the indirect links. The motivation behind this intermediate LA process is to achieve the performance of Fast-Link Adaptation (FLA) (which outperforms that of Slow-Link Adaptation (SLA)) while avoiding its heavy overhead.
Similarly, upon tackling the scheduling strategy problem, several sub-problems arise. One issue which is addressed in the chapters of this thesis is the overhead of the control exchange process. Recalling the assumption of the presence of limited feedback links, it is seen that each different selection strategy needs a different control exchange process. Thus, one important factor to be taken into consideration when analysing a given scheduling strategy is the control exchange process needed by this strategy and the overhead penalty included.
As the goal of the previously mentioned problems is to reach the best possible solution, the best approach is to solve the problems jointly. The rate allocation strategy seen in the prior art assumes a given relaying node scheduling in the retransmission phase, and thus the rate allocation depends on the scheduling process used in the retransmission phase. Similarly, the selection strategy depends on the rates of the sources, and thus on the rate allocation problem. We notice that when solving either of these two problems (rate allocation or relaying node scheduling), the other problem is considered fixed and a given solution is adopted.
Accordingly, solving the two problems jointly is one of the motivations of this thesis, as it is the way to reach the best performance. Nevertheless, such a joint allocation is challenging. The issue relates to the constraint that the destination needs the full CSI of the network to perform such a joint allocation.
Thesis contribution and outline
The organization of the remainder of this thesis is as follows. In Chapter 2, we present the common system model assumptions that are adopted throughout the whole thesis. The mentioned description is assumed in all the following chapters unless stated otherwise. In this chapter, the specifications of the network are presented, as well as the process of a frame transmission. In addition, the performance metric and the outage events are given. The main contributions of this thesis are described in the four following chapters, which we briefly outline below.
In Chapter 3, we tackle the problem of LA. In the prior art, this problem consists in allocating a rate value chosen from a MCS family to each source node. In other words, this problem consists in choosing for each source node a rate value taken from a predefined set of possible rate values. In the first section of this chapter, we propose a two-step rate allocation strategy based on Best-Response Dynamics (BRD) tools. The first step of this algorithm consists in a "Genie-Aided" (GA) starting point, and the second step consists in correcting the previously chosen values. In the second section of this chapter, we propose a new DoF composed of a variable number of channel uses in the transmission phase. This means that each source node is allocated a different time slot duration during the first phase. Such a modification of the allocation problem is promising due to the improvement it brings in performance. The numerical results show that the proposed BRD algorithms achieve a good performance and approach the upper bound. This is seen for both fixed and variable time slot durations, and for both SU and MU encoding methods. Furthermore, the numerical results validate the importance of the SU encoding case, being a practical solution which approaches the MU case. Finally, in the last section of this chapter, we propose an intermediate LA strategy that consists in allocating the rate values based on the CSI of the direct links. This approach aims to combine the practicality of the SLA method and the optimality of the FLA method while avoiding its heavy overhead. This chapter led to the following publications:

In Chapter 4, we tackle the second main problem seen in the 2-phase (transmission and retransmission) cooperative system. Specifically, we tackle the problem of the relaying node scheduling process. In this chapter, we propose a selection strategy based on the practical SU encoding method. We organize the scheduling process in a way that exploits the multi-path diversity, based on the fact that each relaying node has its own power budget. In other words, rather than activating a single relaying node in each retransmission time slot, we propose the notion of PR, where multiple relaying nodes are activated to help a single source node. This means that the problem of selecting a relaying node becomes a problem of choosing a source node, where all the relaying nodes which decoded this source are activated to help it together. In the second section of this chapter, we further propose a new selection strategy which aims at reducing the control exchange process seen in the prior art. This method builds its selection on an estimation of the number of retransmission time slots needed to decode a given source. Using this estimator, we reduce the number of control exchange processes needed before performing the scheduling strategy. The numerical results validate the gain of using PR compared to the prior art SR methods. In addition, they show the effect of the overhead of the control exchange process seen in the different scheduling strategies and how this overhead can be reduced using estimation. This chapter led to the following publications:

In Chapter 5, we solve the two previously mentioned problems jointly.
Rather than assuming a given relaying node strategy when doing the rate allocation (as in Chapter 3), and rather than doing the scheduling in the retransmission phase assuming preallocated rates (as in Chapter 4), we propose an optimal solution which performs rate allocation and selection scheduling jointly. In this way, we address the sub-optimality of solving the two allocations sequentially. The algorithms proposed in this chapter tackle the complexity of passing through the discrete set of possible rates that is needed in the BRD strategy. The numerical results validate that the joint rate and scheduling allocation outperforms the sequential allocation. Furthermore, they highlight the sub-optimality of following a discrete set of rate values and how this sub-optimality can be reduced using the presented strategy. This chapter led to the following publication:

Finally, in Chapter 6, we conclude this work and pave the way for some future works. Specifically, we tackle the rate allocation problem using a different approach. In this approach, we propose a solution for the case where no information is available, not even the CDI of the network's links. The solution belongs to the online Reinforcement Learning (RL) framework, and specifically, the MAB framework. Besides this direction, we generalize our work to the FDM domain. We first present the utility metric and the outage events in the FDM domain. Then, we propose some solutions for the scheduling strategies in the FDM regime. This chapter led to the following publications:

In this chapter, we present the system model considered in this thesis as well as some common assumptions adopted throughout the manuscript. The mentioned description is assumed in all our work unless stated otherwise. In the first section, we present the TDM orthogonal MAMRN considered; we describe the different nodes included in the network, followed by the assumptions on the radio conditions investigated. In addition, we describe the transmission of a frame with all the needed elements: the retransmission strategy used and the control exchange process between the destination and the relaying nodes. In the second section, we present the analytical expression of the utility metric considered as well as the outage events.
• Ali Al Khansa,
• Ali Al Khansa,
• Ali Al Khansa,
• Ali Al Khansa,
System description
We consider an (M, L, 1) system with M sources, L relays, and one destination node. The M sources communicate with a single destination, using the help of M + L relaying nodes. The relaying nodes consist of L half-duplex dedicated relays in addition to the M sources themselves, where the latter perform user cooperation. User cooperation means that sources act as relays when they have no messages of their own to send. A message u_s ∈ F_2^{K_s} of a source s has a length of K_s information bits, where F_2 represents the binary Galois field. In addition, the length K_s depends on the MCS of that source. The messages of all sources are mutually independent. All nodes transmit with the same power, and each node is equipped with a single antenna. To clarify the notation, we define the source node set as S = {1, . . . , M}, the relay node set as R = {M + 1, . . . , M + L}, and the set of all cooperative nodes as N = S ∪ R = {1, . . . , M + L}. In other words, a source s_i is the node i in set N, and a relay r_i is the node i + M in set N.
In Fig. 2.1, we present a simple realization of the considered MAMRN. In this figure, we see that all the nodes (sources, relays, and the destination) can listen to each other. Furthermore, we see that there is a link from the destination (the central node) toward the different relaying nodes (sources and relays) representing the feedback information flow. Accordingly, the destination uses these links to share its different decisions and allocations with the different relaying nodes (e.g., allocated rates, selected relaying node, etc.).
We follow a slow fading assumption which means that the radio links between the nodes do not change within a frame transmission. The channel realization is considered independent from frame to frame. It simplifies the analysis and the convergence of the system, and it captures the performance of practical systems assuming ergodicity of the underlying random processes. We assume that the CSI of only the direct links is available at the receiver, i.e.,
$\mathbf{h}_{\text{dir}} = [h_{s,d}, h_{r,d}] = [h_{1,d}, \ldots, h_{M+L,d}]$ of the Source-to-Destination (S-D)
, and Relay-to-Destination (R-D) links are perfectly known by the destination. On the other hand, the knowledge of the CSI of other indirect links, Source-to-Source (S-S), Source-to-Relay (S-R), and Relay-to-Relay (R-R), might not always be possible. Basically, based on the cost of the control overhead, the source and the relay nodes might be able to report the CSI of these indirect links. In the case where the overhead of reporting the exact CSI is high, the relaying nodes (sources and relays) will only report the CDI of these indirect links. The overhead is mainly correlated to the mobility of the system, and the destination accordingly chooses which information is reported from the other relaying nodes (CSI or CDI).
In other words, we investigate two scenarios. 1) Fast-changing radio conditions, where the acquisition of the CSI of all the links is too costly in terms of feedback overhead. Instead, the CDI (e.g., the average Signal-to-Noise-Ratio (SNR) of the corresponding links) is used in the allocation process, and we call this type of LA Slow-Link Adaptation (SLA). It follows that the initial phase occurs only once every few hundred frames, whenever the CDI of the network links changes. Between two occurrences of the initial phase, the sources' rates are kept fixed. 2) Slow-changing radio conditions, where the acquisition of the CSI of all the links is assumed given. This can be practical in scenarios where the channel states of all the links change slowly and can be assumed constant during tens of frames. We call this type of LA Fast-Link Adaptation (FLA).
The transmissions and retransmissions of the source messages occur in time frames structured as shown in Fig. 2.2. Following an initial LA phase, where a rate allocation process is performed (the rates of the sources are allocated), a time frame is divided into two phases. The first phase is the transmission phase, during which the sources transmit their messages in turn, each over U channel uses. The second phase, called the retransmission phase, is composed of T_used ∈ {0, ..., T_max} retransmissions scheduled by the destination, each using Q channel uses. T_max represents the maximum number of possible retransmissions before declaring an outage event (the event of not decoding the messages of some source nodes). Thus, the whole frame size is M + T_used time slots, which is limited to M + T_max, where T_max is a fixed system parameter.
In our work, relaying nodes apply the SDF relaying protocol, which means that they can forward only a signal representative of successfully decoded source messages. The error detection is based on CRC bits that are appended to each source message. The SDF relaying protocol is an advanced version of the DF relaying protocol. The principle of DF protocol is introduced in [START_REF] Cover | Capacity theorems for the relay channel[END_REF], where unlike SDF, cooperative nodes are obliged to wait to successfully decode all the source messages before starting to cooperate.
Furthermore, we investigate both MU encoding and SU encoding. The reference [START_REF] Cerović | Efficient cooperative harq for multi-source multi-relay wireless networks[END_REF] investigated different cooperative HARQ protocols for the MAMRN. Of particular interest to us, the authors of [START_REF] Cerović | Efficient cooperative harq for multi-source multi-relay wireless networks[END_REF] tackled both MU encoding and SU encoding, and both schemes gave promising results. While our work in the next chapter is mainly based on MU encoding, where Joint Network Channel Coding (JNCC) is used, we also analyze the performance of our algorithms for the SU encoding sub-case. In MU encoding, a relaying node sends an IR-type HARQ signal. This IR signal is representative of all its successfully decoded source messages S. From both a practical and an information-theoretic viewpoint, the signal sent can help the destination decode any subset S′ ⊆ S knowing S \ S′, where \ denotes the set difference. In SU encoding, on the contrary, the relaying node chooses (randomly) one source message from its decoding set to retransmit. From a practical point of view, and following state-of-the-art punctured codes, the SU encoding sub-case is attractive, being compatible with codes such as LDPC or Turbo codes.
A control exchange process represents the steps that the nodes and the destination do in order to exchange useful information so that the destination is able to perform its selection algorithm. As seen in Fig. 2.2, a control exchange process between the destination and the relaying nodes runs before each retransmission scheduling. As the destination does not know the CSI of the indirect links of the networks (S-S links, S-R links, and R-R links), it does not know the decoding set of the relaying nodes. In other words, the decoding sets at the relaying nodes depend on the channel state between the relaying nodes, and thus, the destination needs to know the CSI of these links to deduce the decoding set of the relaying nodes. Accordingly, and as it lacks this information, a control exchange process is needed to give the destination useful information to perform its scheduling strategy. We note that K s is assumed large enough to neglect the effect of control channels on the transmission rate (that is why we see no time slot reserved for the control exchange process). In other words, we assume the presence of "limited control channels" with a large enough packet length. For short packet lengths, however, the control channel overhead cannot be neglected. Nevertheless, this will not change the contribution of this manuscript.
The control exchange process depends on the selection strategy used in the retransmission phase. In this chapter, we present the control exchange process which is used when using the selection strategy of [START_REF] Cerović | Efficient cooperative harq for multi-source multi-relay wireless networks[END_REF]. This selection strategy (and this control exchange process) are adopted in chapter 3 and further optimized in chapter 4. First, we recall the selection strategy used in [START_REF] Cerović | Efficient cooperative harq for multi-source multi-relay wireless networks[END_REF], then, we describe the needed control exchange process when using this selection strategy. In [START_REF] Cerović | Efficient cooperative harq for multi-source multi-relay wireless networks[END_REF], in each retransmission round, the destination selects the node with the highest mutual information between itself and that node, among all nodes which were able to decode at least one source from the set of non-successfully decoded sources at the destination. It is demonstrated that using the described strategy, we can reach the ASE close to the upper bound obtained by an exhaustive search for the best possible node activation sequence, under much smaller computational complexity.
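As an illustration of this selection rule, the short Python sketch below picks, among the relaying nodes whose decoding set contains at least one source still missing at the destination, the one with the highest mutual information to the destination under Gaussian inputs. It is only a sketch of the selection step, not the thesis simulator; the node labels and channel gains are made-up values mirroring the toy example of Fig. 2.3 discussed below.

import numpy as np

def select_relaying_node(decoding_sets, h_to_dest, decoded_at_dest, all_sources):
    """Return the candidate node with the highest mutual information to the destination.

    decoding_sets  : dict node -> set of sources the node has decoded
    h_to_dest      : dict node -> complex channel gain between the node and the destination
    decoded_at_dest: set of sources already decoded by the destination
    all_sources    : set of all source labels
    """
    missing = all_sources - decoded_at_dest
    # Candidates: nodes that decoded at least one source still missing at the destination.
    candidates = [n for n, ds in decoding_sets.items() if ds & missing]
    if not candidates:
        return None  # nobody can help in this round
    # Pick the candidate with the best direct link (highest mutual information).
    return max(candidates, key=lambda n: np.log2(1 + abs(h_to_dest[n]) ** 2))

# Toy inputs mirroring Fig. 2.3: the destination already decoded s1; r2 decoded {s1, s2, s3}.
decoding_sets = {"s1": {"s1"}, "s2": {"s2"}, "s3": {"s3"}, "r1": {"s1"}, "r2": {"s1", "s2", "s3"}}
h_to_dest = {"s1": 0.9, "s2": 0.3 + 0.1j, "s3": 0.2j, "r1": 1.1, "r2": 1.5 + 0.4j}
print(select_relaying_node(decoding_sets, h_to_dest, {"s1"}, {"s1", "s2", "s3"}))  # -> "r2"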
We present in Fig. 2.3 a toy example that shows how the selection strategy of [START_REF] Cerović | Efficient cooperative harq for multi-source multi-relay wireless networks[END_REF] works. In Fig. 2.3a, we consider a (3, 2, 1)-MAMRN. At the considered time slot (any time slot in the retransmission phase), the decoding sets of all the nodes are presented. We see that the destination decoded the message of source s 1 , the sources did not decode any source message (but their own message), and the relays r 1 and r 2 decoded respectively the set of sources {s 1 } and {s 1 , s 2 , s 3 }. In [START_REF] Cerović | Efficient cooperative harq for multi-source multi-relay wireless networks[END_REF], the candidate relaying nodes to be selected are the nodes that can help at least one source which is not decoded by the destination. As the destination decoded source s 1 , the candidate nodes (in this example) are the relaying nodes that can help either source s 2 or s 3 or both. Following the toy example, the candidate relaying nodes are s 2 , s 3 , and r 2 (highlighted in Fig.
2.3b
). After fixing the set of candidate relaying nodes, the destination chooses the node with the highest mutual information. Such a relaying node is the node having the best direct link with the destination. In Fig. 2.3c, we assume that the best direct link is the link between r 2 and the destination, and thus, r 2 is going to be selected. Finally, in Fig. 2.3d, we see that r 2 is going to send a symbol representative of all correctly decoded sources which are not decoded yet by the destination, i.e., sources s 2 and s 3 . This process is repeated at the beginning of each retransmission time slot while using the updated decoding sets of the nodes. Now, following the described selection strategy, the control exchange process needed before a retransmission time slot t ∈ {1, . . . , T max } can be described as seen in Fig. 2.4.
• The destination first shares its decoding set S d,t-1 with the relaying nodes. Thus, it broadcasts M bits that indicate its decoding set.
• If the destination decoded all the sources, i.e., if the decoding set of the destination consists of all source messages, a new frame begins. Otherwise, each relaying node which was able to decode at least one source message that is not included in the decoding set of the destination sends 1 bit on a dedicated unicast forward coordination control channel. Each relaying node which did not decode any message needed by the destination, i.e., any message that is not included in the decoding set of the destination after round t -1, remains silent.
• Using the described selection strategy and the information exchanged with the relaying nodes, the destination chooses the relaying node a t with the highest mutual information between itself and that node. Its decision is broadcasted using the feedback broadcast control channel. Note that the candidate relaying nodes at this step are only the nodes which were not silent in step 2. In other words, only the relaying nodes which were able to decode at least one source message that is not included in the decoding set of the destination.
• The selected relaying node transmits redundancies using MU encoding with the messages of the source that it was able to decode. Now, for a given transmitting node a ∈ S ∪ R, and a receiving node b ∈ S ∪ R ∪ {d}, at a given channel use k, the received signal y a,b,k can be written as:
$$y_{a,b,k} = h_{a,b}\, m_{a,k} + n_{a,b,k}, \quad (2.1)$$
where
• m a,k ∈ C is the coded modulated symbol whose power is normalized to unity.
• h a,b are the channel fading gains, which are independent and follow a zero-mean circularly symmetric complex Gaussian distribution with variance γ a,b .
• n a,b,k represents the independent and identically distributed Additive White Gaussian Noise (AWGN) samples, which follow a zero-mean circularly-symmetric complex Gaussian distribution with unit variance.
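To make the notation of (2.1) concrete, the following minimal Python sketch draws one slow-fading realization of a single link and generates the received samples. The block length, the average SNR value, and the QPSK mapping are illustrative assumptions, not system parameters of the thesis.

import numpy as np

rng = np.random.default_rng(0)
n_channel_uses = 8          # number of channel uses k on the link a -> b (illustrative)
gamma_ab = 2.0              # variance (average SNR) of the fading gain h_{a,b} (illustrative)

# One slow-fading realization: h_{a,b} is constant over the whole block.
h_ab = np.sqrt(gamma_ab / 2) * (rng.standard_normal() + 1j * rng.standard_normal())

# Unit-power coded modulated symbols (QPSK is used here only for illustration).
m_a = (rng.choice([-1, 1], n_channel_uses) + 1j * rng.choice([-1, 1], n_channel_uses)) / np.sqrt(2)

# Unit-variance circularly symmetric complex AWGN samples.
n_ab = (rng.standard_normal(n_channel_uses) + 1j * rng.standard_normal(n_channel_uses)) / np.sqrt(2)

y_ab = h_ab * m_a + n_ab     # received samples, one per channel use
print(abs(h_ab) ** 2, y_ab[:2])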
Performance metric and outage events

2.3.1 Average spectral efficiency
In this subsection, we present our utility metric called average spectral efficiency η which is maximized for both FLA and SLA. It is defined as the limit of the average of the ratio between the number of successfully received bits and the number of channel uses when the number of frame transmissions tends to infinity (η = E{η frame }). Our analysis relies on the definition of the outage event O i,t which occurs when source i is not decoded correctly after the transmission phase (t = 0) and at each retransmission l up to t (l = 1, ..., t). We define, accordingly, O i,t as a binary Bernoulli random variable which indicates an outage event. In other words, O i,t takes the value 1 if the event O i,t happens, and 0 otherwise. Or, in mathematical terms, for any elementary event w,
O_{i,t}(w) = [w ∈ O_{i,t}], where [q] denotes the Iverson bracket, which takes the value 1 if q is true, and 0 otherwise. The metric η is derived from the spectral efficiency per frame η_frame, which depends on the channel realization H and on the LA strategy used (the strategy of allocating the rates of the source nodes), denoted P. It also depends on the relaying protocol used, the relaying node selection strategy, and the parameters of the system (e.g., M, L, T_max).
For simplicity, we only include within the following equations the dependency on H and P . Now, η frame is defined as:
$$\eta_{\text{frame}}(\mathbf{H}, P) = \frac{\text{nb bits successfully received}}{\text{nb channel uses}} = \frac{\sum_{i=1}^{M} K_i \,(1 - O_{i,T_{\text{used}}})}{M U + Q\, T_{\text{used}}} = \frac{\sum_{i=1}^{M} R_i \,(1 - O_{i,T_{\text{used}}})}{M + \alpha\, T_{\text{used}}}, \quad (2.2)$$
where
• R_i = K_i / U is the rate of a source i, with K_i being the number of bits that can be transmitted by source i given U channel uses.
• T used ∈ {0, . . . , T max } is the number of retransmission time slots activated in a frame.
• O i,T used is a binary Bernoulli random variable which indicates an outage event O i,t . In other words, O i,t takes the value 1 if the event O i,t happens, and 0 otherwise.
• α = Q/U is the ratio of number of channel uses in a retransmission time slot by that in a transmission slot.
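As a quick illustration of (2.2), the small Python sketch below evaluates the spectral efficiency of one frame from the allocated rates, the outage indicators and the number of retransmission rounds; the numerical values are made up for the example.

def spectral_efficiency_frame(rates, outage, t_used, alpha):
    """eta_frame = sum_i R_i (1 - O_i) / (M + alpha * T_used), as in (2.2)."""
    m = len(rates)
    return sum(r * (1 - o) for r, o in zip(rates, outage)) / (m + alpha * t_used)

rates = [1.5, 0.75, 3.0]   # R_i in bits per channel use, one entry per source (illustrative)
outage = [0, 1, 0]         # O_{i,T_used}: here source 2 is still in outage at the end of the frame
print(spectral_efficiency_frame(rates, outage, t_used=2, alpha=1.0))   # (1.5 + 3.0) / (3 + 2) = 0.9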
Outage events
Now, we present the outage events of the system which are the individual outage event for a given source, and the common outage event for a subset of sources. The latter occurs when at least one user in the subset of sources is in outage. In [START_REF] Mohamad | Outage analysis of various cooperative strategies for the multiple access multiple relay channel[END_REF], Mohamad et al provide an outage analysis for various cooperative schemes. Here, we build on that work and we present the outage derivations for our considered system. In general, the "individual outage event of a source s after round t", O s,t (a t , S at,t-1 |h dir , L t-1 ), depends directly on the rates we are scheduling. In addition, it depends on the selected node a t ∈ N and its associated decoding set S at,t-1 . It is conditional on the knowledge of h dir and L t-1 , where L t-1 denotes the set collecting the nodes a l which were selected in rounds l ∈ {1, . . . , t -1} prior to round t together with their associated decoding sets S a l ,l-1 , and the decoding set of the destination S d,t-1 (S d,0 is the destination's decoding set after the first phase). Similarly, we define E t (a t , S at,t-1 |h dir , L t-1 ) the "common outage event after round t" as the event that at least one source is not decoded correctly at the destination at the end of round t. The probability of the individual outage event of source s after round t, Pr(O s,t (a t , S at,t-1 |h dir , L t-1 ) = 1), for a candidate node a t using the expectation operator E{.} can be formulated as E{O s,t (a t , S at,t-1 |h dir , L t-1 )}. We can similarly define the probability of the common outage event. In the rest of the manuscript, and in order to simplify the notation, the dependency on h dir and L t-1 is omitted.
Analytically, the common outage event of a given subset of sources is declared if the vector of their rates lies outside the corresponding MAC capacity region. We recall that, although this is an orthogonal transmission framework, the outage events account for interference effects. This follows from the JNCC used, where a retransmitted message can include information about different source messages. Now, for some subset of sources $\mathcal{B} \subseteq \overline{\mathcal{S}}_{d,t-1}$, where $\overline{\mathcal{S}}_{d,t-1} = \mathcal{S} \setminus \mathcal{S}_{d,t-1}$ is the set of non-successfully decoded sources at the destination after round t - 1, and for a candidate node a_t, this event can be expressed as:
$$\mathcal{E}_{t,\mathcal{B}}(a_t, \mathcal{S}_{a_t,t-1}) = \bigcup_{\mathcal{U} \subseteq \mathcal{B}} \left\{ \sum_{i \in \mathcal{U}} R_i > \sum_{i \in \mathcal{U}} I_{i,d} + \alpha \sum_{l=1}^{t-1} I_{a_l,d}\, C_{a_l}(\mathcal{U}) + \alpha\, I_{a_t,d}\, C_{a_t}(\mathcal{U}) \right\}, \quad (2.3)$$
where I a,b denotes the mutual information between the nodes a and b (the mutual information is defined based on the channel inputs, check numerical results sections in the following chapters for Gaussian inputs example), and where C a l and C at have the following definitions:
$$C_{a_l}(\mathcal{U}) = \left( \mathcal{S}_{a_l,l-1} \cap \mathcal{U} \neq \emptyset \right) \wedge \left( \mathcal{S}_{a_l,l-1} \cap \mathcal{I} = \emptyset \right), \qquad C_{a_t}(\mathcal{U}) = \left( \mathcal{S}_{a_t,t-1} \cap \mathcal{U} \neq \emptyset \right) \wedge \left( \mathcal{S}_{a_t,t-1} \cap \mathcal{I} = \emptyset \right). \quad (2.4)$$
In (2.4), the sources that belong to $\mathcal{I} = \overline{\mathcal{S}}_{d,t-1} \setminus \mathcal{B}$ are considered as interference, with ∧ standing for the logical AND. In (2.3), for each subset U of the set B, we check whether the sum-rate of the sources contained in U is higher than the accumulated mutual information at the destination (since an IR-type of HARQ is used). The accumulated mutual information is split into three summations, which originate from:
• The direct transmissions from the sources contained in U towards the destination during the first phase: $\sum_{i \in \mathcal{U}} I_{i,d}$.
• The transmissions of previously activated nodes during the second phase: $\alpha \sum_{l=1}^{t-1} I_{a_l,d}\, C_{a_l}(\mathcal{U})$. Node a_l for l ∈ {1, . . . , t - 1} is involved in the calculation only if it was able to successfully decode at least one source from the subset U (JNCC is used), but, at the same time, only if its decoding set does not intersect the interference set I (otherwise, its signal would represent interference).
• The transmission of the candidate node a_t during the second phase: $\alpha\, I_{a_t,d}\, C_{a_t}(\mathcal{U})$, under the same conditions as for the previously activated nodes.
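The following Python sketch illustrates how the common-outage test of (2.3)-(2.4) can be evaluated for a given subset B. The helper names and the toy inputs are assumptions made for the illustration, and the previously activated nodes and the candidate node are merged into a single list for brevity.

from itertools import chain, combinations

def subsets(s):
    """All non-empty subsets U of s."""
    return chain.from_iterable(combinations(s, k) for k in range(1, len(s) + 1))

def common_outage(B, I, rates, I_direct, I_relay_d, decoding_sets, alpha):
    """True if at least one subset U of B has a sum rate above the accumulated MI (eq. (2.3))."""
    def helps(dec_set, U):                      # C_a(U) of equation (2.4)
        return bool(dec_set & set(U)) and not (dec_set & I)
    for U in subsets(B):
        acc = sum(I_direct[i] for i in U)       # direct transmissions of the sources in U
        acc += alpha * sum(mi for mi, dec in zip(I_relay_d, decoding_sets) if helps(dec, U))
        if sum(rates[i] for i in U) > acc:
            return True
    return False

# Toy inputs: sources 1 and 2 undecoded, one past activation whose decoding set is {1}.
print(common_outage(B={1, 2}, I=set(), rates={1: 1.5, 2: 1.5},
                    I_direct={1: 0.8, 2: 0.4}, I_relay_d=[1.2],
                    decoding_sets=[{1}], alpha=1.0))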
The individual outage event of a source s after round t for a candidate node a t can be defined as:
$$\mathcal{O}_{s,t}(a_t, \mathcal{S}_{a_t,t-1}) = \bigcap_{\mathcal{I} \subset \overline{\mathcal{S}}_{d,t-1},\, \mathcal{B} = \overline{\mathcal{I}},\, s \in \mathcal{B}} \mathcal{E}_{t,\mathcal{B}}(a_t, \mathcal{S}_{a_t,t-1}) = \bigcap_{\mathcal{I} \subset \overline{\mathcal{S}}_{d,t-1}} \; \bigcup_{\mathcal{U} \subseteq \overline{\mathcal{I}}:\, s \in \mathcal{U}} \left\{ \sum_{i \in \mathcal{U}} R_i > \sum_{i \in \mathcal{U}} I_{i,d} + \alpha \sum_{l=1}^{t-1} I_{a_l,d}\, C_{a_l}(\mathcal{U}) + \alpha\, I_{a_t,d}\, C_{a_t}(\mathcal{U}) \right\}, \quad (2.5)$$
where $\overline{\mathcal{I}} = \overline{\mathcal{S}}_{d,t-1} \setminus \mathcal{I}$. The detailed analysis of the relation between the individual outage event and the common outage event can be found in [START_REF] Mohamad | Outage analysis of various cooperative strategies for the multiple access multiple relay channel[END_REF]. We finally define the outage event equations for the SU encoding sub-case. Since the relaying node transmits redundancies of a single source (chosen randomly from its decoding set), the individual outage event of a source s after round t for a candidate node a_t can be written as:
$$\mathcal{O}^{\text{SU}}_{s,t}(a_t, \mathcal{S}_{a_t,t-1}) = \left\{ R_s > I_{s,d} + \alpha \sum_{l=1}^{t-1} I_{a_l,d}\, C^{\text{SU}}_{a_l} + \alpha\, I_{a_t,d}\, C^{\text{SU}}_{a_t} \right\}, \quad (2.6)$$
where C SU a l (respectively C SU at ) takes the value 1 if the source s is chosen by a l (respectively a t ) and zero otherwise. For the common outage event, in the SU encoding sub-case, it is simply the union of the individual outage events of all the sources included in the considered subset B, and can be written as:
$$\mathcal{E}^{\text{SU}}_{t,\mathcal{B}}(a_t, \mathcal{S}_{a_t,t-1}) = \bigcup_{s \in \mathcal{B}} \mathcal{O}^{\text{SU}}_{s,t}(a_t, \mathcal{S}_{a_t,t-1}). \quad (2.7)$$
In the outage equations, we see that the expressions depend on the mutual information between the nodes and the destination. The definition of the mutual information is based on the channel inputs, e.g., Gaussian inputs, discrete inputs, etc. Without loss of generality, we used Gaussian inputs in our simulations. Nevertheless, other channel inputs could be used without changing the findings of this thesis.
Chapter 3
Dynamic Rate and Channel Use Allocation Algorithms
Chapter summary
In this chapter, we aim at studying LA in the TDM orthogonal MAMRN presented in the previous chapter. Since cooperative systems aim at optimizing the spectral efficiency, the LA problem remains an open challenge for achieving better spectral efficiency and satisfying QoS demands. LA is a fundamental mechanism allowing the source nodes to adapt the coding and modulation scheme to the radio channel conditions. The destination has to choose a rate for each source from a finite set of rates with the objective of maximizing the spectral efficiency. In addition, the system is subject to QoS constraints on the individual Block Error Rate (BLER) per source.
In the first section, we present the GA initialization algorithm followed by the BRD correction algorithm used for rate allocation assuming a fixed time slot duration in the transmission phase. In the second section, we investigate a new DoF represented by a variable time slot duration in the transmission phase. Following the new DoF, we present the updated utility metric and the outage events, followed by the new algorithms that perform LA for both rate and time slot duration. The presented algorithms can be used in SLA and FLA. In the third section, we validate our proposals via MC simulations. Our numerical analysis validates the gain of using a variable time slot duration in the transmission phase and the performance of the proposed BRD algorithm in different scenarios. Finally, in section four, we propose an intermediate practical LA that combines the benefits of SLA and FLA. The novel LA is based on a FLA strategy with partial CSI.
Fixed time slot duration
In the literature, in non-cooperative scenarios where nodes compete for sparse resources, BRD appears as a natural approach to solving game-theoretic problems. In BRD, a player chooses his most favorable outcome taking the other players' choices as given. Such tools are found in several domains [START_REF] Douros | Power control under best response dynamics for interference mitigation in a two-tier femtocell network[END_REF], especially in decentralized wireless problems, such as rate and power allocation in decentralized cellular networks [START_REF] Han | Game theory in wireless and communication networks: theory, models, and applications[END_REF][START_REF] Bistritz | Convergence of approximate best-response dynamics in interference games[END_REF][START_REF] Lasaulce | Game theory and learning for wireless networks: fundamentals and applications[END_REF]. In our system, on the contrary, we use a centralized LA, where the destination is the central node that determines the source choices. Accordingly, we have a typical multi-variable optimization problem that aims at optimizing the total ASE, subject to QoS constraints. In our considered problem, and due to the high complexity of its exact solution, we adopt the BRD methodology in order to provide a practical algorithm for the given centralized multi-variable problem. The main idea is to decrease the complexity of the problem by considering each variable independently while taking the other variables as known information. In our approach, rather than choosing the rates of all the sources at the same time, each source is handled by the destination successively. Such a sequential strategy is sometimes referred to as the Gauss-Seidel procedure (e.g., [START_REF] Kim | Optimal resource allocation for mimo ad hoc cognitive radio networks[END_REF]) when used in cooperative scenarios (which is our case).
In MAMRN, the knowledge of instantaneous CSI of all the links allows the LA algorithm to allocate the rates of the sources in the most accurate way (FLA). Since the number of channels in such a network grows exponentially with the number of sources and relays, frequent changes in the channel states (for ex. in a high mobility scenario) can incur an excessive amount of control signaling on the forward coordination control channels. In that case, FLA is deemed impractical, and SLA is a more suitable solution. The idea of SLA is to adapt the source rates to the CDI of all links, which remain constant for longer periods of time. It is important to stress that the time-scale of the SLA differs from the one used by the retransmission algorithm.
Following the considered orthogonal MAMRN system model, the individual outage event (resp. the probability of the outage event) of any source depends on the vector of rates allocated to all the source nodes of the system. In other words, O_{s,T_used} (resp. Pr(O_{s,T_used})) depends on the vector of rates {R_1, . . . , R_M}. To understand this dependency, note that at a given retransmission time slot, the decoding set of the node selected to retransmit depends on all the allocated rates. Also, from the analytical definition of the individual outage event, i.e., equation (2.5), we see that, theoretically, the vector of rates should be optimized jointly. Now, before we present the optimization problem, we define the corresponding notations:
• n_MCS is the number of different MCSs available.
• R̃ = {R̃_1, . . . , R̃_{n_MCS}} is the set of possible rates available.
• R̂_i is the rate of source i after the optimization.
• R_i is one possible value of R̂_i, taken from the set of possible rates available.
Using these notations, our given optimization problem can be written as:
$$\begin{aligned} \{\hat{R}_1, \ldots, \hat{R}_M\} = \operatorname*{argmax}_{\{R_1,\ldots,R_M\} \in \tilde{\mathcal{R}}^M} \; & \mathbb{E}\left\{ \frac{\sum_{i=1}^{M} R_i\, (1 - O_{i,T_{\text{used}}})}{M + \alpha\, T_{\text{used}}} \right\}, \\ \text{subject to: } & \Pr(O_{i,T_{\text{used}}} = 1) \le \mathrm{BLER}_{\mathrm{QoS},i}, \; \forall i \in \{1, \ldots, M\}, \\ & R_i \ge R_{\min,i}, \; \forall i \in \{1, \ldots, M\}. \end{aligned} \quad (3.1)$$
In SLA, a QoS constraint per source is introduced as a minimum rate with an outage probability threshold (BLER). For FLA, since the full CSI is known at the destination, it is possible to avoid any individual outage per frame by simply not transmitting or, equivalently, transmitting with zero rate. Furthermore, we chose not to introduce any constraint on minimum rates in order to have a benchmark on the maximum achievable spectral efficiency.
To our knowledge, a closed-form solution for this multi-variable optimization problem has not been found yet. Indeed, it is always possible to find the solution to such a problem by exhaustively checking all (n_MCS)^M possible combinations of allocated rates and choosing the one which maximizes the ASE subject to the individual QoS constraints. Clearly, such an approach is computationally very expensive. Accordingly, sub-optimal solutions are needed to relax the complexity of the problem, and thus we resort to BRD tools.
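For illustration, the brute-force counterpart of (3.1) can be sketched in Python as follows; the two estimator callbacks stand for the Monte-Carlo routines described below, and the toy usage at the end only makes the snippet runnable. They are assumptions of this illustration, not code from the thesis.

from itertools import product

def exhaustive_rate_allocation(rate_set, M, estimate_ase, estimate_bler, bler_qos, r_min):
    """Enumerate all n_MCS^M rate vectors and keep the feasible one with the best estimated ASE."""
    best, best_ase = None, -1.0
    for rates in product(rate_set, repeat=M):              # n_MCS ** M candidates
        if any(r < r_min[i] for i, r in enumerate(rates)):
            continue                                        # minimum-rate constraint violated
        if any(estimate_bler(rates, i) > bler_qos[i] for i in range(M)):
            continue                                        # BLER QoS constraint violated
        ase = estimate_ase(rates)
        if ase > best_ase:
            best, best_ase = rates, ase
    return best, best_ase

# Toy usage with dummy estimators (no outage, ASE taken as the mean rate).
dummy_ase = lambda rates: sum(rates) / len(rates)
dummy_bler = lambda rates, i: 0.0
print(exhaustive_rate_allocation([0, 0.75, 1.5], 3, dummy_ase, dummy_bler, [1e-3] * 3, [0] * 3))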
Another reason for looking for sub-optimal solutions is the fact that even in case there is a closed-form solution, this solution would be different from one cooperative network to another. This is seen as another difficulty in the considered problem. In other words, the optimal rate allocation of the cooperative network depends on the assumptions of the given network (e.g., relaying protocol used, selection strategy adopted, etc.). Thus, we investigate here the BRD approach, being a solution that can be used in different cooperative networks and not only the considered one. As mentioned in chapter 1, most of the papers that deal with efficient rate allocation strategies consider a single source, a single or eventually multiple relays, and a single destination; which ensures the novelty of the targeted problem in the considered MAMRN.
Following the BRD approach, the problem is solved in two steps: in step one, the destination chooses the initial source rates, then in step two, the destination uses the BRD methods to update the initial allocations by searching for a better result. The correction is done iteratively for each source node (not jointly), and the correction process is repeated until convergence to the "optimal" solution is reached. In the following subsection, we attempt to find a clever algorithm to reach a good starting point, since the optimal solution and the convergence speed will depend on it. Thus, rather than using random source rate values, we follow a GA assumption and try to find a suitable initialization algorithm.
Starting point using the "Genie-Aided" assumption
In order to reduce the complexity, we can resort to an approach that is based on a GA assumption, where all the sources s ∈ S \ {i} = {1, 2, . . . , i - 1, i + 1, . . . , M}, i.e., all but the one for which we want to allocate the rate, i, are assumed to be decoded correctly at the destination and the relaying nodes. Following this relaxation of the problem, the dependence of the decoding set of the potentially cooperating nodes on sources other than the considered one is avoided. From the viewpoint of a source i, the MAMRN reduces to a (1, L + M - 1, 1) multiple relay network. An example is given in Fig. 3.1 for i = s_1, where the sources {s_2, . . . , s_M} are symbolically denoted by {r_{L+1}, . . . , r_{L+M-1}}, as they only serve as relays. Under such an assumption, the node selection strategy yields a different sequence of selected nodes than the case where the GA assumption is not taken. Indeed, since source i is the only one that is not decoded correctly at the destination, all the scheduling decisions are oriented towards helping this source exclusively, which results in an allocated rate higher than the optimal one. A possibly better approximation of the realistic node selection sequence while evaluating the rate R_i in the GA algorithm is a random node activation sequence, and this approach is adopted in the rest of the manuscript when using GA.
Hence, although the initial rates found under the GA assumption are not the exact solutions to the maximization problem (3.1), they can serve as a good starting point for finding the optimal solutions. Indeed, even though we always consider that only one source is not decoded correctly, which is not a realistic assumption, and that the node activation sequence is purely random, we take into account the quality of all the links which can potentially help the transmission of a given source in the calculation.
In this manuscript, in the SLA scenario, we assume that the channel statistics of each link follow a centered circularly symmetric complex Gaussian distribution. Since the links are independent of each other, the average SNR of each link is sufficient as an input to recover the statistics of that link. Given the simplified (1, L + M - 1, 1) network, the problem of finding the optimal rate R_i for the source i subject to a BLER_QoS,i constraint takes the following form:
$$\hat{R}^{\text{SLA}}_i = \operatorname*{argmax}_{R_i \in \tilde{\mathcal{R}}} \; \mathbb{E}\left\{ \frac{R_i\, (1 - O_{i,T_{\text{used}}})}{M + \alpha\, T_{\text{used}}} \right\}, \quad \text{subject to } \Pr\{O_{s,T_{\text{used}}} = 1\} \le \mathrm{BLER}_{\mathrm{QoS},s}, \; \forall s \in \{1,\ldots,M\}, \text{ and } R_i \ge R_{\min,i}, \quad (3.2)$$
where $\Pr\{O_{i,T_{\text{used}}} = 1\} = \int \left[ R_i > I_{i,d} + \sum_{l=1}^{T_{\text{used}}} \alpha\, I_{a_l,d}\, [i \in \mathcal{S}_{a_l,l-1}] \right] \Pr(\mathbf{H})\, d\mathbf{H}$, with Pr(H) being the joint probability of the channel realizations of all the links in the network. Note that equation (3.2) is obtained by a direct simplification of equation (3.1), where only one source node is considered. The problem of finding the maximum rate R_i for source i in the case of FLA simplifies to the following:
$$\hat{R}^{\text{FLA}}_i = \operatorname*{argmax}_{R_i \in \tilde{\mathcal{R}}} \; \frac{R_i}{M + \alpha\, T_{\text{used}}} \left( 1 - \left[ R_i > I_{i,d} + \sum_{l=1}^{T_{\text{used}}} \alpha\, I_{a_l,d}\, [i \in \mathcal{S}_{a_l,l-1}] \right] \right). \quad (3.3)$$
A detailed step-by-step algorithm in which a rate is allocated to source i under GA assumption with CDI available at the destination (SLA) is given by Algo. 1. First, we set the initial best efficiency. Then, each possible candidate rate from the set { R 1 , . . . , R n MCS } is considered one after another in the first "for loop" on j. We only consider the rates satisfying the minimum rate constraint R j ≥ R min,i . The second "for loop" allows to average out the Pr(O i,T used = 1), for the given rate R j over Nb MC realizations of all channels. The averaging is done according to statistics given by the average SNRs of all links. Hence, inside the loop cnt, the quality of each channel is known, since they result from the random realization of all channels. Therefore, in order to calculate the probability of outage seen in equation (3.2), it is sufficient to use the Monte-Carlo (MC) simulations approach, where the integral is replaced by the sum:
$$\int \left[ R_i > I_{i,d} + \sum_{l=1}^{T_{\text{used}}} \alpha\, I_{a_l,d}\, [i \in \mathcal{S}_{a_l,l-1}] \right] \Pr(\mathbf{H})\, d\mathbf{H} \;=\; \frac{1}{\mathrm{Nb}_{\mathrm{MC}}} \sum_{cnt=1}^{\mathrm{Nb}_{\mathrm{MC}}} \left[ R_i > I_{i,d}(\mathbf{H}_{cnt}) + \sum_{l=1}^{T_{\text{used}}} \alpha\, I_{a_l,d}(\mathbf{H}_{cnt})\, [i \in \mathcal{S}_{a_l,l-1}] \right].$$
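The Python sketch below illustrates this Monte-Carlo estimation for a single source under the GA view with a random node activation sequence and Rayleigh links. The SNR values, α and T_max are illustrative assumptions, and the check of whether the activated node actually decoded the source (the role of C_1 in Algo. 1) is omitted for brevity.

import numpy as np

rng = np.random.default_rng(1)

def mutual_info(avg_snr):
    h2 = avg_snr * rng.exponential()            # |h|^2 for a Rayleigh link with this average SNR
    return np.log2(1 + h2)

def outage_probability(rate, avg_snr_direct, avg_snr_relays_d, alpha, t_max, nb_mc=10_000):
    out = 0
    for _ in range(nb_mc):
        acc = mutual_info(avg_snr_direct)       # first-phase direct transmission of source i
        t = 0
        while acc < rate and t < t_max:         # GA: every retransmission is dedicated to source i
            t += 1
            a_t = rng.integers(len(avg_snr_relays_d))          # random node activation sequence
            # For brevity we assume a_t decoded i and simply accumulate its mutual information.
            acc += alpha * mutual_info(avg_snr_relays_d[a_t])
        out += acc < rate                        # count MC samples that end in outage
    return out / nb_mc

print(outage_probability(rate=1.5, avg_snr_direct=2.0, avg_snr_relays_d=[2.0, 1.0], alpha=1.0, t_max=4))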
The FLA algorithm is very similar to the SLA one, so it is left out of the text. The main difference is the absence of the averaging of the individual outage probability over Nb MC realizations of all channels. For that reason, variables out, T , and P out i,R j are not used, nor is the "for" loop on cnt. Additionally, instead of drawing the channels H cnt , it is assumed that H is already known at the destination due to available CSI information of all channels.
Sequential Best-Response Dynamic solution
After setting up the starting point of the rates (using the GA approach), the BRD algorithm follows. The idea is to modify, iteratively, the chosen rates. Since the joint allocation is very complex, we correct the starting point chosen for each source node successively. In that case, the rate of source i is a function of the sources' rates updated in the same iteration prior to source i (sources with index i′ < i), and of the rates updated for the last time in the previous iteration, t - 1 (sources with index i′′ > i). The correction is repeated until the algorithm converges, when no source node modifies its rate any further.

Algorithm 1 Slow-link adaptation algorithm based on the "Genie-Aided" assumption for source i s.t. the BLER_QoS,i target.
1: S_best ← -1.                                   ▷ Initialize the best efficiency to -1.
2: for j ← 1 to n_MCS do                          ▷ Number of candidate rates.
3:    Pick sequentially R̃_j ∈ {R̃_1, . . . , R̃_{n_MCS}} s.t. R̃_j ≥ R_min,i.
4:    S ← 0.                                      ▷ Initialize the efficiency sum to 0.
5:    out ← 0.                                    ▷ Counter of MC samples leading to outage.
6:    T̄_used ← 0.                                ▷ Accumulated nb. of rounds used in the 2nd phase.
      ⋮
      for t ← 1 to T_max do                       ▷ For each round we do:
16:      Random node selection by the scheduler: a_t.
17:      C_1 ← I_{i,a_t} + α ∑_{k=1}^{t-1} I_{a_k,a_t} 1{i ∈ S_{a_k,k-1}}.   ▷ Acc. mut. inf. between i and a_t.
18:      if R̃_j ≤ C_1 then                        ▷ Check if a_t has decoded i.
19:         S_{a_t,t-1} ← S_{a_t,t-1} ∪ {i}.
         end if
         C_2 ← I_{i,d} + α ∑_{k=1}^{t} I_{a_k,d} 1{i ∈ S_{a_k,k-1}}.
22:      if R̃_j ≤ C_2 then                        ▷ Check if the dest. has decoded i.
23:         S_{d,t} ← S_{d,t} ∪ {i}.
24:         T_used ← t.                           ▷ Nb. of rounds used for the current MC sample.
25:         break.                                ▷ out does not change.
26:      end if
27:      if t = T_max then
28:         out ← out + 1.
29:         ōut ← 1.
30:         T_used ← T_max.
31:      end if
32:   end for
33:   T̄_used ← T̄_used + T_used.
34:   S ← S + R̃_j (1 - ōut) / (M + α T_used).     ▷ Compute the current efficiency.
35:   end for
36:   P^out_{i,R̃_j} ← out / Nb_MC.                 ▷ The avg. outage prob. of i with R̃_j.
37:   if S > S_best and P^out_{i,R̃_j} ≤ BLER_QoS,i then
38:      S_best ← S and R̂_i ← R̃_j.
39:   end if
40: end for

Algorithm 2 Best-Response algorithm under the QoS constraints on individual BLER targets BLER_QoS,i, ∀i ∈ {1, . . . , M}.
1: t ← 0.                                          ▷ Counter of iterations.
2: Set the candidate rates: R̃ = {R̃_1, . . . , R̃_{n_MCS}}.
3: Rate initialization under the GA assumption with a random node selection:
   {R̂_1(0), . . . , R̂_M(0)} ← {R^GA_1, . . . , R^GA_M}.
4: R̂_i(-1) ← 0 for all i ∈ {1, . . . , M}.          ▷ To force the loop to start.
5: while |R̂_i(t) - R̂_i(t - 1)| > 0 for some i ∈ {1, . . . , M} do
6:    t ← t + 1.
7:    for i ← 1 to M do                             ▷ For all sources, choose the rate that maximizes the ASE
8:       R̂_i(t) ← argmax_{R_i ∈ R̃} η(R̂_1(t), . . . , R̂_{i-1}(t), R_i, R̂_{i+1}(t - 1), . . . , R̂_M(t - 1))
            such that Pr{O_{s,T_used} = 1} ≤ BLER_QoS,s for all s ∈ {1, . . . , M}
            and R_i ≥ R_min,i                        ▷ While satisfying the constraints.
9:    end for
10: end while

In the correction method described above, a given source i chooses the best rate corresponding to the maximum spectral efficiency while meeting the QoS constraints. Based again on equation (3.1), we write the optimization problem for a given source node i as:
$$\hat{R}_i = \operatorname*{argmax}_{R_i \in \tilde{\mathcal{R}}} \; \mathbb{E}\left\{ \frac{\sum_{j=1, j \neq i}^{M} R_j\, (1 - O_{j,T_{\text{used}}}) + R_i\, (1 - O_{i,T_{\text{used}}})}{M + \alpha\, T_{\text{used}}} \right\}, \quad \text{subject to: } \Pr(O_{s,T_{\text{used}}} = 1) \le \mathrm{BLER}_{\mathrm{QoS},s}, \; \forall s \in \{1,\ldots,M\}, \text{ and } R_i \ge R_{\min,i}. \quad (3.4)$$
Algo. 2 presents the BRD algorithm in the case of the SLA scenario. The same modifications mentioned for the GA algorithm are needed here to adapt it to FLA. The destination first initializes the rates following the result of the GA approach, and then it performs the correction process for all the M source nodes. The algorithm terminates once there are no further changes in the rate choices of all the source nodes. A small remark should be made about step 8 of the BRD algorithm. In this step, we have captured all the details previously mentioned in the GA algorithm (between step 7 and step 35), i.e., the details of the MC simulation used to obtain the outage probability and the average number of rounds used. This is done for the sake of brevity, but both algorithms perform individual outage tests to meet the QoS constraint while allocating the rates.
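A compact Python sketch of the correction loop of Algo. 2 is given below; estimate_ase and meets_qos stand for the Monte-Carlo routines of Algo. 1, and the toy utility used at the end is only there to make the snippet runnable. They are assumptions of this illustration, not code from the thesis.

def brd_rate_allocation(rate_set, init_rates, estimate_ase, meets_qos, max_iter=50):
    rates = list(init_rates)                         # GA starting point
    for _ in range(max_iter):
        changed = False
        for i in range(len(rates)):                  # correct the sources one after another
            best_r, best_val = rates[i], estimate_ase(rates)
            for r in rate_set:                       # try every candidate rate for source i
                cand = rates[:i] + [r] + rates[i + 1:]
                val = estimate_ase(cand)
                if val > best_val and meets_qos(cand):
                    best_r, best_val = r, val
            if best_r != rates[i]:
                rates[i], changed = best_r, True
        if not changed:                              # stopping condition of step 5
            break
    return rates

# Toy usage: a made-up utility, only to show that the loop converges.
toy_ase = lambda rs: sum(rs) - 0.4 * max(rs)
print(brd_rate_allocation([0, 0.75, 1.5], [0, 0, 0], toy_ase, lambda rs: True))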
By performing MC simulations, whose results are presented next, we observe that the number of iterations needed for the algorithm to converge is relatively small. In addition, we observe that the MC method is robust to the number of samples, as the degradation seen when decreasing the number of samples is not significant, i.e., the results barely change even when the simulations were based on only 10 samples. Moreover, we have demonstrated that the utility function is not always convex, since the BRD, when initialized with starting points other than the GA one, can converge to a local optimum far from the global one depending on the simulation scenario (not presented for brevity). This confirms that the convergence analysis is scenario dependent, which makes it extremely difficult to tackle analytically. In the next subsection, we discuss the convergence and the complexity of the BRD algorithm used.
Convergence and complexity
Convergence: Theorem 3.2.1. The BRD algorithm converges to an optimal (local or global) rate allocation after a finite number of iterations.
Proof: The BRD is composed of an initialization step and an iterative correction step. The initialization step is always fixed to one iteration. Concerning the correction step, the stopping condition (step 5 in Algo. 2) is met when the allocation of the rates of all the sources i ∈ {1, ..., M} is unchanged with respect to the previous iteration. Since the number of possible rates for each source is finite (n_MCS), and since the argmax at step 8 only updates the rates if η strictly increases, the BRD process completes within a finite number of iterations.
Complexity:
The complexity of the proposed BRD algorithm is much smaller than that of the exhaustive search algorithm. In the latter, the calculation of each Pr(O_{i,T_used} = 1) is performed (n_MCS)^M times, while in the proposed algorithm, the same calculation is performed n_MCS × M times per iteration. Note that the GA algorithm is only an initialization phase where the rates are fixed; it thus requires n_MCS × M evaluations of the same calculation. As we will see next, the number of BRD iterations is relatively small, ensuring the practicality of the proposed BRD algorithm.
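For instance, with the set of n_MCS = 6 rate values and M = 3 sources used later in the numerical results, the exhaustive search requires 6^3 = 216 evaluations of the outage probabilities, whereas one BRD iteration (and likewise the GA initialization) requires only 6 × 3 = 18.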
Variable time slot duration
In this section, we consider LA for both rate and channel use allocation. In sharp contrast with existing cooperative transmission schemes, we consider the packet size to be time-varying. Although this assumption increases the complexity of the allocation problem, it is an interesting DoF which plays an important role in improving the efficiency of the network. The idea is that, based on the channel conditions of the different sources, it is better to give more channel uses to a source in good radio conditions and fewer channel uses to a source in bad radio conditions. Thus, we propose in this section to generalize the previously presented algorithms to tackle both rate and channel use allocation.
Novel system model
In the prior art [START_REF] Mohamad | Outage achievable rate analysis for the non orthogonal multiple access multiple relay channel[END_REF][START_REF] Mohamad | Outage analysis of various cooperative strategies for the multiple access multiple relay channel[END_REF][START_REF] Mohamad | Outage analysis of dynamic selective decode-and-forward in slow fading wireless relay networks[END_REF][START_REF] Mohamad | Dynamic selective decode and forward in wireless relay networks[END_REF][START_REF] Cerovic | Centralized scheduling strategies for cooperative harq retransmissions in multi-source multirelay wireless networks[END_REF][START_REF] Cerović | Efficient cooperative harq for multi-source multi-relay wireless networks[END_REF], the number of channel uses in the transmission phase, U, and in the retransmission phase, Q, was fixed, and accordingly, the ratio of channel uses between the two phases was also fixed. Here, we introduce a new DoF composed of a variable ratio of the number of channel uses. In particular, we fix the number of channel uses in the retransmission phase, and we define a variable number of channel uses in the transmission phase for each source node s. In other words, the channel use ratio for a source node s ∈ {1, . . . , M} is denoted as α_s = Q / U_{1,s}. As a result of the introduced variable parameter, the LA problem we are investigating becomes more complex. Indeed, rather than adapting only the source rates at the initial phase before starting the transmission of a new frame, the destination should now allocate both the rate and the ratio for each source node. In Fig. 3.2, we see the initialization phase where the rates are initialized, the transmission phase where a different number of channel uses is used for each source node, and the retransmission phase where there is a fixed number of channel uses and different allocated relaying nodes. A key assumption of our work is that the sources can use packets of variable sizes in the transmission phase (i.e., U_{1,i} in the first phase); accordingly, the initialization phase includes both rate and channel use allocation. Fixing Q and varying U (and not the contrary) is simpler since, during the retransmission phase, a time slot is not dedicated to a specific node (be it a relay or a source).
Performance metric and outage events for variable channel uses
Upon making the number of channel uses in the transmission phase variable, we define:
• n CUR is the number of different Channel Use Ratios (CUR) available.
• A = { α 1 , . . . , α n CUR } is the set of possible channel use ratios available.
• α̂_s is the channel use ratio of source s after the optimization.
• α_s is one possible value of α̂_s, taken from the set of possible channel use ratios available.
Now, the spectral efficiency per frame with Variable Channel Use (VCU) η frame-VCU can be written as:
$$\eta_{\text{frame-VCU}}(\mathbf{H}, P) = \frac{\text{nb bits successfully received}}{\text{nb channel uses}} = \frac{\sum_{i=1}^{M} K_i\, (1 - O_{i,T_{\text{used}}})}{\sum_{l=1}^{M} U_{1,l} + Q\, T_{\text{used}}} = \frac{\sum_{i=1}^{M} \frac{R_i}{\alpha_i}\, (1 - O_{i,T_{\text{used}}})}{\bar{\alpha} + T_{\text{used}}}, \quad (3.5)$$
where $\bar{\alpha} = \sum_{l=1}^{M} 1/\alpha_l$ denotes the sum of the inverses of all the channel use ratios, and R_i = K_i / U_{1,i} represents the rate of a source i. Note that R_i and α_i have fixed values for SLA, while they change from frame to frame for FLA. In [START_REF] Mohamad | Outage analysis of various cooperative strategies for the multiple access multiple relay channel[END_REF], Mohamad et al. provide an outage analysis for various cooperative schemes. Nevertheless, that analysis was based on a fixed number of channel uses in the transmission phase. Accordingly, we present here the outage derivations when the number of channel uses in the transmission phase is variable. Specifically, in the case of a variable number of channel uses, the equations that represent the common and the individual outage events for the MU and SU cases (i.e., equations (2.3)-(2.7)) can be generalized to:
$$\mathcal{E}^{\text{VCU}}_{t,\mathcal{B}}(a_t, \mathcal{S}_{a_t,t-1}) = \bigcup_{\mathcal{U} \subseteq \mathcal{B}} \left\{ \sum_{i \in \mathcal{U}} \frac{R_i}{\alpha_i} > \sum_{i \in \mathcal{U}} \frac{I_{i,d}}{\alpha_i} + \sum_{l=1}^{t-1} I_{a_l,d}\, C_{a_l}(\mathcal{U}) + I_{a_t,d}\, C_{a_t}(\mathcal{U}) \right\}, \quad (3.6)$$
$$\mathcal{O}^{\text{VCU}}_{s,t}(a_t, \mathcal{S}_{a_t,t-1}) = \bigcap_{\mathcal{I} \subset \overline{\mathcal{S}}_{d,t-1},\, \mathcal{B} = \overline{\mathcal{I}},\, s \in \mathcal{B}} \mathcal{E}^{\text{VCU}}_{t,\mathcal{B}}(a_t, \mathcal{S}_{a_t,t-1}) = \bigcap_{\mathcal{I} \subset \overline{\mathcal{S}}_{d,t-1}} \; \bigcup_{\mathcal{U} \subseteq \overline{\mathcal{I}}:\, s \in \mathcal{U}} \left\{ \sum_{i \in \mathcal{U}} \frac{R_i}{\alpha_i} > \sum_{i \in \mathcal{U}} \frac{I_{i,d}}{\alpha_i} + \sum_{l=1}^{t-1} I_{a_l,d}\, C_{a_l}(\mathcal{U}) + I_{a_t,d}\, C_{a_t}(\mathcal{U}) \right\}, \quad (3.7)$$
$$\mathcal{O}^{\text{SU-VCU}}_{s,t}(a_t, \mathcal{S}_{a_t,t-1}) = \left\{ \frac{R_s}{\alpha_s} > \frac{I_{s,d}}{\alpha_s} + \sum_{l=1}^{t-1} I_{a_l,d}\, C^{\text{SU}}_{a_l} + I_{a_t,d}\, C^{\text{SU}}_{a_t} \right\}, \quad (3.8)$$
$$\mathcal{E}^{\text{SU-VCU}}_{t,\mathcal{B}}(a_t, \mathcal{S}_{a_t,t-1}) = \bigcup_{s \in \mathcal{B}} \mathcal{O}^{\text{SU-VCU}}_{s,t}(a_t, \mathcal{S}_{a_t,t-1}). \quad (3.9)$$
We see that the ratio values are included in the outage events, and thus, the optimal allocation of ratios should be performed jointly. Similarly, we generalize the optimization equations to the VCU case (i.e., the equations (3.1) and (3.4)) as:
$$\begin{aligned} (\{\hat{R}_1, \hat{\alpha}_1\}, \ldots, \{\hat{R}_M, \hat{\alpha}_M\}) = \operatorname*{argmax}_{\{\{R_1,\alpha_1\},\ldots,\{R_M,\alpha_M\}\} \in \{\tilde{\mathcal{R}}, \tilde{\mathcal{A}}\}^M} \; & \mathbb{E}\left\{ \frac{\sum_{i=1}^{M} \frac{R_i}{\alpha_i}\, (1 - O_{i,T_{\text{used}}})}{\bar{\alpha} + T_{\text{used}}} \right\}, \\ \text{subject to: } & \Pr(O_{i,T_{\text{used}}} = 1) \le \mathrm{BLER}_{\mathrm{QoS},i}, \; \forall i \in \{1,\ldots,M\}, \\ & R_i \ge R_{\min,i}, \; \forall i \in \{1,\ldots,M\}. \end{aligned} \quad (3.10)$$

$$\begin{aligned} \{\hat{R}_i, \hat{\alpha}_i\} = \operatorname*{argmax}_{\{R_i,\alpha_i\} \in \{\tilde{\mathcal{R}}, \tilde{\mathcal{A}}\}} \; & \mathbb{E}\left\{ \frac{\sum_{j=1, j \neq i}^{M} \frac{R_j}{\alpha_j}\, (1 - O_{j,T_{\text{used}}}) + \frac{R_i}{\alpha_i}\, (1 - O_{i,T_{\text{used}}})}{\bar{\alpha} + T_{\text{used}}} \right\}, \\ \text{subject to: } & \Pr(O_{s,T_{\text{used}}} = 1) \le \mathrm{BLER}_{\mathrm{QoS},s}, \; \forall s \in \{1,\ldots,M\}, \text{ and } R_i \ge R_{\min,i}, \end{aligned} \quad (3.11)$$
where (3.10) represents the joint (rate-ratio) pair optimization and (3.11) represents the (rate-ratio) pair optimization per source (when using the BRD to be presented next). The difference seen in the latter two equations as compared to the case of fixed channel use allocation is that we are optimizing now η frame-VCU which includes not only the rate values but also the ratios.
Rate and channel use allocation
In this subsection, we present the BRD algorithm that allocates both the rate and the ratios of the sources (i.e., the {R i , α i } pairs). The methodology of the BRD algorithm remains the same. First, we have an initialization step. Then, a sequential correction follows. As seen in the optimization equations (3.10) and (3.11), upon including the VCU DoF, the ratios are included jointly in the optimizations. In other words, in order to reach the optimal allocation, the rates and the ratios of all sources should be allocated jointly (3.10). Again, and to avoid the exponential number of possible allocations, BRD is used to perform allocations sequentially (3.11).
For the initialization step, we use a similar algorithm to that presented in Algo. 1. Concerning the ratios, we set each source ratio to the average value of the possible ratios. In other words, for all sources i ∈ {1, . . . , M}, we set α̂_i to $\alpha_{\text{average}} \leftarrow \frac{1}{|\tilde{\mathcal{A}}|} \sum_{q=1}^{|\tilde{\mathcal{A}}|} \tilde{\alpha}_q$, where Ã = {α̃_1, . . . , α̃_{n_CUR}}. In case α_average is not in the set of possible ratios, we choose the closest one to it: $\hat{\alpha}_i \leftarrow \operatorname*{argmin}_{\alpha \in \tilde{\mathcal{A}}} |\alpha - \alpha_{\text{average}}|$. Concerning the rates, the same steps as in Algo. 1 are performed. So, to sum up, the initialization steps are the same as in the fixed channel use case, while using the average ratio for all the sources.
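A one-line Python sketch of this ratio initialization, using the candidate ratio set of the numerical results section, is given below.

ratio_set = [0.1, 0.55, 1.0, 1.45, 1.9]                               # candidate channel use ratios
alpha_average = sum(ratio_set) / len(ratio_set)                       # = 1.0 here
alpha_init = min(ratio_set, key=lambda a: abs(a - alpha_average))     # closest candidate ratio
print(alpha_average, alpha_init)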
For the BRD correction step, the algorithm proceeds in a similar manner to Algo. 2. The difference here is that we allocate both the rate and the ratio of each source successively. In addition, the algorithm terminates when there is no change in either the rates or the ratios of all the sources. The steps of this algorithm are presented in Algo. 3.

Algorithm 3 Best-Response algorithm under the QoS constraints on individual BLER targets BLER_QoS,i, ∀i ∈ {1, . . . , M}, for the VCU case.
1: t ← 0.                                          ▷ Counter of iterations.
2: Set the candidate rates and ratios: R̃ = {R̃_1, . . . , R̃_{n_MCS}}, Ã = {α̃_1, . . . , α̃_{n_CUR}}.
3: Rate and ratio initialization under the GA assumption with a random node selection:
   [R̂_1(0), . . . , R̂_M(0)] ← [R^GA_1, . . . , R^GA_M], [α̂_1(0), . . . , α̂_M(0)] ← [α^GA_1, . . . , α^GA_M].
4: {R̂_i(-1), α̂_i(-1)} ← {0, 0} for all i ∈ {1, . . . , M}.   ▷ To force the loop to start.
5: while (|R̂_i(t) - R̂_i(t - 1)| > 0 or |α̂_i(t) - α̂_i(t - 1)| > 0) for some i ∈ {1, . . . , M} do
6:    t ← t + 1.
7:    for i ← 1 to M do                             ▷ For all sources, choose the pair which maximizes η
8:       {R̂_i(t), α̂_i(t)} ← argmax_{{R_i, α_i} ∈ {R̃, Ã}} η({R̂_1(t), α̂_1(t)}, . . . , {R̂_{i-1}(t), α̂_{i-1}(t)}, {R_i, α_i}, {R̂_{i+1}(t - 1), α̂_{i+1}(t - 1)}, . . . , {R̂_M(t - 1), α̂_M(t - 1)})
9:    end for
10: end while

The convergence of the BRD algorithm for the VCU case holds following the same arguments as Theorem 3.2.1. Concerning the complexity, upon following a VCU case, the complexity increases. Nevertheless, the increase is not exponential. The only difference is that, rather than passing through n_MCS possible rates, we have to pass through n_MCS × n_CUR pairs of rates and ratios. Thus, and as we will see in the numerical results section, using a VCU transmission is seen as a trade-off between complexity and performance.
In Table 3.1, we summarize the complexity of the different allocation methods to be presented in the next section. This table covers the exhaustive search, BRD, and GA allocations, with fixed and variable channel use ratios. It reports the complexity and the performance of these methods, showing the value of the BRD algorithm as a practical method with lower complexity that approaches the benchmark of the complex exhaustive search approach.
Numerical results
In this section, we present the performance results of several scenarios using MC simulations. We consider the (3,3,1)-MAMRN, with T max fixed to 4. In addition, we assume independent Gaussian distributed channel inputs (with zero mean and unit variance), with I a,b = log 2 (1 + |h a,b | 2 ). Note that some other formulas could be also used for calculating I a,b where for example discrete entries, finite length of the (JNCC/JNCD) architectures, etc. would be taken into account. In addition, we can calibrate the mutual information by using weight factors as in [START_REF] Brueninghaus | Link performance models for system level simulations of broadband radio access systems[END_REF]. As mentioned in [START_REF] Polyanskiy | Channel coding rate in the finite blocklength regime[END_REF], our main conclusions would still apply to these different functions of mutual information. Moreover, we adopt an asymmetric link configuration setting, where each link has a different average SNR. The average SNR of each link is obtained from a unique value γ following the ordered steps:
1. All links are set to γ.
2. Links including source 2 are set to γ-4dB.
3. Links including source 3 are set to γ-7dB.
4. Links including both sources 2 and 3 are set to γ-5dB.
We carefully chose this kind of asymmetric configuration in order to make source 1 the source with the best links, followed by source 2, with source 3 being the source in the worst conditions. We recall that the destination scheduling strategy is the one described in [START_REF] Cerović | Efficient cooperative harq for multi-source multi-relay wireless networks[END_REF], i.e., in each retransmission round the selected node is the one that has i) the highest mutual information with the destination and ii) which can help (its decoding set includes at least one message that has not yet been decoded correctly by the destination). The sets of rates and ratios used are {0, 0.75, 1.5, 2.25, 3, 3.75} [bits per channel use] and {0.1, 0.55, 1, 1.45, 1.9}, respectively.
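For reproducibility, the average-SNR configuration described by the four ordered steps above can be generated as follows. This is a sketch under our own conventions (node indexing and matrix layout are illustrative choices, not taken from the original text):

```python
import numpy as np

def asymmetric_snr_matrix(gamma_db, M=3, L=3):
    """Average SNR (in dB) of every link of the (3,3,1)-MAMRN example.

    Node order: sources 0..M-1, relays M..M+L-1, destination last.
    Ordered steps: all links at gamma; links involving source 2 at gamma-4dB;
    links involving source 3 at gamma-7dB; link between sources 2 and 3 at gamma-5dB.
    """
    n = M + L + 1
    snr = np.full((n, n), float(gamma_db))
    snr[1, :] = snr[:, 1] = gamma_db - 4.0   # node index 1 = source 2
    snr[2, :] = snr[:, 2] = gamma_db - 7.0   # node index 2 = source 3
    snr[1, 2] = snr[2, 1] = gamma_db - 5.0   # link between sources 2 and 3
    np.fill_diagonal(snr, np.nan)            # no self-links
    return snr

links = asymmetric_snr_matrix(gamma_db=10.0)
```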
We define two QoS scenarios. QoS$_1$: $\Pr(O_{i,T_{\text{used}}} = 1) \leq \text{BLER}_{\text{QoS},i} = 1$ and $R_i \geq R_{\min,i} = 0$ [bits per channel use], $\forall i \in \{1, \ldots, M\}$. QoS$_2$: $R_{\min,i} = 0.5$ [bits per channel use] and $\text{BLER}_{\text{QoS},i} = 10^{-3}$, $\forall i \in \{1, \ldots, M\}$. Clearly, with QoS$_1$, no constraint is taken into consideration. Note that although we investigated many different QoS constraints, we chose these two scenarios since we believe they cover two representative cases: no constraint and a severe constraint. As mentioned before, for FLA, we only use QoS$_1$. This is due to the fact that, since the full CSI is known at the destination, it is possible to avoid any individual outage per frame by simply not transmitting or, equivalently, transmitting with zero rate. In SLA, on the contrary, we investigate both cases: no constraint (i.e., QoS$_1$) and a severe constraint (i.e., QoS$_2$).
We divide the results into two main parts. In part 1, we investigate the effect of the new DoF of a variable number of channel uses in the transmission phase. There, we quantify the gain of the proposed idea, and we investigate how this gain changes with respect to the channel conditions and the system parameters (e.g., the number of sources and the number of relays). In part 2, we investigate the performance of the BRD algorithm allocating the rates and the channel uses of the sources. The performance is compared with an exhaustive search approach for both MU and SU encoding. We also test the practicality of the algorithm with respect to the channel conditions, the number of samples needed, and the system parameters.
In the first part, we compare the ASE with respect to γ of 3 communication schemes, namely, no cooperation, cooperation, and cooperation with variable ratios. In the case of no cooperation, T max is fixed to zero, meaning that we only have a transmission phase, and no notion of cooperation or retransmission is included. For the case of cooperation, the ratios of all the sources are fixed to 1 (the average value of the possible ratio set). Then, at each retransmission time slot, the scheduling strategy is the one recalled above [START_REF] Cerović | Efficient cooperative harq for multi-source multi-relay wireless networks[END_REF]. Finally, for the case of cooperation with variable ratios, the channel use ratios are optimized per source exploiting the proposed DoF.
In Fig. 3.3, we see the performance of the three schemes, no cooperation (T_max = 0), cooperation (T_max = 4) with fixed ratios (α = 1), and cooperation with variable ratios (α optimized per source node), with: a) FLA with QoS_1, b) SLA with QoS_1, and c) SLA with QoS_2. In the FLA scenario (i.e., in Fig. 3.3 (a)), the gain of cooperation with fixed ratios over no cooperation is significant at low SNR values (low γ) and decreases for high SNR values. On the other hand, upon introducing variable ratios in the transmission phase, the gain of cooperation increases and becomes significant over the whole considered SNR range (-5dB to 20dB). In Fig. 3.3 (b), a similar behavior is seen for SLA with QoS_1. Once again, optimizing the channel use in the transmission phase leads to a significant gain compared with fixed channel use and with no cooperation at all. Finally, in Fig. 3.3 (c), the performance of the three schemes is presented for SLA with QoS_2. Upon using this severe constraint, we see that with no cooperation, the system is always in outage. In other words, no possible allocation can achieve the required constraint. On the other hand, upon using cooperation, with or without optimized channel use allocation, the system is no longer in outage starting from γ = 4dB. Here again, optimizing α per source leads to better performance over the whole considered SNR range above γ = 4dB. To summarize, Fig. 3.3 gives the following findings: 1) using cooperation can help improve the performance, and is necessary with severe QoS constraints; 2) optimization of the channel use ratio can further improve the performance, leading to a significant gain compared to the fixed-ratio scheme.
We now aim to investigate the operational conditions under which the gain of the proposed DoF (variable ratios) is significant. We also aim to investigate this gain for different channel conditions (e.g., high SNR) and different system parameters. Accordingly, in the next two figures, we present the ratio of the ASE with optimized α to the ASE with fixed α. We present this ratio for the three cases: FLA, SLA with QoS_1, and SLA with QoS_2.
In Fig. 3.4, the ratio representing the gain of variable α compared with fixed α is shown over the SNR range (5dB to 35dB). We aim here to investigate how the gain changes for high SNR values. We see that for the three different LA schemes considered, the gain behaves in a similar way: it increases from low SNR values, reaches its peak at an intermediate SNR value, and then decreases for high SNR, reaching a ratio of 1 (meaning that we have no gain). To explain this asymptotic behavior, we recall that at high SNR, the destination is able to decode all the messages sent by all the sources no matter what rate or ratio is being used. Accordingly, the difference between the channel conditions of the different sources is insignificant (all sources face similar channel conditions of high SNR). Moreover, having fixed sets of possible rates and possible channel use ratios limits the gain. For the rates, the destination will select the highest possible rate for all the sources. And finally, upon having a fixed rate for all the sources, the channel use allocation becomes indifferent.
Such analysis can also be deduced directly by analyzing the spectral efficiency per frame equation (i.e., equation (3.5)). Following that equation, and at high SNR, we can fix R i to R max , and we can fix the outage and the T used to zero (at high SNR there is no outage and no need for retransmission phases). Then, it is directly seen that the ASE is limited to R max which justifies why we reach no gain at high SNR.
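Written out, under the stated high-SNR assumptions (all rates fixed to $R_{\max}$, no outage, no retransmission rounds), the per-frame ASE expression of equation (3.5) gives:

$$\lim_{\gamma \to \infty} \eta_{\text{frame}} = \frac{\sum_{i=1}^{M} R_{\max}\,(1 - 0)}{M + \alpha \cdot 0} = \frac{M\,R_{\max}}{M} = R_{\max},$$

so at high SNR the ASE of both the fixed-ratio and the variable-ratio schemes saturates at the largest rate of the MCS set, and the gain ratio tends to 1.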
A final comment concerns the ratio at low SNR. As we see in the previous figure (i.e., Fig. 3.3), the ASE at low SNR is very small (and sometimes equal to zero); accordingly, checking the ratio (ASE variable α / ASE fixed α) for such small values is not meaningful. In other words, at low SNR, we might observe a large gain ratio simply because the ASE itself is very small. This might mislead one into concluding that the proposed DoF is important at low SNR. On the contrary, after checking both the ASE and the ratio (ASE variable α / ASE fixed α), the proposed DoF is most significant at intermediate SNR values. Following Fig. 3.4, the gain is most significant in the range of 5dB to 15dB. Accordingly, in Fig. 3.5, we fix γ to 10dB and we investigate the effect of the number of sources and relays on the gain of the VCU case. The x-axis represents the value of M and L considered. In other words, for a given x, we consider an (x,x,1)-MAMRN. Note that we are still in an asymmetric link configuration, and for any value of M/L, the links of the sources are organized in a way that puts source i in better channel conditions than source j for i < j. Specifically, for an (x,x,1) system, the average SNR of each link is obtained from a unique value γ = 10 following the ordered steps:
1. All links are set to γ.
2. The links including source i ∈ {1, ..., x} are set to γ -2(i -1)dB.
For the three considered schemes (FLA, SLA with QoS 1 , SLA with QoS 2 ), we see that the gain of using variable ratios increases with the increase of the number of sources and relays. Also, we notice that the gain is approximately linear with respect to the size of the system. For FLA, the gain ratio reaches 1.7 with (8,8,1) system, whereas for SLA, the ratio is 1.45 and 1.3 with QoS 1 and QoS 2 respectively. This concludes the first part of the simulations. So to summarize, our findings are
• Using cooperation (with fixed ratios or variable ratios) can improve the spectral efficiency and is a must for severe QoS constraints.
• Using a variable number of channel uses at the transmission phase can help improve the spectral efficiency.
• The gain of using the new DoF is mostly significant for intermediate SNR values. This gain is limited at high SNR following the highest possible rate value in the set of possible rates considered.
• The gain of VCU increases with the size of the system considered (i.e., with an increase in the number of sources and relays).
Keeping the same parameter settings used in the first part of the simulations, and specifically the configurations used in Fig. 3.3, we now evaluate the performance of the BRD LA algorithm with α optimized per source against other possible LA strategies, including the LA utility metric optimization based on the exhaustive search approach. In particular, seven algorithms are presented:
• Exhaustive search approach: acting as the performance upper bound (exhaustive search over the possible vectors of pairs {α, rate}). This algorithm is presented for the MU and the SU encoding.
• Best-Response Dynamic algorithm. Again, this algorithm is presented for the MU and the SU encoding.
• Genie Aided approach: being the starting point of the BRD algorithm in the MU case.
• Maximum Rate approach: a trivial approach using the average α and the maximum rate available (3.75 [bits per channel use]).
• Minimum Rate approach: a trivial approach using the average α and the minimum positive rate available (0.75 [bits per channel use]).
Here again, we present the results for FLA, SLA with the QoS_1 target, and SLA with the QoS_2 target. It is evident that in the three different scenarios, the proposed BRD algorithm converges to the optimal exhaustive search approach in both cases, the MU and the SU encoding. In addition, we notice that the GA approach leads to a loss of around 7dB in the FLA case (Fig. 3.6 (a)), at most 5dB in SLA with the QoS_1 target (Fig. 3.6 (b)), and at most 5dB in SLA with the QoS_2 target (Fig. 3.6 (c)). Concerning the fixed allocation strategies, the minimum rate approach is always left behind. On the other hand, the fixed max rate approach performs close to the GA approach when there is no QoS constraint (as in Fig. 3.6 (a) and 3.6 (b)) and gives an unacceptable performance with a severe QoS constraint (as in Fig. 3.6 (c)), except for very high SNR. This result confirms the performance of the practical low-complexity BRD algorithm. Even with a varying number of channel uses at the transmission phase, the BRD algorithm approaches the complex exhaustive search approach. In addition, this result validates the observation in [START_REF] Cerović | Efficient cooperative harq for multi-source multi-relay wireless networks[END_REF] that the SU encoding strategy, while being much simpler, behaves similarly to its MU counterpart, and thus presents a great interest in practice since off-the-shelf capacity-approaching IR codes can be used. In short, this result validates the performance of the BRD for both the MU and the SU case. In the next two figures, we investigate two other aspects of the BRD. In Fig. 3.6, we validated its performance, which gives an ASE close to the upper bound. Next, we investigate its practicality, in terms of the needed number of MC iterations and the number of BRD iterations.
In SLA, the BRD algorithm is performed based on the CDI of the channel conditions. Accordingly, a number of samples need to be simulated at the destination in order to calculate the argmax seen in step 8 of Algo. 2. In Fig. 3.7, we present the ASE for SLA with QoS_1 while using 10, 100, and 1000 MC samples. As we see in this figure, the ASE changes only slightly with respect to the number of MC samples used. Specifically, even 10 samples were enough to reach acceptable performance. We conclude that the MC method is robust to the number of samples. Note that this result is also seen with different QoS constraints (e.g., QoS_2), but we only show it with QoS_1.
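As an illustration of how the destination can evaluate the SLA utility from the CDI, the sketch below averages the per-frame spectral efficiency over MC channel draws. The helper simulate_frame is assumed (it draws one channel realization from the CDI and runs the HARQ rounds); it is not part of the original text.

```python
def mc_average_spectral_efficiency(rates, alpha, simulate_frame, n_samples=10):
    """Monte-Carlo estimate of E{ sum_i R_i (1 - O_i) / (M + alpha * T_used) }.

    simulate_frame(rates): returns (decoded, t_used) for one channel draw,
    where decoded[i] is True if source i was decoded within T_max rounds.
    """
    M = len(rates)
    acc = 0.0
    for _ in range(n_samples):
        decoded, t_used = simulate_frame(rates)
        acc += sum(r for r, ok in zip(rates, decoded) if ok) / (M + alpha * t_used)
    return acc / n_samples
```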
Finally, in Fig. 3.8, we vary again the size of the system by varying the number of sources and relays used. The link configuration is similar to the one described in Fig. 3.4. Here, we investigate the number of BRD iterations used before reaching convergence. It is well known that the BRD algorithm converges since the number of possible rates and ratios is finite. In the previous results (Fig. 3.6), we validated that the BRD convergence value approaches the optimal value. In Fig. 3.7, we validated the practicality of the BRD algorithm, which can be performed using a low number of MC samples. Finally, in Fig. 3.8, we validate the convergence speed by presenting the number of iterations the BRD algorithm uses for each (x, x, 1)-MAMRN.
In Fig. 3.8, we see that for the three scenarios (FLA, SLA with QoS_1, and SLA with QoS_2) the number of BRD iterations is relatively small. Since in FLA the LA is performed for each new CSI, we present the average number of BRD iterations used. On the contrary, since in SLA the LA is performed over a fixed CDI, we present the exact number of BRD iterations used. In FLA, SLA with QoS_1, and SLA with QoS_2, the number of iterations used remains low. For FLA, we see that the average number of iterations used is less than 3 in all the considered systems up to the (8,8,1)-MAMRN. Similarly in SLA, we see that with QoS_1 the highest number of iterations used was 4, and with QoS_2 the number of iterations used was always 2. It is seen that upon having a constraint, fewer options are available to the BRD algorithm, leading to faster convergence than in the unconstrained scenario. This concludes the second part of the simulations. So to summarize, our findings are
• The BRD algorithm is approaching the exhaustive search approach while tackling both rate and ratio allocation.
• The performance is similar for the MU and the SU encoding, which is interesting due to the practicality of the SU encoding case.
• In SLA, the MC method is robust to the number of samples needed, where only 10 iterations were sufficient in the presented scenario.
• The number of BRD iterations is low in all the considered systems (up to (8,8,1)), which again validates the practicality of the considered algorithm.
FLA with partial CSI
In the previous three sections, two cases were considered: SLA and FLA. SLA relies on the CDI of all links at the destination, while FLA relies on the CSI of all links. Clearly, SLA is less demanding in terms of channel information acquisition control overhead, while FLA provides closer-to-optimum scheduling decisions. In this section, we propose and analyze a novel practical LA algorithm combining the benefits of both. It is based on a FLA strategy with partial CSI. The proposed LA strategy is a FLA algorithm in the sense that the rates are allocated at the beginning of each frame based on the partial CSI of the direct links. However, it relies on the average values of the unknown indirect links to avoid their heavy acquisition (in terms of signaling overhead). Note that this strategy can be adopted for both the fixed and the variable channel use cases. For brevity, and to capture the performance of FLA with partial CSI, we present and investigate this proposal for fixed channel use allocation. Nevertheless, the proposal can be generalized to the VCU case in a straightforward manner.
The proposed scheme is therefore an intermediate LA strategy which outperforms SLA but does not require the control overhead in FLA. Table 3.2 summarizes and compares the control exchange process of the three adaptation schemes, where the first two rows correspond to FLA and SLA proposed in the previous sections, while the last row corresponds to our proposed intermediate solution. Clearly, we see how the proposed LA reduces the information needed at the destination while exploiting all the available information.
From an overhead perspective, the proposed strategy acts similarly to SLA, i.e., it incurs a small overhead. We neglect the overhead of the channel acquisition of the direct links and the one related to the allocation of the rates per frame. We focus on the overhead of the indirect links, which is the most costly. In a given (M, L, 1)-MAMRN, the number of indirect links corresponds to the number of possibilities to select two nodes among M + L, which is $\binom{M+L}{2}$. This means that the overhead can be calculated as $\binom{M+L}{2} \times$ nb bits per CSI/CDI $\times$ update percentage. As an example, we consider a (3, 3, 1)-MAMRN where 9 bits are needed for the quantification of a real value. In the case of high mobility, where the CSI changes after each frame, the overhead of FLA is $\binom{6}{2} \times 18 \times 1 = 270$ bits. On the other hand, and since SLA and the proposed strategy depend on the CDI, which is assumed fixed for a long time (for example 1000 frames) and corresponds to a real value like the SNR, the overhead is $\binom{6}{2} \times 9 \times 0.001 = 0.135$ bits. Clearly, the FLA overhead quickly becomes prohibitive for a high number of nodes and/or high mobility. Note that in the example above, the 18 bits correspond to the real and imaginary parts of the CSI quantification, while the 9 bits correspond to the real value of the CDI quantification.
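The overhead bookkeeping above can be reproduced in a few lines (a sketch; the bit widths and update fractions are the example values quoted in the text):

```python
from math import comb

def signaling_overhead(M, L, bits_per_report, update_fraction):
    """Overhead in bits per frame for reporting the indirect links:
    C(M+L, 2) x bits_per_report x update_fraction."""
    return comb(M + L, 2) * bits_per_report * update_fraction

# (3,3,1)-MAMRN example from the text
print(signaling_overhead(3, 3, bits_per_report=18, update_fraction=1))      # FLA: 270 bits
print(signaling_overhead(3, 3, bits_per_report=9,  update_fraction=0.001))  # SLA / partial CSI: 0.135 bits
```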
Framework
In figures 3.9 and 3.10, we see an illustration of the framework. The control exchange between the transmitting nodes (sources and relays) and the destination is presented within a frame transmission. First, the destination broadcasts Sounding Reference Signals (SRS) request to acquire the CSI of the direct links. Next, after receiving the partial CSI, the destination allocates and broadcasts the rates of the sources. The allocation process is based on the channel distribution of the indirect links. Then, each source transmits its message with its allocated rate while including Demodulation Reference Signals (DMRS) which are necessary for demodulating the signal coherently at the destination and which help update the partial CSI knowledge. Finally, there are two cases. The first case is that the destination decodes all the sources before T max and then broadcasts an ACK which triggers the flushing of the source buffers. The second case is when there is at least one source node that is not decoded after T max in which case the sources flush their buffers (based on timers). In both cases, we reach the initialization of a new frame. On the other hand, and every few hundreds of frames, we see in figure 3.10 the CDI update event, where the destination requests and receives CDI of the updated nodes before performing the rate allocation process, and occasionally the CDI update is event-driven and initiated by the source/relay (when the CDI changes).
Utility of FLA with Partial CSI and the proposed algorithm
The ASE for SLA/FLA can now be derived from $\eta_{\text{frame}}$ as:

$$\eta_{\text{SLA/FLA}}(\mathbf{H}, P_{\text{SLA/FLA}}) = \mathbb{E}\{\eta_{\text{frame}}\} = \mathbb{E}\left\{\frac{\sum_{i=1}^{M} R_i\,(1 - O_{i,T_{\max}})}{M + \alpha T_{\text{used}}}\right\}. \quad (3.12)$$
In SLA, the rates are allocated based on the CDI, which is fixed over hundreds of frames, and thus the rates are fixed within the expectation. On the other hand, in FLA, the rates are allocated based on the CSI, which is updated frequently, and thus the rates are random variables. We now present the updated utility metric and rate allocation scheme for the proposed FLA with the partial CSI assumption. Following the strategy description given before, the rate allocation process is based on the CSI of the direct links and on the channel distribution (CDI) of the indirect links. In other words, for each given CSI of the direct links, the destination will allocate the rates of the sources based on the CDI of the indirect links. Here too, the aim is to maximize the ASE per frame. The channel information of the direct links is conveyed over unicast forward coordination control channels (from sources and relays towards the destination) that are assumed to be error-free. Similarly, the CDI of the indirect links is forwarded to the destination once there is any change in it (every few hundred frames). The source nodes forward the CDI of the S-S and S-R links, and the relays forward the CDI of the R-R links.
Using the law of total expectation, i.e., E{X} = E{E{X|Y }}, we update the expression for the FLA utility metric:
$$\eta_{\text{FLA}}(\mathbf{H}, P_{\text{new}}) = \mathbb{E}\left\{\frac{\sum_{i=1}^{M} R_i\,(1 - O_{i,T_{\max}})}{M + \alpha T_{\text{used}}}\right\} = \mathbb{E}\left\{\mathbb{E}\left\{\frac{\sum_{i=1}^{M} R_i\,(1 - O_{i,T_{\max}})}{M + \alpha T_{\text{used}}}\ \middle|\ \mathbf{H}_{\text{dir}}\right\}\right\} = \mathbb{E}\{\rho_{\text{FLA}}\}. \quad (3.13)$$
Algorithm 4 Best-Response algorithm for FLA with Partial CSI.
1: t ← 0. ▷ Counter of iterations.
2: Set the candidate rates: R̂ = {R_1, . . . , R_n_MCS}.
3: Rate initialization: [R̂_1(0), . . . , R̂_M(0)] ← [R^GA_1, . . . , R^GA_M].
4: R̂_i(-1) ← 0 for all i ∈ {1, . . . , M} ▷ To force the loop to start
5: while |R̂_i(t) - R̂_i(t-1)| > 0, for some i ∈ {1, . . . , M} do
6:   t ← t + 1.
7:   for i ← 1 to M do ▷ for all sources, choose:
8:     R̂_i(t) ← argmax_{R_i ∈ R̂} ρ_FLA(R̂_1(t), . . . , R̂_{i-1}(t), R_i, R̂_{i+1}(t-1), . . . , R̂_M(t-1)) ▷ the rate which maximizes ρ_FLA.
9:   end for
10: end while

We propose a novel rate allocation strategy $P = P_{\text{new}}$ (coined FLA with partial CSI) which does not depend on $\mathbf{H}_{\text{ind}}$ but changes for each $\mathbf{H}_{\text{dir}}$ realization. Thus, the rate $R_i$ allocated per source does not change within the conditional expectation on $\mathbf{H}_{\text{dir}}$. It is clear that in order to maximize the ASE per frame, the destination should select the rates (i.e., $\hat{R}_i$, $i \in \{1, \ldots, M\}$) per frame (that is why it is a FLA) based on the knowledge of $\mathbf{H}_{\text{dir}}$ in a way that maximizes $\rho_{\text{FLA}}$. This is the main advantage of our technique compared to the state-of-the-art FLA: the destination does not need the CSI of the indirect links (i.e., does not need knowledge of $\mathbf{H}_{\text{ind}}$) but only their CDI. The spectral efficiency per frame based on the partial CSI knowledge is a multi-variable equation, a function of all source rates as well as of the node chosen at each time slot in the retransmission phase. We recall that we distinguish here $\hat{R}_i$, the rate of source i after the optimization, and $R_i$, one possible value of $\hat{R}_i$ taken from the set of possible rates $\hat{\mathcal{R}}$. Accordingly, our multi-variable optimization problem can be written as:
$$(\hat{R}_1, \ldots, \hat{R}_M) = \underset{\{R_1, \ldots, R_M\} \in \hat{\mathcal{R}}^M}{\operatorname{argmax}}\ \mathbb{E}\left\{\frac{\sum_{i=1}^{M} R_i\,(1 - O_{i,T_{\max}})}{M + \alpha T_{\text{used}}}\ \middle|\ \mathbf{H}_{\text{dir}}\right\} \quad (3.14)$$
where $\hat{\mathcal{R}} = \{R_1, \ldots, R_{n_{\text{MCS}}}\}$ is a predefined finite set of rates whose cardinality equals the number of available MCS ($n_{\text{MCS}}$). Note that we omit including a QoS constraint in the optimization problem. The reason behind this is that we want to compare FLA, SLA, and FLA with partial CSI (the proposal of this section). Since a zero rate is included in the set of possible rates, the FLA strategy with full CSI can avoid any outage. Thus, and to make the comparison with FLA possible, we omit including a QoS constraint. Nevertheless, a QoS constraint can easily be introduced by defining, for example, a minimum allowed rate and/or an outage probability limit in the case of SLA or FLA with partial CSI. One simplification of the problem is to consider that the CDI of the indirect links is a Dirac distribution around their average SNR (an AWGN approximation of the unknown links with the same SNR), assuming a noise variance of 1. Since the optimal solution (the exhaustive search approach) of the optimization problem in (3.14) has a high complexity and is inefficient in practice, we resort again to the BRD algorithm, where instead of solving the problem jointly, the solution is computed for each user iteratively. The detailed algorithm of the BRD approach is presented in Algo. 4.
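The conditional expectation $\rho_{\text{FLA}}$ can be estimated per frame by sampling only the unknown indirect links (or by replacing them with their average SNR under the Dirac approximation). The sketch below illustrates this; simulate_frame_given_direct and sample_indirect are assumed helpers that are not part of the original text.

```python
def rho_fla(rates, alpha, h_direct, sample_indirect, simulate_frame_given_direct,
            n_samples=100):
    """MC estimate of E{ sum_i R_i(1-O_i) / (M + alpha*T_used) | H_dir }.

    h_direct: known CSI of the direct links for the current frame.
    sample_indirect(): one realization of the indirect links drawn from their CDI
                       (a Dirac CDI simply returns the average SNR every time).
    simulate_frame_given_direct(rates, h_direct, h_indirect): returns (decoded, t_used).
    """
    M = len(rates)
    acc = 0.0
    for _ in range(n_samples):
        h_indirect = sample_indirect()
        decoded, t_used = simulate_frame_given_direct(rates, h_direct, h_indirect)
        acc += sum(r for r, ok in zip(rates, decoded) if ok) / (M + alpha * t_used)
    return acc / n_samples
```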
Numerical results
We present here the results of our MC simulations that validate the effectiveness of the proposed LA scheme. In particular, we compare the performance of four different LA strategies: FLA, SLA, FLA with partial CSI (our proposed strategy), and FLA with partial CSI Dirac (our approximation strategy described at the end of the previous subsection). To this end, we consider a (3,3,1)-MAMRN scenario, and we set T_max = 4 and α = 2. The allocated rates are chosen from a discrete MCS family of rates {0, 0.5, 1, 1.5, 2, 2.5, 3, 3.5} [bits per channel use]. We further assume that the channel inputs are independent and Gaussian distributed with zero mean and unit variance, while noting that other channel inputs might be considered without changing the conclusions of this work. Finally, we consider the following asymmetric link configuration: first, the average SNR of each link is set to γ; second, the average SNR of each link that includes source 2 is set to γ - 4dB and of each link that includes source 3 to γ - 7dB; lastly, the average SNR of the link between sources 2 and 3 is set to γ - 5dB. Thus we have purposefully set source 1 to be in the best propagation conditions, while source 3 is in the worst conditions.
The results are presented in figure 3.11, where it is noticed that for the whole interval of γ ∈ [-5, 20]dB, the proposed scheme is approaching the upper bound FLA strategy with insignificant loss. This approach outperforms the SLA approach with a significant improvement, up to 6dB gain for a high SNR regime. Moreover, we see that the Dirac approximation of the proposed algorithm performs in a similar manner with a slight reduction in the gain. That is, using the Dirac approximation of the CDI (i.e., the average SNR) rather than doing MC simulations, can gain up to 4dB compared to SLA strategy, with less than 2dB loss compared to the upper bound FLA approach. In conclusion, the performance of the proposed FLA with partial CSI approaches that of FLA while incurring significantly less control overhead, as it does not require frequent CSI updates from the indirect links.
Conclusion
In this chapter, we investigated different LA algorithms for the orthogonal MAMRN conditioned on the available channel information at the destination. Furthermore, we proposed a new degree of freedom by adapting the time slot duration of each source during the transmission phase. Both SLA and FLA were investigated, as well as MU encoding and the SU encoding sub-case. Finally, FLA with partial CSI was proposed as an intermediate algorithm that approaches the FLA strategy while incurring a low control overhead similar to that of SLA. MC simulations show the significant impact of user cooperation on the spectral efficiency (up to a 4dB shift) as well as the importance of exploring the degree of freedom of the time slot duration associated with each source during the first transmission phase (up to a 6dB shift). This gain increases with the size of the system (number of relaying nodes) and is limited by the maximum possible rate value. Furthermore, the numerical results validate the proposed BRD strategies (including a GA initial point determination), which tackle the complexity issue of the LA utility metric optimization. We see that the efficiency of the proposed algorithms holds in both cases, SLA and FLA, and for both encoding strategies, MU and SU encoding. We also see the practicality of the proposed solution, which is robust to the number of MC samples and requires a low number of BRD iterations.
Chapter 4
Centralized Scheduling and Relaying Nodes Selection Algorithms
Chapter summary
In this chapter, we target the problem of scheduling in the retransmission phase. In the first section, we tackle the degree of freedom seen in PR. In PR, we exploit all the possible relaying nodes available in the network. Rather than activating a single relaying node at each retransmission time slot, we propose activating multiple relaying nodes to send a better redundancy version of the message of a source node chosen to be helped. In the second section of this chapter, we further investigate the effect of the control exchange process in each of the scheduling strategies presented. Then, a novel selection strategy is proposed based on the estimation of the number of time slots needed to decode a given source message. The latter strategy uses PR but simultaneously reduces the control exchange process needed in the traditional selection strategies. The numerical results validate the gain of using PR compared to SR. In addition, they show the importance of reducing overhead by tackling the control exchange process performed at the retransmission phase.
Parallel retransmission

Framework
In the previous chapter, we investigated both MU and SU retransmission. Of particular interest, SU is a simplified orthogonal MAMRN protocol which is based on the existing LDPC and Turbo codes used in the 3GPP LTE and NR standards. The protocol allows a retransmission of IR per source, that is, transmitting bits of different parities on the basis of a single code with a very low rate. In this chapter, we build on this protocol and propose exploiting the diversity of activating several relays in a given retransmission to help a selected source node. We propose an improved node selection strategy which takes advantage of the multi-path diversity of the relaying nodes.
The idea is based on the fact that each relaying node has its own power budget, and accordingly, several relaying nodes can be activated at the same time.
In the previous chapter, for each retransmission, the destination chooses the unique active node which has the best connection to the destination and which can assist the destination. We say that a node can assist when its decoding set includes some source nodes which are not decoded at the destination yet. The scheduling decisions are based on the CSI of the direct links which is assumed to be available at the destination. We recall that the direct links are the S-D links, and the R-D links. The CSI of the indirect links is assumed not available at the destination due to the costly acquisition process needed and the overhead included in this process. We recall that the indirect links are the links of S-S, S-R, and R-R. Finally, we recall that we assume a slow fading scenario where the radio links between the nodes do not change within a frame transmission. Additionally, the channel realization is assumed independent from frame to frame, which simplifies the analysis and is sufficient to capture the performance of practical systems assuming ergodicity of the underlying random processes.
A Toy example
The proposed selection strategy is quite different than the one used in the prior art and in the previous chapter. Rather than selecting a single relaying node to help one source node at a given retransmission time slot, we propose allocating one source to be helped by multiple relaying nodes. In such a way, we make use of the multi-path diversity of the different relaying nodes and we exploit the available power budget at each relaying node. We present a toy example that describes how our proposal works (recheck the toy example presented in chapter 2, Fig. 2.3). Next, we present the control exchange process in the novel selection strategy, followed by the algorithm of how the destination will choose the source node that will be helped by multiple relaying nodes.
We present in Fig. 4.1 a toy example that shows how the selection strategy of our proposal works. We used the same parameters as in chapter 2 to make it easier to compare our proposal with the prior art. Accordingly, in Fig. 4.1a, we consider a (3, 2, 1)-MAMRN. At the considered time slot (any time slot in the retransmission phase), the decoding sets of all the nodes are presented. We see that the destination decoded the message of source s_1, the sources did not decode any other source message (only their own), and the relays r_1 and r_2 decoded respectively the sets of sources {s_1} and {s_1, s_2, s_3}. In [START_REF] Cerović | Efficient cooperative harq for multi-source multi-relay wireless networks[END_REF], the candidate relaying nodes to be selected are the nodes that can help at least one source which is not decoded by the destination. Here, on the contrary, the destination does not look for candidate relaying nodes, but for candidate source nodes. In other words, in our proposal, the destination first sets the candidate sources to be helped by multiple relaying nodes.
As the destination decoded source s 1 , the candidate source nodes (in this example) are the sources s 2 and s 3 . In Fig. 4.1b, we see that for each candidate source, a set of relaying nodes can be activated to help this source. For example, if source s 2 is selected, both nodes s 2 and r 2 can be activated. Similarly, if s 3 is selected, both nodes s 3 and r 2 can be activated. After fixing the set of candidate source nodes, as well as the set of relaying nodes that can be activated with each of the source nodes, the destination chooses the source node with the highest equivalent mutual information. In other words, the destination checks the equivalent channel that will help sources s 2 (equivalent channel of the PR of nodes s 2 and r 2 together) and s 3 (equivalent channel of the PR of nodes s 3 and r 2 together). The source which has a better equivalent channel will be selected. Then, all the relaying nodes that decoded this source message will be activated to send the same redundancy of the message of the selected source. In Fig. 4.1c, we see that the destination chooses to help source s 2 , and accordingly, the relaying nodes s 2 and r 2 are going to be activated. This process is repeated at the beginning of each retransmission time slot while using the updated decoding sets of the nodes. The control exchange process of the mentioned strategy, as well as the method of calculating the equivalent channel of multiple relaying nodes are presented next.
Control exchange process and algorithm
First, the destination sends an ACK/NACK bit, then the relaying nodes send their decoding sets. The ACK bit indicates that all the sources have been decoded correctly, and the NACK indicates the contrary. The selection is performed after that. The destination calculates for each source i ∈ {1, ..., M } the SNR i associated with the transmission of the redundancy version of the source i. This is calculated on the basis of the number of relaying nodes that were able to decode this source, as well as their channel with the destination (check the three cases described below). The channel from each relaying node j ∈ {1, ..., M + L} to the destination is denoted h j,d and the set of relaying nodes j which can help the source i is denoted Help i . Accordingly, the destination selects the source s t with the best equivalent channel (highest equivalent SNR), and then, all the relaying nodes which decoded the chosen source s t retransmit redundancies. We consider three cases for estimating the SNR i :
• Case 1: each relaying node j ∈ {1, ..., M + L} does not know the channel h j,d
$$\text{SNR}_i = P\,\Big|\sum_{j \in \text{Help}_i} h_{j,d}\Big|^2 / N_0, \quad (4.1)$$
where P is the transmission power of each node, N 0 is the noise spectral density, and h j,d is the channel whose power is normalized to 1.
• Case 2 "Equal Gain Combining (EGC)": each node j ∈ {1, ..., M +L} knows the phase Φ j of its channel toward the destination e -iΦ j = h * j,d /|h j,d | with i 2 = -1
SNR i = P j∈Help i |h j,d | 2 /N 0 . (4.2)
• Case 3: Assuming that the subset $\text{Help}_i = \text{Help1}_i \cup \text{Help2}_i$ breaks down into a subset $\text{Help1}_i$ of nodes knowing their phase with the destination (sent by the destination) and $\text{Help2}_i$ not knowing it, $\text{SNR}_i$ for $i \in \{1, \ldots, M\}$ is written in this case as:

$$\text{SNR}_i = P\,\Big|\sum_{j \in \text{Help1}_i} |h_{j,d}| + \sum_{j \in \text{Help2}_i} h_{j,d}\Big|^2 / N_0. \quad (4.3)$$
If node i is selected, the transmission of each node belonging to $\text{Help1}_i$ will be multiplied by $e^{-i\Phi_j}$ (coherent reception for the nodes belonging to $\text{Help1}_i$). In Fig. 4.2, we present the control exchange process of the prior art (in blue) and of the proposed (in bold red) selection strategies. For the prior art, it is similar to the one presented in chapter 2, with the only difference being that here we use SU retransmission. This means that at the last step, the selected relaying node $a_t$ is going to help one source node $b_t$ from its decoding set which is not yet decoded at the destination.
In our proposal, the destination returns the source index $s_t$ which has the best SNR. Following the receipt of the source index $s_t$ broadcast by the destination, the nodes having decoded $s_t$ simultaneously transmit the same version of the modulated message of source $s_t$, i.e., $m_{s_t}$ (Fig. 4.2). In the case where each node $j \in \text{Help}_{s_t}$ knows the phase $\Phi_j$ of its channel towards the destination, the modulated transmission of $m_{s_t}$ is multiplied by $e^{-i\Phi_j}$ (the conjugate of the channel divided by its norm) to obtain a coherent combination at the destination (case 2). The phase $\Phi_j$ is quantized in practice (e.g., 2 bits are sufficient), and the quantized phase relating to each node can be sent from the destination to the nodes during the initialization phase or just after the first transmission phase. Finally, Algo. 5 (Parallel retransmissions selection strategy) presents the pseudo-code of the proposed selection strategy using PR at a given retransmission time slot t. Note that in step 8, if we were in case 1 (no EGC), the calculation of the highest $\text{SNR}_i$ would need to pass through all subsets of $\text{Help}_i$ due to the possible destructive retransmissions of relaying nodes which are out of phase. In other words, in such a case, the optimal selection would choose a subset of relaying nodes to be active rather than the whole set $\text{Help}_i$. Accordingly, the destination should send the selected source node to be helped, as well as the selected relaying nodes to send redundancies.
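As an illustration of the selection rule (and of the SNR expressions above), the sketch below computes the equivalent SNR of every candidate source and picks the one with the best equivalent channel. The data structures are ours; the EGC case (case 2) and the non-coherent case (case 1) are shown.

```python
def equivalent_snr(h_to_dest, helpers, P=1.0, N0=1.0, egc=True):
    """Equivalent SNR when all nodes in `helpers` retransmit the same message.

    h_to_dest: dict node -> complex channel h_{j,d}.
    egc=True  -> case 2 (each helper pre-compensates its phase);
    egc=False -> case 1 (non-coherent, possibly destructive combination).
    """
    if egc:
        amp = sum(abs(h_to_dest[j]) for j in helpers)       # coherent sum of amplitudes
        return P * amp ** 2 / N0
    return P * abs(sum(h_to_dest[j] for j in helpers)) ** 2 / N0

def select_source_pr(non_decoded, decoding_sets, h_to_dest, egc=True):
    """Pick the non-decoded source with the best equivalent channel.

    decoding_sets: dict relaying node -> set of sources it has decoded."""
    best_src, best_snr = None, -1.0
    for i in non_decoded:
        helpers = [j for j, dset in decoding_sets.items() if i in dset]
        if not helpers:
            continue
        snr_i = equivalent_snr(h_to_dest, helpers, egc=egc)
        if snr_i > best_snr:
            best_src, best_snr = i, snr_i
    return best_src
```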
Numerical results
In this subsection, we validate the proposed selection strategy using MC simulations. We consider a (3,6,1)-MAMRN scenario, and we set α to 0.25 and T_max to 4. The channel inputs are assumed independent and Gaussian distributed with zero mean and unit variance. Note that other channel inputs might be considered without changing the conclusions of this work. We further assume that the rate of each source is allocated using the BRD algorithm presented in the previous chapter. We consider two link configuration scenarios: symmetric and asymmetric. In the symmetric link configuration (Fig. 4.3), all the links are considered the same (the average SNR of each link is set to γ). On the other hand, in the asymmetric link configuration (Fig. 4.4), we design a scenario where the direct links between the source nodes and the destination are bad. Such a scenario helps in showing the importance of the relaying nodes and the gain of the proposed retransmission strategy. Particularly, the links are set as follows: first, the average SNR of each link is set to γ; second, the average SNR of each direct link between the source nodes and the destination is set to γ - 100dB. In both scenarios, each source is given a rate using the BRD algorithm presented in the previous chapter from the set of possible rates {0.75, 1, 1.25, 1.5} bits per channel use, and thus, rates are optimized based on γ. Three different curves are seen in the two figures 4.3 and 4.4. The first curve corresponds to the proposed selection strategy with PR in the case of EGC. The second curve corresponds to the same strategy, assuming no available information concerning the phase shift at the relaying nodes (no EGC). Finally, the third curve corresponds to simple retransmissions (i.e., SR), as proposed in the prior art. In Fig. 4.3, we see that for the symmetric scenario, and for the considered SNR range (-5dB to 15dB), the proposed strategy outperforms the prior art in both cases, with EGC (∼ 1.5dB) or without EGC (∼ 1dB). In Fig. 4.4, we encounter a significantly higher gain in the asymmetric scenario over the same SNR range, where the proposed strategy outperforms the prior art in both cases: with EGC (up to 7dB) or without EGC (up to 4dB).

Finally, in Fig. 4.5, we investigate the effect of the size of the system on the gain of the proposed strategy. Specifically, we fix γ to 0, and we vary the number of relays available in the system from 2 relays to 10 relays. The other parameters are the same as those of Fig. 4.4 (i.e., the asymmetric link configuration, the number of sources M = 3, the set of possible rates, α = 0.25, and T_max = 3). We present in Fig. 4.5 the gain ratio of the proposal with and without EGC. In other words, we present the ratio: (the ASE of PR) / (the ASE of SR). The figure validates that as the number of relays increases, the gain of PR compared to SR increases. This can be justified by the fact that when extra relays are available, the gain of exploiting the multi-path diversity is more significant.
We summarize our findings below:
1. The gain of the proposed selection strategy is significant in scenarios where direct links are not available.
2. The gain is seen for different values of γ, even for high values (this can be explained by the fact that even if we are in a high SNR regime, the rate allocation will allocate higher rates corresponding to γ leading to better performance).
3. As the number of relays L increases, the gain of the proposed strategy increases.
Energy-Efficient (EE)
We mentioned that in case of no EGC, the optimal allocation might be to select a subset of relaying nodes to be activated (rather than all the relaying nodes). Following this notion of choosing a subset of relaying nodes rather than the whole set Help i to help a given source i, we present next the EE strategy which tries to avoid unnecessary activation of relaying nodes. The intuition of the proposed PR strategy is to exploit the multi-path diversity and the power budget available at the relaying nodes. Although this is optimal in terms of performance and spectral efficiency, it is not energy-efficient. In other words, we would be able to reach the same efficiency without activating the whole set of relaying nodes. Recalling that the outage depends on the mutual information between the relaying nodes and the destination, we propose an EE selection strategy, which allocates the source node to be helped as well as the subset of active relaying nodes as follows:
$$(\hat{s}_t, \hat{A}_t) \in \underset{(i, A_i) \in (\bar{S}_{d,t-1}) \times \text{Pow}(\text{Help}_i)}{\operatorname{argmax}} U_{\text{EE}}(\text{SNR}_{A_i}, |A_i|) \quad (4.4)$$
where:
• $i \in \bar{S}_{d,t-1}$ is one possible source node to be selected.
• A i ∈ Pow(Help i ) is one possible subset of relaying nodes selected to help source i.
Pow(Help i ) thus represents the power set of Help i including all the possible subsets of relaying nodes which can help source i.
• $U_{\text{EE}}(\text{SNR}_{A_i}, |A_i|)$ represents an EE utility metric. It depends on the equivalent SNR (the performance perspective) and on the number of activated relays (energy consumption).
In our work, we use $U_{\text{EE}} = \frac{I_{A_i}(\text{SNR}_{A_i})}{|A_i|^{\beta}}$ with β a control factor which can be optimized. As we will see in the numerical section, β controls the level of performance we require compared to the energy reduction we save. We omit presenting the algorithm of EE parallel retransmission; it is simply similar to Algo. 5, with the difference of using $U_{\text{EE}}$ while considering all the subsets of $\text{Help}_i$. Also, the control exchange using the EE strategy is similar to the one presented in Fig. 4.2, with the only difference in step 3, where the destination not only shares the selected source $\hat{s}_t$, but also the selected subset of relaying nodes $\hat{A}_t$.
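A hedged sketch of this EE selection over the power set of $\text{Help}_i$ is given below, using $U_{\text{EE}} = \log_2(1+\text{SNR}_{A_i})/|A_i|^\beta$ under the Gaussian-input mutual information assumption and the EGC combining of case 2; the data layout and function names are illustrative.

```python
from itertools import combinations
from math import log2

def select_source_and_subset_ee(non_decoded, decoding_sets, h_to_dest,
                                beta=0.1, P=1.0, N0=1.0):
    """Jointly pick the source to help and the subset of active relaying nodes
    maximizing U_EE = log2(1 + SNR_eq) / |A|^beta (EGC case)."""
    best_src, best_subset, best_val = None, (), -1.0
    for i in non_decoded:
        helpers = [j for j, dset in decoding_sets.items() if i in dset]
        for size in range(1, len(helpers) + 1):
            for subset in combinations(helpers, size):
                amp = sum(abs(h_to_dest[j]) for j in subset)   # coherent amplitude sum
                snr_eq = P * amp ** 2 / N0
                val = log2(1.0 + snr_eq) / (len(subset) ** beta)
                if val > best_val:
                    best_src, best_subset, best_val = i, subset, val
    return best_src, best_subset
```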
Here, we validate the proposed EE selection strategy using MC simulations. We consider a (3,9,1)-MAMRN scenario, and we set α to 0.25 and T_max to 3. We consider an asymmetric link configuration scenario. Specifically, we design a scenario where the direct links of the source nodes and of some relay nodes with the destination are bad. Such a scenario helps in showing the importance of the relaying nodes and the gain of the proposed retransmission scheme. Particularly, the links are set as follows: first, the average SNR of each link is set to γ; second, the average SNR of each direct link from the source nodes and from the last four relay nodes to the destination is set to γ - 100dB. The rate allocation of each source is given using the BRD algorithm presented in the previous chapter from the set of possible rates {0.75, 1, 1.25, 1.5} bits per channel use, and thus, rates are optimized based on γ.
Five different curves are seen in Fig. 4.6. The first curve corresponds to the PR strategy. The following three curves correspond to three EE strategies with different β values (0.1, 0.3, and 0.5). Finally, the fifth curve corresponds to the prior art which uses simple retransmissions (several prior art strategies could be presented as a lower bound benchmark, but we adopt the one used in [START_REF] Cerović | Efficient cooperative harq for multi-source multi-relay wireless networks[END_REF] as it was shown to be optimal for simple retransmissions). We see that the EE strategies perform intermediately between the optimal PR and SR. Also, we notice that the performance depends on the value of β.
To further investigate, we present in Fig. 4.7 the percentage of energy reduction corresponding to the three β values used in the EE strategy. This percentage is calculated as: (energy consumed using PR - energy consumed using EE) / (energy consumed using PR). We see that our intuition is correct: using EE strategies can lead to a high percentage of energy saving at a small cost in performance. Thus, EE strategies are seen as a good trade-off between optimality and energy consumption. In these two figures, we see that with β = 0.1 we can converge to the optimal solution while saving more than half of the energy consumption.
These numerical results validate our proposal by showing that:
1. The gain of the proposed PR selection strategy is significant in scenarios where direct links are not available.
2. The gain is seen for different values of γ, even for high values (this can be explained by the fact that the rate allocation will allocate higher rates corresponding to γ leading to better performance).
3. The EE strategy is a promising strategy which makes the trade-off between performance and energy consumption.
4. Choosing β plays a significant role in tuning the EE strategy and should be done wisely, i.e., with β = 0.1, we converged to the PR strategy while saving 60% of the power budget.
5. As the number of relays L and/or the maximum number of possible retransmissions T max increase, the gain of the two proposed strategies increases.
6. The previous findings hold in symmetric link configuration and hold in the case of no EGC (the corresponding curves of results 5,6 are omitted for brevity).
To sum up this section, we proposed a novel selection strategy for orthogonal MAMRN. Rather than selecting a single relaying node to send redundancies at a given retransmission time slot, the PR strategy allows several relaying nodes to send redundancies for a common source node selected to be helped. The proposed strategy outperforms the prior art (i.e., SR) by making use of the power budget available at each relaying node included in the system. The numerical results show that the gain is seen with and without EGC, whereas in the case of EGC, the system encounters a higher gain. Also, the gain is seen with symmetric and asymmetric scenarios, where in the latter, the gain is higher. Finally, we presented a modified version of the PR strategy which is EE. This strategy not only selects the source node to be helped, but also selects the set of relaying nodes to be activated aiming to avoid any unnecessary energy consumption.
Optimized control exchange process
In the prior art (e.g., [START_REF] Mohamad | Outage achievable rate analysis for the non orthogonal multiple access multiple relay channel[END_REF][START_REF] Mohamad | Outage analysis of various cooperative strategies for the multiple access multiple relay channel[END_REF][START_REF] Mohamad | Outage analysis of dynamic selective decode-and-forward in slow fading wireless relay networks[END_REF][START_REF] Mohamad | Dynamic selective decode and forward in wireless relay networks[END_REF][START_REF] Cerovic | Centralized scheduling strategies for cooperative harq retransmissions in multi-source multirelay wireless networks[END_REF][START_REF] Cerović | Efficient cooperative harq for multi-source multi-relay wireless networks[END_REF][START_REF] Mohamad | Cooperative incremental redundancy hybrid automatic repeat request strategies for multi-source multi-relay wireless networks[END_REF], as well as in the previous chapter), the authors first design a control exchange strategy to give the destination useful information about the state of the relaying nodes (and their decoding sets; a decoding set is the set of source nodes which a relaying node has decoded correctly at a given time instant). Then, they present some relaying node selection strategies. The drawback of the prior art is that the control exchange design is heavy (it leads to a heavy overhead): there, at each selection, a control exchange process is performed (even if it was not needed at all). In this section, we tackle the relaying node selection problem, aiming at maximizing the ASE while optimizing the control exchange design in the system. Our intuition is that, upon wisely using the available information at the scheduler, a lighter control exchange design can be used while maintaining good performance. More precisely, and based on the analytical expression of the outage events, we derive an upper bound on the number of retransmissions needed for each source to be decoded successfully at the destination. Then, we use this information to propose a selection strategy that can be used when no control exchange process is available between the destination and the relaying nodes. Now, in order to capture the effect of the overhead of the control exchange process seen in the different selection strategies, we define the effective spectral efficiency per frame as:
$$\eta^{\text{frame}}_{\text{eff}}(\mathbf{H}, P) = \frac{\text{nb bits successfully received}}{\text{nb channel uses}} = \frac{\sum_{i=1}^{M} K_i\,(1 - O_{i,T_{\text{used}}})}{MU + Q\,T_{\text{used}} + \Gamma/C} = \frac{\sum_{i=1}^{M} R_i\,(1 - O_{i,T_{\text{used}}})}{M + \alpha T_{\text{used}} + \Gamma/(C \cdot U)}, \quad (4.5)$$
where Γ/C represents the overhead of the control exchange process with Γ denoting the number of bits needed for control exchange and C denoting the capacity of the control exchange channel. The metric Γ depends on the selection strategy being used and is analyzed in the following subsection. In the prior art [START_REF] Cerović | Efficient cooperative harq for multi-source multi-relay wireless networks[END_REF][START_REF] Mohamad | Cooperative incremental redundancy hybrid automatic repeat request strategies for multi-source multi-relay wireless networks[END_REF], we see no consideration of the overhead included by the control exchange process (i.e., Γ is assumed negligible). This follows the assumption of infinite frame length (K, U , and Q are assumed large enough). For short packet lengths, however, and in realistic scenarios, the control channel overhead cannot be neglected.
Analytically, and following the PR strategy, the individual outage event of a source i ∈ S after a retransmission time slot t ∈ {0, ..., T max } (t = 0 corresponds to the end of the transmission phase) can be written as:
$$O^{\text{PR}}_{i,t}(\mathbf{H}, P) = \Big\{ R_i > I_{i,d} + \alpha \sum_{l=1}^{t} J_{i,d}(l)\,\mathbb{1}_{\{\hat{s}_l = i\}} \Big\}, \quad (4.6)$$
where
• $I_{i,d}$ represents the mutual information between the source i and the destination d. The mutual information is defined based on the channel inputs.
• s l represents the chosen node to be helped at the retransmission time slot l.
• J i,d (l) represents the mutual information between the equivalent channel (all the active relaying nodes) with the destination if the source node i is chosen to be helped at a retransmission time slot l.
• 1 { s l =i} is the indicator function which takes the value 1 when the destination chooses to help the source i at the retransmission time slot l (i.e., if s l = i) and zero otherwise.
Note that the difference between this equation and the one presented in chapter 2 (check equation (2.6)) is that here, the selection strategy is PR rather than SR. Also note that J i,d (l) is a function of time, as the equivalent channel might be different from one time slot to another (due to the update in the relaying nodes decoding sets). As a decoding set can not decrease in size (a relaying node might decode more sources at each transmission/retransmission time slot), J i,d (l) is an increasing function of time. As seen in the previous section, J i,d (l) depends on the channel state of all the active relaying nodes helping the source node i.
Novel selection strategy

Definitions
At a given retransmission time slot t, we recall the definitions of the decoding set of the destination and of a given relaying node $j \in \{1, \ldots, M+L\}$ as $S_{d,t-1}$ and $S_{j,t-1}$, respectively ($S_{d,0}$ and $S_{j,0}$ correspond to the decoding sets at the end of the transmission phase). In order to reduce the control exchange overhead, the relaying nodes only send their decoding sets when the destination asks for them. So, rather than running a control exchange process at each retransmission time slot, we assume that the destination may ask for a decoding set update before any retransmission time slot t. At a given retransmission time slot t > 1, we define X(t) as the number of retransmission time slots passed without asking for a decoding set update. Since we do not know the decoding sets of the relaying nodes in all the time slots, the prior art strategies are not applicable. Accordingly, we propose a new selection strategy, with an optimized control exchange process, that exploits the limited information available at the destination (e.g., rates, T_max, etc.). The intuition of our proposal is based on analyzing the individual outage events and proposing an estimation of the number of time slots needed to decode the non-decoded source messages. Using this estimation, we try to avoid unnecessary control exchange processes. Let ⌈q⌉ represent the ceiling function which takes the least integer greater than or equal to q, e.g., ⌈2.3⌉ = 3. Recalling the outage event equation (4.6), at a given retransmission time slot t > 0, we define $x_i(t)$ for a non-decoded source node $i \in \bar{S}_{d,t-1}$ as:

$$x_i(t) = \begin{cases} x^b_i(t) = \lceil y^b_i(t) \rceil & \text{before the decoding set update,} \\ x^a_i(t) = \lceil y^a_i(t) \rceil & \text{after the decoding set update,} \end{cases} \quad (4.7)$$

where

$$y_i(t) = \begin{cases} y^b_i(t) = \dfrac{y^a_i(t-X(t))\,\big(\alpha J_{i,d}(t-X(t))\big) - \alpha \sum_{l=t-X(t)}^{t-1} J_{i,d}(t-X(t))\,\mathbb{1}_{\{\hat{s}_l = i\}}}{\alpha J_{i,d}(t-X(t))} \\[2ex] y^a_i(t) = \dfrac{y^a_i(t-X(t))\,\big(\alpha J_{i,d}(t-X(t))\big) - \alpha \sum_{l=t-X(t)}^{t-1} J_{i,d}(t-X(t))\,\mathbb{1}_{\{\hat{s}_l = i\}}}{\alpha J_{i,d}(t)} \end{cases} \quad (4.8)$$

with

$$y_i(1) = \begin{cases} y^b_i(1) = \dfrac{R_i - I_{i,d}}{\alpha I_{i,d}} \\[1.5ex] y^a_i(1) = \dfrac{R_i - I_{i,d}}{\alpha J_{i,d}(1)} \end{cases} \quad (4.9)$$
Note that in the case when no control exchange is done at all, $x^a_i$ and $y^a_i$ are undefined (due to the lack of knowledge of $J_{i,d}(t)$, and thus $x_i = x^b_i(1)$). In the case when there are one or more control exchange requests, we assume that the initial request is done at the beginning of the retransmission phase (check the initialization in equation (4.9)).

Theorem 4.3.1. At a given retransmission time slot t > 0, for a given non-decoded source node $i \in \bar{S}_{d,t-1}$, if the destination chooses to help the source i for $x_i(t)$ time slots, the destination guarantees the correct decoding of source i. In other words, $x_i(t)$ is an upper bound on the number of time slots needed to guarantee decoding the message of source i.
Proof. Check appendix A.

Theorem 4.3.1 means that if the destination chooses to help source i for $x_i$ time slots, it guarantees that this source will be decoded correctly. Note that $x_i(t)$ is defined based on the available information at the destination, and accordingly, we define two forms of $x_i(t)$, one before the control exchange process and one after it (before and after updating the relaying nodes' decoding sets). Additionally, the value of $y^a_i(t)$ in the second line depends on $J_{i,d}(t)$, which means that the destination needs to know the decoding sets of the relaying nodes to get $x_i(t)$. In fact, the difference between $x^b_i(t)$ and $x^a_i(t)$ is that the latter is a better estimator (i.e., a tighter upper bound) of the number of time slots needed to decode source i. This is due to the fact that after running the control exchange process and updating the decoding sets of the relaying nodes, $J_{i,d}(t)$ is used to help estimate the correct number of needed time slots.
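As a small numerical illustration of the bound before any decoding-set update (first line of equation (4.9); the numbers below are made up), $x_i(1)$ is simply the ceiling of $(R_i - I_{i,d})/(\alpha I_{i,d})$:

```python
from math import ceil

def x_before_update(R_i, I_id, alpha):
    """Upper bound on the number of retransmission slots needed for source i,
    computed before any decoding-set update (equation (4.9), first line)."""
    if R_i <= I_id:          # already decodable from the first transmission
        return 0
    return ceil((R_i - I_id) / (alpha * I_id))

# Hypothetical example: R_i = 1.5 bpcu, I_{i,d} = 0.6 bpcu, alpha = 1:
# one slot adds 0.6 bpcu of accumulated information, so 2 slots guarantee decoding.
print(x_before_update(1.5, 0.6, 1.0))   # -> 2
```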
Selection strategy
Assume that at a given retransmission time slot t, there are T_av available time slots left in the retransmission phase. One advantage of knowing x_i(t) is that, in case T_av ≥ \sum_{i ∈ S̄_{d,t-1}} x_i(t), the selection strategy problem becomes easy: the destination simply helps the non-decoded source nodes successively, in an arbitrary order. On the contrary, if T_av < \sum_{i ∈ S̄_{d,t-1}} x_i(t), the problem becomes non-trivial. In this section, we propose the following. At the end of the transmission phase, the destination calculates x_i(1) using equation (4.7) (before the decoding set update). In case T_max ≥ \sum_{i ∈ S̄_{d,0}} x_i(1), no control exchange is needed: the destination simply allocates in each round any non-decoded source node to be helped. In case T_max < \sum_{i ∈ S̄_{d,0}} x_i(1), an optimized control exchange process is performed (presented below), and the destination asks for an update of the decoding sets of the relaying nodes. Then, the x_i(1) values are recalculated using equation (4.7) (after the decoding set update). During the following retransmission time slots, the relaying nodes do not perform any control exchange process. The destination then performs a novel selection strategy based on the available information (x_i(1), R_i, T_max, and its decoding set), presented below. At a given retransmission time slot t during which the destination chooses to send a decoding set update request, the previous two steps of calculating x_i are repeated using the available retransmission time slots calculated as T_av = T_max - t + 1. For such an event, we propose a selection strategy which takes into consideration the rates of the sources, and which can be written as:
Â ∈ argmax_{A ∈ Pow(S̄_{d,t-1})} \sum_{i ∈ A} R_i   such that   \sum_{i ∈ A} x_i(t) ≤ T_av      (4.10)
where:
• Pow(S̄_{d,t-1}) represents the power set of S̄_{d,t-1} (the set of all possible subsets of non-decoded sources).
• A represents one possible subset of the source nodes taken from all possible subsets: A ∈ Pow(S̄_{d,t-1}).
• Â represents the selected subset of source nodes after the maximization process.
The intuition of the above utility is quite simple: we select a subset of sources Â having the highest sum-rate, while guaranteeing that all the source nodes in the selected set will be decoded. After choosing the subset Â, the destination allocates all the source nodes i ∈ Â successively, source by source, until they are decoded (or until it sends a decoding set update request). For example, if we have three source nodes, S = {1, 2, 3}, and the selected subset of nodes is Â = {1, 3}, then the destination will select source 1 until it is decoded, then source 3 until it is decoded. The order between the source nodes included in Â does not matter, as we ensure that each source node i ∈ Â will be allocated enough rounds until it is decoded (following the constraint \sum_{i ∈ Â} x_i(t) ≤ T_av).
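As a concrete illustration of the selection rule (4.10), the short Python sketch below enumerates the power set of the non-decoded sources and keeps the feasible subset with the highest sum-rate (names and numbers are illustrative; an exhaustive enumeration is only practical for small networks):

from itertools import combinations

def select_subset(non_decoded, rates, x, T_av):
    # Maximize the sum-rate over all subsets whose total estimated cost fits in T_av, as in (4.10).
    best, best_rate = (), 0.0
    for size in range(1, len(non_decoded) + 1):
        for subset in combinations(non_decoded, size):
            if sum(x[i] for i in subset) <= T_av:
                sum_rate = sum(rates[i] for i in subset)
                if sum_rate > best_rate:
                    best, best_rate = subset, sum_rate
    return set(best)

rates = {1: 1.0, 2: 1.5, 3: 0.75}   # R_i in bits per channel use
x = {1: 2, 2: 3, 3: 1}              # estimated slots needed per source
print(select_subset([1, 2, 3], rates, x, T_av=4))   # -> {2, 3}, sum-rate 2.25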
Proposed control exchange
We distinguish here between our proposal and the different strategies used in the prior art (see Fig. 4.8). In [117], the destination chooses the relaying node which minimizes the common outage probability. In order to perform this selection, the destination first sends an ACK/NACK bit, and then the relaying nodes send their decoding sets. The ACK bit indicates that all the sources have been decoded correctly, and the NACK indicates the contrary. The selection is performed after that. In [33] (and in the previous chapter), the selection strategy is based on choosing the relaying node with the highest mutual information, so the destination only needs to know which nodes can help some non-decoded source nodes at the destination. Accordingly, the destination first shares its decoding set with the relaying nodes, and then the relaying nodes which are able to help some non-decoded source messages at the destination send a notifying bit. This update in the control exchange reduces the overhead of the strategy of [33] compared to that of [117]. Finally, in PR (proposed in the previous section), we go back to the control exchange of [117], as we again need to know the decoding sets of the relaying nodes to choose the source node which will be helped by all the relaying nodes which decoded it. The control exchange processes of the prior art are shown in the first part of Fig. 4.8 (in non-bold black). Note that in step 3, for reference [117], ŝ_t represents the selected relaying node to be activated, while in reference [6] (previous section), it represents the selected source node to be helped by multiple relaying nodes. m_{ŝ_t} represents the redundancy version shared by either a single activated relaying node (as in [117]) or by multiple relaying nodes (as in [6]).
To our interest, we want to reduce the control exchange overhead by reducing the number of requests made in the retransmission phase. As described before, a control exchange process runs only when the destination asks for one. Thus, in our proposal, and as we use PR, the control exchange process is the same as the ones of [6] and [117] when there is a control exchange request, and nothing when there is no request. In the first part (in black), we see the control exchange process when there is a decoding set update request (as in [6, 117]). As described above, the destination sends one ACK/NACK bit. Then, all the relaying nodes transmit their decoding sets. After that, the destination calculates x_i(t) = x_i^a(t) and performs the proposed strategy to get the subset Â. Finally, the destination selects a source node from the selected subset Â, and the relaying nodes which decoded this source send redundancies. In the second part (in bold orange), we see the optimized control exchange process in the retransmission time slots where no control exchange is performed on the relaying nodes' side: the destination simply allocates source nodes from the previously obtained set Â, and the relaying nodes which decoded the selected source node send redundancies. Following this scheme, Γ in eq. (4.5) can be written as Γ_{n req} = n(1 + M(M + L)) + T_used ⌈log_2 M⌉ if there were n control exchange requests. Concerning the prior art, and since there is a control exchange request at each retransmission time slot, we write, for [6] and [117], Γ_{[6],[117]} = (1 + M(M + L) + ⌈log_2 M⌉) T_used, and for [33], Γ_{[33]} = (M + (M + L) + ⌈log_2 (M + L)⌉) T_used.
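For intuition on how the overhead scales, the sketch below evaluates the three expressions above for an example configuration (the expressions are the ones given in this section, including the reconstructed term for [33]; the parameter values are illustrative):

import math

def gamma_proposed(n_req, M, L, T_used):
    # n_req full exchanges plus the per-slot broadcast of the selected source index.
    return n_req * (1 + M * (M + L)) + T_used * math.ceil(math.log2(M))

def gamma_ref_6_117(M, L, T_used):
    # [6]/[117]: ACK/NACK bit + all decoding sets + selected index, at every retransmission slot.
    return (1 + M * (M + L) + math.ceil(math.log2(M))) * T_used

def gamma_ref_33(M, L, T_used):
    # [33]: destination decoding set + one notification bit per relaying node + selected index.
    return (M + (M + L) + math.ceil(math.log2(M + L))) * T_used

M, L, T_used = 6, 1, 6
print(gamma_proposed(1, M, L, T_used), gamma_ref_6_117(M, L, T_used), gamma_ref_33(M, L, T_used))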
Proposed algorithm
x i (t) represents the sufficient number (i.e., upper bound) of retransmission selections so that i is decoded correctly. Nevertheless, the real needed number of selections might be less than x i (t). This is due to the fact that the destination does not know the updates of the decoding sets at the relaying nodes. In case the estimator x i is not the exact needed number, the proposed strategy is still valid. We simply have some extra retransmission time slots. Thus, we propose to repeat the selection of the subset A when such an event occurs (i.e., when extra retransmission time slots are available).
Finally, we answer below two possible questions:
1. When should we select or reselect the subset A?
2. When should the destination send a request for an update in the relaying nodes decoding sets?
To answer the first question, we clarify that we can reselect the subset Â each time a source node is decoded in fewer than x_i(t) selections: some extra time slots become available, and a new subset might give a better sum-rate. To answer the second question, we build on the proposed selection strategy described above (the selection of Â). In fact, our selection strategy is suitable for reducing the control exchange to a single exchange at the beginning of the retransmission phase. We propose that, at the beginning of the retransmission phase, if T_max < \sum_{i ∈ S̄_{d,0}} x_i^b(1), the destination performs a control exchange process with the relaying nodes and obtains x_i^a(1). After that, no request is needed. The idea is based on the analysis of the selection strategy. In our proposal, the destination allocates sources successively from the subset Â. This means that when we update x_i(t) (after a possible decoding set update request), we might have three types of source nodes i: some decoded source nodes (no estimator is needed for those), some non-decoded source nodes which were not selected in the previous rounds (no change in their estimators), and at most one non-decoded source node which was selected but not decoded yet. Thus, at most one x_i(t) might be updated. Nevertheless, updating this x_i(t) is not interesting: by definition, x_i(t) is a decreasing function, and the update would only decrease it further. The intuition is that, since source i was previously selected with a larger x_i(t), it will be selected again with this decreased x_i(t). Thus, the changes in the selection strategy would only concern Â \ {i}.
In the case where the chosen subset Â is empty (i.e., no subset is guaranteed to be decoded successfully), we can either stop the transmission to avoid wasting channel resources, or select the source node with the smallest x_i. The latter option (which we use in our numerical analysis next) is an optimistic method: the choice is based on the possibility that the real number of needed retransmissions is smaller than x_i. In addition, this option aligns with the method used throughout the thesis, where the frame transmission terminates when reaching T_max (or when all the source messages are decoded). This also explains the utility metric used when choosing the subset Â: we focused on the sum-rate rather than the spectral efficiency. In other words, since we know that the sources we are selecting are going to be decoded (no outage in the numerator of the spectral efficiency), and since the frame will not terminate before T_max (the denominator of the spectral efficiency is independent of Â), choosing the sum-rate as a utility metric leads to the optimal spectral efficiency. Algo. 6 presents the steps of the proposed selection strategy with the optimized control exchange process.
Numerical results
In this subsection, we validate the proposed selection strategy using MC simulations. We consider an orthogonal (6,6,1)-MAMRN scenario, with T_max = 6 and α = 0.25. The channel inputs are assumed independent and Gaussian distributed with zero mean and unit variance, with I_{a,b} = log_2(1 + |h_{a,b}|^2) being the mutual information between the transmitting node a and the receiving node b. As the relaying nodes use PR with EGC [6], the mutual information of the equivalent channel with the destination is written as
J_{i,d} = log_2(1 + \sum_{a ∈ Help_i} |h_{a,d}|^2),
where Help_i represents the set of all the relaying nodes which help source i. We consider a symmetric rate and link configuration scenario, i.e., all the links are considered the same (the average SNR of each link is set to γ), and all the rates are fixed to 1 [bits per channel use]. We present and compare five strategies: our proposal with no control exchange, our proposal with a single control exchange (as in Algo. 5), and the strategies of references [6], [33], and [117]. Note that the strategies used in [Yu et al.] and [Zahedi et al.] are the same as the ones used (and presented) by [33] and [6], respectively. In Fig. 4.9 and Fig. 4.10, we present the ASE as a function of γ for two benchmark scenarios: Γ = 0 (the overhead is negligible, as in the prior art) and Γ ≠ 0 (the overhead is considered). For the latter case, we set U = 512 channel uses in the transmission phase and C = 0.1 [bits per channel use] as the capacity of the control exchange channel.
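As a minimal illustration of the EGC computation used in these simulations, the sketch below draws Rayleigh channel gains for the nodes helping source i and evaluates the equivalent mutual information (variable names are illustrative):

import numpy as np

rng = np.random.default_rng(0)

def equiv_mutual_info(n_helpers, gamma_db):
    # J_{i,d} = log2(1 + sum over Help_i of |h_{a,d}|^2), Rayleigh fading with average SNR gamma.
    snr = 10 ** (gamma_db / 10)
    h = (rng.standard_normal(n_helpers) + 1j * rng.standard_normal(n_helpers)) / np.sqrt(2)
    return np.log2(1 + snr * np.sum(np.abs(h) ** 2))

print(equiv_mutual_info(n_helpers=3, gamma_db=0.0))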
Algorithm 6 The proposed selection strategy with optimized control exchange process.
1: t ← 0, T_av ← T_max, Â ← ∅  ▷ Initialization
2: Calculate x_i: x_i ← x_i^b(1) for all i ∈ S̄_{d,t}
3: if (T_av < \sum_{i ∈ S̄_{d,t}} x_i) then  ▷ if there are not enough time slots
4:   Relaying nodes send their updated decoding sets
5:   x_i ← x_i^a(1) for all i ∈ S̄_{d,t} after the decoding set update
6: end if
7: Compute Â using equation (4.10)
8: while (t < T_max and Â ≠ ∅) do
9:   Select source i ← argmin_{i ∈ Â} x_i
10:   while (i ∉ S_{d,t}) do  ▷ destination tries to decode i until it is decoded
11:     Destination requests that the relaying nodes help source i
12:     t ← t + 1, T_av ← T_av - 1, x_i ← x_i - 1  ▷ update counters
13:     if (i ∈ S_{d,t}) then  ▷ if i is decoded at the end of round t
14:       Â ← Â \ {i}  ▷ remove i from Â
15:       if (x_i > 0 and Â ≠ S̄_{d,t}) then  ▷ if a new subset may give a better sum-rate
16:         Recompute Â using equation (4.10) with the updated T_av
17:       end if
18:     end if
19:   end while
20: end while

For the case where the overhead is not considered (i.e., Fig. 4.9), we observe that our proposals (with 1 or 0 requests) converge to the PR strategy proposed in [6] (as well as in the first section of this chapter) while outperforming the SR strategies of [33, 117]. The strategy used in [33] (and in the previous chapter of this manuscript) outperforms the strategy used in [117], which confirms the results deduced in [33]. Now, when the overhead is considered (i.e., Fig. 4.10), the proposed strategies outperform the prior art strategies, and the proposal with zero requests outperforms the proposal with one request. Additionally, we see again that the strategy of [33] outperforms that of [117]. Interestingly, we notice that when the overhead is considered, PR is no longer optimal: as seen in Fig. 4.10, the strategy of [6] gives an intermediate result between the SR strategies used in [33] and [117].
Following these observations, we deduce that the proposed strategy is optimal in both scenarios, with and without overhead consideration, which highlights the importance of our proposal. Note that similar results are observed for asymmetric rate and link configurations but are not presented for brevity.
Finally, in our analysis, it is seen that when the overhead is considered, as the size of the network (the number of relaying nodes) increases, the gain of the proposed strategies increases (due to the growing effect of the overhead of the control exchange process). Thus, we present in Fig. 4.11 the ratio of the ASE of the upper bound (the proposal with 0 requests) to that of the different strategies. The x-axis represents the number of sources and relays, and γ is set to 0 dB. Thus, we are in an (x, x, 1)-MAMRN, where x represents the different points of the x-axis, from (2,2,1)-MAMRN to (10,10,1)-MAMRN. The other parameters are kept the same as in the scenario of Fig. 4.10. In Fig. 4.11, it is seen that as the system size increases, the gain of the proposal increases. We observe that the gain compared to [6] and [117] becomes significant faster than the gain compared to [33]. This again confirms the importance of the reduced control exchange process of [33] compared to that of [6] and [117]. It further validates our proposals, as we see that the difference between zero requests and one request remains small even for the (10,10,1)-MAMRN.
To sum up, we presented in this section a TDM-based orthogonal MAMRN. Using a two-phase system, we tackled the scheduling problem with a centralized strategy. Using estimation of the number of retransmissions needed for every source to be correctly decoded, we proposed a low-complexity low-overhead selection strategy which is applicable without a heavy control exchange process. The proposed algorithm outperforms the strategies of the prior art as it reduces the overhead of the control exchange process.
Conclusion
In this chapter, a novel selection strategy was proposed. We first presented PR, followed by the control exchange process used with it. The calculation of the equivalent SNR was then derived for the different cases, and an EE method was further proposed to reduce the power consumption by avoiding the activation of the whole set of relaying nodes.

In the second part of this chapter, the overhead problem was tackled. Using estimation, we reduced the number of control exchanges by proposing a novel selection strategy that can be applied without the need for a control exchange. Numerical results show the gain of using PR as compared to SR. In addition, they validate the significant effect of the control exchange on the performance.
Chapter 5
Joint Rate and Relaying Nodes Allocation
Chapter summary
In chapter 3, we tackled the rate allocation problem, and the solution presented consists in a sequential BRD allocation. The rate allocation strategy assumes that there is a certain relaying node scheduling in the retransmission phase, and thus, the rate allocation depends on the scheduling process used in the retransmission phase. In chapter 4, on the other hand, we tackled the scheduling problem in the retransmission phase, and the solution presented chooses to help a set of sources which can be guaranteed to be decoded before the end of the frame (parallel retransmission). As the selection depends on the rates, we see that the selection strategy depends on the rate allocation problem. We notice that when solving any of these two problems (rate allocation or the relaying node scheduling), the second problem is considered fixed and a given solution is adopted. In other words, when solving problem 1, problem 2 is not considered, and similarly, when solving problem 2, problem 1 is not considered.
In this chapter, we propose an optimal joint rate and relaying nodes allocation strategy. The proposal determines jointly the rates of the sources and the sources that will be helped in the retransmission phase. In the first section, we present the different steps leading to the optimal solution. First, we present the possible allocations in the retransmission phase. Then, we give the optimal rate allocation for a given scheduling in the retransmission phase. Finally, we present two joint allocations: optimal joint allocation and sequential joint allocation. In the second section, we present the MC simulations that validate our proposal. It is seen that using the joint allocation can lead to better performance compared to the non-joint allocations previously seen (in the prior art and in the previous chapters).
Optimal rate and scheduling allocation
To our interest, we aim in this chapter to solve the two problems jointly. Rather than assuming a given relaying node strategy when doing the rate allocation (as in chapter 3), and rather than doing the scheduling in the retransmission phase assuming a preallocated rates (as in chapter 4), we propose an optimal solution which performs rate allocation and selection scheduling jointly. The idea tackles the sub-optimality of solving the two problems sequentially, and aims to reach an optimal joint allocation that leads to the highest spectral efficiency. Note that the outage event definition depends on both the rates of the sources and on the activated relaying nodes in the retransmission phase and the selected sources for help. Thus, in order to optimize the spectral efficiency which depends on the outage of the sources, we need to optimize the rate and the scheduling process jointly.
Furthermore, another motivation for the joint allocation is a limitation of the BRD algorithm presented earlier. This limitation comes from the fact that the allocation follows a finite discrete set of rates R = {R_1, . . . , R_{n_MCS}}. When following a discrete set of possible rates, the performance optimization is limited to the available choices, and the best performance is limited by the highest rate in the discrete set. One way to reduce this limitation is to increase the size of the set of possible rates R. Nevertheless, the complexity of the BRD and its convergence speed depend on the size of the network and the size of the discrete set (on M and |R|). So, we encounter a typical trade-off between practicality and performance: when we try to improve the performance of the BRD algorithm, we face a problem of practicality. In this chapter, we tackle these issues by proposing a joint rate and scheduling algorithm where there is no need for an exhaustive search over the discrete set of possible rates; instead, the rates are chosen following the channel realization and the optimal scheduling in the retransmission phase. In other words, in our proposal, increasing the size of the set R does not increase (linearly) the complexity of the proposed allocation.
In this chapter, we propose a FLA joint solution. As the channel state might be different from one frame to another, the best allocation would be done dynamically per frame following any possible update in the channel state. Such allocation (i.e., FLA), assumes the presence of the full CSI of all the links of the network at the destination side (the central node). Such an assumption limits our contributions to the scenarios with slow-changing radio conditions (e.g., low mobility cases). The generalization to the case where the CSI is not known by the destination is left for future work and is not included in this manuscript. Now, in order to solve the two problems jointly, we recall the assumptions followed in this chapter:
1. The destination knows the full CSI.
2. The rates and the relaying nodes scheduling are performed per frame (FLA) and jointly.
3. The SU encoding method with PR is used in the retransmission phase (at each time slot in the retransmission phase, a selected source node is helped by multiple relaying nodes).
4. The retransmission phase is limited to T max time slots and the frame terminates if it exceeds T max .
The possible source allocation in the retransmission phase
The proposed joint allocation follows the above assumptions. To start, and following assumptions 3 and 4, we see that there is a finite number of possible selections in the retransmission phase. First, the destination needs to determine how many retransmission time slots it will use, i.e., it needs to select T_used ∈ {0, . . . , T_max}. Then, it needs to select which source node is to be helped in each time slot within the T_used time slots. Note that the order is not important; what matters is the number of selections of each source node. To make this clear, here is an example: assume T_max = 2 and that there are two sources in the network; then, the possible selections are:
• T used = 0.
• T used = 1, and then there are two possibilities: choose source 1 or source 2.
• T used = 2, and then there are three possibilities: choose source 1 twice or choose source 2 twice, or choose source 1 one time and source 2 one time.
Note that in case the destination chooses to help source 1 one time and source 2 one time, it is not important which one goes first and which one goes second. In other words, if we select to help source 1 then source 2, it will give the same performance if we select to help source 2 then source 1. The reason we are mentioning this comment is that it would decrease the possibilities of possible selections, and thus, reduces the complexity of the strategy we would propose next. Since rates are not allocated, going exhaustively over all selections in the retransmission phase would not help to determine the scheduling that gives the highest spectral efficiency. Thus, we propose next, for a given selection (assuming it is already selected), how to allocate the source rates optimally. The idea is to give for any possible scheduling in the second phase, the highest possible rate allocation, and then we determine which selection would give the highest spectral efficiency.
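The counting in this example can be checked with a short enumeration over multisets (order ignored), which is also the reduced search space used later in equation (5.5); the helper below is purely illustrative:

from itertools import combinations_with_replacement

def possible_selections(M, T_max):
    # All unordered source allocations using 0, 1, ..., T_max retransmission time slots.
    selections = []
    for T_used in range(T_max + 1):
        selections += list(combinations_with_replacement(range(1, M + 1), T_used))
    return selections

# M = 2 sources, T_max = 2: (), (1,), (2,), (1, 1), (1, 2), (2, 2) -> 6 possibilities in total.
print(possible_selections(M=2, T_max=2))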
The optimal rate allocation for a given allocation in the retransmission phase
Assume the destination chooses to use T used ∈ {0, . . . , T max } time slots in the retransmission phase, and to help a vector of sources A of size T used where each source [ A] i in the vector A is helped at the i th retransmission time slot. We define the vector N of size M representing the number of times each source will be helped in the T used time slots.
Obviously, the vector N can be deduced from the vector A. Going back to the previous example, and assuming that T used = T max = 2, then, there are three possibilities: choose source 1 twice or choose source 2 twice, or choose source 1 one time and source 2 one time. The three possibilities can be written as:
• If source 1 is chosen twice, then A = [1, 1]^T and N = [2, 0]^T.
• If source 2 is chosen twice, then A = [2, 2]^T and N = [0, 2]^T.
• If sources 1 and 2 are each chosen once, then A = [1, 2]^T and N = [1, 1]^T.
The last case could also be written as A = [2, 1]^T and N = [1, 1]^T with no change in the performance.
Here, in this subsection, we aim to determine the optimal rate allocation for any selection of A. We say, for a selected vector A, the outage event of a given source i is written as:
O^{PR}_{i,T_used}(A) = { R_i > I_{i,d} + α \sum_{l=1}^{[N]_i} J_{i,d}(l) },      (5.1)
where J_{i,d}(l) is defined as the mutual information between the message of source i and the destination through the equivalent channel (all the active relaying nodes) after the l-th selection of source i, with l ∈ {1, . . . , [N]_i}. Note that here the index l refers to the number of retransmissions of a given source, and not to the retransmission time slot l as was the case for J_{i,d}(t) in the outage event of the previous chapter (we also omit the dependency on the channel gains H for brevity). So, in order to calculate J_{i,d}(l) for l ∈ {1, . . . , [N]_i}, it is first needed to compute the set of relaying nodes that have decoded source i at the end of the l - 1 retransmissions. To determine whether a relaying node j can decode source i at the end of the l-th retransmission, a similar outage event computation is performed taking node j as the destination, i.e., all the relaying nodes having decoded source i at the end of retransmission l - 1 transmit the message of source i, which defines an equivalent channel towards node j with mutual information J_{i,j}(l). Node j cannot decode source i at the end of the [N]_i-th retransmission if and only if {R_i > I_{i,j} + α \sum_{l=1}^{[N]_i} J_{i,j}(l)}. We define J*_{i,d} as the maximum possible equivalent mutual information, reached when all the relaying nodes have decoded a given source message. J*_{i,d} is needed next to limit the possible choices of a rate allocation. Now, an optimal rate allocation would be the highest rate allocation which can be decoded before the end of the frame transmission. Specifically, and following the outage event, the optimal rate allocation for a given selection A would be the highest rate that guarantees that the outage event is not declared, or in other words,
R_i(A) = argmax_{R_i ∈ ℝ} R_i   such that:   O_{i,T_used}(A) = 0.      (5.2)
In order to analyze the above "argmax", we note that:
• J i,d (l) depends on R i .
The decoding set of a given relaying node depends on the channel state and on the rate allocated. Accordingly, the equivalent mutual information J i,d (l) depends on the selected rate R i .
• Although the possible rate belongs to the real numbers set (R i ∈ IR), the selection strategy is not that "exhaustive" as it looks (check more details below).
Following the above notes, the optimal rate value for a given source i can be bounded using the decoding sets of the relaying nodes. More specifically, the rate value of a given source is limited between I_{i,d} + α[N]_i I_{i,d} as a lower bound, if no relaying node is activated to help this source (for example if the links with this source are very bad and no relaying node decoded the message of this source after the transmission and the different retransmissions), and I_{i,d} + α[N]_i J*_{i,d} as an upper bound, if all relaying nodes are activated to help this source (for example if the links with this source are very good and all relaying nodes decoded the message of this source after the transmission phase). Thus, the "argmax" is rewritten as:

R_i(A) = argmax_{R_i ∈ [I_{i,d} + α[N]_i I_{i,d}, I_{i,d} + α[N]_i J*_{i,d}]} R_i   such that:   O_{i,T_used}(A) = 0.      (5.3)

The rate value window is reduced to [I_{i,d} + α[N]_i I_{i,d}, I_{i,d} + α[N]_i J*_{i,d}]; in other words, the size of the window is only α[N]_i (J*_{i,d} - I_{i,d}).
As the window size is limited now, different approaches can be used to reach the optimal rate.
One practical proposal to solve the above "argmax" is to use binary search algorithm. Since we aim to choose the highest source rate which can be decoded, and since we know the structure of the outage event, the search would be very simple, and the binary search algorithm becomes intuitive. When we say "structure" of the outage event, we mean the fact that when we encounter an outage with a given rate R i , we will encounter an outage with all rates R j ≥ R i . Similarly, when we encounter no outage with a given rate R i , we will encounter no outage with all rates R j ≤ R i . Such structure makes the search easy and gives the intuition to use the binary search algorithm.
The algorithm is quite simple: we choose the intermediate value between the lower and the upper bound. If there is no outage, the lower bound is updated to the intermediate value just checked. If there is an outage, the upper bound is updated to the intermediate value. Check Algo. 7 for the steps of the binary search algorithm. The only issue we should mention is that the stopping condition for this search (since it belongs to real values which are infinite), is simply when the difference between the lower and the upper bounds is smaller than a given constant ϵ. In other words, when the search window is small enough (following a system parameter ϵ), the algorithm terminates.
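A minimal Python sketch of this binary search is given below; the outage test is left as an abstract callback standing in for the evaluation of the outage event of the candidate rate (in the thesis, equation (5.1) for the considered selection A):

def optimal_rate(lower, upper, in_outage, eps=1e-3):
    # Binary search for the highest rate with no outage. `in_outage(r)` is assumed monotone:
    # if rate r is in outage, every rate r' >= r is in outage as well.
    left, right = lower, upper
    while right - left > eps:
        mid = 0.5 * (left + right)
        if in_outage(mid):
            right = mid      # outage: tighten the upper bound
        else:
            left = mid       # no outage: the rate can still be increased
    return left              # returning the lower bound guarantees correct decoding

# Toy example: outage occurs whenever the rate exceeds 2.37 bits per channel use.
print(round(optimal_rate(0.5, 4.0, lambda r: r > 2.37), 3))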
The optimal joint allocation of source rates and relaying nodes scheduling
To this end, we have proposed, for a given selection strategy in the retransmission phase, the optimal rate allocation. Recalling again that the possible selections are finite, the optimal joint allocation consists in checking, for all possible selections in the retransmission phase, the corresponding optimal rate allocation; the optimal joint allocation is then the selection which leads to the highest possible spectral efficiency. This strategy can be written as:

Â = argmax_{A ∈ {1, . . . , M}^{T_used} : T_used ∈ {0, . . . , T_max}} η_frame(R(A), A),      (5.4)

where R(A) is a vector of size M of allocated rates for the M sources, with the elements R_i(A) computed as seen in equation (5.3).
Algorithm 7 Binary search to get the optimal rate for a given selection A and ϵ.
1: Fix ϵ, Left ← I_{i,d} + α [N]_i I_{i,d}, Right ← I_{i,d} + α [N]_i J*_{i,d}  ▷ Initialize the boundaries
2: while (Right - Left > ϵ) do
3:   Mid ← (Left + Right)/2
4:   if (O_{i,T_used}(A) = 0 for R_i = Mid) then  ▷ no outage with the candidate rate
5:     Left ← Mid
6:   else
7:     Right ← Mid
8:   end if
9: end while
10: R_i ← Left  ▷ Select R_i as the lower bound to ensure it is correctly decoded

Algorithm 8 Binary search to get the optimal rate for a given R and A.
1: ϵ ← min_{i ∈ {1, . . . , n_MCS - 1}} (R_{i+1} - R_i), Left ← min(R_{n_MCS}, I_{i,d} + α [N]_i I_{i,d}), Right ← min(R_{n_MCS}, I_{i,d} + α [N]_i J*_{i,d})  ▷ Initialize the boundaries
2: while (Right - Left > ϵ) do
3:   Mid ← (Left + Right)/2
4:   if (O_{i,T_used}(A) = 0 for R_i = Mid) then  ▷ no outage with the candidate rate
5:     Left ← Mid
6:   else
7:     Right ← Mid
8:   end if
9: end while
10: R_i ← argmin_{r ∈ R such that r ≤ Left} (Left - r)  ▷ Select R_i as the closest rate to Left in the set R
In practice (e.g., [Combes et al., Deek et al.]), there is always an MCS family where the possible rate values are predefined. In other words, although the optimal rates in our proposal do not depend on a predefined set of rates, the proposal can still be used in realistic scenarios where such a predefined set is adopted. But here, there is no need to search through all the values to reach the optimal rate value: the binary search algorithm can simply be adapted to follow the available set of possible rates. Thus, we conclude that our proposal can be used in realistic scenarios where a predefined set of rates is imposed, while avoiding the complexity of searching through the whole set as done in the prior art. The modifications needed in the binary search algorithm are presented in Algo. 8 (mainly step 1, where the bounds are initialized, and step 10, where the rate is selected). Finally, the complete algorithm for the proposed joint scheduling and rate allocation strategy is presented in Algo. 9.

Algorithm 9 The proposed joint scheduling and rate allocation strategy.
1: MAX ← 0, Â ← ∅  ▷ Initialization
2: for all A such that A ∈ S^{T_used}, T_used ∈ {0, . . . , T_max} do  ▷ For any possible selection
3:   Compute R(A) and η_frame(R(A), A)
4:   if (η_frame(R(A), A) > MAX) then
5:     Â ← A  ▷ Update the chosen set
6:     MAX ← η_frame(R(Â), Â)  ▷ Update MAX
7:   end if
8: end for
9: Compute R(Â): (R_i(Â) for all i ∈ S)  ▷ Choose the highest rate for the selected Â
The proposed algorithm faces a complexity issue due to the exponential number of possible allocations A ∈ S^{T_used}. For a given T_used, the number of possible vectors A is M^{T_used}. Recalling again that the order of the sources does not matter, and that we only care about the number of allocations of each source, the number of possible vectors A is reduced to:
C_{T_used}^{M + T_used - 1} = \frac{(M + T_used - 1)!}{T_used! (M - 1)!}.      (5.5)
Although such a reduction is quite interesting, we might still face a practicality issue when using the proposed algorithm. To solve this issue, a sequential allocation strategy can be used. Specifically, when allocating the vector A, we allocate the sources [A]_t sequentially, leading to a practical allocation with no need for an exponential search over the vectors A. Note that, in such a case, when calculating the optimal rates corresponding to the set A, only one rate is updated, namely the rate of source [A]_t: R_{i=[A]_t}. The complete algorithm for the sequential joint scheduling and rate allocation strategy is presented in Algo. 10.
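Since the sequential variant builds the vector A one slot at a time, a rough Python sketch of this idea is given below; the helper eval_eta is hypothetical and stands for the evaluation of the spectral efficiency with the rates re-optimized (binary search of Algo. 8) for the candidate allocation, and the stopping rule is an assumption of this sketch rather than part of the thesis algorithm:

def sequential_joint_allocation(sources, T_max, eval_eta):
    # Greedy construction of A: at each retransmission slot, append the source whose
    # selection yields the best spectral efficiency once its rate is re-optimized.
    A = []
    best_eta = eval_eta(A)              # efficiency with no retransmission at all
    for _ in range(T_max):
        eta, s = max((eval_eta(A + [s]), s) for s in sources)
        if eta <= best_eta:             # assumed stopping rule: no source improves the efficiency
            break
        A.append(s)
        best_eta = eta
    return A, best_eta

# Toy stand-in for eval_eta with diminishing returns per extra slot given to a source.
toy = lambda A: sum(1.0 - 0.5 ** A.count(s) for s in (1, 2)) / (1 + 0.25 * len(A))
print(sequential_joint_allocation([1, 2], T_max=4, eval_eta=toy))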
The control exchange process in the proposed joint strategy
Finally, the control exchange process between the relaying nodes and the destination is presented in Fig. 5.1. First, the relaying nodes share their CSI with the destination. Using the full CSI, the destination determines the optimal rates and the optimal sources selected for help (Â, R(Â)). It then broadcasts the allocated rates to the relaying nodes. Then, each source transmits successively, each in its own time slot of the transmission phase, following its allocated rate. After that, the destination broadcasts the selected vector of source nodes to be helped in the retransmission phase. Finally, at each time slot t ∈ {1, . . . , T_used} of the retransmission phase, all the relaying nodes which decoded the selected source [Â]_t send redundancies. Note that [Â]_t is the t-th element of the vector Â. Following our proposal in this chapter, the transmission scheme of a frame can be presented as seen in Fig. 5.2. We see in this figure (as compared to Fig. 2.2) that both the rate allocation and the scheduling are done in the initialization phase. In other words, the selected vector of sources to be helped Â and the set of optimal rates R(Â) are both allocated before the transmission of the frame. Also, we see that in the retransmission phase there is no control exchange process, and the allocated sources [Â]_t are the elements of the allocated vector Â.
The reason behind using a full CSI acquisition
The full CSI acquisition means that the destination knows, at each frame, the state of the different direct and indirect links of the network. For the direct links (S-D and R-D links), the CSI acquisition is quite simple: the relays and the sources send to the destination some pilot symbols (which can be included within their messages), and the destination performs pilot-based channel estimation to learn the state of each of the direct links. On the other hand, for the indirect links (S-S, R-R, and S-R links), it is heavier. Since these links are not direct to the destination, the relaying nodes themselves should send the CSI to the destination. In other words, the relaying nodes first receive some pilot symbols on their respective direct links to learn the state of these links, and then they share this information with the destination. The CSI knowledge is essential in our proposal and appears in the equations and algorithms below. As we presented, our proposal determines, for each possible scheduling, an optimal rate allocation. The latter is based on the knowledge of the outage events of each source. And as we follow PR, where all relaying nodes that decoded a given source are activated to help, the knowledge of the CSI is essential to know the decoding sets of the relaying nodes, and hence which relaying nodes are activated and what the equivalent channel is for a given selection. In other words, each different selection produces a different equivalent channel at each retransmission time slot. These channels are needed to calculate the equivalent mutual information and thus to know the outage events. For this reason, the full CSI acquisition is essential in our proposal. Example: Assume we are in an orthogonal (2, 2, 1)-MAMRN. The set of sources is S = {1, 2}, the set of relays is R = {3, 4}, and the destination is d. Assume that after the transmission phase, the decoding sets of the relaying nodes and of the destination are:
S 1,0 = {1}, S 2,0 = {1}, S 3,0 = {1}, S 4,0 = {2}, S d,0 = ϕ.
If the destination chooses to help source 1 at the first retransmission time slot, we see that the first three relaying nodes are going to be activated. Then we have an equivalent channel corresponding to these relaying nodes. Such an equivalent channel will result in an equivalent mutual information. For Gaussian inputs with EGC, the equivalent mutual information can be written as:
J_{1,d} = log_2(1 + \sum_{i=1}^{3} |h_{i,d}|^2),
where h_{i,d} represents the channel gain of the link between relaying node i and the destination. We see that the destination needs the decoding sets of the relaying nodes to calculate J_{1,d} (in order to know which relaying nodes to include in the summation defining the equivalent channel). Here comes the need for the CSI of the indirect links. In order to know which nodes are active in the first retransmission, the destination needs to calculate the decoding set of each relaying node after the transmission phase (following the outage conditions {R_1 > I_{1,j}}). So, we need to calculate I_{1,j}, which is a function of the channel gain of the indirect link h_{1,j} (for Gaussian inputs, I_{1,j} = log_2(1 + |h_{1,j}|^2)). A similar procedure is needed in the following retransmission time slots. To sum up, in our proposal, the destination needs:
• To calculate the outage events given a fixed selection.
• In order to calculate the outage events, the destination needs the mutual information of the equivalent channels.
• In order to know what the equivalent channel is, the destination needs to know the decoding sets of the relaying nodes.
• To know the decoding sets of the relaying nodes, the destination needs to know the states of the links between the relaying nodes.
Thus, the destination needs the full CSI of the network.
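The chain of computations listed above can be followed numerically for the (2, 2, 1) example; all channel gains below are invented for illustration:

import numpy as np

mi = lambda g: np.log2(1 + g)                 # Gaussian-input mutual information of a link with gain g

g_s1_to = {2: 2.0, 3: 1.5, 4: 0.1}            # |h|^2 of the indirect links from source 1
g_to_d  = {1: 0.3, 2: 0.8, 3: 1.2, 4: 0.5}    # |h|^2 of the links towards the destination
R1, alpha = 1.0, 0.5

# Decoding sets after the transmission phase: node j decodes source 1 iff R1 <= I_{1,j}.
help_1 = [1] + [j for j, g in g_s1_to.items() if R1 <= mi(g)]
print("Help_1 =", help_1)                      # the relaying nodes activated to help source 1

# Equivalent EGC channel of the activated relaying nodes towards the destination.
J_1d = mi(sum(g_to_d[j] for j in help_1))
print("J_1,d =", round(float(J_1d), 3))

# Outage test for one retransmission devoted to source 1 (outage event with [N]_1 = 1).
print("outage:", R1 > mi(g_to_d[1]) + alpha * J_1d)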
Complexity analysis
The BRD is used to reduce the complexity of the exhaustive search approach. In fact, the BRD reduces the complexity from n_{MCS}^M (for the exhaustive search) to M n_MCS multiplied by the number of BRD iterations needed (which is limited due to the limited set considered). Nevertheless, the complexity is still linear in the number of possible rates, i.e., linear in n_MCS. Our proposal, on the other hand, and following the complexity of the binary search algorithm, reduces the complexity to M log(n_MCS).
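The three search sizes can be compared numerically for an example configuration (orders of magnitude only; the BRD count must additionally be multiplied by its number of iterations):

import math

M, n_mcs = 6, 32
print("exhaustive search:", n_mcs ** M)                      # n_MCS^M rate vectors
print("BRD, one sweep:", M * n_mcs)                          # M * n_MCS evaluations per iteration
print("binary search:", M * math.ceil(math.log2(n_mcs)))     # roughly M * log2(n_MCS) outage tests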
Numerical results
In this subsection, we validate the proposed joint allocation strategy using MC simulations. We consider an orthogonal MAMRN scenario with T_max = 4 and independent Gaussian distributed channel inputs. As the relaying nodes use PR with EGC [6], the mutual information of the equivalent channel with the destination is written as
J_{i,d} = log_2(1 + \sum_{a ∈ Help_i} |h_{a,d}|^2),
where Help i represents the set of all the relays which will help the source i. We consider a symmetric link configuration scenario, i.e., all the links are considered the same (the average SNR of each link is set to γ). The set of possible rates is fixed to: R = {0, 0.75, 1.5, 2.25, 3} [bits per channel use]. We present and compare six strategies:
• The joint allocation (denoted in figures by JA)
• The sequential joint allocation (denoted in figures by Seq. JA)
• The joint allocation following the discrete set R
• The sequential joint allocation following the discrete set R
• BRD with optimal PR allocation (as proposed in chapter 4 section 1 and [6])
• BRD with optimal SR allocation (as proposed in chapter 2 and [33])
The first four strategies are the proposals of this chapter. The first one is the upper bound, where the optimal rates are taken from the real numbers set (as seen in the first algorithm) with the best allocation of the vector A. The second one is the sequential version of the first strategy, where the allocation is performed as described in the last algorithm. The third and the fourth strategies are the same as the first two strategies, respectively, with the constraint of taking the rate values from the discrete set R (as seen in the second algorithm). The last two strategies are the strategies of the previous two chapters: specifically, strategy five is the strategy of chapter 4, and strategy six is the strategy of chapter 3. In Fig. 5.3, we present the ASE as a function of γ. First, we see that the PR strategy outperforms the SR strategy. Second, we observe that our proposals (in both cases, with or without the discrete set) outperform the non-joint allocations presented in the previous chapters (both SR and PR). While the gain is significant when the rates are allocated from the real numbers set, it becomes quite insignificant when the discrete set of rates R is considered. Finally, we notice that the sequential joint strategies coincide with the optimal joint allocation strategies (in both cases, with or without the discrete set R). Note that similar results are observed for asymmetric rate and link configurations but are not presented for brevity. Accordingly:
• We confirm that the performance of the PR strategy presented in the previous chapter is good, as it approaches the optimal allocation obtained by the joint allocation strategy and outperforms the SR strategy.
• Although the PR strategy is performing well, the proposed joint allocation is seen interesting as it can be reached with no need of an exhaustive search over the discrete set of rates R.
• The gain of using the joint allocation strategy over the real numbers set is very significant, which confirms the dependency of the performance on the values of the discrete set.
• The sequential strategies are seen as practical alternatives as they achieve the performance of the joint allocations while facing a reduced complexity.
Following the last two findings, we further investigate:
1. The effect of the discrete set of rates on the performance of the joint allocation strategy.
2. The effect of the size of the network on the performance of the sequential strategy.
Thus, we present in Fig. 5.4 the ASE of the joint allocation strategies with respect to γ for different discrete sets of rates. The investigated sets are:
• R = {0, 2, 4, 6, 8}
• R = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}
• R = {0, 0.5, 1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5, 5, 5.5, 6, 6.5, 7, 7.5, 8, 8.5, 9}
• R = {0, 0.25, 0.5, . . . , 9}

We see that as the size of the shift between the possible rates decreases, the performance of the joint allocation increases and approaches the upper bound. Specifically, it is seen that for a shift of 0.25 [bits per channel use], the difference becomes insignificant. This validates the importance of our proposal (compared to the PR strategy of the previous chapter): as the BRD needs to pass through all the values of the discrete set R, it faces a complexity problem when the size of R grows. On the contrary, using a binary search algorithm, the complexity of the search does not grow linearly with the size of the discrete set. Thus, we conclude that our joint allocation proposal can approach the upper bound with an acceptable complexity by using a discrete set with a small shift between the rates, thanks to the binary search algorithm presented above. Finally, we present in Fig. 5.5 the ASE of the joint allocation strategies (both with and without the discrete set R, and both exact and sequential) for γ = 12 dB with respect to the network size. Specifically, the x-axis represents the number of sources and relays of the network. Thus, we are in an (x, x, 1)-MAMRN, where x represents the different points of the x-axis, from (2,2,1)-MAMRN to (10,10,1)-MAMRN. The other parameters are kept the same as in the scenario of Fig. 5.4. In Fig. 5.5, it is seen that as the system size changes, the performance of the sequential strategies always approaches that of the joint allocation strategies. This validates again the sequential strategies as a practical alternative to the joint allocation strategy. We note that the decrease in the ASE with the increase of the network size is due to the fact that T_max is fixed to 4, which means that the number of retransmissions needed to decode the source nodes of the network (which increases along the x-axis) is higher than the available time slots. Accordingly, we add to our previous findings:
• Our proposal is more practical than the previous proposal as it can achieve the upper bound when using a larger set of possible rates. This increase does not make the proposal impractical due to the help of the binary search algorithm being used.
• The performance of the sequential strategies is robust to the size of the network which ensures its practicality as an alternative to the joint allocation strategies.
Conclusion
To sum up, we proposed in this chapter a FLA joint strategy for the rate and relaying nodes allocation. The proposed strategy leads to the highest possible spectral efficiency. It first goes through all the possible relaying node allocations, and then, for each allocation, it determines the highest rate allocation for the sources in the network. This makes it possible to choose the joint allocation which gives the highest spectral efficiency. The proposal solves two main issues seen in the prior art:
1. It removes the sub-optimality of solving the two problems separately (the rate allocation problem and the selection strategy problem).
2. It removes the need for an exhaustive search over a discrete finite set of possible rates that used to limit the practicality of the BRD proposed in the previous chapters.
Chapter 6
Future Work: different directions and open challenges
Chapter summary
In this manuscript, we presented different approaches to tackle two main open problems of the considered orthogonal MAMRN: 1-the rate allocation problem, and 2-the relaying node scheduling problem. For the first problem, we presented several strategies concerning the different radio scenarios (FLA, SLA, FLA with partial CSI), and different encoding schemes (SU and MU). In the second problem, we proposed solutions concerning the optimality of the relaying nodes scheduling as well as the optimality of the control exchange process included. Finally, we proposed a FLA joint allocation strategy that outperforms the sequential allocation. To this end, in all the presented work, two assumptions were considered: 1-a certain knowledge of the network (CSI or CDI), and 2-TDM frame transmission.
In this chapter, we present some other directions and some generalizations of our work. First, we present one solution for the rate allocation problem when no knowledge is assumed at the destination side. This solution is a learning solution, which bases its strategy on the MAB framework. Second, we present a generalization of our work in the FDM framework. Finally, we mention some open challenges and some future works which might be tackled for the considered systems.
Rate allocation via learning algorithms
In this section, we consider the problem of LA of orthogonal MAMRN using the MAB online learning framework. We assume no knowledge of either the CSI or the CDI. Accordingly, rate allocation must be learned online following a sequential learning algorithm. We aim to solve the LA problem from a different perspective. First, we aim to use an algorithm which is not heuristic, and whose regret is bounded and tractable. Next, we want to solve the problem when no information is given at the destination. In other words, we aim to perform rate allocation using a learning algorithm, where the probability of transmission success at a certain rate is unknown (since the channel state is unknown) and must instead be learned. We adopt the well-known MAB framework, which addresses the exploration-exploitation dilemma. Here, we start with the same assumptions presented in chapter 2 (MU encoding, SR method, etc.), and the further contributions presented in chapter 4 (SU encoding, PR method) are seen as interesting future work to investigate.
First, we recall the main issue which the MAB framework tackles, i.e., the exploration-exploitation dilemma. In scenarios where multiple choices are possible (multiple arms), each with an unknown average reward, MAB algorithms give sequential steps to decide whether we need to learn more (exploration), or to stay with the option that gave the best rewards in the past (exploitation). There are different types of MAB problems, each based on the assumptions of the problem. In the survey [Bubeck and Cesa-Bianchi], three fundamental types of MAB problems are mentioned: stochastic, adversarial, and Markovian. In this section, we are interested in the stochastic MAB problem, as it aligns with the rate allocation problem (the reward is stochastic). From a historical point of view, Lai and Robbins introduced the first analysis of stochastic bandits with an asymptotic analysis of regret. There, the principle of optimism in the face of uncertainty (being optimistic about the choices that are not yet well explored) was used and the UCB algorithm was proposed. This concept is widely used in most of the MAB literature.
In our framework, there is a fixed set of MCS representing the available set of rates. These rates represent the possible choices of the MAB problem. Since we are considering the MAMRN framework, at each frame transmission, the destination allocates a rate for each source. In other words, rather than selecting a single arm of the MAB, we need to select multiple arms, one for each of the multiple source nodes. This kind of MAB problem is known as CMAB, where a subset of arms is selected at each step, forming a Super Arm. In the literature, CMAB was investigated in several applications [Nasim et al., Chen et al., Kuchibhotla et al.]. See Section 1.2.2 for a broader literature review.
MAB problem formulation
In the MAB framework, a unique utility metric is considered when evaluating the performance of a considered arm. Here, the utility function used is the spectral efficiency per frame. We recall the definition of the spectral efficiency per frame:
η_frame(H, P) = \frac{nb bits successfully received}{nb channel uses} = \frac{\sum_{i=1}^{M} K_i (1 - O_{i,T_used})}{MU + QT_used} = \frac{\sum_{i=1}^{M} R_i (1 - O_{i,T_used})}{M + αT_used}.      (6.1)
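The reward of one RL round is thus directly computable from the frame outcome; a minimal sketch with illustrative values:

def eta_frame(rates, outages, T_used, alpha):
    # Spectral efficiency per frame, eq. (6.1): sum of the rates of the correctly decoded
    # sources, normalized by the frame duration expressed in transmission-phase units.
    delivered = sum(R for R, out in zip(rates, outages) if not out)
    return delivered / (len(rates) + alpha * T_used)

# 3 sources, source 2 in outage, 2 retransmission slots used, alpha = 0.25.
print(eta_frame([1.5, 0.75, 1.0], [False, True, False], T_used=2, alpha=0.25))   # ~0.714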
After defining the utility metric, we can now formulate the considered rate adaptation problem as a MAB problem. We consider a finite set of possible arms of size n_MCS (i.e., R = {R_1, . . . , R_{n_MCS}}). We define an RL round as the procedure in which: i) the destination selects and broadcasts a super arm, ii) the sources follow the rates allocated by the selected super arm for RL round t during the transmission of a frame, and iii) the destination computes the updated cumulative reward. At each RL round, a super arm of size M is selected for the M source nodes. This leads to an equivalent CMAB with n_{MCS}^M super arms. The reward of each arm is a stochastic random variable, with an unknown distribution and an unknown average. We define the random variable Rew_i(t) as the reward obtained when super arm i is selected at the t-th RL round. The reward was defined above as the spectral efficiency per frame, and the randomness comes from the variable T_used, which varies between zero and T_max, and from the outage event indicators of each source node. We define the expected value of the reward of super arm i as
$$\theta_i = \mathbb{E}[\text{Rew}_i(t)].$$
For a given online sequential algorithm π, where at each RL round t a decision $\text{Dec}(t) = i$ selects a super arm i, we define the regret as the difference between the cumulated reward of the optimal algorithm (the Oracle algorithm selecting the optimal arm at each RL round) and that of the given algorithm. The regret of algorithm π up to RL round t can be written as:
$$\text{Reg}^{\pi}(t) = \theta^{*}\, t - \sum_{i=1}^{n_{\text{MCS}}^{M}} \theta_i \, \mathbb{E}[\text{nb}_i^{\pi}(t)], \qquad (6.2)$$
where $\theta^{*}$ represents the expected value of the optimal reward (i.e., the reward of the optimal super arm $i^{*}$), and $\mathbb{E}[\text{nb}_i^{\pi}(t)]$ represents the expected number of times arm i is selected after t RL rounds when using algorithm π. We aim to propose a rate allocation algorithm which performs exploration and exploitation in a way that minimizes this regret.
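To make the bandit formulation concrete, a minimal Python sketch of the reward in (6.1) and of the regret bookkeeping in (6.2) might look as follows; the function names and the way the outage indicators are obtained are illustrative assumptions, not part of the system model.

```python
import numpy as np

def frame_reward(rates, outages, T_used, alpha):
    """Spectral efficiency per frame (eq. 6.1): reward of the selected super arm.

    rates   : length-M array of allocated rates R_i
    outages : length-M 0/1 array of outage indicators O_{i,T_used}
    T_used  : number of retransmission time slots actually used
    alpha   : ratio of retransmission to transmission slot durations
    """
    rates = np.asarray(rates, dtype=float)
    outages = np.asarray(outages, dtype=float)
    M = len(rates)
    return np.sum(rates * (1.0 - outages)) / (M + alpha * T_used)

def empirical_regret(best_mean, chosen_means):
    """Regret after t rounds (eq. 6.2): t * theta_star minus the sum of the
    expected rewards of the super arms actually chosen."""
    t = len(chosen_means)
    return t * best_mean - float(np.sum(chosen_means))
```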
Algorithm
We retain here a well-known algorithm from the literature, specifically a UCB-like algorithm. Several types of UCB algorithms exist in the prior art, depending on the problem considered, the reward type, and the way the upper bound is chosen. In our proposal, we use the UCB1 algorithm [START_REF] Auer | Finite-time analysis of the multiarmed bandit problem[END_REF], which is known to achieve a logarithmic regret uniformly over t and without any preliminary knowledge about the reward distributions. The only condition is that the rewards are bounded in [0, 1], and this normalization can be assumed with no loss of generality. The sketch of the algorithm is presented in Algorithm 11, where
$$\overline{\text{Rew}}_i(t) = \frac{\sum_{l=1}^{t-1} \text{Rew}_{i,l}}{\text{nb}_i(t)},$$
and:
• $\text{Rew}_{i,l} = 0$ if $\text{Dec}(l) \neq i$; $\text{Rew}_{i,l} = \frac{\sum_{j=1}^{M} R_{i,j} (1 - O_{j,T_{\text{used}}})}{M + \alpha T_{\text{used}}}$ if $\text{Dec}(l) = i$.
• $i \in \{1, \dots, n_{\text{MCS}}^{M}\}$ indexes the super arms.
• $\text{nb}_i(t)$ is the number of times super arm i was chosen up to RL round t.
• $\mathbf{R}_i$ is the rate vector allocated if super arm i is selected (i.e., $\text{Dec}(l) = i$), with j-th component $R_{i,j}$.
After the initialization step, where each super arm is explored once, the next arms are chosen based on the information collected. As formalized in Algorithm 11 (UCB1), the choice is based on the index $\overline{\text{Rew}}_i(t) + \sqrt{\frac{2 \ln t}{\text{nb}_i(t)}}$, where $\text{nb}_i(t)$ represents the number of times super arm i was selected up to RL round t. The first term, $\overline{\text{Rew}}_i(t)$, is the exploitation term, which takes the reward history of the arm into account. The second term, $\sqrt{\frac{2 \ln t}{\text{nb}_i(t)}}$, is the exploration term. The ratio can be understood as follows: when a given arm i has not been selected often enough compared to the other arms, the fraction increases, and so does the index of this arm (the sum of the two terms). In this way, we compromise between the reward history of each arm and the number of times this arm was selected. A final comment concerns the logarithm in the expression: in UCB1, the exploration coefficient decreases as time increases, limiting the exploration phase once enough information has been collected through previously selected arms. The mathematical justification of this behavior is based on Hoeffding's inequality, a theorem applicable to any bounded distribution. Theorem 6.2.1 gives the expected regret of the UCB1 algorithm after t plays.

Theorem 6.2.1. For all $n_{\text{MCS}}^{M} > 1$, if policy UCB1 is run on $n_{\text{MCS}}^{M}$ machines having arbitrary reward distributions $\text{Pr}_1, \dots, \text{Pr}_{n_{\text{MCS}}^{M}}$ with support in [0, 1], then its expected regret after any number t of plays is at most:
$$8 \sum_{i:\, \theta_i < \theta^{*}} \frac{\ln t}{\Delta_i} + \left(1 + \frac{\pi^2}{3}\right) \sum_{j=1}^{n_{\text{MCS}}^{M}} \Delta_j,$$
where $\theta_1, \dots, \theta_{n_{\text{MCS}}^{M}}$ are the expected values of $\text{Pr}_1, \dots, \text{Pr}_{n_{\text{MCS}}^{M}}$, and $\Delta_i$ is defined as:
$$\Delta_i = \theta^{*} - \theta_i.$$ Proof: see Appendix B.
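For illustration, a minimal Python sketch of the UCB1 index rule over super arms might look as follows. It is only a sketch of the selection rule discussed above (the exact Algorithm 11 listing is not reproduced here), and `draw_reward` is an assumed helper returning one stochastic, [0, 1]-normalized reward sample of a super arm, e.g., computed with `frame_reward` above.

```python
import numpy as np

def ucb1(n_super_arms, horizon, draw_reward):
    """UCB1 over super arms; rewards are assumed normalized to [0, 1].

    draw_reward(i) returns one stochastic reward sample of super arm i.
    """
    counts = np.zeros(n_super_arms)          # nb_i(t): times each super arm was played
    means = np.zeros(n_super_arms)           # empirical mean reward of each super arm
    choices = []

    for t in range(horizon):
        if t < n_super_arms:                 # initialization: play each super arm once
            i = t
        else:                                # index = exploitation term + exploration term
            index = means + np.sqrt(2.0 * np.log(t) / counts)
            i = int(np.argmax(index))
        r = draw_reward(i)
        counts[i] += 1
        means[i] += (r - means[i]) / counts[i]   # running-average update
        choices.append(i)
    return choices, means
```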
In practice, the proposed algorithm suffers mainly from the exponential growth of the number of arms. Specifically, the initialization phase (pure exploration phase) takes too much time before the exploration-exploitation trade-off phase is reached. Thus, we propose an Approximated UCB1 (AUCB1) algorithm, which reduces the complexity of the initialization phase.
The goal of the initialization phase is to explore each super arm once and to set its index, where the index is the sum $\overline{\text{Rew}}_i(t) + \sqrt{\frac{2 \ln t}{\text{nb}_i(t)}}$. We propose here setting an approximated initial index in order to decrease the complexity of the initialization phase. One way of doing so is to remove the exponential relationship between the sources forming the super arm. In other words, rather than probing super arms initially, we take each arm by itself (each possible rate) and test this arm with all the possible sources. In this case, when a source is sending with a given rate, the other sources send nothing. We repeat this process, for a given arm, with all the sources. Then, we average for this arm the number of transmitted bits (rate × success or failure), and we save the highest $T_{\text{used}}$ needed over all the sources. We repeat this process for all arms (rates). Finally, for each super arm composed of a subset of M arms, we calculate the reward (index) as the average number of transmitted bits divided by the number of channel uses obtained with the highest $T_{\text{used}}$ of the considered subset of arms (rates). Following these steps, we approximate the reward (recall equation 6.1). The complexity of the initialization phase is reduced from $O(n_{\text{MCS}}^{M})$ to $O(n_{\text{MCS}} \times M)$. The sketch of the algorithm is presented in Algorithm 12.

Algorithm 12 AUCB1
1: For $t = 0, \dots, n_{\text{MCS}} \times M - 1$, for the $(t+1)$-th RL round, select successively, for each source in $\{1, \dots, M\}$, an arm in $\{1, \dots, n_{\text{MCS}}\}$ (play, for each single source, all possible arms once).
2: Set the index of all super arms based on the average of the indices of the included arms.
3: For $t \geq n_{\text{MCS}} \times M$, for the $(t+1)$-th RL round, select the super arm i which maximizes $\overline{\text{Rew}}_i + \sqrt{\frac{2 \ln t}{\text{nb}_i}}$.
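A minimal sketch of the approximated initialization of Algorithm 12 might look as follows; `probe` is a hypothetical helper that plays a single (source, rate) pair once while the other sources stay silent, and the averaging convention follows step 2 of Algorithm 12.

```python
import numpy as np

def aucb1_init(n_mcs, M, probe, alpha):
    """Probe each (source, arm) pair once instead of each of the n_mcs**M super arms.

    probe(src, arm) -> (transmitted_bits, t_used) is a hypothetical helper that plays
    a single source at a single rate while the other sources stay silent.
    Returns an (n_mcs, M) table of per-pair indices approximating eq. (6.1).
    """
    bits = np.zeros((n_mcs, M))
    t_used = np.zeros((n_mcs, M))
    for arm in range(n_mcs):
        for src in range(M):
            bits[arm, src], t_used[arm, src] = probe(src, arm)
    return bits / (M + alpha * t_used)

def super_arm_init_index(super_arm, pair_index):
    """Approximated initial index of a super arm: average of the indices of its arms
    (step 2 of Algorithm 12); super_arm[s] is the arm chosen for source s."""
    return float(np.mean([pair_index[a, s] for s, a in enumerate(super_arm)]))
```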
In SUCB1, the idea is to generalize the AUCB1 algorithm to all iterations rather than only to the initialization step. After setting the indices of each arm using AUCB1, SUCB1 chooses each super arm sequentially, arm by arm. In other words, instead of choosing the super arm directly, we choose, for each of the M sources, the arm with the highest index. After each selection, we update the selection counters. Finally, we update the indices based on the cumulative reward, each computed from the decoding of the signal of the related source. In SUCB1, we have $n_{\text{MCS}}$ arms rather than $n_{\text{MCS}}^{M}$ arms, and this reduction decreases the regret, as we will see in the numerical results section. The sketch of the algorithm is presented in Algorithm 13, where
$$\overline{Y}_i(t) = \frac{\sum_{l=1}^{t-1} \sum_{j=1}^{M} Y_{i,l,j}}{\text{nb}_i(t)}$$
, and:
• $Y_{i,l,j} = 0$ if $\text{Dec}(l, j) \neq i$; $Y_{i,l,j} = \frac{R_{i,j} (1 - O_{j,T_{\text{used}}})}{M + \alpha T_{\text{used}}}$ if $\text{Dec}(l, j) = i$.
• $i \in \{1, \dots, n_{\text{MCS}}\}$ indexes the simple arms, and $\text{nb}_i(t)$ is the number of times arm i was chosen up to RL round t.
• Dec(l, j) represents the index of the selected simple arm at RL round l for source j.
• $R_{i,j}$ is the rate allocated to source j if arm i is selected for it (i.e., $\text{Dec}(l, j) = i$).

Then, we adopt the UCB-type family, specifically the UCB1 algorithm. In order to address the complexity arising from the exponential number of arms in the MAMRN system, a sequential algorithm, SUCB1, is proposed. Within SUCB1, we use an approximated initialization phase, AUCB1, and then choose arms sequentially for the considered set of sources. The numerical results show that the proposed algorithm outperforms the traditional UCB1 algorithm in terms of regret and ASE. Several interesting directions can be further investigated. One important issue is the regret analysis of the proposed algorithms. Unfortunately, we could not obtain an upper bound on the regret of the presented SUCB1. We tried to use the structure of the utility metric, but since the MU encoding case was used, the analysis remained complex. As mentioned in [START_REF] Combes | Dynamic rate and channel selection in cognitive radio systems[END_REF], the structure stems from inherent properties of the achieved throughput as a function of the selected rates. In the MAB framework, the structure of the utility metric can be used to speed up the exploration process: while exploring the different arms, these properties are taken into consideration. For example, in our problem, the rewards associated with the various rates on a given channel are stochastically correlated, i.e., the outcomes of transmissions at different rates are not independent: if a transmission at a high rate is successful, it would also be successful at lower rates. In addition, the average efficiency achieved at various rates exhibits natural structural properties: for a given channel, the throughput is a unimodal function of the selected rate. We are interested in revisiting these notions (regret upper bound analysis, exploiting the system structure) when the SU encoding case is used.
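Returning to the SUCB1 selection step: since the Algorithm 13 listing is not reproduced here, a minimal sketch of one SUCB1 round might look as follows, under the assumption that `counts` and `sums` have been initialized by the AUCB1 phase and that `draw_rewards` returns the per-source rewards $Y_{i,t,j}$ of the selected rate vector.

```python
import numpy as np

def sucb1_step(t, counts, sums, M, draw_rewards):
    """One SUCB1 round with n_mcs simple arms shared by the M sources.

    counts[i] : number of times arm i was selected so far (initialized > 0)
    sums[i]   : cumulated reward of arm i (empirical mean = sums / counts)
    draw_rewards(decision) : per-source rewards Y_{i,t,j} for the selected rate vector
    """
    decision = []
    for _ in range(M):                                  # pick arms source by source
        index = sums / counts + np.sqrt(2.0 * np.log(max(t, 2)) / counts)
        i = int(np.argmax(index))
        decision.append(i)
        counts[i] += 1                                  # counter updated after each selection
    rewards = draw_rewards(decision)                    # observe the per-source rewards
    for j, i in enumerate(decision):
        sums[i] += rewards[j]                           # cumulative reward update per arm
    return decision
```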
Generalization to FDM domain
In all the previous chapters, TDM was adopted: all the presented LA and scheduling algorithms were TDM-type strategies. In this section, we give some insights into generalizing the contributions to the FDM regime. Concerning the LA strategies based on BRD methods, no change is needed when adopting the FDM regime. Concerning the scheduling strategies, on the other hand, the strategies proposed in the prior art become inapplicable.
Since FDM is used to ensure orthogonality, different nodes are allocated to each sub-band of the transmission and retransmission phases. In this section, we present the system model of the considered orthogonal MAMRN when the FDM mechanism is adopted, including the analytical derivations of the utility metrics (spectral efficiency and outage events). Then, two centralized node selection strategies are proposed to generalize the methods used in chapter 3 (and in the prior art). The presented algorithms use SR with SU encoding; the generalization to PR is discussed in the following section. Moreover, we present the control information exchange process between the destination and the different nodes. The proposed strategies allocate to each sub-band the node that will transmit (or retransmit), with the goal of maximizing the spectral efficiency.
Upon adopting the FDM mechanism, we gain a new DoF represented by the several sub-bands available at each transmission or retransmission time slot. To exploit this DoF, the relaying nodes must be able to transmit on a given sub-band while listening to the others, i.e., the relaying nodes must be capable of full-duplex communication. Guard bands between sub-bands can be inserted to simplify the implementation of the duplexer filters.
Proposed selection strategies

6.3.1.1 Utility metric
In the proposed FDM-based orthogonal MAMRN, each time slot is composed of several frequency sub-bands, and each sub-band is made of a time-frequency grid corresponding to F resource elements, i.e., consecutive Orthogonal Frequency Division Multiplexing (OFDM) symbols and a set of consecutive subcarriers per OFDM symbol. We fix the number of sub-bands to B; thus, the first ⌈M/B⌉ time slots are reserved for transmission (first phase), while the other $T_{\max}$ time slots are dedicated to retransmissions (second phase). We recall that ⌈q⌉ represents the ceiling function, which gives the smallest integer greater than or equal to q. In each time slot, the number of channel uses is N = B × F resource elements. In the first phase, a scheduler at the destination decides which source node is allocated to each sub-band, with the constraint that at least one sub-band is allocated to each source. At a given time slot of the second phase, the scheduler decides which subset of relaying nodes is active in the retransmission phase, and allocates the partition of sub-bands given to each element of this active subset of nodes.
We define the B-dimensional vector of selected nodes in the transmission and retransmission phase at a certain time slot t as a t ∈ (S ∪ R) B . The i th element [a t ] i , of vector a t refers to the i th sub-band and the selected node active during this time slot in sub-band i. Similarly, we define the vector of number of allocated sub-bands for each node at a certain time slot t as the (M +L)-dimensional vector n t ∈ {0, 1, . . . , B} M +L . The i th element [n t ] i of vector n t refers to the number of sub-bands allocated for the node i ∈ N at time slot t. An example is given in Fig. 6.5, where M = 3, L = 2, and B = 5. Following this example, the vector a t is written as: a 0 = [s 1 , s 1 , s 2 , s 3 , s 1 ] T , a 1 = [s 3 , r 2 , r 2 , r 2 , s 2 ] T , and a 2 = [r 1 , r 1 , s 1 , s 1 , r 1 ] T ; and the vector n t is written as: n 0 = [3, 1, 1, 0, 0] T , n 1 = [0, 1, 1, 0, 3] T , and n 2 = [2, 0, 0, 3, 0] T . It can be seen that n t can be directly deduced from a t . The goal is to maximize the ASE (utility metric), which is the expectation of the spectral efficiency per frame η frame FDM . The metric η frame FDM depends on the channel realization H, and the selection strategy used P . In the FDM regime, H contains the channel gains per sub-band of all the links h f,a,b where f is the sub-band, a a source or a relay, and b a source or a relay or the destination. The channel gains h f,a,b are independent and follow a zero-mean circularly symmetric complex Gaussian distribution with variance γ a,b . Also, η frame FDM depends on the RP used, LA considered (how rates are allocated based on the channel information, e.g., SLA), and the parameters of the system (e.g., M, L, T max ). For simplicity, we only include within the following equations the dependency on the channel and the selection strategy. Now, η frame FDM can be defined as:
$$\eta^{\text{frame}}_{\text{FDM}}(\mathbf{H}, P) = \frac{\text{nb bits successfully received}}{\text{nb channel uses}} = \frac{\sum_{i=1}^{M} R_i \,(1 - O_{i,T_{\text{used}}})}{\lceil M/B \rceil + T_{\text{used}}} \qquad (6.3)$$
where R i = K i /N is the rate of a source i, with K i being the number of bits that can be transmitted by source i given N channel uses. R i is allocated based on the SLA process.
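As a small illustration, the FDM frame spectral efficiency in (6.3), together with the deduction of $n_t$ from $a_t$ mentioned above, can be sketched as follows (the node labels reuse the example of Fig. 6.5):

```python
import math
from collections import Counter

def n_from_a(a_t, nodes):
    """Deduce n_t (number of sub-bands per node) from the allocation vector a_t."""
    c = Counter(a_t)
    return [c.get(node, 0) for node in nodes]

def eta_frame_fdm(rates, outages, M, B, T_used):
    """Spectral efficiency per frame in the FDM regime (eq. 6.3)."""
    n_tx_slots = math.ceil(M / B)                 # first phase: ceil(M/B) time slots
    return sum(R * (1 - O) for R, O in zip(rates, outages)) / (n_tx_slots + T_used)

# Example of Fig. 6.5: M = 3 sources, L = 2 relays, B = 5 sub-bands
nodes = ["s1", "s2", "s3", "r1", "r2"]
a0 = ["s1", "s1", "s2", "s3", "s1"]
print(n_from_a(a0, nodes))                        # -> [3, 1, 1, 0, 0], i.e. n_0
```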
Outage events
The individual outage event $O_{s,t}(a_t, S_{a_t,t-1} \mid \mathbf{h}_{\text{dir}}, \mathcal{L}_{t-1})$ of a source s after time slot t depends on the selected vector of nodes $a_t$, the vector of numbers of allocated sub-bands $n_t$, and the associated decoding sets $S_{a_t,t-1}$ (i.e., the set containing the sets of successfully decoded source messages, in the previous time slots, at the nodes selected to transmit redundancies on the different sub-bands at time slot t). It is conditional on the knowledge of the channel realization of the direct links $\mathbf{h}_{\text{dir}}$ and on $\mathcal{L}_{t-1}$, which denotes the set collecting the vectors $a_k$ and $n_k$ selected in time slots $k \in \{1, \dots, t-1\}$ prior to time slot t, together with their associated decoding sets $S_{a_k,k-1}$, and the decoding set of the destination $S_{d,t-1}$ ($a_0$ is the selected vector of source nodes allocated in the transmission phase; $n_0$ is the selected vector of numbers of sub-bands allocated to each source node in the transmission phase; and $S_{d,0}$ is the destination's decoding set after the first phase). Here again, to simplify the notation, the dependency on $\mathbf{h}_{\text{dir}}$ and $\mathcal{L}_{t-1}$ is omitted. Analytically, following the SU encoding case, where a selected relaying node $[a_l]_f$ only helps a random source node chosen from its decoding set which is not yet decoded at the destination (called $b_{l,f}$, such that $b_{l,f} \in S_{[a_l]_f,l-1} \cap \bar{S}_{d,l-1}$), the individual outage of a source s using SU encoding can be written as:
$$O^{\text{SU-FDM}}_{s,t}(a_t, S_{a_t,t-1}) = \Big\{ B R_s > \ell^{(s)}_0 + \sum_{l=1}^{t-1} \ell^{(s)}_l + \ell^{(s)}_t \Big\}, \qquad (6.4)$$
where
• Index l is for the retransmission time slot with the convention that l = 0 corresponds to the end of the transmission phase; l ∈ {1, . . . , T max }.
• $\ell^{(s)}_l$ corresponds to the block-fading mutual information from the nodes of $a_l$ to the destination d at time l, accumulated over the whole set of sub-bands:
$$\ell^{(s)}_l = \sum_{f=1}^{B} I_{l,f,[a_l]_f,d}\, [s = b_{l,f}] \qquad (6.5)$$
where $b_{l,f} \in S_{[a_l]_f,l-1} \cap \bar{S}_{d,l-1}$ is the selected source among the decoding set of node $[a_l]_f$, and $[q]$ represents the Iverson bracket, which gives 1 if the event q is satisfied and 0 otherwise.
For the common outage event, in the SU encoding sub-case, it is simply the union of the individual outage events of all the sources included in the considered subset $\mathcal{B}$, and can be written as:
$$E^{\text{SU-FDM}}_{t,\mathcal{B}}(a_t, S_{a_t,t-1}) = \bigcup_{s \in \mathcal{B}} O^{\text{SU-FDM}}_{s,t}(a_t, S_{a_t,t-1}). \qquad (6.6)$$
$I_{l,f,[a_l]_f,d}$ is the mutual information between the node $[a_l]_f$ allocated to sub-band f at time slot l and the destination, defined based on the channel inputs (see section 6.3.2 for the Gaussian inputs example). This mutual information depends on the transmit power on sub-band f, which is $P_T / [n_l]_{[a_l]_f}$, and on the channel between $[a_l]_f$ and d, where $P_T$ is the total power available at each node.
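A minimal sketch of how the SU-encoding outage events (6.4)–(6.6) can be evaluated, once the per-time-slot accumulated mutual informations $\ell^{(s)}_l$ of (6.5) are available, might look as follows (the data layout is an assumption):

```python
def su_fdm_individual_outage(s, B, R, mi_terms):
    """Individual outage of source s after time slot t (eq. 6.4).

    mi_terms : list over time slots l = 0..t; each entry is the accumulated
               mutual information l_l^(s) computed via eq. (6.5), i.e. the sum
               over sub-bands of I_{l,f,[a_l]_f,d} * [s == b_{l,f}].
    """
    return B * R[s] > sum(mi_terms)

def su_fdm_common_outage(subset, B, R, mi_terms_per_source):
    """Common outage of a subset of sources (eq. 6.6): union of individual outages."""
    return any(su_fdm_individual_outage(s, B, R, mi_terms_per_source[s]) for s in subset)
```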
Although we use SU encoding in the numerical results presented next, we also give, for completeness, the outage events in the MU encoding case. The reason for using the SU case follows from the results of chapter 3, which establish its practicality. In the MU encoding case, the outage events can be written as:
$$E^{\text{FDM}}_{t,\mathcal{B}}(a_t, S_{a_t,t-1}) = \bigcup_{U \subseteq \mathcal{B}} \Big\{ \sum_{i \in U} B R_i > \sum_{i \in U} \ell^{(i)}_0 + \sum_{l=1}^{t-1} \ell^{(U)}_l + \ell^{(U)}_t \Big\}, \qquad (6.7)$$
$$O^{\text{FDM}}_{s,t}(a_t, S_{a_t,t-1}) = \bigcup_{\mathcal{I} \subset \bar{S}_{d,t-1},\, \mathcal{B}=\mathcal{I},\, s \in \mathcal{B}} E_{t,\mathcal{B}}(a_t, S_{a_t,t-1}) = \bigcup_{\mathcal{I} \subset \bar{S}_{d,t-1}} \ \bigcup_{U \subseteq \mathcal{I}:\, s \in U} \Big\{ \sum_{i \in U} B R_i > \sum_{i \in U} \ell^{(i)}_0 + \sum_{l=1}^{t-1} \ell^{(U)}_l + \ell^{(U)}_t \Big\}, \qquad (6.8)$$
where
$$\ell^{(U)}_l = \sum_{f=1}^{B} I_{l,f,[a_l]_f,d}\, \big[ (S_{[a_l]_f,l-1} \cap U \neq \emptyset) \wedge (S_{[a_l]_f,l-1} \cap \mathcal{I} = \emptyset) \big]. \qquad (6.9)$$
Selection strategies
Here, rather than choosing a unique node to transmit/retransmit, a subset of nodes is chosen simultaneously. Due to the power distribution over the allocated sub-bands of each node, an optimal selection strategy needs to allocate the sub-bands jointly. In an exhaustive search strategy (optimal strategy), one simply checks all the possible combinations of vector allocations at all time slots. Conditional on the knowledge of the CSI of all the links in the network (the matrix H), we can find the optimal activation sequence of vectors with respect to the considered utility metric. Since there are $T_{\max}$ retransmission time slots and $\lceil M/B \rceil$ transmission time slots, the complexity of this strategy is $(M+L)^{B(\lceil M/B \rceil + T_{\max})}$. Clearly, this strategy is computationally very expensive.
In addition, we should stress that the knowledge of the CSI of all the links (the matrix H) would incur an extremely large feedback overhead. Thus, this strategy is practically infeasible and is only considered as an upper bound for the proposed algorithms.
As the optimal solution comes with high complexity and heavy overhead, we propose a lower-complexity algorithm which does not need the full CSI. In strategy 1, we allocate, at each time slot, the vector which maximizes the mutual information with the destination. The idea of this strategy is to go through all the vector selection alternatives and find the one with the highest mutual information with the destination. In other words, we try all the possible values of the vector $a_t$, and we select the one with the highest $\ell_t$, where $\ell_t = \sum_{f=1}^{B} I_{t,f,[a_t]_f,d}$. Note that we do not take into consideration the nodes which cannot help any non-decoded source node, i.e., we only consider the nodes i satisfying $\bar{S}_{d,t-1} \cap S_{i,t-1} \neq \emptyset$, for $i \in \{1, \dots, M+L\}$. Finally, the selection, formalized in Algorithm 14 (selection process of strategy 1: highest mutual information), is given by:
$$\hat{a}_t \in \operatorname*{argmax}_{a_t \in \text{Help}^{B}_{S_{d,t-1}}} \ \sum_{f=1}^{B} I_{t,f,[a_t]_f,d} \qquad (6.10)$$
where $\text{Help}_{S_{d,t-1}}$ is the set of nodes that can help at time slot t. Note that for t = 0, the only candidate nodes are the source nodes, whose decoding sets are exactly themselves; the relay nodes have empty decoding sets. Algo. 14 presents strategy 1, which faces a complexity issue, as the destination needs to exhaustively search all the allocation vectors belonging to $\text{Help}^{B}_{S_{d,t-1}}$. Since the cardinality of $\text{Help}_{S_{d,t-1}}$ is lower than or equal to L + M, the complexity is upper bounded by $(M+L)^{B}$ operations, each operation being the sum of B mutual information terms. As a lower-complexity approach, we propose selection strategy 2. Here, rather than considering exhaustively all possible allocation vectors, we perform a sequential allocation per sub-band, in increasing order of sub-bands. The active node selection for a given sub-band b is based on the computation of the cumulative mutual information up to that sub-band ($f = 1, \dots, b$). Indeed, the transmit power per sub-band depends on the number of sub-bands each node (source or relay) occupies. As a result, the mutual information of each previously allocated sub-band needs to be re-evaluated if the power constraint is modified. Then, after each sub-band selection, the number of allocated sub-bands of the selected node is incremented. Strategy 2 can be implemented at a given time t and sub-band b as:
$$[\hat{a}_t]_b \in \operatorname*{argmax}_{i \in \text{Help}_{S_{d,t-1}}} \left( \sum_{f=1}^{b-1} I_{t,f,[\hat{a}_t]_f,d} + I_{t,b,i,d} \right) \qquad (6.11)$$
where $\text{Help}_{S_{d,t-1}}$ is defined above. Strategy 2 is presented in Algo. 15. This algorithm reduces the complexity of Algo. 14 by partially removing the inter-dependency of sub-band allocations. The number of needed operations is upper bounded by $B(M+L)$, where each operation corresponds to an accumulated mutual information computation whose complexity is lower than or equal to the sum of B mutual information terms. Note that the presented algorithms are applicable in both the transmission and the retransmission time slots. The only difference in the transmission phase is the presence of an additional constraint: each source must be allocated at least one sub-band. Since the relays' decoding sets are empty in the transmission phase, we only pass through the possible combinations of source nodes and pick the one giving the highest mutual information.
Algorithm 15 Selection process of strategy 2: highest cumulative mutual information per sub-band.
$[\hat{a}_t]_b \leftarrow \operatorname{argmax}_{i \in \text{Help}_{S_{d,t-1}}} \left( \sum_{f=1}^{b-1} I_{t,f,[\hat{a}_t]_f,d} + I_{t,b,i,d} \right)$
10: $[\hat{n}_t]_i = [\hat{n}_t]_i + 1$ ▷ Increment the number of allocated sub-bands of the node $i = [\hat{a}_t]_b$
11: end for

6.3.1.4 Control information exchange

Fig. 6.6 describes the control information exchange process between the destination and the relaying nodes. During the first phase, each source transmits its message on its dedicated sub-band(s) following the vector $a_0$. Since the relays and sources are full-duplex, all nodes are able to listen to the different messages. During the second phase, at retransmission time slot t, the following control information exchange procedure occurs:
1. The destination broadcasts its decoding set $S_{d,t-1}$ after time slot t − 1 over the feedback broadcast control channel. M bits are broadcast in this step. If all the sources are included in the set $S_{d,t-1}$ (i.e., all CRC checks succeed), the process terminates and a new frame transmission is initiated. Otherwise, the procedure continues through steps 2-4.
2. Each node which was able to decode at least one source message that is not included in the decoding set of the destination S d,t-1 sends one bit on a dedicated unicast forward coordination control channel.
3. The destination allocates the node vector a t which has the highest mutual information with the destination following the strategy mentioned in the previous subsection. Only the nodes described in step 2 are candidates at this step.
4. Each element $[\hat{a}_t]_f \in \hat{a}_t$ retransmits on its dedicated sub-band f. Each node performs SU encoding and chooses to help one source node from its decoding set. We call $b_t$ the vector of source nodes chosen to be helped.
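Before turning to the numerical comparison, a minimal sketch of the per-sub-band allocation of strategy 2 (Algo. 15) might look as follows; `mutual_info(node, subband, n_alloc)` is a hypothetical helper returning $I_{t,b,\text{node},d}$ when `node` holds `n_alloc` sub-bands, which is why the already allocated sub-bands are re-evaluated at each step.

```python
def strategy2_allocate(help_set, B, mutual_info):
    """Sequential sub-band allocation maximizing the cumulative mutual information (eq. 6.11)."""
    a_t = []                                   # selected node per sub-band
    n_t = {node: 0 for node in help_set}       # sub-bands already granted to each node

    for b in range(B):
        best_node, best_cum = None, float("-inf")
        for cand in help_set:
            trial = dict(n_t)
            trial[cand] += 1                   # candidate would hold one more sub-band
            # re-evaluate the already allocated sub-bands with the updated power split
            cum = sum(mutual_info(a_t[f], f, trial[a_t[f]]) for f in range(b))
            cum += mutual_info(cand, b, trial[cand])
            if cum > best_cum:
                best_node, best_cum = cand, cum
        a_t.append(best_node)
        n_t[best_node] += 1                    # increment the allocated sub-band count
    return a_t, n_t
```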
In the following, we compare the proposed strategies with three benchmark strategies: the exhaustive search strategy, and the strategies used in [START_REF] Cerović | Efficient cooperative harq for multi-source multi-relay wireless networks[END_REF] and [START_REF] Mohamad | Cooperative incremental redundancy hybrid automatic repeat request strategies for multi-source multi-relay wireless networks[END_REF]. In [START_REF] Mohamad | Cooperative incremental redundancy hybrid automatic repeat request strategies for multi-source multi-relay wireless networks[END_REF], the selection strategy is based on minimizing the probability of the common outage event after each retransmission time slot, a common outage event being the event that at least one source node is in outage. Although this criterion lowers the individual outage probability, it does not directly target the spectral efficiency metric considered here.

In the symmetric link configuration (Fig. 6.7), all the links are considered identical (the average SNR of each link is set to γ), and all the rates are fixed to 0.5 [bits per channel use]. On the other hand, in the asymmetric link configuration (Fig. 6.8), we design a scenario where source 1 is in the best radio conditions and source 3 is in the worst radio conditions. Specifically, the links are set as follows: first, the average SNR of each link is set to γ; second, the average SNR of each link that includes source 2 is set to γ − 1 dB and of each link that includes source 3 to γ − 1.5 dB; lastly, the average SNR of the link between sources 2 and 3 is set to γ − 2 dB. Here, the rate of each source is allocated using the SLA algorithm presented in [START_REF] Cerović | Efficient cooperative harq for multi-source multi-relay wireless networks[END_REF] from the set of possible rates {0.75, 1, 1.25, 1.5} [bits per channel use], and thus the rates are optimized based on the value of γ.
In Fig. 6.7, we see the results of the five strategies in the symmetric link and rate scenario. For the considered SNR range (−5 dB to 15 dB), strategy 1 approaches the upper bound with a shift of less than 2 dB. Similarly, strategy 2 approaches strategy 1 with approximately the same shift. Both proposed strategies (1 and 2) outperform the strategy used in [START_REF] Cerović | Efficient cooperative harq for multi-source multi-relay wireless networks[END_REF] over the whole SNR range with a significant shift. Finally, the strategy of reference [START_REF] Mohamad | Cooperative incremental redundancy hybrid automatic repeat request strategies for multi-source multi-relay wireless networks[END_REF] outperforms that of reference [START_REF] Cerović | Efficient cooperative harq for multi-source multi-relay wireless networks[END_REF], but still faces a significant shift at low SNR values. In Fig. 6.8, a similar behavior is observed over the same SNR range (−5 dB to 15 dB) for the asymmetric link and rate scenario. The strategy of reference [START_REF] Mohamad | Cooperative incremental redundancy hybrid automatic repeat request strategies for multi-source multi-relay wireless networks[END_REF] is left out of these simulations as it is only defined for symmetric scenarios. For the other strategies, the behavior is similar to the symmetric scenario: strategies 1 and 2 approach the upper bound and perform similarly, outperforming the strategy used in [START_REF] Cerović | Efficient cooperative harq for multi-source multi-relay wireless networks[END_REF].
We summarize our findings as follows: 1- The selection strategies used in the prior art are not effective in the FDM-based orthogonal MAMRN. 2- The proposed strategy 1 achieves a performance close to that of the exhaustive search approach, while requiring no overhead for full CSI acquisition and reducing the complexity. 3- The sub-optimal strategy 2 represents a good complexity/optimality trade-off, and can be used in practice to reduce the complexity of strategy 1. 4- The previous findings hold with symmetric or asymmetric channel realizations and with fixed or optimally allocated rates.
In this section, we presented an FDM-based orthogonal MAMRN. Using a two-phase system, we reduce latency, aiming at the requirements of URLLC. We defined the outage events and the spectral efficiency utility metric, and proposed two low-complexity, low-overhead selection strategies that aim at maximizing this metric. Then, we presented the control information exchange procedure. The proposed algorithms outperform the strategies used in the prior art and achieve a spectral efficiency close to the upper bound, while incurring no overhead for full CSI acquisition and lowering the complexity. In future work, we may investigate the effect of PR of relaying nodes and work on reducing the control exchange process between the destination and the different nodes.
Open challenges
Several problems related to the considered MAMRN remain open challenges that can be addressed in future work. In this section, we shed light on some of them.
PR in FDM domain
In the previous section, we proposed selection strategies that can be used in the FDM orthogonal MAMRN. Nevertheless, that proposal uses SR. Following the results of chapter 4, which demonstrate the significant gain of using PR, a further proposal is to use PR in the FDM regime. In the previous section, the proposed selection strategies aim to maximize the mutual information at the destination; in other words, the destination chooses the vector of relaying nodes which maximizes the mutual information. Here, upon using PR, rather than selecting a vector of relaying nodes to help (to send redundancies), we select a vector of source nodes to be helped (by multiple relaying nodes). The algorithm selecting the vector of source nodes to be helped on the different sub-bands depends on the channel realizations of the different nodes of the considered network. In addition, the vector of source nodes which gives the highest mutual information depends on the equivalent channel of all the relaying nodes helping the sources in the selected vector. Accordingly, we first present the equivalent channel (similarly to chapter 4 section 1), followed by two selection strategies which select the vector of source nodes giving the highest equivalent mutual information at the destination.
We denote the channel from each relaying node $j \in \{1, \dots, M+L\}$ to the destination, at a given time slot t and a given sub-band b, by $h_{j,d,b}$, and the set of relaying nodes j which can help the source i by $\text{Help}_i$. The destination selects the vector of sources $s_t$ of size B (corresponding to the B available sub-bands) with the best equivalent channel (highest equivalent SNR) to be helped on each sub-band. Then, on each sub-band $f \in \{1, \dots, B\}$, all the relaying nodes which decoded the chosen source $[s_t]_f$ retransmit redundancies accordingly. At a given retransmission time slot t, a given sub-band b, and for a given set of helping relaying nodes $\text{Help}_i$, the equivalent SNR for helping the source i can be written as:
$$\text{SNR}^{\text{eq}}_{i,b} = \Big| \sum_{j \in \text{Help}_i} h_{j,d,b} \sqrt{P/[n_t]_j} \Big|^2 / N_0, \qquad (6.12)$$
where P is the transmission power of each node, N 0 is the noise spectral density, and h j,d,b is the channel whose power is normalized to 1. We recall that [n t ] j refers to the number of sub-bands allocated for the node j ∈ N at time slot t.
• Case 2, "Equal Gain Combining (EGC)": each node $j \in \{1, \dots, M+L\}$ knows the phase $\Phi_j$ of its channel toward the destination, $e^{-i\Phi_j} = h^{*}_{j,d,b}/|h_{j,d,b}|$. If the node i is selected, the transmission of each node belonging to $\text{Help1}_i$ is multiplied by $e^{-i\Phi_j}$ (coherent reception for the nodes belonging to $\text{Help1}_i$). Now, at a given time slot t, the destination selects the vector $s_t \in \{1, \dots, M\}^B$ following:
$$\hat{s}_t \in \operatorname*{argmax}_{s_t \in \{\bar{S}_{d,t-1}\}^{B}} \ \sum_{f=1}^{B} I_{t,f,[s_t]_f,d}. \qquad (6.15)$$
In the proposed strategy, the destination passes through all the possible vectors of sources of size B taken from the non-decoded source messages. One drawback of this strategy is that the number of vectors might be huge for a high number of sources and/or sub-bands (for large M and/or B). Accordingly, a lower-complexity choice is to select the vector of sources sequentially, sub-band by sub-band. At a given time slot t, for each sub-band b, we choose the source $[\hat{s}_t]_b$ following:
$$[\hat{s}_t]_b \in \operatorname*{argmax}_{s \in \bar{S}_{d,t-1}} \left( \sum_{f=1}^{b-1} I_{t,f,[\hat{s}_t]_f,d} + I_{t,b,s,d} \right). \qquad (6.16)$$
Note that $I_{t,f,[s_t]_f,d}$, with $[s_t]_f \in \{1, \dots, M\}$, $f \in \{1, \dots, B\}$, and $t \in \{0, \dots, T_{\max}\}$, depends on the number of sub-bands $[n_t]_j$ allocated to each node j belonging to $\text{Help}_i$ (i.e., having decoded source i), since the power per node is shared in frequency and cannot exceed the maximum power P. As a result, it needs to be recomputed whenever one of these numbers changes. The complete algorithms follow the same structure as Algorithms 14 and 15 (a sketch is given below).
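A minimal sketch of the sequential PR variant (6.16) might look as follows; `equiv_mutual_info(source, subband, n_t)` is a hypothetical helper returning $I_{t,b,s,d}$, i.e., the mutual information of the equivalent channel formed by all relaying nodes in $\text{Help}_s$ given the current sub-band counts.

```python
def pr_sequential_select(undecoded_sources, help_sets, B, equiv_mutual_info):
    """Choose, sub-band by sub-band, the source to be helped (eq. 6.16)."""
    s_t = []                                        # selected source per sub-band
    n_t = {}                                        # sub-bands granted to each relaying node

    for b in range(B):
        best_src, best_cum = None, float("-inf")
        for cand in undecoded_sources:
            trial = dict(n_t)
            for node in help_sets[cand]:            # every helper of cand is active on sub-band b
                trial[node] = trial.get(node, 0) + 1
            # previously allocated sub-bands are re-evaluated with the new power split
            cum = sum(equiv_mutual_info(s_t[f], f, trial) for f in range(b))
            cum += equiv_mutual_info(cand, b, trial)
            if cum > best_cum:
                best_src, best_cum = cand, cum
        s_t.append(best_src)
        for node in help_sets[best_src]:
            n_t[node] = n_t.get(node, 0) + 1
    return s_t, n_t
```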
As a comparison with SR in FDM (previous section), we notice that the generalization to PR is quite simple. Comparing equations (6.15)-(6.16) with equations (6.10)-(6.11), the difference lies only in choosing a vector of sources to be helped rather than a vector of relaying nodes to help. In other words, the difference is in the way the equivalent mutual information is computed, where, in the proposal with PR, we aim to exploit the DoF of all the available relaying nodes. Similarly, the algorithms of the two proposed strategies are quite similar to Algorithms 14 and 15, and the control exchange process is quite similar to the one presented in 6.3.1.4 and in Fig. 6.6 (omitted for brevity).
We have not yet carried out a numerical analysis of PR in the FDM regime. We expect that, similarly to the TDM case, the PR method will introduce a significant gain compared to the SR method. We also hope that using PR will reduce the gap between the upper bound and the two proposed SR strategies in the FDM regime. This analysis is to be investigated in the near future. Another direction would be to investigate PR with the MU encoding case. Although the intuition of PR followed the SU encoding case, an interesting problem is its generalization to MU encoding. That case is more complicated than PR with SU encoding and is also left for future work.
Further generalizations of our work to the FDM regime are possible. One interesting problem to investigate in the FDM orthogonal MAMRN is the cost of the control exchange process. Similarly to the work presented in chapter 4 section 2, future work could propose a comparable method in the FDM regime with the aim of reducing the overhead of the control exchange process. Another future work would be proposing a joint allocation that can be applied in the FDM orthogonal MAMRN. In fact, the generalization of the contributions of chapter 5 to FDM is straightforward. Nevertheless, in FDM, the dimension of the selected vector is higher (due to the B available sub-bands). This leads to a critical complexity problem that calls for novel proposals. In a similar manner, and as mentioned in chapter 5, an important future work is to propose a joint allocation that does not depend on the full CSI acquisition. Indeed, due to the need for the full CSI, the contributions of chapter 5 are limited to slowly changing radio conditions (e.g., low-mobility cases), and thus an interesting work is to propose a CDI-dependent joint allocation.

In chapter 3, we used the selection strategy proposed in the prior art [START_REF] Cerović | Efficient cooperative harq for multi-source multi-relay wireless networks[END_REF]. In chapter 4, we proposed novel selection strategies based on PR. All these strategies assume that the destination has only one receive antenna. In particular, when using PR with EGC, each relaying node multiplies its redundancy version by $e^{-i\Phi_j}$ (the conjugate of the channel divided by its norm). In the presence of multiple antennas at the receiver, EGC becomes inapplicable, and novel strategies are needed.
In a given orthogonal MAMRN configuration, the destination, which is a base station, can possibly be equipped with Ant > 1 receive antennas. In this configuration, co-phasing (the EGC technique) allows the coherent addition of the channels only for a single receive antenna among the Ant antennas. Thus, EGC does not make it possible to maximize the global SNR resulting from the reception over the Ant antennas.
From a control information exchange point of view between the destination and the nodes, the first two steps of the control exchange seen in section 1 of chapter 4 remain the same (check Fig. 4.2). First, the destination broadcasts an ACK/NACK bit to all the relaying nodes. Then each node transmits to the destination the clues of the sources that it can help. On this basis, the destination chooses the source $s_i$ to be helped for the current retransmission, then sends the vector $v_i \in \mathbb{C}^{|\text{Help}_i|}$, a vector of complex coefficients to be applied by each node having decoded $s_i$ in order to maximize the SNR at the destination (MRT), and the vector $w_i \in \text{Help}_i^{|\text{Help}_i|}$, where $|\text{Help}_i|$ is the cardinality of the set $\text{Help}_i$, which establishes the correspondence between the nodes of $\text{Help}_i$ and the coefficients of $v_i$ to apply. Thus, the node $[w_i]_j \in \text{Help}_i$ applies the complex coefficient $v_{i,j}$, for all $j \in \{1, \dots, |\text{Help}_i|\}$, to its transmission, based on the receipt of $w_i$ and $v_i$.
Example: suppose that the nodes $\text{Help}_i = \{1, 4, 5\}$ have decoded $s_i$; then $w_i = [1, 4, 5]^T$ with $[w_i]_1 = 1$, $[w_i]_2 = 4$, and $[w_i]_3 = 5$. The destination transmits the vectors $v_i$ and $w_i$, and based on their reception, the node $[w_i]_j$ transmits $m_{s_i}$ applying the coefficient $v_{i,j}$, for all $j \in \{1, 2, 3\}$, i.e.: node 1: $m_{s_i} \times v_{i,1}$; node 4: $m_{s_i} \times v_{i,2}$; node 5: $m_{s_i} \times v_{i,3}$.

As a reminder, the LA corresponds to the allocation, per source, of a bit rate defined by a modulation and a coding rate. Each retransmission associated with a source uses the modulation assigned during LA. On the basis of the reception of the vectors $w_i$ and $v_i$, each node $[w_i]_j$ transmits the symbol $[v_i]_j m_{s_i}$ for the considered round. It turns out that the model received at the destination can be written as
$$y = H_i v_i m_{s_i} + z_i \qquad (6.17)$$
where:
• $y \in \mathbb{C}^{\text{Ant}}$ represents the vector of Ant samples received by the Ant antennas.
• $H_i \in \mathbb{C}^{\text{Ant} \times |\text{Help}_i|}$ is the MIMO channel connecting the $\text{Help}_i$ nodes to the Ant antennas, with $[H_i]_{r,j}$ representing the channel between the node $[w_i]_j$ and antenna r of the destination, denoted thereafter $h_{r,[w_i]_j}$.
• z i is a vector of noise plus interference samples whose covariance is Cov i .
It is well known from the prior art [START_REF] Bahrami | Maximum ratio combining precoding for multi-antenna relay systems[END_REF] that the vector $v_i$ which maximizes the SNR is the eigenvector of $H_i^{H} \text{Cov}_i^{-1} H_i$ associated with the maximum eigenvalue $\lambda_i = \lambda_{\max}(H_i^{H} \text{Cov}_i^{-1} H_i)$, i.e., $H_i^{H} \text{Cov}_i^{-1} H_i v_i = \lambda_i v_i$ where $\lambda_i$ is the maximum eigenvalue. The maximized SNR is then $\text{SNR} = |\lambda_i|^2$. Then, the base station, knowing for every source $s_l$, $l \in \{1, \dots, M\}$, the nodes $\text{Help}_l$ having decoded this source and the channels $h_{r,[w_l]_j}$ for all $j \in \{1, \dots, |\text{Help}_l|\}$, chooses the source $s_{\hat{i}}$ to be helped for a given retransmission following:
$$\hat{i} = \operatorname*{argmax}_{l \in \{1, \dots, M\}} \lambda_{\max}(H_l^{H} \text{Cov}_l^{-1} H_l) \qquad (6.18)$$
and then sends the vectors $w_{\hat{i}}$ and $v_{\hat{i}}$. Note that, for complexity reasons, the covariance $\text{Cov}_i$ can be approximated by a multiple of the identity matrix. Moreover, the complex vector $v_i$ must in practice be quantized over a finite number of bits. The presented proposal has not yet been investigated numerically. In addition, the generalization to the FDM orthogonal MAMRN is not yet proposed and is considered as near-future work. A further interesting idea is to target the case of multiple antennas at the transmitter side and not only at the receiver side.
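A minimal numerical sketch of the destination-side computation in (6.17)-(6.18) might look as follows, assuming the covariance is approximated by a multiple of the identity as suggested above, and using the conjugate transpose for the complex channel matrices.

```python
import numpy as np

def mrt_vector(H, cov=None):
    """Precoding vector v maximizing the SNR: principal eigenvector of H^H Cov^{-1} H."""
    if cov is None:
        cov = np.eye(H.shape[0])                     # identity approximation of Cov_i
    A = H.conj().T @ np.linalg.inv(cov) @ H
    eigvals, eigvecs = np.linalg.eigh(A)             # A is Hermitian: real eigenvalues, ascending
    return eigvecs[:, -1], eigvals[-1]               # principal eigenvector, max eigenvalue

def select_source(channels):
    """Pick the source with the largest maximum eigenvalue (eq. 6.18).

    channels[s] = MIMO matrix H_s (Ant x |Help_s|) of the nodes having decoded source s.
    """
    lambdas = {s: mrt_vector(H)[1] for s, H in channels.items()}
    return max(lambdas, key=lambdas.get)
```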
Reconfigurable intelligent surface
One interesting method to improve the communication is to use Reconfigurable Intelligent Surface (RIS) elements. The RIS topic is widely investigated [START_REF] Basar | Wireless communications through reconfigurable intelligent surfaces[END_REF][START_REF] Huang | Reconfigurable intelligent surfaces for energy efficiency in wireless communication[END_REF] and is seen as a hot topic in wireless communication nowadays [START_REF] Jian | Reconfigurable intelligent surfaces for wireless communications: Overview of hardware designs, channel models, and estimation techniques[END_REF][START_REF] Cheng | Reconfigurable intelligent surfaces: Simplified-architecture transmitters-from theory to implementations[END_REF]. Citing [START_REF] Di Renzo | Reconfigurable intelligent surfaces vs. relaying: Differences, similarities, and performance comparison[END_REF], an RIS is an artificial surface, made of electromagnetic material, that is capable of customizing the propagation of the radio waves impinging upon it. The reason that we mention RIS here is that it shares some of the concepts and notions of relaying. In fact, several works compared relaying and RIS networks, shedding light on the limitations and the advantages of each of the cooperative networks. In [START_REF] Di Renzo | Reconfigurable intelligent surfaces vs. relaying: Differences, similarities, and performance comparison[END_REF], we see an interesting survey that compares the usage of RIS and relaying. The authors present the similarities and differences between the two notions of cooperative communication, following the different factors such as hardware complexity, spectral efficiency, power budget, and noise. The takeaway message of this work is that the RIS-aided transmission may outperform the relay-aided transmission if the size of the RIS is sufficiently large.
Thus, following this reference, as well as other references (check for example [START_REF] Gu | Performance comparisons between reconfigurable intelligent surface and full/half-duplex relays[END_REF][START_REF] Abdullah | Cooperative hybrid networks with active relays and riss for b5g: Applications, challenges, and research directions[END_REF][START_REF] Hussein | Reconfigurable intelligent surfaces-aided joint spatial division and multiplexing for mu-mimo systems[END_REF][START_REF] Hussein | Reconfigurable intelligent surface index modulation with signature constellations[END_REF]), we see that going to the RIS domain would be interesting, and we believe that several contributions proposed in this manuscript could be applicable to RIS networks. In addition, we see that the use of RIS with relaying networks is a DoF that would lead to significant improvements.
Finally, besides the mentioned perspectives, we list some further directions we could investigate. First, NOMA-based cooperative networks are interesting due to their theoretical capacity gains. Second, investigating non-centralized networks is another direction, with the aim of reducing the control exchanges between the different nodes. In addition, analyzing the orthogonal MAMRN with imperfect feedback and its effect on the performance is also of interest.
Chapter 7 Conclusion
Cooperative communication is an innovative concept that allows enhancing the efficiency of multi-terminal wireless networks. The focus of this thesis is on two important aspects of the TDM orthogonal MAMRN: the design of LA algorithms that are applicable in slowly and rapidly changing radio channel conditions, and the design of relaying node scheduling algorithms that exploit the multi-path diversity of the relaying nodes. Compared to the prior art, the proposed algorithms improve practicality, reduce complexity, increase efficiency, and reduce overhead.
In chapter 3, LA algorithms based on BRD methods are proposed. The presented algorithms tackle both the rate and the channel use allocation, and are applicable to both MU and SU encoding schemes. In order to reduce their complexity, the rate and the channel use ratio of each source are first determined under the "Genie-Aided" assumption, which consists in considering, for a given source, that all the other ones are known to the relaying nodes and the destination. In a second step, an iterative correction is applied. The resulting LA allocations offer a tractable complexity for the different scenarios investigated. In addition, a significant impact of user cooperation on the spectral efficiency is shown, as well as of exploiting the DoF of the time slot duration associated with each source during the first transmission phase. The performance of MU and SU encoding is similar, which confirms SU encoding as a practical low-complexity method. In the last section of this chapter, an intermediate LA strategy is proposed. The idea of this strategy is to outperform the SLA strategy by using FLA with partial CSI. The strategy builds its selection by exploiting the knowledge of the CSI of the direct links with the destination. Using this method, the performance outperforms the SLA strategy and approaches the FLA without requiring a heavy overhead.
In chapter 4, a novel selection strategy for the orthogonal MAMRN is proposed. Rather than selecting a single relaying node to send redundancies at a given retransmission time slot, the PR strategy allows several relaying nodes to send redundancies for a common source node selected to be helped. The proposed strategy outperforms the prior art (i.e., SR) by making use of the power budget available at each relaying node of the system. Furthermore, the overhead of the control exchange process is tackled: using an estimation of the number of retransmissions needed for every source to be correctly decoded, a low-complexity, low-overhead selection strategy which is applicable without a heavy control exchange process is presented. MC simulations show that the proposed algorithms outperform the prior art. Specifically, the PR strategy outperforms the SR strategy, and the low-overhead selection strategy significantly improves the performance by avoiding unnecessary control exchanges.
In chapter 5, an FLA joint strategy for the rate and relaying node allocation is studied. The proposed strategy leads to the highest possible spectral efficiency. It first passes through all the possible relaying node allocations and then, for each allocation, determines the highest rate allocation for the sources in the network. This makes it possible to choose the joint allocation which gives the highest spectral efficiency. The proposal solves two main issues seen in the prior art: 1- it removes the sub-optimality of solving the two problems separately (the rate allocation problem and the selection strategy problem), and 2- it removes the need for an exhaustive search over a discrete finite set of possible rates, which limited the practicality of the BRD proposed in the prior art.
Finally, in chapter 6, future directions and open challenges are presented. Within the learning framework, the MAB setting is seen as a possible direction to solve the rate allocation problem when no CDI is available at the scheduler, and the SUCB1 algorithm is an interesting solution to the exploration-exploitation trade-off of this online learning problem. On the other hand, the FDM-based orthogonal MAMRN is presented together with the needed outage events and utility metric, and two novel preliminary algorithms for relaying node scheduling are presented and shown to achieve good performance.

Appendix B

… upper bound exceeds the upper bound of the best super arm (and j might coincide with the best action, but that is fine). The fourth line comes from the following fact: if the upper bound of super arm i exceeds that of the optimal choice, it is also the case that the maximum upper bound for action i seen after the first m trials exceeds the minimum upper bound ever seen on the optimal super arm. But at RL round j we do not know how many times we have played the optimal super arm, nor do we even know how many times we have played super arm i (except that it is more than m). So we try all possibilities and look at the minimum and the maximum. We denote by $\overline{X}_{i,s}$ the random variable for the empirical mean after playing action i a total of s times, and by $\overline{X}^{*}_{s}$ the corresponding quantity for the optimal super arm. Rewriting everything in this notation, the final line holds due to the following: at each j for which the max is greater than the min, there is at least one pair $(s, s')$ for which the values of the quantities inside the max/min satisfy the inequality. And so, even worse, we can just count the number of pairs $(s, s')$ for which it happens; that is, we can expand the event above into a double sum which is at least as large. For the first summation, we extend the sum to go from j = 1 to ∞, which means we can replace j − 1 with j, and thus the final line is reached. Now, when the event $\overline{X}_{i,s} + a(s, j) \geq \overline{X}^{*}_{s'} + a(s', j)$ actually happens, one of the following must hold:
$$(1):\ \overline{X}^{*}_{s'} \leq \theta^{*} - a(s', j) \qquad (2):\ \overline{X}_{i,s} \geq \theta_i + a(s, j)$$
$$(3):\ \theta^{*} < \theta_i + 2a(s, j)$$
where $\theta^{*}$ and $\theta_i$ are the average rewards of the optimal super arm and of super arm i, respectively. Using the union bound, bounding the probabilities of (1) and (2) by $j^{-4}$, and making (3) always false by choosing $s \geq m > 8 \ln t / \Delta_i^2$, the bound on the expected number of suboptimal plays follows, which yields the regret bound of Theorem 6.2.1.
1 1 . 2 1 . 3
11213 Cooperative communication and Relaying Protocols (RP) . . . . . . . . . 1 1.1.1 Different cooperative networks . . . . . . . . . . . . . . . . . . . . 2 1.1.2 Different relaying protocols . . . . . . . . . . . . . . . . . . . . . . 3 Link adaptation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 1.2.1 Different link adaptation problems . . . . . . . . . . . . . . . . . 6 1.2.2 Link adaptation problems in learning and decentralized networks 8 Relaying nodes selection strategies . . . . . . . . . . . . . . . . . . . . . . 1.4 Motivation and scope of the thesis . . . . . . . . . . . . . . . . . . . . . . 1.5 Thesis contribution and outline . . . . . . . . . . . . . . . . . . . . . . .
2. 1 4 . 8
148 The MAMRN consists of a wireless network with multiple sources, multiple relays, and a single destination. . . . . . . . . . . . . . . . . . . . . 2.2 Transmission of a frame: initialization, first and second phases. A control exchange process is seen before each retransmission time slot. . . . . . . . 2.3 A toy example describing the process of the selection strategy used in the following chapter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.4 Control exchange process used in the following chapter. . . . . . . . . . . 3.1 Illustration of the "Genie-Aided" assumption. . . . . . . . . . . . . . . . 3.2 Proposed frame structure with variable packet size in the transmission phase. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.3 ASE that corresponds to the proposed link adaptation algorithm for different scenarios. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.4 The ratio of the ASE with variable α and fixed α with respect to γ. . . . 3.5 The ratio of the ASE with variable α and fixed α with respect to network size. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.6 ASE that corresponds to the different algorithms with a variable α. . . . 3.7 ASE of BRD approach under SLA QoS 1 for different number of MC samples. 3.8 The (average) number of BRD iterations with respect to sources/relays included in the system. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.9 FLA with partial CSI knowledge for orthogonal MAMRN. . . . . . . . . 3.10 The CDI update event steps. . . . . . . . . . . . . . . . . . . . . . . . . . 3.11 ASE that corresponds to the different link adaptation strategies. . . . . . 4.1 A toy example describing the process of the selection strategy proposed in this chapter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.2 Control exchange process corresponding to: the prior art (in blue) and the current proposal (in bold red). . . . . . . . . . . . . . . . . . . . . . . 4.3 ASE with symmetric configuration for SR and PR. . . . . . . . . . . . . 4.4 ASE with asymmetric configuration for SR and PR. . . . . . . . . . . . . 4.5 Gain ratio with asymmetric configuration with respect to the number of relays in the network. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.6 ASE with asymmetric configuration with EGC. . . . . . . . . . . . . . . 4.7 Average energy reduction when using the proposed EE strategy for different β values. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv Control exchange process in the proposed selection strategy: in black, we see the steps upon a decoding set update request; and in bold orange, we see the reduced steps when there is no decoding set update request. . . . 4.9 ASE with symmetric link and rate configuration with Γ = 0. . . . . . . . 4.10 ASE with symmetric link and rate configuration with Γ ̸ = 0. . . . . . . . 4.11 The ratio of the effective ASE of proposal with 0 requests and the effective ASE of the different benchmark selection strategies. . . . . . . . . . . . . 5.1 Control exchange process in the proposed joint allocation. . . . . . . . . 5.2 Transmission of a frame following the proposed joint allocation. . . . . . 5.3 ASE that corresponds to the proposed joint allocation and the BRD allocation with symmetric link configuration. . . . . . . . . . . . . . . . . . 5.4 ASE that corresponds to the proposed joint allocation with symmetric link configuration for different discrete sets of rates. . . 
. . . . . . . . . . 5.5 ASE of the proposal joint allocation (optimal and sequential) with respect to the size of the network. . . . . . . . . . . . . . . . . . . . . . . . . . . 6.1 Efficiency of the different MAB algorithms for γ = -4dB. . . . . . . . . . 6.2 Efficiency of the different MAB algorithms for γ = 6dB. . . . . . . . . . . 6.3 Efficiency of the different MAB algorithms for γ = 21dB. . . . . . . . . . 6.4 ASE vs γ after 500 Samples. . . . . . . . . . . . . . . . . . . . . . . . . . 6.5 Allocation of the resources between the sources and the relays in the transmission and the retransmission phases. . . . . . . . . . . . . . . . . 6.6 Control information exchange for the proposed selection strategies in the FDM orthogonal MAMRN. . . . . . . . . . . . . . . . . . . . . . . . . . 6.7 ASE with symmetric link and rate configuration for different FDM allocation strategies. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.8 ASE with asymmetric link and rate configuration for different FDM allocation strategies. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.9 Control exchange process corresponding to: multiple antennas (in blue) and the single antenna (in bold red). . . . . . . . . . . . . . . . . . . . . List of Tables 1.1 Different cooperative networks. . . . . . . . . . . . . . . . . . . . . . . . 1.2 Relaying protocols summary: retransmission type and protocol. . . . . . 1.3 Relaying protocols summary: retransmission method. . . . . . . . . . . . 1.4 Different link adaptation problems. . . . . . . . . . . . . . . . . . . . . . 1.5 Different link adaptation problems in learning and decentralized networks. 1.6 Scheduling literature review summary. M > 1: number of sources; L > 1: number of relays; OP = outage probability; BER = bit error rate, EC = effective capacity; RT = retransmission type; CE = control exchange design. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.7 Different selection strategies of the current and the prior arts. . . . . . .3.1 Different allocation methods with their corresponding complexities and performance. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.2 Description of the different link adaptation schemes. . . . . . . . . . . . .
Figure 2.1: The MAMRN consists of a wireless network with multiple sources, multiple relays, and a single destination.
Figure 2.2: Transmission of a frame: initialization, first and second phases. A control exchange process is seen before each retransmission time slot.
Figure 2.3: A toy example describing the process of the selection strategy used in the following chapter. Panels: (a) (3,2,1)-MAMRN with the decoding sets; (b) candidate nodes in this example; (c) selected node in this example; (d) candidate sources in this example.
Figure 2.4: Control exchange process used in the following chapter.
Figure 3.1: Illustration of the "Genie-Aided" assumption.
Figure 3.2: Proposed frame structure with variable packet size in the transmission phase.
Figure 3.3: ASE that corresponds to the proposed link adaptation algorithm for different scenarios.
Figure 3.4: The ratio of the ASE with variable α and fixed α with respect to γ.
Figure 3.5: The ratio of the ASE with variable α and fixed α with respect to network size.
Figure 3.6: ASE that corresponds to the different algorithms with a variable α (panel (c): SLA with QoS 2 target).
Figure 3.7: ASE of the BRD approach under SLA QoS 1 for different numbers of MC samples.
Figure 3.8: The (average) number of BRD iterations with respect to the sources/relays included in the system.
Figure 3.9: FLA with partial CSI knowledge for orthogonal MAMRN.
Figure 3.10: The CDI update event steps.
Algorithm excerpt (iterative rate allocation):
3: Rate initialization under the GA (or another) assumption: [R_1(0), . . . , R_M(0)] ← [R^GA_1, . . . , R^GA_M].
4: R_i(−1) ← 0 for all i ∈ {1, . . . , M} ▷ To force the loop to start
5: while (|R_i(t) − R_i(t − 1)| > 0) for some i ∈ {1, . . . , M} do
6: ...
Figure 3.11: ASE that corresponds to the different link adaptation strategies.
Figure 4.1: A toy example describing the process of the selection strategy proposed in this chapter. Panels: (a) (3,2,1)-MAMRN with the decoding sets; (b) candidate source nodes in this example; (c) selected source and relaying nodes in this example.
Figure 4.2: Control exchange process corresponding to: the prior art (in blue) and the current proposal (in bold red).
Figure 4.3: ASE with symmetric configuration for SR and PR.
Figure 4.4: ASE with asymmetric configuration for SR and PR.
Figure 4.5: Gain ratio with asymmetric configuration with respect to the number of relays in the network.
Figure 4.6: ASE with asymmetric configuration with EGC.
Figure 4.7: Average energy reduction when using the proposed EE strategy for different β values.
Figure 4.8: Control exchange process in the proposed selection strategy: in black, the steps upon a decoding set update request; in bold orange, the reduced steps when there is no decoding set update request.
Figure 4.9: ASE with symmetric link and rate configuration with Γ = 0.
Figure 4.10: ASE with symmetric link and rate configuration with Γ ≠ 0.
Figure 4.11: The ratio of the effective ASE of the proposal with 0 requests and the effective ASE of the different benchmark selection strategies.
Algorithm 9 Proposed joint allocation (excerpt).
1: MAX ← 0 ▷ Initialize the MAX value
2: for T_used = 0 till T_used = T_max do ▷ For all frame sizes
3: ...
4: Compute R(A): (R_i(A) for all i ∈ S) ▷ Compute the highest rates for A
5: if (MAX < η_frame(R(A), A)) then ▷ If we encounter a better selection
6: ...
11: Compute R(Â): (R_i(Â) for all i ∈ S) ▷ Choose the highest rate for the selected Â

Algorithm 10 Proposed sequential joint allocation (excerpt).
1: Â ← ϕ, MAX ← η_frame(R(ϕ), ϕ) ▷ Initialization
2: for t = 1 till t = T_max do ▷ For all frame sizes
3: b̂_t ← argmax_{b_t ∈ S} η_frame(R(B), B = [b̂_1, . . . , b̂_{t−1}, b_t]^T) ▷ Select the t-th source
4: if (MAX < η_frame(R(B̂), B̂)) then ▷ If we encounter a better selection
5: ...
Figure 5.1: Control exchange process in the proposed joint allocation.
Figure 5.2: Transmission of a frame following the proposed joint allocation.
Figure 5.3: ASE that corresponds to the proposed joint allocation and the BRD allocation with symmetric link configuration.
Figure 5.4: ASE that corresponds to the proposed joint allocation with symmetric link configuration for different discrete sets of rates.
Figure 5.5: ASE of the proposed joint allocation (optimal and sequential) with respect to the size of the network.
Figure 6.1: Efficiency of the different MAB algorithms for γ = -4 dB.
Figure 6.2: Efficiency of the different MAB algorithms for γ = 6 dB.
Figure 6.3: Efficiency of the different MAB algorithms for γ = 21 dB.
Figure 6.4: ASE vs γ after 500 samples.
Figure 6.5: Allocation of the resources between the sources and the relays in the transmission and the retransmission phases.
Figure 6.7: ASE with symmetric link and rate configuration for different FDM allocation strategies.
Figure 6.8: ASE with asymmetric link and rate configuration for different FDM allocation strategies.
• Case 1: each relaying node j ∈ {1, ..., M + L} does not know its channel h_{j,d,b} towards the destination; the contributions of the nodes in Help_i therefore add up non-coherently, and SNR_{i,t,b} is the resulting power scaled by P/N_0.
• Case 2: each relaying node pre-compensates the phase of its channel by applying h*_{j,d,b}/|h_{j,d,b}| (with i² = −1 denoting the imaginary unit), so that the contributions add up coherently at the destination.
• Case 3: assuming that the subset Help_i = Help1_i ∪ Help2_i breaks down into a subset Help1_i of nodes knowing their phase with the destination (sent by the destination) and a subset Help2_i not knowing it, SNR_i for i ∈ {1, ..., M} combines a coherent sum over Help1_i and a non-coherent sum over Help2_i, each contribution weighted by [n_t]_j and the total scaled by P/N_0 (equation (6.14)).
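To make the distinction between the cases concrete, the short NumPy sketch below compares non-coherent, coherent (phase pre-compensated) and mixed combining of a few helper channels. The power and noise values are arbitrary and the expressions are generic combining formulas, not the exact equation (6.14) of the manuscript.

```python
import numpy as np

rng = np.random.default_rng(4)
P, N0, n_helpers = 1.0, 1.0, 4

# Toy Rayleigh channels from the helping nodes to the destination.
h = (rng.standard_normal(n_helpers) + 1j * rng.standard_normal(n_helpers)) / np.sqrt(2.0)

# Case 1: no phase knowledge, the complex contributions add up non-coherently.
snr_noncoherent = P * np.abs(h.sum()) ** 2 / N0

# Case 2: each node pre-compensates its phase (conj(h)/|h|), contributions add coherently.
snr_coherent = P * np.abs(h).sum() ** 2 / N0

# Case 3: only the first two nodes know their phase, the others do not.
snr_mixed = P * np.abs(np.abs(h[:2]).sum() + h[2:].sum()) ** 2 / N0

print(f"non-coherent: {snr_noncoherent:.2f}, coherent: {snr_coherent:.2f}, mixed: {snr_mixed:.2f}")
```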
6.4.2 Selection strategies for multiple antenna receivers
6.4.2.1 Control exchange
Figure 6.9: Control exchange process corresponding to: multiple antennas (in blue) and the single antenna (in bold red).
node 4: m_{s_i} × v_{i,4}; node 5: m_{s_i} × v_{i,5}.
6.4.2.2 Selection strategy: Maximum Ratio Transmission (MRT)
2 System Model
2.1 Chapter summary
2.2 System description
2.3 Performance metric and outage events
2.3.1 Average spectral efficiency
2.3.2 Outage events
3 Dynamic Rate and Channel Use Allocation Algorithms
3.1 Chapter summary
3.2 Fixed time slot duration
3.2.1 Starting point using the "Genie-Aided" assumption
3.2.2 Sequential Best-Response Dynamic solution
3.2.3 Convergence and complexity
3.3 Variable time slot duration
3.3.1 Novel system model
3.3.2 Performance metric and outage events for variable channel uses
3.3.3 Rate and channel use allocation
3.4 Numerical results
3.5 FLA with partial CSI
3.5.1 Framework
3.5.2 Utility of FLA with Partial CSI and the proposed algorithm
3.5.3 Numerical result
3.6 Conclusion
Table 1.1: Different cooperative networks.
Table 1.2: Relaying protocols summary: retransmission type and protocol.
Ali Al Khansa, Raphael Visoz, Yezekael Hayel, Samson Lasaulce, and Rasha Alkhansa. Dynamic Rate and Channel Use Allocation for Cooperative Wireless Networks. In submission to EURASIP Journal on Wireless Communications and Networking.
• Ali Al Khansa, Stefan Cerovic, Raphael Visoz, Yezekael Hayel, and Samson Lasaulce (2021, September). Slow-link adaptation algorithm for multi-source multi-relay wireless networks using best-response dynamics. In International Conference on Network Games, Control and Optimization. Springer, Cham.
• Ali Al Khansa, Raphael Visoz, Yezekael Hayel, Samson Lasaulce, and Rasha Alkhansa (2021, December). Fast Link Adaptation with Partial Channel State Information for Orthogonal Multiple Access Multiple Relay Channel (OMAMRC). In 2021 IEEE 3rd International Multidisciplinary Conference on Engineering Technology (IMCET) (pp. 11-16).
and the following patent filings:
• Ali Al Khansa, Raphael Visoz, Stefan Cerovic, "Procédé et système OMAMRC de transmission avec variation du nombre d'utilisations du canal", Application No: FR2004643. Filing date: 12/05/2020.
• Ali Al Khansa, Raphael Visoz, "Procédé de réception d'au moins une trame de données dans un système OMAMRC, destination, programme d'ordinateur et système correspondants", Application No: FR2014132. Filing date: 24/12/2020.
Ali Al Khansa, Raphael Visoz, Yezekael Hayel, Samson Lasaulce, and Rasha Alkhansa. Centralized Scheduling for MAMRN with Optimized Control Channel Design. In submission to Annals of Telecommunications.
and the following patent filings:
• Ali Al Khansa, Raphael Visoz, "Procédé de retransmission coopérative dans un système OMAMRC", Application No: FR2206422. Filing date: 28/06/2022.
• Ali Al Khansa, Raphael Visoz, "Procédé de transmission et système OMAMRC avec une stratégie de sélection lors de retransmissions tenant compte du débit des sources et d'un unique échange de contrôle", Application No: FR2206443. Filing date: 28/07/2022.
• Ali Al Khansa, Raphael Visoz, Yezekael Hayel, Samson Lasaulce, and Rasha Alkhansa (2022, September). Parallel Retransmissions in Orthogonal Multiple Access Multiple Relay Networks. In International Workshop on Resource Allocation and Cooperation in Wireless Networks (RAWNET), 2022.
• Ali Al Khansa, Raphael Visoz, "Procédé de transmission et système OMAMRC avec une stratégie de sélection lors de retransmissions tenant compte du débit des sources et d'un ou plusieurs échanges de contrôle", Application No: FR2206446. Filing date: 28/07/2022.
• Ali Al Khansa, Raphael Visoz, Yezekael Hayel, and Samson Lasaulce. Joint Rate and Relaying Nodes Allocation for Fast Link Adaptation with Full Channel State Information. Accepted at the 5th International Conference on Advanced Communication Technologies and Networking (CommNet 2022).
and the following patent filings:
• Ali Al Khansa, Raphael Visoz, "Joint Rate and Relaying Nodes allocation for Fast Link Adaptation with Full Channel State Information", Application No: FR2210608. Filing date: 14/10/2022.
• Ali Al Khansa, Raphael Visoz, "Stratégie de sélection optimale à l'aide d'un échange CSI pour l'OMAMRC", Application No: FR2214095. Filing date: 21/12/2022.
• Ali Al Khansa, Raphael Visoz, "Stratégie de sélection optimale à l'aide d'un échange CSI conditionnel pour l'OMAMRC", Application No: FR2214097. Filing date: 21/12/2022.
• Ali Al Khansa, Raphael Visoz, Yezekael Hayel, and Samson Lasaulce (2021, May). Resource allocation for multi-source multi-relay wireless networks: A multi-armed bandit approach. In International Symposium on Ubiquitous Networking. Springer, Cham.
Chapter 2
System Model
2.1 Chapter summary
• Ali Al Khansa, Raphael Visoz, Yezekael Hayel, Samson Lasaulce, and Rasha Alkhansa (2022, October). Centralized Scheduling for Frequency Domain Orthogonal Multiple Access Multiple Relay Network. In 2022 27th Asia-Pacific Conference on Communications (APCC). IEEE.
and the following patent filings:
• Ali Al Khansa, Raphael Visoz, "Procédé et système OMAMRC avec transmission FDM", Application No: FR2006623. Filing date: 24/06/2020.
• Ali Al Khansa, Raphael Visoz, "OMAMRC retransmission par source avec MRT", Application No: FR2205907. Filing date: 16/06/2022.
• Ali Al Khansa, Raphael Visoz, "Procédé et système OMAMRC avec transmission FDM et coopérations multiples par sous-bande", Application No: FR2210584. Filing date: 14/10/2022.
Table 3.1: Different allocation methods with their corresponding complexities and performance.
Allocation method | Complexity | Properties
Exhaustive with variable α | Ω((n_MCS × n_CUR)^M) | Optimal but complex: exponential with M, linear with n_MCS and n_CUR
Exhaustive with fixed α | Ω((n_MCS)^M) | Optimal (for fixed α) but complex: exponential with M, linear with n_MCS
BRD with variable α | Ω(n_MCS × n_CUR × M × n_BRD) | Approaches the exhaustive method with lower complexity (linear with M)
BRD with fixed α | Ω(n_MCS × M × n_BRD) | Approaches the exhaustive method with lower complexity (linear with M)
GA with variable α | Ω(n_MCS × n_CUR × M) | Low complexity but with mediocre performance
GA with fixed α | Ω(n_MCS × M) | Low complexity but with mediocre performance
Fixed allocation | Ω(1) | Unacceptable performance
Table 3.2: Description of the different link adaptation schemes.
SLA | LA occurrences: once the CDI of any link changes. | Channel information: based on the CDI of the links, the allocation is determined using MC simulations over the probability distribution of the links.
FLA | LA occurrences: once the CSI of any link changes. | Channel information: based on the exact CSI of the links, the allocation is performed directly.
FLA with partial CSI | LA occurrences: once the CSI of any direct link or the CDI of any indirect link needs to be updated. | Channel information: based on the CSI of the direct links and the CDI of the indirect links, over which MC computation is needed.
16: Compute A using equation (4.10) ▷ Recompute A
17: end if
18: end if
19: ...

Rate search by bisection (excerpt):
1: Initialize the boundaries Left and Right. ▷ Initialize the boundaries
2: while (Right − Left > ϵ) do ▷ While the window size > ϵ
3: R_i ← (Left + Right)/2 ▷ Set the candidate rate to the midpoint
4: if (O_{i,T_used}(A) = 0) then ▷ If there is no outage
5: Left ← R_i ▷ Shift the lower bound
6: else ▷ If there is outage
7: Right ← R_i ▷ Shift the upper bound
8: end if
9: end while
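For illustration, the bisection above can be sketched in Python as follows. The predicate in_outage(rate) stands in for the outage indicator O_{i,T_used}(A), and the function and tolerance names are illustrative rather than taken from the manuscript.

```python
def highest_rate_without_outage(in_outage, r_max, eps=1e-3):
    """Bisection search for the largest rate R in [0, r_max] such that
    in_outage(R) is False, assuming outage is monotone in the rate."""
    left, right = 0.0, r_max          # initialize the boundaries
    while right - left > eps:         # while the window size exceeds eps
        r = (left + right) / 2.0      # candidate rate at the midpoint
        if not in_outage(r):          # no outage: the rate can be increased
            left = r                  # shift the lower bound
        else:                         # outage: the rate must be decreased
            right = r                 # shift the upper bound
    return left                       # highest rate found without outage


# Toy usage: outage occurs when the rate exceeds the channel mutual information.
if __name__ == "__main__":
    import math
    snr = 10.0                                   # linear SNR
    capacity = math.log2(1.0 + snr)              # mutual information bound
    best = highest_rate_without_outage(lambda r: r > capacity, r_max=10.0)
    print(f"capacity = {capacity:.3f}, bisection result = {best:.3f}")
```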
1: Initialization: For t = 1, . . . , n_MCS^M, for the t-th RL round, select the super arm t (play each super arm once).
2: UCB: For t ≥ n_MCS^M + 1, for the t-th RL round, select the super arm i which maximizes Rew_i(t) + √(2 ln t / nb_i(t)), with Rew_i(t) representing the average reward obtained from super arm i up to RL round t, and √(2 ln t / nb_i(t)) the upper confidence term.
The selection criterion is therefore based on two terms summed together: the empirical average reward Rew_i(t) and the exploration term √(2 ln t / nb_i(t)).
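A compact Python sketch of the UCB1 rule described above is given below. The reward function is a generic stand-in (here a Bernoulli draw) for the per-frame efficiency reward used in the chapter.

```python
import math
import random

def ucb1(n_arms, n_rounds, reward):
    """UCB1: play each arm once, then pick the arm maximizing
    mean reward + sqrt(2 ln t / n_i)."""
    counts = [0] * n_arms          # nb_i: number of times arm i was played
    means = [0.0] * n_arms         # empirical mean reward of arm i
    for t in range(1, n_rounds + 1):
        if t <= n_arms:
            arm = t - 1            # initialization: play each arm once
        else:
            arm = max(range(n_arms),
                      key=lambda i: means[i] + math.sqrt(2.0 * math.log(t) / counts[i]))
        r = reward(arm)
        counts[arm] += 1
        means[arm] += (r - means[arm]) / counts[arm]   # incremental mean update
    return counts, means

# Toy usage: Bernoulli rewards with unknown success probabilities.
if __name__ == "__main__":
    probs = [0.2, 0.5, 0.8]
    counts, means = ucb1(len(probs), 5000,
                         lambda i: 1.0 if random.random() < probs[i] else 0.0)
    print(counts, [round(m, 2) for m in means])
```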
The proposal with zero requests is robust and optimal: it achieves the upper bound when the overhead is not taken into account, and it outperforms all the other strategies once the overhead is taken into account.
The strategy of [START_REF] Cerović | Efficient cooperative harq for multi-source multi-relay wireless networks[END_REF] outperforms that of [START_REF] Mohamad | Cooperative incremental redundancy hybrid automatic repeat request strategies for multi-source multi-relay wireless networks[END_REF], which confirms that when using SR it is better to choose the relaying node having the highest mutual information rather than to minimize the common outage probability.
Although PR [START_REF] Al Khansa | Parallel retransmissions in orthogonal multiple access multiple relay networks[END_REF] outperforms the other strategies (i.e., the strategies of [START_REF] Cerović | Efficient cooperative harq for multi-source multi-relay wireless networks[END_REF][START_REF] Mohamad | Cooperative incremental redundancy hybrid automatic repeat request strategies for multi-source multi-relay wireless networks[END_REF]), it performs poorly once the overhead of its control exchange process is considered. This third observation is explained by the fact that, although PR is a promising strategy, it requires a heavier control exchange process, which degrades its performance.
Acknowledgments
I would like to express my deepest appreciation to all those who provided me with the possibility to complete this study.
List of Publications
Journal Papers: [START_REF] Al Khansa | Dynamic rate and channel use allocation for cooperative wireless networks[END_REF][START_REF] Al Khansa | Centralized scheduling for mamrn with optimized control channel design[END_REF]
List of Algorithms
Numerical results
In this section, we validate the learning algorithms with an orthogonal (3,3,1)-MAMRN, using 4 possible retransmissions in the second phase (T_max = 4) and α = 0.5. We assume independent Gaussian distributed channel inputs (with zero mean and unit variance), with I_{a,b} = log2(1 + |h_{a,b}|²). Note that other formulas could also be used for computing I_{a,b}, without any impact on the basic concepts of this work. There are several factors to investigate: link configuration, SNR levels, and different MCS families.
For brevity, and after carefully checking different possible scenarios, we present the results of the symmetric link configuration (the SNR of all channel links is identical). Three different SNR levels are considered, namely SNR = {-4, 6, 21} dB. The interest of these values is that the optimal rate allocation (the Oracle allocation) is different at each SNR level. For the discrete MCS family whose rates belong to the set {0.5, 1, 1.5, 2, 2.5, 3, 3.5} [bits per channel use], the Oracle rate allocation of the sources {s_1, s_2, s_3} is {1, 1, 1}, {3, 3, 2.5}, and {3.5, 3.5, 3.5} for the three SNR values, respectively.
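For illustration, a minimal Python/NumPy sketch of the kind of Monte Carlo evaluation used here is given below: it draws Rayleigh-faded channels, computes I = log2(1 + SNR·|h|²), and picks, from the discrete MCS set above, the rate that maximizes the average goodput of a single direct link. The single-link outage rule is a deliberate simplification of the full (3,3,1)-MAMRN analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
rates = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5])   # discrete MCS set [bits per channel use]

def best_rate(snr_db, n_samples=100_000):
    """Pick the rate maximizing R * Pr(no outage) for a Rayleigh link."""
    snr = 10.0 ** (snr_db / 10.0)
    h = (rng.standard_normal(n_samples) + 1j * rng.standard_normal(n_samples)) / np.sqrt(2.0)
    mutual_info = np.log2(1.0 + snr * np.abs(h) ** 2)    # I = log2(1 + SNR*|h|^2)
    goodput = [r * np.mean(mutual_info >= r) for r in rates]
    return rates[int(np.argmax(goodput))], max(goodput)

for snr_db in (-4, 6, 21):
    r, g = best_rate(snr_db)
    print(f"SNR = {snr_db:3d} dB -> best rate {r} (goodput {g:.2f} bits/use)")
```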
In Figures 6.1, 6.2, and 6.3, we show the regret analysis for the three SNR levels. For clarity, the regret is presented as a percentage loss with respect to the optimal efficiency; in other words, we compare the efficiency of the algorithms as the ratio between the rewards of the algorithms and those of the Oracle. In Fig. 6.1, for SNR = -4 dB, the three algorithms exhibit similar regret levels (up to a 25% loss after 1000 samples). In Fig. 6.2, for SNR = 6 dB, SUCB1 brings a large improvement (reaching 90% of the optimal reward), whereas UCB1 and AUCB1 behave as in the case γ = -4 dB. In Fig. 6.3, the same trend is observed for SNR = 21 dB: SUCB1 outperforms the other algorithms, while AUCB1 is slightly better than UCB1. Finally, Fig. 6.4 shows the ASE for SNR levels between -5 and 15 dB after 500 samples (larger numbers of samples were investigated and gave the same results). The proposed SUCB1 algorithm approaches the upper bound (the Oracle) while outperforming UCB1 and AUCB1.
To sum up, we investigated in this section the LA of OMAMRC using an online learning framework (MAB), after formulating the system model as a MAB problem.
Regarding the benchmark selection strategies: although the individual outage probability is bounded in the strategy that minimizes the common outage probability (since Pr(O_{s,T_max}) ≤ Pr(common outage)), it is not minimized. In [START_REF] Cerović | Efficient cooperative harq for multi-source multi-relay wireless networks[END_REF], the selection strategy is based on choosing the relaying node having the best channel with the destination at each time slot. Here (i.e., in the FDM regime), as we have several sub-bands, the selection following the strategy of [START_REF] Cerović | Efficient cooperative harq for multi-source multi-relay wireless networks[END_REF] is repeated in each sub-band. The drawback of this method is that it ignores the fact that the allocation in one sub-band may affect the allocations in the other sub-bands.
Numerical results
In this section, we validate the proposed selection strategies using MC simulations. We consider a (3,3,1)-MAMRN scenario with 3 sub-bands per time slot. We set T_max to 1 in line with the goal of reducing latency, although the results are similar for higher T_max. The channel inputs are assumed independent and Gaussian distributed with zero mean and unit variance, with I_{a,b,f} denoting the mutual information between the transmitting node a and the receiving node b on a given sub-band f, where [n_t]_a is the number of sub-bands allocated to the transmitting node a at time slot t. Note that other channel inputs might be considered without changing the conclusions of this work. We consider two link configuration scenarios: symmetric and asymmetric.
Appendix
Appendix A: Proof of theorem 4.3.1 (proof of the derived upper bound). First, since x^a_i(t) ≤ x^b_i(t) for every t, it is sufficient to prove the theorem for x^a_i(t). Second, we define ỹ^a_i(t) as:
The only difference between ỹ^a_i(t) and y^a_i(t) is that ỹ^a_i(t) subtracts J_{i,d}(l) instead of J_{i,d}(t − X(t)). Since J_{i,d}(l) ≥ J_{i,d}(l − X(t)) for all l ∈ {t − X(t), . . . , t − 1}, we deduce that ỹ^a_i(t) ≥ y^a_i(t). Now, ỹ^a_i(t) can be rewritten as:
In order to be sure that we have no outage for source i, we know from the outage event (equation (4.6)) that the following inequality should be satisfied:
Thus, at a given retransmission time slot t, if a source node is selected x_i(t) times in the following time slots, we can write the chain of inequalities (4)-(8) on the sum taken over l = t, . . . , T_max:
Equation (4) is obtained using the definition of the ceiling function. Equation (5) follows from the fact that ỹ^a_i(t) ≥ y^a_i(t) and from the definition of ỹ^a_i(t). Equation (6) is obtained after a simple reorganization. Equation (7) uses the fact that J_{i,d}(l) ≥ J_{i,d}(t) for all t ≤ l. Equation (8) is obtained by merging the two summations into a single one, and leads to the needed inequality. Finally, we recall that, as we proved the theorem for x^a_i(t), the result directly holds for x^b_i(t) as well.
Appendix B: Proof of theorem 6.2.1 (proof of the expected regret of the UCB1 algorithm)
The proof to be presented can be found in [START_REF] Auer | Finite-time analysis of the multiarmed bandit problem[END_REF] and is given here for completeness. We start by recalling the Chernoff-Hoeffding inequality. Let X_1, . . . , X_j be i.i.d. random variables bounded by the interval [0, 1]. With the sample mean X̄_j = (1/j) Σ_{q=1}^{j} X_q, for a > 0 we have Pr(X̄_j ≥ E[X] + a) ≤ e^{−2ja²}.
Now, for a given super arm i, we set the target as a(i, j) = √(2 ln j / nb_i), where nb_i is the number of times super arm i has been selected and j is the total number of plays so far. As a result, we reach Pr(X̄ > E[X] + a(i, j)) ≤ j^{−4},
Next, we upper bound nb_i. Recall that the random variable Dec(j) takes as its value the index of the super arm selected, and [Dec(j) = i] is the indicator function which equals 1 when the arm selected at RL round j is the super arm i (and zero otherwise). The second line means that we are picking super arm i in RL round j and that we have already played this arm at least m times. The third line comes from the fact that picking super arm i in RL round j means that the upper confidence bound of super arm i exceeds the upper confidence bound of every other super arm. In particular, this means its |
03556000 | en | [
"scco.neur"
] | 2024/03/04 16:41:22 | 2021 | https://hal.science/hal-03556000/file/S246845112100026X.pdf | C.-G Bénar
J Velmurugan
V J López-Madrona
F Pizzo
J M Badier
Detection and localization of deep sources in magnetoencephalography: a review
Magnetoencephalography (MEG) is a non-invasive technique for exploring the spatio-temporal dynamics of brain networks with high temporal resolution as well as good spatial capacities thanks to relative insensitivity to low skull conductivity. It is a tool of choice both for neuroscience research and for clinical applications, and is used routinely in epilepsy for localizing the sources of epileptiform discharges.
Still, the capacity of capturing deep sources, such as hippocampus and amygdala that are key players in memory and emotion, has been for long a topic of debate. Thus, the fine characterization of deep structures has been up to now the reserved domain of intracerebral EEG, performed during presurgical evaluation of patients with epilepsy. We review here the evidence for the detection of deep sources in MEG, with emphasis on simultaneous recordings of MEG and intracerebral EEG.
In this review, we discuss how simultaneously recording depth and surface signals enables to investigate the correlation between MEG and invasive signals actually recorded in deep structures. We also discuss new venues in analysis and recording methods for reconstructing deep activities from MEG.
Introduction
Magnetoencephalography (MEG) measures neuronal activity in a non-invasive manner, with a temporal resolution of the order of the millisecond and weak dependence n the electrical conductivity of the underlying structures. This gives MEG a very good spatial discrimination power [START_REF] Gavaret | Meg and eeg sensitivity in a case of medial occipital epilepsy[END_REF], thanks in particular to its low sensitivity to skull conductivity. MEG is widely used to localize the brain regions underlying cognitive tasks, during functional mapping of eloquent and language regions before surgery, and is one of the key pre-surgical evaluation tools in patients with drug-resistant epilepsy [START_REF] Jayabal | Role of magnetoencephalography and stereo-electroencephalography in the presurgical evaluation in patients with drug-resistant epilepsy[END_REF]. Among all regions that can be investigated with MEG, deep brain structures such as hippocampus, amygdala and thalamus are of primary interest. Indeed, these structures are key players in memory, emotion, cognition, and in pathologies such as neurodegenerative disorders and epilepsy [e.g; [START_REF] Bartolomei | Neural networks involving the medial temporal structures in temporal lobe epilepsy[END_REF][START_REF] Barbeau | Spatio temporal dynamics of face recognition[END_REF]. They have however long been considered as difficult if not impossible to see from EEG/MEG surface sensors as signals attenuate rapidly as a function of the distance to sensors. Further attenuation arises from the fact that deep structures such as hippocampus, amygdala and thalamus are considered as the epitome of "closed fields" generators, i.e., producing little field outside the structure itself [START_REF] Lopes Da Silva | Biophysical aspects of eeg and magnetoencephalogram generation[END_REF]. To make things even more difficult, activity in deep "closed fields" structures propagate very fast to the nearby (neocortical) structures [START_REF] Alarcon | Intracerebral propagation of interictal activity in partial epilepsy: Implications for source localisation[END_REF], and what is visible on the surface signals could be in fact propagated activity or activity generated solely in lateral cortices [START_REF] Wennberg | Eeg and meg in mesial temporal lobe epilepsy: Where do the spikes really come from?[END_REF]. Finally, deep activity can be overshadowed by that of more superficial concurrent generators that have much higher signal to noise ratio at the surface [START_REF] Koessler | Catching the invisible: Mesial temporal source contribution to simultaneous eeg and seeg recordings[END_REF]. See Figure 1 for a schematic representation of these configurations.
A very important venue for better understanding deep activity arises from intracerebral recordings performed clinically during presurgical evaluation of epilepsy (stereo-electroencephalography, SEEG [START_REF] Talairach | Stereotaxic approach to epilepsy : Methodology of anatomofunctional stereotaxic investigations[END_REF]) as well as during deep brain stimulation. About 15 to 18 SEEG depth electrodes can be implanted, each electrode having between 5 and 15 contacts. They permit to obtain signals with exquisite spatial specificity, and to target directly deep brain structures. They are orthogonally or obliquely implanted through a small burr hole in the scalp, on sole clinical criteria. The number of electrodes required to insert depends on the underlying anatomo-electro-clinical hypotheses and on the hospital [START_REF] Balanescu | A personalized stereotactic fixture for implantation of depth electrodes in stereoelectroencephalography[END_REF][START_REF] Cossu | Stereoelectroencephalography in the presurgical evaluation of focal epilepsy in infancy and early childhood: Clinical article[END_REF][START_REF] Bartolomei | Defining epileptogenic networks: Contribution of seeg and signal analysis[END_REF][START_REF] Kahane | The bancaud and talairach view on the epileptogenic zone: A working hypothesis[END_REF]. Some centers use subdural grids with the advantage of higher spatial sampling, as they cover large areas of the brain. However, requirement of craniotomy, impracticality of bilateral monitoring, and inability to record from deeper cortical or subcortical sources are limitations of this approach.
Intracerebral recordings give a formidable opportunity for validating non-invasive results [START_REF] Bénar | Eeg-fmri of epileptic spikes: Concordance with eeg source localization and intracranial eeg[END_REF] and for ensuring the deep origin of signals. Importantly, for spontaneous activity such as interictal epileptic spikes, it is necessary to record simultaneously surface and deep signals in order to ensure that the exact same activity is captured at the two levels [START_REF] Koessler | Catching the invisible: Mesial temporal source contribution to simultaneous eeg and seeg recordings[END_REF][START_REF] Merlet | Reliability of dipole models of epileptic spikes[END_REF]. Indeed, the occurrence of epileptic discharges is unpredictable, and the frequency oand spatial distribution may flucturate depending on brain state and level of medication. Simultaneous recordings are also crucial if one wants to measure trial-to-trial coupling between deep structures and surface signals [START_REF] Williams | Dopamine-dependent changes in the functional connectivity between basal ganglia and cerebral cortex in humans[END_REF][START_REF] Dubarry | Simultaneous recording of meg, eeg and intracerebral eeg during visual stimulation: From feasibility to single-trial analysis[END_REF] or to trigger surface analysis based on activities marked in depth [START_REF] Gavaret | Simultaneous seeg-megeeg recordings overcome the seeg limited spatial sampling[END_REF][START_REF] Pizzo | Deep brain activities can be detected with magnetoencephalography[END_REF]. We review here studies that have performed simultaneous recordings of both deep structures and surface MEG signals, as well as new venues in analysis and recording methods for reconstructing deep activities from MEG. Previous reviews focussing on the hippocampus can be found in [START_REF] Ruzich | Characterizing hippocampal dynamics with meg: A systematic review and evidence-based guidelines[END_REF][START_REF] Pu | Non-invasive investigation of human hippocampal rhythms using magnetoencephalography: A review[END_REF].
Simultaneous MEG and intracerebral recordings
The first combined use of intracerebral electrodes and EEG/MEG was done by Cohen and Cuffin, who created artificial dipoles in the head by injecting current between consecutive electrodes [START_REF] Cohen | Meg versus eeg localization test using implanted sources in the human brain[END_REF]. They did not find a significantly better localization error for MEG than EEG, but it is to be noted that the injected dipoles were radial, which is the most unfavourable condition for MEG. Simultaneous recordings of MEG with intracranial signals were later performed with electrocorticography in patients during presurgical evaluation of epilepsy [START_REF] Mikuni | Simultaneous recording of epileptiform discharges by meg and subdural electrodes in temporal lobe epilepsy[END_REF][START_REF] Sutherling | Dipole localization of human induced focal afterdischarge seizure in simultaneous magnetoencephalography and electrocorticography[END_REF][START_REF] Shigeto | Feasibility and limitations of magnetoencephalographic detection of epileptic discharges: Simultaneous recording of magnetic fields and electrocorticography[END_REF][START_REF] Oishi | Epileptic spikes: Magnetoencephalography versus simultaneous electrocorticography[END_REF][START_REF] Rampp | Meg correlates of epileptic high gamma oscillations in invasive eeg[END_REF].
In order to confidently determine the deep origin of signals, it is useful to use the intracerebral technique that directly records within deep structures. The first simultaneous MEG and intracerebral recordings were performed at Centro Médico Teknon in Barcelona [START_REF] Santiuste | Simultaneous magnetoencephalography and intracranial eeg registration: Technical and clinical aspects[END_REF] and at the LENA laboratory at Pitié-Salpêtrière in Paris [START_REF] Godet | Concordance between distributed meg source localization dynamic, and simultaneous ieeg study of epileptic spikes[END_REF][START_REF] Dalal | Simultaneous meg and intracranial eeg recordings during attentive reading[END_REF]. Santiuste and colleagues recorded from a single depth electrode and showed that mesial spikes, despite their lower amplitude, can be detected on MEG (25 to 60% for deep spikes versus 95% for neocortical spikes); they considered signals to be concordant if they presented a delay less than 25 ms [START_REF] Santiuste | Simultaneous magnetoencephalography and intracranial eeg registration: Technical and clinical aspects[END_REF]. In another study, Dalal and colleagues showed a map of zero-lag correlation between the hippocampus and the MEG signals with a dipolar topography that strongly suggests a deep source [START_REF] Dalal | Simultaneous meg-intracranial eeg: New insights into the ability of meg to capture oscillatory modulations in the neocortex and the hippocampus[END_REF], even though no actual source reconstruction was performed on these signals. The superimposition of the intracerebral signal in the hippocampus and the most correlated MEG sensor shows clearly a theta oscillation with zero phase difference. This is in favour of a direct recording of hippocampus on MEG, even if zero-lag phase synchrony cannot be ruled out (see for example [START_REF] Gollo | Theta band zero-lag longrange cortical synchronization via hippocampal dynamical relaying[END_REF], noting however that in this latter study the hippocampus is the third-party driver leading to zero phase synchrony between neocortical structures).
We have shown in the MEG Centre of Marseille that it is possible to push the recording capacities in order to obtain signals from three modalities (MEG-EEG-SEEG) at the single trial level [START_REF] Dubarry | Simultaneous recording of meg, eeg and intracerebral eeg during visual stimulation: From feasibility to single-trial analysis[END_REF] and with high SEEG sampling [START_REF] Badier | Technical solutions for simultaneous meg and seeg recordings: Towards routine clinical use[END_REF]. Thanks to the high sampling (containing both mesial structures and surrounding neocortex) and to independent component analysis (ICA), we investigated the differential implication of subcortical structures and neocortex [START_REF] Pizzo | Deep brain activities can be detected with magnetoencephalography[END_REF]. Specifically, ICA allowed separating large-scale networks that correspond to propagated activity towards the neocortex from actual activity from deep structures, this latter being overshadowed by the former at the sensor level. Several tests were performed, including single-trial correlation between ICA components and SEEG, as well as source localization on the ICA components. Taken together, these results suggest that signals from deep structures can be captured on MEG -at least for epileptic activity. We are currently working on detectability of deep mesial structures in MEG within the context of a memory paradigm [START_REF] Vj | Detection of mesial networks with magnetoencephalography during cognition[END_REF].
Another important venue for simultaneously recording MEG and SEEG at deep structure arises from the possibility to record patients implanted with deep brain stimulation (DBS) electrodes [START_REF] Oswal | Analysis of simultaneous meg and intracranial lfp recordings during deep brain stimulation: A protocol and experimental validation[END_REF]. As DBS targets the thalamus, it will be possible to investigate the thalamic contribution to MEG signals, which has been suggested to be visible in MEG [START_REF] Pizzo | Deep brain activities can be detected with magnetoencephalography[END_REF][START_REF] Attal | Assessment of subcortical source localization using deep brain activity imaging model with minimum norm operators: A meg study[END_REF][START_REF] Krishnaswamy | Sparsity enables estimation of both subcortical and cortical activity from meg and eeg[END_REF]. To elucidate the therapeutic effects of DBS, simultaneous MEG-LFP recordings were carried out in patients with Parkinson's disease (PD). These studies were able to delineate two key networks between sub-thalamic nuclei and cortical structures (motor area and fronto-parietal regions) [START_REF] Litvak | Optimized beamforming for simultaneous meg and intracranial local field potential recordings in deep brain stimulation patients[END_REF][START_REF] Hirschmann | Distinct oscillatory stn-cortical loops revealed by simultaneous meg and local field potential recordings in patients with parkinson's disease[END_REF], where the cortico-sub-cortical coupling could be modulated by DBS [START_REF] Oswal | Analysis of simultaneous meg and intracranial lfp recordings during deep brain stimulation: A protocol and experimental validation[END_REF].
Source localization and forward modelling
Simultaneous recordings have shown that deep sources visible on SEEG can be reflected on the MEG signals. However, in most cases, only surface MEG signals are available, and one needs to ensure that deep activity can be recovered without the help of intracerebral recordings. Several difficulties arise. Generally, MEG recordings consist of signals with low signal to noise ratio, and this is especially true for deep sources. Moreover, there is a linear superposition of superficial sources at the channel level that may mask the activity from deep sources. One option to overcome these limitations is to use a source localization algorithm directly targeting deep sources [START_REF] Attal | Modeling and detecting deep brain activity with meg & eeg[END_REF].
To estimate brain sources based on MEG data, solution to both the forward and inverse problem must be addressed. The forward model is implemented by a gain matrix, which estimates the contribution of each dipole, i.e. each elementary source within the brain, to the sensors. The computation of the gain matrix is based on a volume conductor (head) model that defines the head geometry and tissue resistive properties. Depending on the topographical patterns computed using forward solution, the sources associated to the activity recorded with MEG are determined with inverse solution techniques. While constructing head model, it would be ideal to consider the electrical and geometrical features of subcortical structures. When a detailed head model is considered, the simulation study in [START_REF] Piastra | A comprehensive study on electroencephalography and magnetoencephalography sensitivity to cortical and subcortical sources[END_REF] showed that MEG is more sensitive than EEG for detecting tangential cortical sources and can also detect subcortical sources that are sufficiently tangential [START_REF] Piastra | A comprehensive study on electroencephalography and magnetoencephalography sensitivity to cortical and subcortical sources[END_REF].
One of the simplest ways to find the origin of the recorded activity of deep sources is by using one or a few equivalent current dipoles (ECD) that reproduce the magnetic field observed at the surface [START_REF] Shiraishi | Interictal and ictal magnetoencephalographic study in patients with medial frontal lobe epilepsy[END_REF]. The actual source configuration is likely to be non-dipolar (for example in the case of the convoluted fine structure of the hippocampus, see Fig. 1), but it is important to note that potential multi-dipolar contribution to signals attenuate very rapidly with distance. Dipolar localization has thus been used to identify hippocampal activations during memory tasks [START_REF] Martin | Brain regions and their dynamics in prospective memory retrieval: A meg study[END_REF][START_REF] Papanicolaou | The hippocampus and memory of verbal and pictorial material[END_REF]. Some researchers advocate to investigate also monopolar sources (see [START_REF] Riera | Pitfalls in the dipolar model for the neocortical eeg sources[END_REF] discussed in [START_REF] Destexhe | Do neurons generate monopolar current sources[END_REF]) Another approach consists in estimating sources distributed over the brain, for example the Minimum Norm Estimate (MNE) [START_REF] Hämäläinen | Interpreting magnetic fields of the brain: Minimum norm estimates[END_REF][START_REF] Baillet | Electromagnetic brain mapping[END_REF]. The distributed sources model does not require any prior on the number of actives sources, and can incorporate additional spatial information [START_REF] Attal | Modeling and detecting deep brain activity with meg & eeg[END_REF]. It also permits to consider the possibility of extended neocortical networks which activity can overlap in time with deep sources.
In distributed sources approaches, sources can be modelled as a very large number of dipolar sources located on a grid or constrained to lie on a mesh of the neocortex. Attal and colleagues have proposed to use additional compartments that model deep structures, in order to compensate for the bias introduced by the simultaneous activation of cortical and sub-cortical regions, specifically the hippocampus, amygdala and thalamus [START_REF] Attal | Assessment of subcortical source localization using deep brain activity imaging model with minimum norm operators: A meg study[END_REF][START_REF] Attal | Modeling and detecting deep brain activity with meg & eeg[END_REF]. Such framework was subsequently used to reconstruct amygdala time course [START_REF] Balderston | How to detect amygdala activity with magnetoencephalography using source imaging[END_REF][START_REF] Dumas | Meg evidence for dynamic amygdala modulations by gaze and facial emotions[END_REF]. The results indicate that hippocampus and amygdala can be localized with good accuracy when activated alone. However, regional specificity is reduced when there are simultaneous activations of cortical and subcortical sources, making it hard to differentiate deep regions without further assumptions. An additional difficulty arises from the fact that neocortical structures surrounding the hippocampal /amygdala can be activated and difficult to distinguish from the latter because of the proximity relative to the distance to sensors. Thus, it seems important to assess the volume of confidence of a given localization result [START_REF] Pizzo | Deep brain activities can be detected with magnetoencephalography[END_REF], and verify whether hippocampal, amygdala and neighbouring neocortex are included or not within the confidence interval. This confidence interval is more tractable for dipoles that for distributed sources [START_REF] Fuchs | Confidence limits of dipole source reconstruction results[END_REF][START_REF] Sorrentino | Bayesian multi-dipole modelling of a single topography in meg by adaptive sequential monte carlo samplers[END_REF] although solutions have been proposed for patches [START_REF] Schmidt | Bayesian inference applied to the electromagnetic inverse problem[END_REF].
Another extensively used method in source reconstructions is adaptive spatial filtering (or "beamforming") [START_REF] Sekihara | Reconstructing spatio-temporal activities of neural sources from magnetoencephalographic data using a vector beamformer[END_REF]. This method is based on a spatial filter computed for each source location independently and can reconstruct the time series throughout the brain, including the hippocampus or amygdala [START_REF] Cornwell | Evoked amygdala responses to negative faces revealed by adaptive meg beamformers[END_REF]. Beamforming estimates the activity recorded at a given location, incorporating biophysical constraints (typically a regional basis of dipoles). When the activity is temporally overlapping with cortical fields, leakage originating from the inverse problem [START_REF] Brookes | Measuring functional connectivity in meg: A multivariate approach insensitive to linear source leakage[END_REF] may obscure the signal originating from deep sources. Subtraction of several conditions with different activities may be used to mitigate this effect [START_REF] Quraan | Detection and localization of hippocampal activity using beamformers with meg: A detailed investigation using simulations and empirical data[END_REF]. On the other hand, beamforming reconstructs the activity at the desired location, instead of finding the point that better reproduces the recorded field. Therefore, this approach is complementary to the other methods for source localization. Different source localization algorithms could reconstruct deeper subcortical sources from MEG data by suppressing the high SNR signals from the neocortex. In general, source reconstruction from beamformer seems to be more focal and localized to hippocampus, compared to MNE images, but this requires further investigation as estimation of source extent is a difficult task [START_REF] Maksymenko | Strategies for statistical thresholding of source localization maps in magnetoencephalography and estimating source extent[END_REF].
Krishnaswamy and colleagues proposed a method based on sparse representations for recovering deep sources. Based on the analysis of the forward field, they show that in the presence of largely distributed cortical sources (full cortical space) it is impossible to unambiguously estimate simultaneously active subcortical sources. On the contrary, if cortical generators are sparse, then it is possible to recover separately cortical and sub-cortical activities [START_REF] Krishnaswamy | Sparsity enables estimation of both subcortical and cortical activity from meg and eeg[END_REF], which they do based on a subspace pursuit greedy algorithm. Sparse source localization is in general difficult, but is certainly a promising path for source localization and mitigation of source leakage effects [START_REF] Grova | Evaluation of eeg localization methods using realistic simulations of interictal spikes[END_REF] (review in [START_REF] Krishnaswamy | Sparsity enables estimation of both subcortical and cortical activity from meg and eeg[END_REF]).
Beyond source localization biophysical/mathematical models, one can use computational models in order to infer hidden variables. In other words, one can perform an inverse problem on a variable that is in the computational model but not directly visible from the data, as done for example in dynamic causal modeling [START_REF] David | Dynamic causal modeling of evoked responses in eeg and meg[END_REF], or in the 'virtual epileptic patient' [START_REF] Jirsa | The virtual epileptic patient: Individualized whole-brain models of epilepsy spread[END_REF]. This was done in the context of distant subcortical-cortical networks where effective connectivity based on DCM allowed inferring subcortical activity [START_REF] David | Dynamic causal modeling of subcortical connectivity of language[END_REF].
Blind source separation
One option for disentangling deep and cortical sources mixed on sensors is ICA, performed on the continuous signals [START_REF] Pizzo | Deep brain activities can be detected with magnetoencephalography[END_REF]. ICA was successfully applied to EEG and MEG in order to separate different processes mixed on the signals, either in cognition [START_REF] Jung | Analysis and visualization of single-trial event-related potentials[END_REF]or in epilepsy [START_REF] Ossadtchi | Automated interictal spike detection and source localization in magnetoencephalography using independent components analysis and spatio-temporal clustering[END_REF][START_REF] Sohrabpour | Noninvasive electromagnetic source imaging of spatiotemporally distributed epileptogenic brain sources[END_REF][START_REF] Malinowska | Interictal networks in magnetoencephalography[END_REF].This was the strategy used in the study of Pizzo and colleagues in order to separate deep activity from that of superficial networks, then comparing it to simultaneously acquired SEEG [START_REF] Pizzo | Deep brain activities can be detected with magnetoencephalography[END_REF] (Figure 2). ICA aims at separating the activity of N sources mixed on M sensors (where N ≤ M), assuming that the sources have different spatial distributions and that their time-series are independent [START_REF] Comon | Handbook of blind source separation, independent component analysis and applications[END_REF]. After reconstructing the time-course of the sources, their origin can be inferred using ECDs to reproduce the ICA topography of each source. As pointed earlier, one difficulty that arises when solving the inverse problem is that activities from deep sources can be mixed with neocortical sources [START_REF] Pizzo | Deep brain activities can be detected with magnetoencephalography[END_REF]. If the ICA manages to fully disentangle the sources, then one component can actually correspond to a single brain region [START_REF] Delorme | Independent eeg sources are dipolar[END_REF] and thus be reasonably modelled with a single dipole (assuming the activity is not too extended, which is not guaranteed). In some cases, two symmetric sources can be observed that correspond to homologous brain regions, and can be modelled by symmetric dipoles [START_REF] Piazza | Eeg effective source projections are more bilaterally symmetric in infants than in adults[END_REF]. Bayesian source localization is also an interesting path to explore [START_REF] Sorrentino | Bayesian multi-dipole modelling of a single topography in meg by adaptive sequential monte carlo samplers[END_REF], as well as sparse methods such as L1 on the spatial gradient of the sources (total variation) [START_REF] Sohrabpour | Noninvasive electromagnetic source imaging of spatiotemporally distributed epileptogenic brain sources[END_REF]. For more complex source configurations and in particular for extended brain areas, distributed sources including patch modelling can be considered; a difficulty to overcome is that the definition of noise covariance for ICA topography is not straightforward (there is not temporal dimension one which to estimate covariance, only one data vector). 
Moreover, the ICA approach has been challenged recently, showing that source localization of single ICA components does not improve the localization results in MEG (and can on the contrary be detrimental) [START_REF] Pellegrino | Effects of independent component analysis on magnetoencephalography source localization in pre-surgical frontal lobe epilepsy patients[END_REF], a potential explanation for these results is the above-mentioned difficulty for modelling the source of a single topography when it is not purely dipolar, and also in the fact in some cases several ICs may need to be taken in to account [START_REF] Malinowska | Interictal networks in magnetoencephalography[END_REF].
We noted above the importance of sparsity constraints in source localization. It is interesting to note that ICA can be seen as another way to obtain sparse representations [START_REF] Daubechies | Independent component analysis for brain fmri does not select for independence[END_REF]. Potentially, the patterns seen on ICA topographies could be used in order to gain information on the underlying sources, as was done on EEG patterns in epilepsy [START_REF] Ebersole | Spike voltage topography identifies two types of frontotemporal epileptic foci[END_REF].
The computation of ICA or covariance matrices that are used in source localization require large amount of time samples which increases with the square of number of channels. To be noted, there are signal space separation (SSS) [START_REF] Özkurt | Decomposition of magnetoencephalographic data into components corresponding to deep and superficial sources[END_REF] and beamspace dual signal space projection (bDSSP) [START_REF] Sekihara | Beamspace dual signal space projection (bdssp): A method for selective detection of deep sources in meg measurements[END_REF] algorithms where dimensionality reduction is obtained by projecting the data into a low dimensional subspace. Moreover, these methods decompose the MEG signals into superficial and deep source components and suppress the stronger interference from superficial sources, as performed in the cortical signal space suppression (CSS) method [START_REF] Samuelsson | Cortical signal suppression (css) for detection of subcortical activity using meg and eeg[END_REF] that preserves the subcortical signals while suppressing the cortical contributions. This latter method is completely data-driven and does not require forward modelling or inverse solutions.
Detectability of deep sources according to MEG configuration
The significance of MEG sensor design on the depth sensitivity of MEG is still under investigation. Conventional MEG systems have either any of the three types of pickup coil configurations: magnetometers (with single loop of wire), axial, and planar gradiometers (two or more loops spaced at a distance with opposite orientation). Overall, axial gradiometers were able to deliver optimal brain SNRs compared to magnetometers, while planar gradiometer is sensitive to superficial sources by suppressing environmental noise. Although axial gradiometers have an added advantage, any sensor type can measure subcortical activity relative to the square of the distance and the amount of noise in the background of other brain activity [START_REF] Vrba | Signal processing in magnetoencephalography[END_REF].
In another (complementary) line of research, Tzovara and colleagues have identified the difficulty of localizing deep sources, and propose to use a head cast to ensure minimal head movement and therefore optimal localization precision [START_REF] Tzovara | High-precision magnetoencephalography for reconstructing amygdalar and hippocampal oscillations during prediction of safety and threat[END_REF] -which should be useful for both deep and superficial sources.
Conclusion and New venues
There is now several converging evidence that deep signals can be recorded in MEG, in line with EEG findings [START_REF] Koessler | Source localization of ictal epileptic activity investigated by high resolution eeg and validated by seeg[END_REF][START_REF] Seeber | Subcortical electrophysiological activity is detectable with high-density eeg source imaging[END_REF], thus opening new exciting new venues for the non-invasive investigation of brain networks in physiology and pathology, a fast growing field in neuroscience [START_REF] He | Electrophysiological brain connectivity: Theory and implementation[END_REF].
Simultaneous recordings of intracerebral EEG and MEG recordings provide perhaps the strongest evidence that MEG can reliably measure deep activity in patients [START_REF] Pizzo | Deep brain activities can be detected with magnetoencephalography[END_REF][START_REF] Dalal | Simultaneous meg and intracranial eeg recordings during attentive reading[END_REF][START_REF] Crespo-Garcia | Slowtheta power decreases during item-place encoding predict spatial accuracy of subsequent context recall[END_REF]. Reliability of deep sources detection with source localization and source separation methods (ICA/bDSSP/CSS) can be assessed if simultaneous ground truth such as SEEG electrode recordings are available. More progress is thus expected from such simultaneous acquisitions.
An important future path is the advent of a new generation of MEG sensors. As opposed to conventional SQUID system, optically pumped magnetometers (OPMs; [START_REF] Boto | A new generation of magnetoencephalography: Room temperature measurements using optically-pumped magnetometers[END_REF]) does not require liquid helium. They have lower sensitivity, but a four-fold increase in SNR could be achieved as OPMs can be brought closer to scalp surface and allow head movements [START_REF] Boto | Moving magnetoencephalography towards real-world applications with a wearable system[END_REF]. A question that still remains to be investigated is whether being close to the surface does not 'blind' the OPM to the deeper sources by being overwhelmed by the strong activity of superficial sources [START_REF] Krishnaswamy | Sparsity enables estimation of both subcortical and cortical activity from meg and eeg[END_REF][START_REF] Samuelsson | Cortical signal suppression (css) for detection of subcortical activity using meg and eeg[END_REF]. As OPM sensors are mobile, it has been proposed to record directly within the mouth in order to obtain a recording of hippocampal activity dorsally from within the mouth [START_REF] Tierney | Mouth magnetoencephalography: A unique perspective on the human hippocampus[END_REF]. This could help in more accurately to determine the actual deep origin of the signals.
Another interesting possibility is to use simultaneous recording of surface and depth signals as a combined "meta-modality", i.e. to fuse them into a joint analysis [START_REF] Gavaret | Simultaneous seeg-megeeg recordings overcome the seeg limited spatial sampling[END_REF][START_REF] Litvak | Optimized beamforming for simultaneous meg and intracranial local field potential recordings in deep brain stimulation patients[END_REF][START_REF] Crespo-Garcia | Slowtheta power decreases during item-place encoding predict spatial accuracy of subsequent context recall[END_REF]. In any case, the continuous or single trial analysis is crucial in order to extract the covariation between depth and surface signals, either at zero lag (reflecting the same activity) or at different lags (network analysis) [START_REF] Dubarry | Simultaneous recording of meg, eeg and intracerebral eeg during visual stimulation: From feasibility to single-trial analysis[END_REF][START_REF] Zhang | Dynamic analysis on simultaneous ieeg-meg data via hidden markov model[END_REF].
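As a concrete illustration of such a covariation analysis, here is a minimal sketch on synthetic data (it is not the pipeline of the cited studies): a depth-like and a surface-like signal are correlated at zero lag and over a range of lags; the sampling rate, propagation delay and mixing weight are arbitrary assumptions.

```python
import numpy as np

# Minimal sketch of a depth/surface covariation analysis on synthetic data.
# The sampling rate, the 5-sample propagation delay and the mixing weight are
# illustrative assumptions, not values taken from the cited studies.
rng = np.random.default_rng(0)
fs = 1000                        # sampling rate (Hz)
n = 10 * fs                      # 10 s of signal
depth = rng.standard_normal(n)   # depth (SEEG-like) time course
delay = 5                        # surface copy delayed by 5 samples (5 ms)
surface = 0.3 * np.roll(depth, delay) + rng.standard_normal(n)  # surface (MEG-like)

def corr_at_lag(x: np.ndarray, y: np.ndarray, lag: int) -> float:
    """Pearson correlation between x(t) and y(t + lag)."""
    if lag > 0:
        x, y = x[:-lag], y[lag:]
    elif lag < 0:
        x, y = x[-lag:], y[:lag]
    return float(np.corrcoef(x, y)[0, 1])

lags = range(-10, 11)
r_by_lag = {lag: corr_at_lag(depth, surface, lag) for lag in lags}
best_lag = max(r_by_lag, key=lambda lag: abs(r_by_lag[lag]))
print(f"zero-lag r = {r_by_lag[0]:.3f}")
print(f"strongest covariation at lag {best_lag} samples (r = {r_by_lag[best_lag]:.3f})")
```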
Future progress in simultaneous recordings (number of modalities, type of sensors) and in the inverse problem of both MEG and intracerebral EEG [START_REF] Hosseini | Electromagnetic source imaging using simultaneous scalp eeg and intracranial eeg: An emerging tool for interacting with pathological brain networks[END_REF][START_REF] Caune | Evaluating dipolar source localization feasibility from intracerebral seeg recordings[END_REF][START_REF] Chang | Dipole localization using simulated intracerebral eeg[END_REF] should also allow refining the definition of the links between surface measures and cerebral activity.

Figure 1: Schematic view of different source configurations and their reflection on MEG. a) Only deep sources are active (hippocampus and amygdala). As the activation is of a 'closed field' type, no visible signal is recorded from the surface. b) Activity from the hippocampus propagates to nearby neocortex, which has a structured source geometry (open field) and can be seen as a signal at the surface (small because of the depth). c) Only subparts of the hippocampus and amygdala are active, producing an open field and a small signal at the surface. d) The activity from deep structures is overshadowed by activity from the superficial neocortex, which has a higher amplitude due to its shorter distance to the sensors.

Figure 2: Results of simultaneous MEG/intracerebral recordings (methods in [19]). a) Example of an ICA component corresponding to a deep source (note the "wide" topography), found on data triggered on spikes visible in mesial temporal SEEG contacts. b) Source localization performed on the topography (confidence interval of the equivalent dipole), within right mesial temporal regions. c) SEEG implantation (only the left side is shown). d) Superimposition of the average ICA time course (in color, based on SEEG triggers) and the SEEG contacts for which there is a significant correlation (in black). Contacts are located in the left amygdala (Ap), left hippocampus (Bp) and left basal temporal regions (TBp) and are shown on the right of the panel, with the same color convention as the traces.

* Pizzo F, Roehri N, Medina Villalon S, Trebuchon A, Chen S, Carron R, Gavaret M, Giusiano B, McGonigal A, Bartolomei F, et al. Deep brain activities can be detected with magnetoencephalography. Nat Commun 2019;10(1):971.
This study investigates the possibility of retrieving deep brain sources in MEG (magnetometers), thanks to simultaneously recorded intracerebral EEG (stereoelectroencephalography, SEEG). Epileptiform events were marked on SEEG, and independent component analysis was used in order to disentangle deep sources from superficial (propagated) sources. Importantly, the high spatial sampling of the depth electrodes allowed probing of the nearby neocortex. The origin of the putative deep components was assessed by several methods in order to strengthen confidence in the results: (i) the deep aspect of the MEG topography ("wide" field); (ii) correlation of MEG-ICA and SEEG across time along trials (testing for a general resemblance of the time courses) or at a given time point across trials (testing whether trial-to-trial fluctuations correlate in depth and at the surface); and (iii) dipolar source reconstruction with a confidence interval (which is expected to be large for deep sources).

* Seeber M, Cantonas LM, Hoevels M, Sesia T, Visser-Vandewalle V, Michel CM. Subcortical electrophysiological activity is detectable with high-density EEG source imaging. Nature Communications 2019;10.
This study uses for the first time simultaneously recorded DBS electrodes and surface EEG to assess the detectability of the thalamus on EEG. The authors correlated the time courses of source-reconstructed signals with those of the DBS electrodes and found the highest correlation for sources close to the DBS electrodes recording the thalamus. This suggests that thalamic activity can indeed be reconstructed from surface signals.

* Sekihara K, Adachi Y, Kubota HK, Cai C, Nagarajan SS. Beamspace dual signal space projection (bDSSP): A method for selective detection of deep sources in MEG measurements. Journal of Neural Engineering 2018;15(3).
This study addresses the difficult challenge of separating deep and superficial sources from surface measures, as the former can be overwhelmed by the activity of the latter on non-invasive sensors. Here, the approach is based on the beamformer signal processing technique, a spatial filter that suppresses the activity of sources other than the source of interest. The authors show that by using a signal subspace approach specific to deep sources, they can obtain better suppression of the superficial contribution to the recorded signals than with a classical method.

* Samuelsson JG, Khan S, Sundaram P, Peled N, Hämäläinen MS. Cortical Signal Suppression (CSS) for Detection of Subcortical Activity Using MEG and EEG. Brain Topography 2019;32(2):215-28.
Tailoring of the source reconstruction for specifically targeting deep sources. This article presents a signal subspace approach that is tailored for suppressing neocortical signals and emphasizing the deep contribution to the surface sensors. It takes advantage of the presence of both gradiometers and magnetometers, using the (more short-sighted) gradiometers for the estimation of superficial activity.

* Tierney et al. Mouth magnetoencephalography: A unique perspective on the human hippocampus.
This is the first attempt to record the hippocampus from within the mouth, which is now possible thanks to the new generation of OPM MEG sensors that are mobile. This gives the opportunity to improve the sampling of the topography of deep activity, by providing a measure from below. This is very relevant for deep sources, which can have a widespread topography and for which surface sensors may miss important parts of the fields. This should in turn improve the localization of deep sources and help disentangle them from superficial measures.

* López-Madrona VJ, Medina Villalon S, Jayabal V, Trébuchon A, Alario FX, Bartolomei F, et al. Detection of mesial networks with magnetoencephalography during cognition. LiveMEG 2020: a cutting EEG event, 2020.
This study applied ICA on MEG during a memory task to explore the signature of the hippocampal formation on MEG. A common ICA-MEG component was found across subjects, with the same "deep" topography. Using simultaneous SEEG recordings and partial correlation, the authors confirmed the origin of the ICA-MEG component as a network composed solely of the hippocampus and rhinal cortex. These results confirm that hippocampal activity can be seen on MEG during cognition. Importantly, they identified a specific fingerprint of this network on MEG recordings, facilitating future studies based on this activity without intracranial confirmation of its origin.
ACKNOWLEDGEMENTS
This work was partially supported by the RHU EPINOV, A*MIDEX project (ANR-17-RHUS-0004) funded by the 'Investissements d'Avenir' French Government and by a FLAG ERA/HBP grant from Agence Nationale de la recherche "SCALES" ANR-17-HBPR-0005, by France Life Imaging network (grant ANR-11-INBS-0006), and ILCB convergence institute ANR-16-CONV-0002. |
03484132 | en | [
"shs.anthro-se"
] | 2024/03/04 16:41:22 | 2021 | https://hal.science/hal-03484132/file/S1744388121000554.pdf | Alice Mignot1
Karelle De Luca3
Gérard Leboucher2
Véronique Servais1
MSc in Clinical psychology, Ph.D student Alice Mignot
email: [email protected]
French handlers' perspectives on Animal-Assisted Interventions
Background: Animal-Assisted Interventions (AAI) are well implemented in human healthcare, in France as elsewhere; yet there are still difficulties in characterizing these practices and misconceptions about their mechanisms -little is known about the French practice of AAI and about the human-animal team.
Objectives: This study aims to characterize AAI by exploring their specificities through French handlers' perspectives.
Material and method: An online survey addressed to French handlers working in AAI with mainly one dog was carried out. This research included questions about their practice in AAI (registration status, beneficiaries, and animals) and their background (training in AAI, training in the medico-social field, training in animal behavior). We then examined a phenomenological understanding of handlers' definitions of their practice in AAI, their motivations to work with these approaches, and the expectations of the human-animal team.
We used an open coding strategy and created major themes from their answers.
Results: Five themes were identified: (1) AAI as additional approaches in care settings, (2) the person-centered approach, (3) the complementarity between handler and animal(s), (4) the shared role of mediator, and (5) handlers' beliefs about the human-animal relationship related to their personal experiences. This survey allowed us to understand how the French use AAI and the role of these practices in the care system.
Conclusion:
The benefits of AAI are numerous, both for care settings and for caregivers, mainly by making care more humane. AAI seem to put the wellbeing of beneficiaries and the relationship with the caregiver at the center of the care. The complementarity of the human-animal team is the common feature of these practices and is critical to their success.
Future interdisciplinary studies are required to explore the particularities of these interspecific approaches and the differences between countries.
Introduction
Practices utilizing the human-animal bond in healthcare settings are commonly designated by the term Animal-Assisted Interventions (AAI). They are defined as "a goal oriented and structured intervention that intentionally includes or incorporates animals in health, education and human services (e.g., social work) for the purpose of therapeutic gains in humans. It involves people with knowledge of the people and animals involved" [START_REF] Iahaio | The IAHAIO Definitions for Animal Assisted Intervention and Guidelines for Wellness of Animals Involved in AAI[END_REF]. AAI include several sub categories such as Animal-Assisted Therapies (AAT), Animal-Assisted Activities (AAA), Animal-Assisted Education (AAE) and Animal-Assisted Coaching (AAC) [START_REF] Iahaio | The IAHAIO Definitions for Animal Assisted Intervention and Guidelines for Wellness of Animals Involved in AAI[END_REF]. These methods are receiving increased attention within the medical and paramedical fields because of their benefits on a large range of health-related problems [START_REF] Cirulli | Animal-assisted interventions as innovative tools for mental health[END_REF]. The interactions with animals in care settings have positive impacts on human health such as the decrease of anxiety [START_REF] Barker | The effects of animal-assisted therapy on anxiety ratings of hospitalized psychiatric patients[END_REF][START_REF] Cole | Animal-assisted therapy in patients hospitalized with heart failure[END_REF][START_REF] Waite | A meta-analysis of animal assisted interventions targeting pain, anxiety and distress in medical settings[END_REF] and depression [START_REF] Koukourikos | Benefits of Animal Assisted Therapy in Mental Health[END_REF][START_REF] Ambrosi | Randomized controlled study on the effectiveness of animal-assisted therapy on depression, anxiety, and illness perception in institutionalized elderly[END_REF] as well as the improvement of social skills [START_REF] King | Effect of a time-out session with working animalassisted therapy dogs[END_REF][START_REF] Becker | Animal-assisted social skills training for children with autism spectrum disorders[END_REF][START_REF] Hediger | Effects of animalassisted therapy on social behaviour in patients with acquired brain injury: a randomised controlled trial[END_REF] and self-esteem [11,[START_REF] Schuck | The role of animal assisted intervention on improving self-esteem in children with attention deficit/hyperactivity disorder[END_REF]. As a result, the benefits of animals on human health represent a significant scientific research field that continues to grow [START_REF] Fine | The State of Animal-Assisted Interventions: Addressing the Contemporary Issues that will Shape the Future[END_REF]; yet there is still difficulty to characterize AAI. 
Even if there is a professionalization of the field, there is still a lack of standards and inconsistencies about the terms and definitions of AAI [START_REF] Parish-Plass | Order Out of Chaos Revised: A Call for Clear and Agreed-Upon Definitions Differentiating Between Animal-Assisted Interventions[END_REF][START_REF] Santaniello | Methodological and Terminological Issues in Animal-Assisted Interventions: An Umbrella Review of Systematic Reviews[END_REF][START_REF] Enders-Slegers | Animal-assisted interventions with in an international perspective: Trends, research, and practices[END_REF][START_REF] López-Cepero | Current Status of Animal-Assisted Interventions in Scientific Literature: A Critical Comment on Their Internal Validity[END_REF], specifically, regarding the French AAI where there is a lack of data about these seemingly heterogeneous practices [START_REF] Michalon | Panser avec les animaux: Sociologie du soin par le contact animalier[END_REF]. It could be linked to the absence of governmental regulation and mandatory training to practice AAI [START_REF] Enders-Slegers | Animal-assisted interventions with in an international perspective: Trends, research, and practices[END_REF][START_REF] Boizeau | [END_REF]. Furthermore, most research on AAI has been focused on proving the efficiency of animals on the beneficiaries [START_REF] Servais | Du surnaturel au malentendu Pour une approche interactionnelle des systèmes de communication homme/animal[END_REF]. However, the complementarity of the human-animal dyad is central in AAI for the influence and mutual benefits it has during sessions [START_REF] Payne | Current perspectives on attachment and bonding in the dog–human dyad[END_REF][START_REF] Menna | The Human-Animal Relationship as the Focus of Animal-Assisted Interventions: A One Health Approach[END_REF][START_REF] Kuzara | Exploring the Handler-Dog Connection within a University-Based Animal-Assisted Activity[END_REF]. Handlers are regularly excluded from studies and little is known about their perspectives and their roles [START_REF] Firmin | Qualitative Perspectives of an Animal-Assisted Therapy Program[END_REF][START_REF] Grandgeorge | Human-animal relationships: from daily life to animalassisted therapies[END_REF]. There are still misconceptions about AAI, such as the thinking that petting the animals is sufficient to get the reward [START_REF] Parish-Plass | Order Out of Chaos Revised: A Call for Clear and Agreed-Upon Definitions Differentiating Between Animal-Assisted Interventions[END_REF].
Theoretical framework
We aimed to apply the recommendations of changes in AAI research suggested by Delfour & Servais [START_REF] Delfour | L'animal dans le soin : entre théories et pratiques[END_REF] that are: "the consideration of the animal as a subject; the restitution of their speech to handlers; and the development of attentive and creative methods of observation and investigation". Consequently, the qualitative perspective respects these criteria since it "facilitates better understanding of factors that may influence the intervention implementation" [START_REF] López-Cepero | Current Status of Animal-Assisted Interventions in Scientific Literature: A Critical Comment on Their Internal Validity[END_REF]. As highlighted in the Shen et al. [START_REF] Shen | We need them as much as they need us": A systematic review of the qualitative evidence for possible mechanisms of effectiveness of animal-assisted intervention (AAI)[END_REF] review of qualitative studies, these studies can be a way to reveal possible mechanisms of AAI. However, they focused on the beneficiaries point of view [START_REF] Lange | Is Counseling Going to the Dogs? An Exploratory Study Related to the Inclusion of an Animal in Group Counseling with Adolescents[END_REF][START_REF] Schmitz | Animal-assisted therapy at a University Centre for Palliative Medicine -a qualitative content analysis of patient records[END_REF][START_REF] Lubbe | The application of animal-assisted therapy in the South African context: A case study[END_REF]. Only a few of the studies were interested in handlers' opinions about their practices in AAI [START_REF] Firmin | Qualitative Perspectives of an Animal-Assisted Therapy Program[END_REF][START_REF] Abrahamson | Perceptions of a hospitalbased animal assisted intervention program: An exploratory study[END_REF][START_REF] Berget | Animal-Assisted Interventions and Psychiatric Disorders: Knowledge and Attitudes among General Practitioners, Psychiatrists, and Psychologists[END_REF][START_REF] Bibbo | Staff Members' Perceptions of an Animal-Assisted Activity[END_REF][START_REF] Black | Australian psychologists' knowledge of and attitudes towards animal-assisted therapy: Psychologists and animal-assisted therapy[END_REF][START_REF] Crowley-Robinson | A long-term study of elderly people in nursing homes with visiting and resident dogs[END_REF] and were principally focused on their knowledge and attitudes regarding specific practice [START_REF] Abate | Nurse Leaders' Perspectives on Animal-Assisted Interventions[END_REF]. This makes it essential to interview handlers with different backgrounds and considerations towards the relationship between themselves and their animals to get a better understanding of AAI.
The aim of the present research is to contribute to increasing the body of knowledge surrounding AAI by integrating handlers' opinions.
To this end, we focused on two axes through handlers' answers to four questions. The first axis concerned the main features of AAI that we obtained through interviewing handlers about their definition of their own practice in AAI. In addition, we assumed that handlers' professional backgrounds and motivations to work in AAI enabled them to understand the characteristics of these practices. The second axis concerned the interspecific complementarity of the human-animal team that we investigated through their views on their dedicated roles and the roles of their animals in AAI.
Materials & Method
Participants & recruitment
Our cohort was composed of 111 French handlers in AAI. Our inclusion criteria were to be active in AAI and to work with at least one dog because dogs constitute the most represented species in AAI [START_REF] Hatch | The View from All Fours: A Look at an Animal-Assisted Activity Program from the Animals' Perspective[END_REF][START_REF] Maurer | Analyse de dix recherches sur la thérapie assistée par l'animal : quelle méthodologie pour quels effets ?[END_REF][START_REF] Nimer | Animal-Assisted Therapy: A Meta-Analysis[END_REF][START_REF] Ng | Describing the Use of Animals in Animal-Assisted Intervention Research[END_REF]. Handlers were all volunteers and we had no selection criteria based on their professional backgrounds. We constructed an online questionnaire that was posted on AAI-specialized social media accounts and sent by email from April 2018 to May 2019. It was important for us to develop an online questionnaire for ease of use and timesaving reasons. Moreover, contrary to other qualitative research that focuses on small samples and/or specific groups, we aimed to interview a large panel of handlers.
Ethics
Before accessing the questionnaire, handlers were required to complete a consent form that included an explanation of the study framework, objectives and the research ethics features.
Signing this consent form guaranteed the confidentiality of their responses, the possibility of interrupting the research, respect for their integrity and their rights in accordance with the research ethics. The collection, processing and storage of personal data complied with the rules laid down by the European General Data Protection Regulation [START_REF] Voigt | The eu general data protection regulation (gdpr)[END_REF].
Data collection
The questionnaire was built based on a literature review [START_REF] Iahaio | The IAHAIO Definitions for Animal Assisted Intervention and Guidelines for Wellness of Animals Involved in AAI[END_REF][START_REF] King | Effect of a time-out session with working animalassisted therapy dogs[END_REF][START_REF] Firmin | Qualitative Perspectives of an Animal-Assisted Therapy Program[END_REF][START_REF] Delfour | L'animal dans le soin : entre théories et pratiques[END_REF][START_REF] Berget | Animal-Assisted Interventions and Psychiatric Disorders: Knowledge and Attitudes among General Practitioners, Psychiatrists, and Psychologists[END_REF][START_REF] Society | Standards of practice for animal assisted activities and animal assisted therapy[END_REF][START_REF] Boizeau | La médiation animaleproblématiques règlementaires et enjeux professionnels[END_REF][START_REF] Budahn | Effectiveness of Animal-Assisted Therapy: Therapists' Perspectives[END_REF] and informal interviews of handlers and scientific experts in the field. It was entirely written in French and was composed of four unequal sections for approximatively 20 minutes in total. We used a mixed method; therefore, some data was obtained through closed questions while other data was obtained through open questions. We chose to separate our data into specific articles; therefore, this one presents only data about the characteristics of AAI through handlers' perceptions.
Analysis
The data presented below are based on sociodemographic and open questions. Quantitative data were analyzed with the software GraphPad Prism 8. These data concerned sociodemographic questions (gender, age), followed by questions about handlers' current practice in AAI (their registration status, the populations and the animal species they work with) and questions about their professional background (their training in AAI and their education institution, their training in the medico-social field and their training in animal behavior). Descriptive characteristics of the cohort were calculated and presented as means for continuous variables and percentages for categorical variables.
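As a minimal illustration of this descriptive step (invented values, not the study data or the GraphPad workflow), one could compute a mean (SD) for a continuous variable and percentages for categorical variables as follows; the column names are hypothetical.

```python
import pandas as pd

# Minimal sketch of the descriptive step (means for continuous variables,
# percentages for categorical ones). Column names and values are invented
# for the example; this is not the study data or the GraphPad workflow.
df = pd.DataFrame({
    "age": [41, 35, 52, 29, 60],
    "trained_in_AAI": ["yes", "yes", "no", "yes", "yes"],
    "works_only_with_dogs": ["yes", "no", "yes", "no", "yes"],
})

print(df["age"].agg(["mean", "std"]))                            # continuous: mean, SD
print(df["trained_in_AAI"].value_counts(normalize=True) * 100)   # categorical: %
print(df["works_only_with_dogs"].value_counts(normalize=True) * 100)
```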
The qualitative method was applied to four open questions: handlers' definition of their own practice in AAI, their motivations for working in AAI, their view of their own role, and their view of their animals' role. We used a phenomenological method because it "describes the meaning for several individuals of their lived experiences of a concept or a phenomenon; describing what all participants have in common as they experience a phenomenon" [START_REF] Creswell | Qualitative inquiry and research design: Choosing among five approaches[END_REF]. Like Firmin et al. [START_REF] Firmin | Qualitative Perspectives of an Animal-Assisted Therapy Program[END_REF],
we used an open coding strategy with a line-by-line analysis approach and developed clusters of meaning into themes. We wrote a description of the significant themes that emerged from our data [START_REF] Creswell | Qualitative inquiry and research design: Choosing among five approaches[END_REF] and illustrated them with citations (translated from French to English) of subjects with their anonymity number.
Results
Demographic characteristics of our sample
Our sample was composed of 111 handlers in AAI. They were mostly women (94.59%; N=105), with a mean age of 41 years (min 20 years; max 68 years). Handlers worked with a broad range of populations; based on the first two pathologies cited, we analyzed data on 166 answers. Beneficiaries were mostly elderly people with dementia (30.12%; N=50), followed by people with mental and/or motor disability (22.29%; N=37), people with Pervasive Developmental Disorders (13.85%; N=23) and people with various mental health problems (13.25%; N=22). Also, 47.75% (N=53) of the handlers in our sample worked only with dogs; the rest worked with two species on average (range 1 to 7), mainly small pets such as guinea pigs and rabbits (45.04%; N=50). Handlers' professional backgrounds were varied. 83.78% (N=93) of our sample were trained in AAI at different institutions, including both university training centers and private structures. 71.17% (N=79) of interviewed handlers had training in the medico-social field, representing various types of care professions: they were mostly psychologists (24.05%; N=19), caseworkers (16.46%; N=13), nurses (13.92%; N=11), and psychomotor therapists and occupational therapists (12.66%; N=10). In addition, some of them had a background in the animal field (37.84%; N=42), mostly as dog trainers (50%; N=21) or veterinarians/veterinary assistants (19.05%; N=8).
Qualitative analysis
Five themes were identified as important characteristics of AAI: (1) AAI as additional approaches in care settings, (2) the person-centered approach, (3) the complementarity between handler and their animal(s), (4) the shared role of mediator, and (5) handlers' beliefs about the human-animal relationship related to their personal experiences. A brief description of each theme, supported with illustrative citations of participant data, is presented for every result of the study.
AAI as additional approaches in care settings
Handlers indicated that AAI bring benefits to various care settings. They mentioned objectives covering a diverse set of domains (therapeutic, educative, social, pedagogical, etc.). Most of them referred to the introduction of AAI to mitigate the limits of conventional care. More specifically, care professionals referred to AAI as additional approaches to support their work, and some of them distinguished themselves from other handlers by the therapeutic value of their AAI. Regardless of their initial training, handlers mentioned the benefits that AAI brought to themselves, such as being able to specialize in a new approach or even to change career.
s1: "The inadequacies of "conventional" approaches" s32: "It is a practice with a therapeutic aim (by virtue of my function)" s46: "It is a "way" to achieve an objective that cannot be achieved with conventional tools" s70: "Another string to my bow" s109: "Doing the work for which he [the handler] was trained, for me as a psychologist, with an additional tool that is the dog"
Person-centered approaches
The construction of the objectives seemed to be flexible and adapted to each patient.
Therefore, the beneficiary appeared to regain an active role in his or her care. The most frequently cited objectives concerned the well-being of beneficiaries and the creation of bonds between beneficiary and caregiver. When handlers described their relationship with beneficiaries, they used words associated with positivity and warmth (i.e. "link," "alliance,"
"trust," "affection," "tenderness," and "empathy"). For most handlers, their animals were seen as assistants that take an immersive role in these relationships.
s5: "My practice is created according to my patients: they are the ones who initiate the process and propose activities around the dog most often" s50: "[…] sometimes there is also a therapeutic interest, but this is not the primary goal" s73: "humane and playful" s99: "The goal is to increase interactions with humans through interactions with animals"
Complementarity of the human-animal team
Handlers pointed out the central role of animals in enriching the care provided. They spoke about the common intrinsic attributes of animals, such as their absence of judgement, their neutral attitude toward human pathologies, and the absence of verbal communication.
Handlers described their main role as optimizing the effects of the animal and guaranteeing safety. This implies work upstream to construct the project, work downstream to evaluate the objectives and adapt the next sessions, and adjustment of what emerges during the sessions. They also mentioned a major responsibility for their animals' welfare, monitored by observing their behavior, such as signs of fatigue and stress. Finally, they highlighted the teamwork with their animals.
s14: "He can be himself! Not being programmed. To offer with one's naturalness a well-being to people, as well as efforts without realizing it" s39: "a partnership relationship with my dog" s48: "I am the guarantor of the framework and safety during the session" s61:" Define, organize, adjust, readjust, guide, evaluate the sessions […]" s79: "Create situations to enrich patient/animal exchanges, frame the work" s83: "Protecting your animal and listening to them to see when they have had enough" s97: "The animal is a precious help for the handler, they offer them/us a multitude of possibilities to enter into a relationship, to consolidate a relationship, to make people work on so many different objectives"
The shared role of mediators
Handlers did not seem to differentiate between the animal species involved; however, dogs seemed to be considered more proactive in the interactions. The most common term used to describe the animal's role is "mediator", acting as a link between handler and beneficiary, but also between objectives and beneficiary. Some handlers went beyond this, suggesting that the animal is the intermediary that allows the establishment and/or reinforcement of the care relationship. Handlers also referred to themselves as mediators between the animal-beneficiary interactions and the objectives. It was their duty to intervene in specific ways in order to influence the behavior of the animal offering support, so as to guide the interactions and reach the objectives. They could thus stimulate or calm beneficiaries according to what was happening during the interactions.
s38: "An added value to the relationship of committed help and a precious mediator because it is alive and offers a diversity of emotions" s39: "It depends, sometimes I am the mediator of the meeting with the animal, sometimes it is him who allows the patient to access the care and to come to meet me" s48: "It is the use of the animal presence as a media in the caring relationship" s73: "Accompany the patient-dog pairing to work on the patient's own objectives"
Beliefs about the human-animal relationship related to their personal experiences
Handlers turned to AAI because of their beliefs about the benefits of the human-animal relationship. These beliefs were mostly linked to their personal experiences with animals rather than to theory about the human-animal bond. They believed that the introduction of animals into care would benefit other humans. In addition, they had a passion both for animals and for the care of other humans. AAI were therefore a good compromise allowing them to work with both humans and animals.
Discussion
The purpose of this study was to gain insight into the representations of handlers in AAI, focusing on their definitions of their practices, their motivations to work in AAI and the roles of the human-animal team. Our research suggests that AAI include numerous methods because of the variety of handlers' professional backgrounds and the possibility of adding AAI to various settings. Even though there is a wide heterogeneity of practices, we found features common to all handlers. These shared characteristics concern a more "humane" form of care, handlers' convictions based on their personal experiences with animals, and the complementarity of the human-animal team.
The goal of the present research was to contribute to the body of knowledge surrounding AAI and representations of handlers via two axes. The first one concerned the main features of AAI. The second one concerned the interspecific complementarity of the human-animal team.
The characteristics of the French practice of AAI
Our first aim was to highlight the main features of the French practice of AAI. Handlers defined AAI as holistic approaches that allow a wide cross-section of applications in human health (psychological, motor, speech, cognitive, social). Therefore, handlers reported to work in AAI with various populations and animal species. This is consistent with the common French application of AAI that is defined as a set of heterogeneous practice [START_REF] Michalon | Panser avec les animaux: Sociologie du soin par le contact animalier[END_REF], contrary to the US model that is more categorized. To underline the main features of AAI, we assumed that the interview about handlers' professional backgrounds would give us information.
Indeed, it highlights various profiles in handlers that correspond to a variety of settings.
Furthermore, handlers were trained in various fields and in various institutions. As mentioned before by Kruger et al. [11], it could explain the heterogeneity of AAI because handlers will practice AAI according to their initial professions. The variety of their professional backgrounds can be explained by their motivations to work in AAI, which were mostly based on their personal positive experiences with animals. Handlers introduced AAI because of their convictions that AAI brought something new to care, which is consistent with Michalon [START_REF] Michalon | Les relations anthropozoologiques à l'épreuve du travail scientifique[END_REF].
AAI can therefore concern people with various professional backgrounds who have the willingness of compromise between care work and love for animals in common. In this sense, AAI are chosen because of handlers' intrinsic convictions more than their theoretical knowledge of the human-animal bond. Another feature that we can underline about handlers' professional backgrounds is a distinction between handlers that were care professionals and others who were not. Care professionals incorporated their initial training first to define their practices in AAI, which is consistent with another recent French report [START_REF] Boizeau | [END_REF]. Consequently, most participants of our sample can be considered as AAT handlers because they are care professionals working within the scope of their professions [START_REF] Iahaio | The IAHAIO Definitions for Animal Assisted Intervention and Guidelines for Wellness of Animals Involved in AAI[END_REF][START_REF] Black | Australian psychologists' knowledge of and attitudes towards animal-assisted therapy: Psychologists and animal-assisted therapy[END_REF]. However, it seems that the common US model can hardly be applied to the French practice of AAI because the French practice is more heterogeneous than a distinction between care professionals and the non-care professionals. This is consistent with previous studies that highlighted the difficulty in exporting the US model to other countries [START_REF] Black | Australian psychologists' knowledge of and attitudes towards animal-assisted therapy: Psychologists and animal-assisted therapy[END_REF][START_REF] Haubenhofer | Austrian and American approaches to animal-based health care services[END_REF]. Finally, we wanted to underline that some handlers work in AAI without any training, which can expose the practice to certain abuses.
This diversity of handlers' backgrounds points out the need of standards to ensure: i) the quality of sessions, ii) the welfare of animals and iii) the welfare of beneficiaries as proposed by the Italian model [START_REF] Simonato | The Italian Agreement between the Government and the Regional Authorities: National Guidelines for AAI and Institutional Context[END_REF]. Efforts should be made on the regulation of these practices in France mostly concerning the necessary minimal training of handlers to ensure quality and safety within AAI sessions.
Reintroducing some care in the cure
The most common trait of AAI mentioned by handlers was that its practices contrast with classical approaches. The differences cited by handlers are consistent with previous studies, though more research is needed to increase our understanding of the mechanisms. Contrary to conventional medicine, the objectives cited mostly focus on the well-being of beneficiaries during sessions, which is consistent with the theme "mood improvement" found in 6
qualitative studies about AAI in the review of Shen et al. [START_REF] Shen | We need them as much as they need us": A systematic review of the qualitative evidence for possible mechanisms of effectiveness of animal-assisted intervention (AAI)[END_REF]. More specifically, handlers referred to AAI as a moment where the disease is no longer central in the care, which is consistent with the theme "fostering feeling of normalcy" highlighted in the review of Shen et al. [START_REF] Shen | We need them as much as they need us": A systematic review of the qualitative evidence for possible mechanisms of effectiveness of animal-assisted intervention (AAI)[END_REF]. Therefore, the relationship between caregiver and patient was central and handlers gave importance to creating an alliance, almost an affectionate relationship with beneficiaries, which was redundant in other qualitative researches [START_REF] Firmin | Qualitative Perspectives of an Animal-Assisted Therapy Program[END_REF][START_REF] Lubbe | The application of animal-assisted therapy in the South African context: A case study[END_REF][START_REF] Black | Australian psychologists' knowledge of and attitudes towards animal-assisted therapy: Psychologists and animal-assisted therapy[END_REF]. This was first highlighted by Levinson [37] who reported AAI as allowing the change from "patient" to individual. We can assume that AAI are close to "person-centered approaches" that are defined as "putting the person with the human's worth and uniqueness, as well as the person's interests and lived experience, at the center of the caring process" [START_REF] Edvardsson | Implementing national guidelines for personcentered care of people with dementia in residential aged care: effects on perceived person-centeredness, staff strain, and stress of conscience[END_REF]. It is interesting also to note that AAI give some benefits to handlers, too, with the positive emotions of the beneficiaries giving them joy and a sense of usefulness [START_REF] Black | Australian psychologists' knowledge of and attitudes towards animal-assisted therapy: Psychologists and animal-assisted therapy[END_REF][START_REF] Gundersen | What motivates arrangements of dog visits in nursing homes? Experiences by dog handlers and nurses[END_REF]. Handlers also mentioned that AAI help them to overcome the impasse with traditional tools and the possibility of spending time with their animals [START_REF] Abrahamson | Perceptions of a hospitalbased animal assisted intervention program: An exploratory study[END_REF].
Therefore, the benefits of AAI seem to be due to their differences with classical medicine that focuses on curing the patient first. The introduction of animals in care settings could allow another form of care work, a more humane care regarding the objectives but also the consideration of patients as individuals. Further investigations need to focus on limits of the current care and the fact that animals bring more humanity.
The complementarity of the human-animal dyad
Our second goal was to question separately the roles of handlers and animals to understand the specificity of the interspecific teamwork. The centrality of animals in these practices are indeed recognized since they are critical to the identity of AAI [START_REF] Marino | Construct Validity of Animal-Assisted Therapy and Activities: How Important Is the Animal in AAT?[END_REF]. Though, the interviews of the handlers highlight the importance of the interspecific collaboration in AAI [START_REF] Black | Australian psychologists' knowledge of and attitudes towards animal-assisted therapy: Psychologists and animal-assisted therapy[END_REF]. Handlers mentioned the intrinsic qualities of animals, such as their absence of judgment and their unconditional love [START_REF] Wry Aanderson | Alberta, Alberta Children's Services, Chimo Project, Paws on purpose: implementing an animal-assisted aherapy program for children and youth, including those with FASD and developmental disabilities, Chimo Project[END_REF]. However, animals are also actors in the therapeutic setting that facilitate contact between humans and facilitate the establishment of a therapeutic relationship [START_REF] Nimer | Animal-Assisted Therapy: A Meta-Analysis[END_REF][START_REF] Beetz | The Effect of a Real Dog, Toy Dog and Friendly Person on Insecurely Attached Children During a Stressful Task: An Exploratory Study[END_REF][START_REF] Glenk | A Dog's Perspective on Animal-Assisted Interventions[END_REF][START_REF] Joye | Biophilia in Animal-Assisted Interventions-Fad or Fact?[END_REF][START_REF] Beetz | Theories and possible processes of action in animal assisted interventions[END_REF][START_REF] Villers | La médiation animale : un concept fourre-tout?[END_REF]. Still, it is important to highlight that the relationship between beneficiary and animal is not a substitute to the relationship between handler and beneficiary [START_REF] Chandler | Animal Assisted Therapy in Counseling[END_REF]. Handlers referred to their roles as mediators and as "good" caregivers who succeed in creating a positive relationship with beneficiaries. They are also the spokesperson for their animals and need to ensure their well-being and appropriate interactions [START_REF] Michalon | Panser avec les animaux: Sociologie du soin par le contact animalier[END_REF][START_REF] Ng | Our Ethical and Moral Responsibility[END_REF]. Therefore, handlers and their skills are necessary to build the framework around AAI, such as ensuring the good conditions of interactions and the evaluation of objectives. This data clarifies the fact that studies must take in account both handlers and animals to understand the mechanisms of AAI, whereas most research is focused on proving the benefits of animals [START_REF] Servais | Du surnaturel au malentendu Pour une approche interactionnelle des systèmes de communication homme/animal[END_REF]. Moreover, it would be interesting to clarify the aspects related to the different modalities of interspecific relationships based on the animal species involved. Indeed, dogs are the most common species in AAI because they are well adapted to therapeutic settings because of their availability, trainability and predictability [START_REF] Glenk | Current Perspectives on Therapy Dog Welfare in Animal-Assisted Interventions[END_REF]. 
Some authors point out that dogs allow for a more therapeutic work with more reciprocity than other species [START_REF] Menna | The Human-Animal Relationship as the Focus of Animal-Assisted Interventions: A One Health Approach[END_REF][START_REF] Beetz | Theories and possible processes of action in animal assisted interventions[END_REF]. It may be due to their outstanding skills to communicate and their ability to create relationships with humans [START_REF] Beck | Romantic Partners and Four-Legged Friends: An Extension of Attachment Theory to Relationships with Pets[END_REF].
They are also easier to train for therapy [START_REF] Bert | Animal assisted intervention: A systematic review of benefits and risks[END_REF]. Other species, such as small pets, are increasingly introduced in AAI on the other hand because of their small size and toy-like appearance that can allow for the development of other forms of relationships [START_REF] Loukaki | Animal welfare issues on the use of rabbits in an animal assisted therapy program for children[END_REF].
Limits
This research presents some limitations that could be addressed in further studies. First, our sample was mostly composed of handlers who were initially care professionals, which may bias the representativeness of our study. Further studies could focus on handlers without a background in the medico-social field to compare the relevance of their answers with those of handlers with a medical professional background. Secondly, our cohort concerned handlers who worked with dogs and other species, but we focused on dogs; specific studies on the other animal species introduced into AAI could bolster this knowledge. Finally, our study concerned the French practice of AAI; studies in various European countries could be useful to understand how the US model is imported into different national contexts.
Conclusion
The aim of this study was to highlight handlers' perspectives on Animal-Assisted Interventions in order to contribute to the body of knowledge surrounding AAI. To this end, we focused on the main features of AAI and on the interspecific complementarity of the human-animal team, as reported by the handlers we interviewed. Our study showed that AAI in France are heterogeneous because they are complementary approaches applied in various care settings. This is also linked to the fact that handlers work in accordance with their initial training, which covers a wide scope of fields. Moreover, handlers' profiles are heterogeneous because these practices attract people who want to include their passion for animals in their work. It seems that AAI allow a more "humane" form of care through the presence of animals. This point deserves more consideration, as it questions current care and its limits for both patients and caregivers.
Finally, the human-animal dyad must be considered as a team. The animal is there to "be himself", and the handler channels the benefits of the beneficiary-animal relationship toward fulfilling the objectives. Consequently, this exploratory study highlights the heterogeneity of AAI and the need to focus on individual considerations [START_REF] Cirulli | Animal-assisted interventions as innovative tools for mental health[END_REF][START_REF] Serpell | The Human-Animal Bond[END_REF][START_REF] Beetz | Theories and possible processes of action in animal assisted interventions[END_REF][START_REF] Colussi | Variations of salivary cortisol in dogs exposed to different cognitive and physical activities[END_REF].
s8: "Combining my passion for dogs with my job" s85: "It's natural because I've always shared my life with dogs […]" s108: "The conviction that the animal can bring things to humans"
Acknowledgments
The researchers would like to thank the handlers who took the time to answer the questionnaires.
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. |
03366143 | en | [
"sdv.spee"
] | 2024/03/04 16:41:22 | 2021 | https://cnrs.hal.science/hal-03366143/file/S0167494321000844.pdf | Johann Harel
email: [email protected]
Romain Fossaert
email: [email protected]
Alain Bérard
Aurélie Lafargue
email: [email protected]
Marie Danet-Lamasou
Philippe Poisson
email: [email protected]
Véronique Dupuis
email: [email protected]
Isabelle Bourdel-Marchasson
email: [email protected]
Isabelle Bourdel
Masticatory coefficient and physical functioning in older frail patients admitted for a Comprehensive Gerontological Assessment
Keywords: multimorbidity, masticatory capacities, physical performance, malnutrition CRediT roles: Conceptualization: JH, RF,AB, IBM, Data curation: HR, AL, Formal analysis: AL, PP, IBM, Investigation: AB, MDL, VD, frailty, oral health, poly-morbidity, comprehensive gerontological assessment, physical performance, malnutrition
Comprehensive Gerontological Assessment in multi-morbid and frail older patients can include intraoral exam
In multi-morbid patients impaired mastication capacities are associated with malnutrition In frail un multi-morbid patients lower limb performance is lower with lower masticatory capacities independently from nutritional status
Introduction
The prevention of both decompensation due to chronic disease and functional dependence is an important goal in the care of older people. The comprehensive gerontological assessment (CGA) is based on systematic screening and scoring of four domains: mental health, functionality, medical status and social status(Kenneth [START_REF] Rockwood | Comprehensive geriatric assessment[END_REF][START_REF] Valencia | Assessment procedures including comprehensive geriatric assessment[END_REF]. The resulting care plan has been shown to prevent a worsening of health, particularly in frail patients. In older people, frailty refers to a multifactorial impairment of health that predicts adverse living events, such as loss of independence, hospitalization, adverse drug reactions, decompensation related to chronic disease, admission to a nursing home, or death [START_REF] Fried | Frailty in older adults: evidence for a phenotype[END_REF][START_REF] Rockwood | A brief clinical instrument to classify frailty in elderly people[END_REF]. Effective preventive strategies in older people may thus rely on the identification of those who are frail. Poor oral health in hospitalized older patients was found strongly associated with poor functional status or malnutrition [START_REF] Poisson | Relationships between oral health, dysphagia and undernutrition in hospitalised elderly patients[END_REF]. Thus, frailty may be linked to oral health as a possible determinant of multifactorial development of this syndrome. Indeed, several studies have reported cross-sectional relationships between components of oral health and frailty. Loss of teeth, an unmet need for a fixed prosthesis or removable denture, self-reported poor oral health, and pain or impaired mastication are among the factors associated with an increased prevalence of frailty [START_REF] Castrejón-Pérez | Oral health conditions and frailty in Mexican community-dwelling elderly: a cross sectional analysis[END_REF][START_REF] Kamdem | Relationship between oral health and Fried's frailty criteria in community-dwelling older persons[END_REF][START_REF] Semba | Denture use, malnutrition, frailty, and mortality among older women living in the community[END_REF][START_REF] Tsai | Association of dental prosthetic condition with food consumption and the risk of malnutrition and follow-up 4-year mortality risk in elderly Taiwanese[END_REF][START_REF] Woo | Chewing difficulty should be included as a geriatric syndrome[END_REF]. Finally, masticatory dysfunction, number of missing teeth, and dry mouth were identified among the factors that increase the rate of progression of the prefrail or robust older people to a frail status [START_REF] Hakeem | Association between oral health and frailty: A systematic review of longitudinal studies[END_REF][START_REF] Horibe | Relationship between masticatory function and frailty in community-dwelling Japanese elderly[END_REF]. Oral health behavior also varies according to frailty status, as older individuals with several health complaints may neglect oral hygiene [START_REF] Niesten | The impact of frailty on oral care behavior of older people: a qualitative study[END_REF].
CGA is essentially multidisciplinary; evaluation is performed by a basic team consisting of a medical practitioner, a nurse, and a social worker; depending on the patient profile, it may be supplemented by a psychologist, a physiotherapist, an occupational therapist, and/or a pharmacist.
Given the importance of oral health in the general health status of older subjects, an intra-oral health examination carried out by a dentist should be performed and care should be proposed if indicated.
The oral exam should be conducted as part of the medical section, along with a nutritional assessment [START_REF] Valencia | Assessment procedures including comprehensive geriatric assessment[END_REF]. In older adults, poor oral health, in particular loss of teeth, is associated with a low-quality diet [START_REF] Gaewkhiew | Functional dentition, dietary intake and nutritional status in Thai older adults[END_REF][START_REF] Kiesswetter | Functional determinants of dietary intake in community-dwelling older adults: a DEDIPAC (DEterminants of DIet and Physical ACtivity) systematic literature review[END_REF][START_REF] Mendonça | Prevalence and determinants of low protein intake in very old adults: insights from the Newcastle 85+ Study[END_REF]. Impaired mastication may be due to a loss of teeth or to a non-functional removable prosthesis [START_REF] Iwasaki | The association between dentition status and sarcopenia in Japanese adults aged ≥75 years[END_REF] and contributes to sarcopenia. An intraoral examination may thus have added value beyond being a component of the nutritional assessment.
The functional domain assessment includes scoring of daily living activities and assessments of physical performance and fall risk. Two tools are available for this purpose: the short physical performance battery (SPPB) [START_REF] Guralnik | A Short physical performance battery assessing lower extremity function: association with self-reported disability and prediction of mortality and nursing home admission[END_REF] and the timed up and go test (TUG) [START_REF] Podsiadlo | The Timed "Up & Go": A test of basic functional mobility for frail elderly persons[END_REF]. These two tests explore balance, gait speed and rising from a chair.
We hypothesized that in older frail or dependent subjects, functional impairment assessed using the SPPB or TUG would be greater in the subgroup with missing teeth, particularly in individuals with a non-functional removable prosthesis or without a needed prosthesis. The objective of the present study was to analyze the relationships between functional impairment and an index of missing teeth, recorded as the geriatric masticatory coefficient, taking into account nutritional status.
Materials and Methods
This cross-sectional study included consecutive patients older than 70 years who were admitted to a single geriatric day hospital for CGA as part of routine care during a 2-year period. Based on the usual activity of the unit, the duration of the inclusion period was selected to include at least 10 subjects per descriptive variable. Patients received written information about the use of their anonymized health data for clinical research purposes and were given the option of refusal. The study was approved by the Ethical Committee of the CHU (university hospital center) of Bordeaux (GP-CE2020-43).
Patients who had undergone an intra-oral health examination performed by a dentist during the CGA were included in the study. Patients unable to perform the TUG or SPPB were excluded. In case of two or more visits to the day hospital, only the first was considered.
Comprehensive gerontological assessment (CGA)
Co-variables of this study included age, gender, living place (personal home or care home), education (none or no certificate, primary school certificate, secondary school certificate, bachelor and over). Mental health was assessed for depression and cognitive troubles by screening with the 15-Geriatric depression scale (GDS) [START_REF] Yesavage | 9/Geriatric Depression Scale (GDS)[END_REF] and the Gréco French version of mini mental status examination (MMSe), respectively [START_REF] Folstein | Mini-mental state[END_REF][START_REF] Kakafat | Standardisation et étalonnage français du Mini Mental State (Mms) Version Gréco[END_REF]. A GDS score > 4/15 was considered to indicate depression, and an MMSe score < 23/30 was taken to indicate cognitive troubles. The functional domain included TUG or SPPB assessment (see 2.3), basic activities of daily living (ADL) [START_REF] Katz | Assessing self-maintenance: activities of daily living, mobility, and instrumental activities of daily living[END_REF] and instrumental activities of daily living (IADL) [START_REF] Lawton | Assessment of Older People: Self-Maintaining and Instrumental Activities of Daily Living[END_REF]). An ADL score > 1/12 and an IADL score < 8/8 indicated dependence. The medical evaluation included chronic pathologies listing, full treatment list and a nutritional assessment. The latter consisted of a body mass index (BMI, weight (kg)/height (m)², mini nutritional assessment (MNA©) [START_REF] Kaiser | Validation of the Mini Nutritional Assessment short-form (MNA®-SF): A practical tool for identification of nutritional status[END_REF]scoring and dental examination by a dentist (see below). MNA© allows to identify malnutrition (MNA 0-7/14) and at risk for malnutrition . The version of the MNA based on calf circumference (CC) (<31 cm vs. ≥31 cm) instead of BMI was used because CC serves as a surrogate of muscle mass [START_REF] Cruz-Jentoft | Sarcopenia: revised European consensus on definition and diagnosis[END_REF]. Among routine blood sample analyses we selected hemoglobin (g/100 mL), C-reactive protein (mg/L), serum albumin (g/L), and vitamin D (ng/mL) levels. Glomerular filtration rate (eGFR) was estimated using the CKD-Epidemiology Collaboration equation (CKD-EPI) [START_REF] Levey | A new equation to estimate glomerular filtration rate[END_REF] which is based on serum levels of creatinine. Patients were categorized according to the Rockwood clinical frailty scale(K. [START_REF] Rockwood | A global clinical measure of fitness and frailty in elderly people[END_REF].
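For illustration, the following sketch implements the 2009 CKD-EPI creatinine equation with its commonly published coefficients (serum creatinine in mg/dL); the example patient is invented, and this is not the code used for the study analyses.

```python
def ckd_epi_2009(scr_mg_dl: float, age: float, female: bool, black: bool = False) -> float:
    """eGFR (mL/min/1.73 m^2) from the 2009 CKD-EPI creatinine equation.

    Coefficients as commonly published; serum creatinine in mg/dL.
    """
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    egfr = 141 * min(ratio, 1.0) ** alpha * max(ratio, 1.0) ** -1.209 * 0.993 ** age
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

# Invented example: an 82-year-old woman with a serum creatinine of 1.1 mg/dL
print(round(ckd_epi_2009(1.1, 82, female=True), 1))
```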
Geriatric masticatory coefficient (GMC)
In routine care, chewing performance is not measured in these older and frail adults; instead, the masticatory capacity of the remaining teeth is assessed. Mastication involves both posterior occluding pairs of teeth (POPs) and anterior grasping teeth.
The masticatory coefficient (masticatory percentage) [START_REF] Dion | Correction of nutrition test errors for more accurate quantification of the link between dental health and malnutrition[END_REF] is determined by assigning points as follows to each functional tooth according to its role in mastication as long as the opposite tooth is also functional: 5 points for each molar except the third maxillary molar (2 points) and the third mandibular molar (3 points), 3 points for each premolar, 4 points for each canine tooth, and 1 point for each incisor except the central maxillary incisor (2 points). The maximal masticatory coefficient is 100. A modified index, the geriatric masticatory coefficient (GMC), was used because it takes into account all of the person's remaining teeth, not only those belonging to a functional pair [START_REF] Berard | Removable prosthesis improves body mass index in elderly people[END_REF] . Because teeth with missing opposite teeth have a non-zero role during mastication, GMC may better reflect the remaining masticatory capacity. With this approach, individuals with several remaining teeth were not assessed as edentulous. Determining the remaining masticatory capacity is an important point for treatment decisions, such as extraction of teeth or placement of dentures.
In patients wearing a prosthesis, a corrected GMC was similarly determined during the clinical observation. Prostheses, fixed or removable, were given the value of the teeth they replace, provided they allowed normal meshing. Thus, the variable "corrected GMC" represented the best GMC for each patient, with or without a prosthesis. Finally, the number of POPs (range 0-8) without a prosthesis was also recorded; pairs of opposite teeth were taken into account provided they allowed correct meshing.
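The point scheme described above can be illustrated with a short computational sketch. The tooth encoding, function names and example dentition below are assumptions made for illustration; only the point values and the GMC rule that teeth without an antagonist still count are taken from the text.

def tooth_points(jaw, kind, third_molar=False, central_incisor=False):
    """Points assigned to one tooth according to its presumed role in mastication."""
    if kind == "molar":
        if third_molar:                          # third molars are down-weighted
            return 2 if jaw == "maxillary" else 3
        return 5
    if kind == "premolar":
        return 3
    if kind == "canine":
        return 4
    if kind == "incisor":
        return 2 if (jaw == "maxillary" and central_incisor) else 1
    raise ValueError(f"unknown tooth kind: {kind}")

def gmc(teeth):
    """Geriatric masticatory coefficient: every remaining (or replaced) tooth
    scores its points whether or not its antagonist is present (maximum = 100)."""
    return sum(tooth_points(*t) for t in teeth)

# Example: only the six upper anterior teeth remain.
upper_front = [
    ("maxillary", "canine", False, False), ("maxillary", "canine", False, False),
    ("maxillary", "incisor", False, True), ("maxillary", "incisor", False, True),
    ("maxillary", "incisor", False, False), ("maxillary", "incisor", False, False),
]
print(gmc(upper_front))  # 4 + 4 + 2 + 2 + 1 + 1 = 14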
Functional performance
Motor performance was assessed using either the SPPB [START_REF] Guralnik | A Short physical performance battery assessing lower extremity function: association with self-reported disability and prediction of mortality and nursing home admission[END_REF] or the TUG test [START_REF] Podsiadlo | The Timed "Up & Go": A test of basic functional mobility for frail elderly persons[END_REF] depending on the medical practitioner's preference. Both evaluate gait speed, standing balance, and rising from a chair. The SPPB consists of three 4-point scores: 1, standing balance: participants attempt to maintain side-by-side, semi-tandem, and tandem positions for 10 s; 2, a 4 m walk at normal pace timed from a standing start; and 3, time needed to rise from a chair as quickly as possible five times. Overall, the maximal SPPB score is 12 (4 points per test). The TUG test evaluates the same abilities but in a single test that focuses on the risk of falling.
The patient is instructed to rise from a chair (with arms), walk 3 m, turn and return to the chair, and then sit down again. The performance is timed. Cognitive troubles, mainly attention troubles, increase the time spent on the task. The TUG thus explores the risk of falling.
Analyses
The duration of the inclusion period was selected to obtain a sample with at least 10 patients for each descriptive variable, based on the usual activity of the unit. Quantitative variables are expressed as mean and standard deviation (SD), ordinal variables (scores) are shown as median (IQR, interquartile range) and qualitative variables are given as numbers and percentages. SPPB is an ordinal variable and is also expressed as mean (SD). Because the link between dental status and malnutrition is well established, the characteristics of the patients are presented according to MNA categories, and the categories are compared using a chi-square test for qualitative variables and a one-way ANOVA for quantitative variables. Correlations between GMC, corrected GMC, or POPs and the SPPB or TUG test were determined using Spearman coefficient tests. Because SPPB is an ordinal variable and TUG is not normally distributed in frail or dependent patients, partial correlation tests were performed to analyze the relationships between corrected GMC, GMC, and POPs and SPPB or TUG, controlling for age, sex, and MNA category in order to identify the role of malnutrition in those relationships. The software SPSS 23 (IBM®) was used for these analyses.
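For readers who wish to reproduce a rank-based partial correlation of this kind outside SPSS, the sketch below illustrates the principle (rank transformation, removal of the covariate effects, correlation of the residuals). The column names and synthetic data are assumptions for illustration and do not correspond to the study data; the p-value obtained this way is only approximate.

import numpy as np
from scipy import stats

def partial_spearman(x, y, covars):
    """Spearman correlation between x and y after removing the (rank-)linear
    effect of the covariates from both variables."""
    rx, ry = stats.rankdata(x), stats.rankdata(y)
    Z = np.column_stack([stats.rankdata(c) for c in covars])
    Z = np.column_stack([np.ones(len(rx)), Z])                # add intercept
    res_x = rx - Z @ np.linalg.lstsq(Z, rx, rcond=None)[0]
    res_y = ry - Z @ np.linalg.lstsq(Z, ry, rcond=None)[0]
    return stats.pearsonr(res_x, res_y)                       # r and approximate p

rng = np.random.default_rng(0)
n = 256
age = rng.normal(84, 6, n)
sex = rng.integers(0, 2, n)
mna = rng.integers(0, 3, n)          # 0 = malnourished, 1 = at risk, 2 = normal
gmc_idx = rng.normal(54, 30, n)
sppb = np.clip(0.05 * gmc_idx + rng.normal(5, 3, n), 0, 12)
print(partial_spearman(gmc_idx, sppb, [age, sex, mna]))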
Results
During the study period, 302 patients underwent a CGA that included an intra-oral examination.
After the exclusion of 39 patients due to a missing physical performance assessment in their CGAs, and of the 7 duplicate assessments from patients who each underwent two CGAs (only the first visit was kept), the study sample included 256 patients.
The characteristics of the study population are presented in Table 1. The SPPB test was performed in 91 patients and the TUG test was performed in 211. Serious chronic diseases had a high prevalence and included heart failure in 76 (29.7%), respiratory diseases in 32 (12.5%), major cognitive troubles in 110 (44.4%), history of cancer in 59 (23.0%), stroke in 43 (16.8%), diabetes mellitus in 82 (32.0%), and hypertension in 185 (72.5%) patients. The number of medications taken daily by these patients was also high: 6.9 (SD 3.5), with five or more different medications taken by 191 (74.9%) patients. Chronic inflammation was of low-medium intensity as reflected by a mean C-reactive protein level of 7.7 (15.3) mg/L. Vitamin D levels were <10 ng/mL in 48 (22.1%) patients; 31 (13.0%) patients had severe renal insufficiency (CKD-EPI <30 mL/min/1.73 m²).
Functional ADL or IADL dependence was present but at low levels. According to the frailty scale, most of the subjects were classified in the categories "mildly frail" (79 (32.1%)) or "moderately frail" (77 (31.3%)). The others were either in the category "managing well" (25 (9.8%)) or, at the other extreme, "severely frail" (36 (14.6%)).
Malnutrition and risk of malnutrition were frequent, identified in 75 (30.5%) and 126 (51.2%) patients, respectively. Patients living in a nursing home, with a higher 15-GDS, lower MMSe, major cognitive troubles, or a higher level of dependence were more likely to be malnourished or at risk of malnutrition.
The physical performance of the population was poor, evidenced by an SPPB score of <8/12 in 59 (64.8%) patients and a TUG score ≥ 20 s in 102 (46.4%) patients. The SPPB score was lower in malnourished patients and in patients at risk for malnutrition than in normo-nourished patients.
The dental examinations showed important impairments, based on a low GMC and the low number of POPs: 53.7/100 (SD 30.6) and 3.1/8 (SD 3.0), respectively. One hundred and five patients had a dental prosthesis. After correction for patients with a prosthesis, the corrected GMC was 75.4 (SD 19.8). While the corrected GMC was related to the MNA category, this was not the case for either the GMC or the number of POPs.
After controlling for age, gender and MNA category, an association between the corrected GMC and both the TUG time (-0.198, p = 0.005) and the SPPB score (0.282, p = 0.009) was determined (Table 2). Both the GMC and POPs were associated with the SPPB score (GMC: 0.269, p = 0.013; POPs: 0.319, p = 0.004) but not with the TUG time (Table 2).
Discussion
We found a cross-sectional association between nutritional status, corrected GMC, and physical performance in mildly to severely frail older adults with complex health problems. The association between the corrected GMC and physical performance was independent of nutritional status.
Masticatory capacity was evaluated using two different approaches: the number of pairs of opposing teeth (POPs) and the number of remaining teeth weighted according to their presumed role in mastication (GMC). The adjusted partial correlations showed relationships between both indices and the SPPB score.
An association between diminished masticatory capacity and malnutrition in older people has previously been reported [START_REF] Toniazzo | Relationship of nutritional status and oral health in elderly: Systematic review with meta-analysis[END_REF]. Because malnutrition is associated with low muscle strength [START_REF] Veronese | Effect of nutritional supplementations on physical performance and muscle strength parameters in older people: A systematic review and meta-analysis[END_REF], lower muscle performance is expected in patients with an impaired corrected masticatory coefficient. However, malnutrition does not fully explain the link between a lower corrected GMC and lower physical functioning of the limbs. Physical performance relies not only on strength but also on balance, proprioceptive functions, and several cognitive processes, such as attention. Teeth clenching may reinforce balance through a facilitation of the pretibial reflex that is related to applied strength [START_REF] Takada | Modulation of H reflex of pretibial muscles and reciprocal Ia inhibition of soleus muscle during voluntary teeth clenching in humans[END_REF]. Postural balance may be improved during chewing [START_REF] Alghadir | Effect of chewing on postural stability during quiet standing in healthy young males[END_REF]. A loss of teeth may directly impact physical performance, particularly balance. Pain, inflammatory syndromes, and dyspnea also diminish physical performance. In a previous study, these relationships were considered in a fully adjusted model, in which the presence of 19 or fewer teeth not corrected with a prosthesis was found to increase the risk of falling [START_REF] Yamamoto | Dental status and incident falls among older Japanese: a prospective cohort study[END_REF].
The most common characteristic of our frail study population was the high prevalence of severe comorbidity, including cardiovascular diseases, diabetes, COPD and asthma, cognitive troubles, or a history of stroke. The physical performance of patients with severe comorbidities is frequently impaired. The comorbidities are possibly secondary to the loss of teeth. In three community studies, each missing tooth increased cardiovascular risk and mortality [START_REF] Holmlund | Number of teeth as a predictor of cardiovascular mortality in a cohort of 7,674 subjects followed for 12 Years[END_REF][START_REF] Joshy | Is poor oral health a risk marker for incident cardiovascular disease hospitalisation and all-cause mortality? Findings from 172 630 participants from the prospective 45 and Up Study[END_REF][START_REF] Lee | Tooth Loss Predicts Myocardial Infarction, Heart Failure, Stroke, and Death[END_REF]. In a large cross-sectional study, severe tooth loss and a history of stroke were independently associated with inflammation [START_REF] You | Tooth loss, systemic inflammation, and prevalent stroke among participants in the reasons for geographic and racial difference in stroke (REGARDS) study[END_REF]. Missing teeth may predict a higher risk for cognitive impairment [START_REF] Zhang | Poor oral health conditions and cognitive decline: Studies in humans and rats[END_REF]. In nursing home residents, patients with missing POPs without prostheses have an increased 1-year mortality, independent of nutritional status [START_REF] Dewake | Posterior occluding pairs of teeth or dentures and 1-year mortality in nursing home residents in Japan[END_REF]. However, the causal link between missing teeth and the above-listed pathologies is probably multifactorial and bi-directional [START_REF] Felton | Complete edentulism and comorbid diseases: an update[END_REF]. Furthermore, some risk factors for tooth loss are shared by systemic diseases. Low-medium grade inflammation due to periodontitis can cause both systemic inflammation (and cardiovascular diseases) and a loss of teeth.
Chronic oral inflammation may induce dental loss and increase the risk for cardiovascular disease, malnutrition, and muscle functional impairment. Finally, the behavioral changes that occur with severe disease are likely to impair both self-care and access to professional care, thus worsening oral health.
In our study population, the association between masticatory capacity and physical performance was better described by the corrected GMC than by the GMC, suggesting that prosthetic rehabilitation may protect against physical decline. In line with a previous study showing an improvement in cardiovascular risk with better oral care [START_REF] Park | Improved oral hygiene care attenuates the cardiovascular risk of oral health disease: a population-based study from Korea[END_REF], dental care within the scope of the CGA holds promise in the management of complex health problems in older adults.
The limitations of our study should be noted. First, it was a cross-sectional study and causality could not be determined. Second, masticatory capacity, not function, was assessed. Tests of chewing ability, such as color-indicator gums, and measures of masseter or hand grip strength are not done in routine care. However, associations between impaired chewing ability or weaker occlusal forces and sarcopenia were reported in a previous study of community-dwelling elders [START_REF] Murakami | Relationship between chewing ability and sarcopenia in Japanese community-dwelling older adults[END_REF]. Therefore, low muscle strength may also affect the muscles involved in mastication (masseter, tongue, cheeks). In routine clinical practice, a "static" assessment of masticatory capacity in frail, older patients can be easily and reliably performed. Third, several subject characteristics potentially associated with the study variables were not recorded and therefore not analyzed: smoking and alcohol use and usual physical activity. Fourth, this study did not allow us to clarify the role of multi-morbidity in the relationship between masticatory capacities and physical performance. Finally, the magnitude of the correlations was modest. Among the strengths of our study were the large sample and the inclusion of frail older individuals engaged in preventive action to decrease the burden of multimorbidity. Our results suggest that oral care has a role to play in preventing functional decline in the frail elderly.
Conclusions
In older, mildly to severely frail and multi-morbid patients, physical performance is more limited in those with tooth loss. In addition, malnutrition is not the only link between chronic diseases, oral health and posture: impaired posture may be directly related to missing teeth. A longitudinal study should explore the effects of dental care on physical performance in multi-morbid older patients.
Table 1 .
1 Characteristics of the patients according to Mini-Nutritional Assessment (MNA) category.
All subjects MNA 0-7 MNA 8-11 MNA 12-14 P value*
Characteristics N = 256 Malnutrition At risk of Normal Comparison
N = 75 malnutrition N = 45 according to
N = 126 MNA classes
Age 83.8 (6.2) 83.3 (5.4) 84.6 (5.5) 84.0 (4.3) <0.001
Gender (F/M) 148/108 47/28 79/47 18/27 0.020
Educational level 0.389
-no certificate 63 (25.2) 20 (27.8) 34 (27.6) 8 (17.8)
-primary school
certificate 87 (34.8) 27 (37.5) 46 (37.4) 11 (24.4)
-secondary school
certificate 43 (17.2) 11 (15.3) 19 (15.4) 10 (22.2)
-bachelor and over 57 (22.8) 14 (19.4) 26 (19.5) 16 (35.5)
Living place 20 (7.8) 13 (17.3) 4 (3.2) 0 (0) <0.001
(nursing home)
BMI (kg / m²) 26.5 (5.3) 24.1 (5.0) 26.4 (3.9) 28.2 (5.8) <0.001
Calf circumference 33.0 (3.9) 31.6 (3.5) 33.6 (3.6) 36.1 (3.7) <0.001
(cm)
GMC (0-100) 53.7 (30.6) 43.6 (4.3) 58.9 (31.0) 71.3 (9.3) 0.096
Corrected GMC (0 - 75.4 (19.8) 75.9 (20.9) 77.6 (15.9) 77.3 (9.4) <0.001
100)
POPs (n) 3.1 (3.0) 2.6 (3.6) 3.6 (3.2) 4.3 (2.2) 0.056
TUG (s) 22.0 (14.6) 22.0 (11.4) 25.3 (11.5) 18.8 (7.3) 0.303
SPPB (0-12) 6 (7) 4 (4) 4 (6) 10 (3) <0.001
MMSe (0-30) 23 (9) 23 (7) 22 (9) 29 (3) <0.001
15-GDS (0-15) 6 (6) 8 (6) 7 (6) 4 (3) <0.001
ADL (0-12) 1 (4) 3 (6) 2 (5) 0 (0) 0.005
IADL (0-8) 4 (6) 2 (3) 2 (5) 7 (3) <0.001
Diabetes (yes) 82 (32.0) 26 (34.7) 36 (28.6) 18 (40.0) 0.333
Stroke history 43 (16.8) 15 (20.0) 15 (11.9) 11 (24.4) 0.099
Cognitive troubles 110 (44.4) 42 (59.2) 60 (48.8) 4 (9.1) <0.001
Hypertension 185 (72.5) 57 (76.0) 84 (67.2) 38 (84.4) 0.065
History of cancer 59 (23.0) 16 (21.3) 27 (21.4) 14 (31.1) 0.377
Heart failure 76 (29.7) 18 (24.0) 38 (30.2) 17 (37.8) 0.274
COPD-asthma 32 (12.5) 8 (10.7) 14 (11.1) 9 (20.0) 0.253
Number of 6.9 (3.48) 6.7 (3.2) 6.7 (3.5) 8.5 (3.8) 0.479
medication
eGFR (CKD-EPI, 65.0 (19.9) 66.4 (15.1) 65.2 (14.0) 62.4 (28.9) <0.001
mL/min / 1.73m²)
C-reactive protein 7.7 (15.7) 23.4 (51.2) 9.3 (14.0) 3.9 (4.1) 0.553
(mg /L)
Serum albumin 37.7 (4.0) 36.8 (4.5) 36.9 (3.1) 38.3 (4.0) 0.123
(g /L)
Hemoglobin (g / 100 13.0 (1.5) 12.6 (1.6) 13.1 (1.4) 12.9 (1.2) 0.231
mL)
Vitamin D 22.3 (13.0) 27.2 (14.1) 28.2 (19.3) 19.5 (9.7) 0.162
(ng / mL)
Variables are presented as Mean (SD), Median (Interquartile range) or N (%). POPs: posterior occluding pairs of teeth; GMC: geriatric masticatory coefficient; TUG: timed up and go test; SPPB: short physical performance battery; MMSe: mini mental state examination; GDS: geriatric depression scale; ADL: basic activities of daily living; IADL: instrumental activities of daily living; eGFR: estimated glomerular filtration rate. *Quantitative variables were compared using one-way ANOVA and qualitative variables using the Chi2 test.
Table 2 .
2 Correlations between physical performance assessment and masticatory capacity assessment based on the number of teeth
03381875 | en | ["sdv"] | 2024/03/04 16:41:22 | 2021 | https://hal.science/hal-03381875/file/S0003687021001046.pdf
email: [email protected]
Alain Groslambert
email: [email protected]
Philippe Gimenez
email: [email protected]
Sidney Grosprêtre
email: [email protected]
Gilles Ravier
email: [email protected]
Philémon Marcel
Psychophysiological responses of firefighters to day and night rescue interventions
Keywords: firefighting activity, autonomic nervous system, heart rate variability
This study aimed 1) to assess the psychophysiological responses throughout a rescue intervention performed during the day and at night and 2) to determine if a vibrating alarm influences these psychophysiological responses at night. Sixteen male firefighters completed a simulated intervention under three different conditions: 1) during the day with a sound alarm signal (DaySA), 2) during the night with a sound alarm signal (NightSA), 3) during the night with a vibrating alarm signal (NightVA). Cardiovascular and psychological stress were recorded throughout the interventions.
During the alarm signal, HR reactivity was greater in NightSA than in DaySA (p<0.01). Parasympathetic reactivation and self-confidence were significantly lower in NightSA than in DaySA (p<0.05). HR reactivity was decreased in NightVA in comparison to NightSA (p<0.05).
Overall, the rescue intervention had a greater impact on the psychophysiological variables during the night than during the day, and the type of alarm had a minor effect.
Introduction
Firefighting is a very demanding occupational activity that leads to a high level of psychophysiological stress during rescue interventions [START_REF] Williams-Bell | Physiological demands of the firefighter candidate physical ability test[END_REF][START_REF] Horn | Physiological recovery from firefighting activities in rehabilitation and beyond[END_REF][START_REF] Horn | Firefighter and fire instructor's physiological responses and safety in various training fire environments[END_REF][START_REF] Smith | Cardiovascular strain of firefighting and the risk of sudden cardiac events[END_REF][START_REF] Windisch | Physiological Responses to Firefighting in Extreme Temperatures Do Not Compare to Firefighting in Temperate Conditions[END_REF]. It is important for firefighters to be physically fit in order to enhance their firefighting performance, which can be quantified by the time required to carry out their duties [START_REF] Williams-Bell | Physiological demands of the firefighter candidate physical ability test[END_REF], as well as by the associated physiological responses [START_REF] Windisch | Physiological Responses to Firefighting in Extreme Temperatures Do Not Compare to Firefighting in Temperate Conditions[END_REF]. In France, firefighters work a 24-hour shift, so they have to maintain a high level of performance regardless of the time of day or night. However, to date, firefighters' performance and physiological stress in response to interventions have only been studied during the day [START_REF] Williams-Bell | Physiological demands of the firefighter candidate physical ability test[END_REF][START_REF] Horn | Physiological recovery from firefighting activities in rehabilitation and beyond[END_REF][START_REF] Horn | Firefighter and fire instructor's physiological responses and safety in various training fire environments[END_REF][START_REF] Windisch | Physiological Responses to Firefighting in Extreme Temperatures Do Not Compare to Firefighting in Temperate Conditions[END_REF], except for the acute response to the alarm signal, which has been compared between day and night conditions [START_REF] Hall | The acute physiological stress response to an emergency alarm and mobilization during the day and at night[END_REF]. The authors showed that the increase in heart rate (HR) was similar during the day and night, whereas the increase in cortisol was higher at night than during the day. Several studies conducted with healthy subjects have shown that physical performance is better late in the afternoon than in the early morning [START_REF] Baxter | Influence of time of day on all-out swimming[END_REF][START_REF] Thun | Sleep, circadian rhythms, and athletic performance[END_REF].
However, these results appear to be equivocal, and other studies revealed no effect of circadian rhythm on physical performance [START_REF] Reilly | Investigation of circadian rhythms in metabolic responses to exercise[END_REF][START_REF] Deschenes | Chronobiological effects on exercise performance and selected physiological responses[END_REF]. Moreover, at rest, several physiological functions, such as rectal temperature, HR, plasma lactates, blood pressure (BP) and cortisol levels, follow a circadian rhythmicity [START_REF] Reilly | Investigation of circadian rhythms in metabolic responses to exercise[END_REF][START_REF] Deschenes | Chronobiological effects on exercise performance and selected physiological responses[END_REF], which can also be found during or after exercise [START_REF] Harma | Circadian variation of physiological functions in physically average and very fit dayworkers[END_REF]. These results are controversial since other studies did not show any circadian variations in HR responses with exercise [START_REF] Reilly | Investigation of circadian rhythms in metabolic responses to exercise[END_REF][START_REF] Deschenes | Chronobiological effects on exercise performance and selected physiological responses[END_REF]. Consequently, the general picture of the circadian modulations of performance and physiological variables is unclear. This variation seems to be highly dependent upon the type of exercise and the characteristics of the participants.
Moreover, beyond the circadian variations, the state (awake or asleep) in which the firefighter is at the time of the intervention can influence his or her psychophysiological responses. Indeed, heart rate variability (HRV) and HR seem to be largely governed by a sleep-wake-dependent process [START_REF] Viola | Sleep processes exert a predominant influence on the 24-h profile of heart rate variability[END_REF]. Therefore, for the safety and security of firefighters, it is important to determine whether their performance is modulated by the moment of the intervention (daytime vs night-time).
Moreover, the psychophysiological stress resulting from firefighting activities depends on the interaction of three different phases during an intervention. The first phase, the alarm mobilisation phase, corresponds to the alarm signal followed by the moment when firefighters change into their protective clothing before leaving the fire station. This phase leads to physiological stress with an increase of HR from 20 to 66 beats⋅min -1 in a short period of time [START_REF] Kuorinka | Firefighters' reaction to alarm, an ECG and heart rate study[END_REF][START_REF] Karlsson | Heart rate as a marker of stress in ambulance personnel: a pilot study of the body's response to the ambulance alarm[END_REF][START_REF] Hall | The acute physiological stress response to an emergency alarm and mobilization during the day and at night[END_REF], and an increase in salivary cortisol concentrations [START_REF] Hall | The acute physiological stress response to an emergency alarm and mobilization during the day and at night[END_REF]. The second phase, the fire suppression phase, corresponds to the phase of firefighting activities, such as forcible entry, search and rescue operations or extinguishing a fire [START_REF] Kales | Firefighters and on-duty deaths from coronary heart disease: a case control study[END_REF]. During this phase, firefighters engage in intense physical activity in psychologically stressful situations with high temperatures. This leads to high, and sometimes maximal, HR, and to an increase in BP, body temperature and psychological stress [START_REF] Barr | The thermal ergonomics of firefighting reviewed[END_REF][START_REF] Boisseau | Air management and physiological responses during simulated firefighting tasks in a high-rise structure[END_REF][START_REF] Horn | Physiological recovery from firefighting activities in rehabilitation and beyond[END_REF][START_REF] Windisch | Physiological Responses to Firefighting in Extreme Temperatures Do Not Compare to Firefighting in Temperate Conditions[END_REF][START_REF] Ravier | Physiological responses and parasympathetic reactivation in rescue interventions: The effect of the breathing apparatus[END_REF][START_REF] Wilkinson | Physiologic strain of SCBA confidence course training compared to circuit training and live-fire training[END_REF]. The third phase, the alarm return phase, corresponds to the several hours following a fire suppression. During this period, the psychophysiological stress generated by the intervention remains, and it can sometimes take several hours for the physiological parameters to return to initial levels. After simulated firefighting activities, core temperature continues to increase for many minutes [START_REF] Selkirk | Active versus passive cooling during work in warm environments while wearing firefighting protective clothing[END_REF][START_REF] Horn | Physiological recovery from firefighting activities in rehabilitation and beyond[END_REF], and the HR returns to baseline values only after about 80 minutes [START_REF] Horn | Physiological recovery from firefighting activities in rehabilitation and beyond[END_REF]. Furthermore, a significant acute parasympathetic perturbation has been observed after simulated firefighting activities [START_REF] Ravier | Physiological responses and parasympathetic reactivation in rescue interventions: The effect of the breathing apparatus[END_REF]. 
Indeed, in that study, a greater vagal depression was observed after a simulated emergency than after an incremental running test performed until exhaustion.
The amount of stress experienced during fire interventions can represent a major issue for firefighters on two levels.
First, for firefighters with underlying cardiovascular disease, a significant amount of stress can induce pathophysiological changes causing a risk of death [START_REF] Smith | Cardiovascular strain of firefighting and the risk of sudden cardiac events[END_REF][START_REF] Kales | Firefighting and the heart: implications for prevention[END_REF]. Second, regular exposure to a source of stress could contribute to the long-term development of cardiovascular disease [START_REF] Steptoe | Stress and cardiovascular disease[END_REF]. Thus, despite a high level of exposure to traumatic risks [START_REF] Kazemi | Comparison of melatonin profile and alertness of firefighters with different work schedules[END_REF], sudden cardiac events are the leading cause of line-of-duty death for firefighters in the United States [START_REF] Kales | Emergency duties and deaths from heart disease among firefighters in the United States[END_REF][START_REF] Smith | Cardiovascular strain of firefighting and the risk of sudden cardiac events[END_REF]. Consequently, the comparison of a day and night intervention would determine whether or not firefighters' performance and psychophysiological stress are affected by the time period in which the intervention occurs, considering that an increase in cardiovascular stress is associated with adverse outcomes for the health of firefighters [START_REF] Smith | Cardiovascular strain of firefighting and the risk of sudden cardiac events[END_REF][START_REF] Kales | Firefighting and the heart: implications for prevention[END_REF].
Finally, the sudden increase in HR and BP in response to the emergency alarm increases the risk of death among firefighters [START_REF] Kales | Firefighters and on-duty deaths from coronary heart disease: a case control study[END_REF][START_REF] Kales | Emergency duties and deaths from heart disease among firefighters in the United States[END_REF]. A previous study showed that in comparison to a classical emergency alarm, a gentle awakening carried out by an experimenter mitigated the increase in HR [START_REF] Hall | The acute physiological stress response to an emergency alarm and mobilization during the day and at night[END_REF].
Usually, the alarm signal is a sound of at least 85 dB(A) emitted by a transmitter carried by the firefighter.
Recently, the Regional Fire and Rescue Service offered some firefighters a modified transmitter that starts by vibrating for 30 seconds before emitting a sound signal if the transmitter is not turned off. The objective of this device is to reduce the firefighters' stress during the alarm. It is therefore important to measure its effectiveness under ecological workplace conditions, in order to guide firefighters in their choice of alarms.
Consequently, the present field study aimed to: (1) compare the daytime and night-time physiological and psychological responses of structural firefighters to a simulated emergency; (2) compare the physiological and psychological responses of structural firefighters to a sound alarm and a vibration alarm, at night. It was hypothesised that the night intervention would elicit higher physiological responses than the day intervention, and that this higher response could be attenuated by a less noisy alarm.
Materials and methods
Participants
Sixteen male firefighters participated in this study (Table 1). All the participants had been medically screened by the local firefighter's healthcare institution, and each had received medical clearance to engage in firefighting activities. Prior to testing, all the participants gave their voluntary written informed consent, which indicated the purpose, the benefits and the risks of the investigation and the right to withdraw from participation at any time.
This study was conducted in accordance with the recommendations of the 1964 Declaration of Helsinki and its later amendments. The research design was examined and approved by the medical service and the Institutional Review Board of the Regional Fire and Rescue Service.
Experimental design
The study was conducted at the participants' workplace, and the rescue intervention was part of their occupational activity. The experimental exercises were part of the firefighters' regular training exercises. Prior to the experimental sessions, during a familiarisation session, the procedures were thoroughly explained, and the participants tested the connected suit and the psychological questionnaires used in the experimental sessions.
The connected suit used in this study was that of the Hexoskin brand (Hexoskin® Carré Technologies Inc., Montreal, Canada). Next, the participants completed the 30-15 Intermittent Fitness Test [START_REF] Buchheit | The 30-15 intermittent fitness test: accuracy for individualizing interval training of young intermittent sport players[END_REF] to determine their individual HR peak (HRmax). V̇O2max was estimated from the participant's final running speed using the following formula [START_REF] Buchheit | The 30-15 intermittent fitness test: 10 year review[END_REF]: V̇O2max (mL⋅min⁻¹⋅kg⁻¹) = 28.3 - (2.15 x G) - (0.741 x A) - (0.0357 x W) + (0.0586 x A x VIFT) + (1.03 x VIFT), where VIFT is the final running speed, G is gender (male = 1), A is age (in years) and W is weight (in kilograms). Then, all the participants performed the three experimental conditions in a randomised order, interspersed by a minimum of 3 days and a maximum of 14 days. The firefighters completed the same simulated rescue intervention (Figure 1) under three different conditions. For the DaySA condition, the participants completed a 10-minute sitting rest period at 08:00 h in a quiet room at ambient temperature (~20°C) in order to perform the resting measurements. Then, at 08:30 h, they went to a bedroom that was assigned to them for the duration of the study. In the bedroom, they rested without sleeping, lying down and wearing the same attire they wear when sleeping at night in the fire station. The participants were informed that they might hear an alarm for the simulated rescue intervention at any time of the morning. At 09:00 h, the alarm signal was triggered. It was issued by a transmitter assigned to the firefighter and usually used for real interventions. The alarm was placed on a table next to the bed, at a distance of about 1 meter from the firefighter. The sound of the signal was about 85 dB(A). Upon hearing the signal, the firefighters had to get dressed and go to the locker room as quickly as possible. The time between the triggering of the alarm and the arrival in the locker room corresponds to the alarm mobilisation phase. Once in the locker room, the pre-simulated intervention measures were carried out for 5 minutes, and the participants were equipped with the protective equipment for the simulated rescue intervention. The personal protective equipment consisted of a helmet, gloves, a bunker coat and pants, boots and a self-contained breathing apparatus (SCBA) with an air cylinder and facial mask, for a total weight of approximately 22 kg. For breathing at a standard rate of 40 L⋅min⁻¹, the SCBA cylinder used in the study can provide 45 minutes of air. Under their personal protective equipment, the firefighters wore the standard clothing provided by the service, namely long pants and t-shirts. Once equipped and ready, all the participants performed the simulated rescue intervention as quickly as possible. When the firefighters finished the simulated rescue intervention, they had 1 minute to remove their personal protective equipment (SCBA, helmet, jacket) before completing a 10-minute resting period in a seated position in a quiet room at ambient temperature (~20°C). For the NightSA and the NightVA conditions, the resting measurements were completed at 22:00 h in the same room as in the DaySA condition. Then, the participants went into the same bedroom that was used for the daytime simulation, between 22:30 h and 23:30 h, depending on their habits.
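The estimation equation quoted above can be implemented directly; the sketch below is illustrative, with an assumed function name and example values, and it uses only the male = 1 gender coding given in the text.

def estimate_vo2max(vift_kmh, age_yr, weight_kg, g=1):
    """Estimated VO2max (mL.min-1.kg-1) from the final 30-15 IFT speed.
    g is the gender code used by the equation (male = 1, per the text above)."""
    return (28.3 - 2.15 * g - 0.741 * age_yr - 0.0357 * weight_kg
            + 0.0586 * age_yr * vift_kmh + 1.03 * vift_kmh)

# Example: a 30-year-old, 80 kg firefighter reaching 18.5 km/h at the end of the test.
print(round(estimate_vo2max(vift_kmh=18.5, age_yr=30, weight_kg=80), 1))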
They were informed that they might hear an alarm for the simulated rescue intervention at any time during the night.
The alarm signal was triggered approximately 2 hours after the bedtime that was indicated by the firefighter, between 00:30 h and 01:30 h. Particular care was taken to ensure that the participants were sleeping deeply when the alarm sounded. We asked them in what state they were at the time of the alarm (sleeping, sleepy, awake), and if they were awake or sleepy the experiment was postponed to a different night. The same transmitter and the same sound were used for the NightSA condition and the DaySA condition. For the NightVA condition, the transmitter signal was modified so that it would only vibrate for 30 seconds. At the end of the 30 seconds, if the firefighter did not turn off the transmitter, a sound signal was added to the vibrations. The whole protocol during the NightVA condition was otherwise the same as that used in the NightSA condition (Figure 1). Again, we ensured that the participants were woken only by the vibrations. If it was the sound that woke them up, the experiment was postponed to a different night.
The three experimental conditions were performed in a randomised order. For the three conditions, everything was done to allow for maximum uncertainty, as in real conditions. The study was conducted at the participants' workplace; it was part of their occupational activity, and they could be called either for the simulated rescue intervention or for real interventions. In addition, several nights were recorded with the Hexoskin® suit during which the participants were warned that they could be awakened, without us actually doing so. The participants were also asked to abstain from consuming caffeine 3 hours before testing and from engaging in high-intensity exercise and consuming alcohol 24 hours before the tests. All the participants were familiar with the simulated rescue intervention and the fitness test because both had been part of their occupational skills training and the regular monitoring of their physical abilities. Indeed, once a year, the firefighters of the regional fire service perform physical tests: the incremental running test as well as measurements of muscular endurance, flexibility, etc. These tests are then reviewed by the hierarchy and the health service. Finally, the simulated intervention is carried out regularly by the firefighters (i.e. several times a year) as part of their training exercises.
Data analysis
Assessment of the firefighting performance
For both the alarm mobilisation phases and the simulated rescue interventions, the completion time (CT) was recorded manually with a stopwatch. For the alarm mobilisation phase, the time trial began when the alarm was triggered; it ended when the firefighters entered the locker room. For the simulated rescue intervention, the time trial started at the beginning of the first task; it ended when the firefighter crossed the finish line. Air consumption was quantified in bars directly from the pressure recorded on the manometer of the self-contained breathing apparatus at the start and the end of the simulated rescue intervention. The accuracy of the pressure gauge was ± 5 bars.
Assessment of the physiological parameters
The participants were fitted with a connected suit during the three experimental conditions and during the Intermittent Fitness Test. The connected suit provided an electrocardiogram signal at a frequency of 256 Hz from three electrodes (two on the chest, one on the right supra iliac region). The data recorded using the Hexoskin® suit were shown to be reliable and valid under laboratory conditions [START_REF] Elliot | Validity and reliability of the Hexoskin wearable biometric vest during maximal aerobic power testing in elite cyclists[END_REF]. Then, HR was calculated at 1 Hz and was analysed during three distinct periods. During the alarm mobilisation phase, the pre-alarm HR corresponded to the average HR of the 5 seconds preceding the alarm signal, the peak HR was the maximal HR reached during the phase and the HR reactivity was calculated as the difference between peak HR and pre-alarm HR. For the alarm mobilisation phase, the pre-alarm HR, the peak HR and the HR reactivity were expressed in beats⋅min⁻¹. Then, mean and peak HR were determined for the entire simulated rescue intervention. For the simulated rescue intervention, mean and peak HR were expressed as the percentage of the maximal HR measured during the Intermittent Fitness Test (% HRmax). Finally, during the recovery period, HR recovery (HRR) was calculated by taking the difference between the final exercise HR and the HR at 60 seconds after the end of the exercise (HRR60s). In addition, a short-term HR time constant, the T30 index, was determined as the negative reciprocal of the slope of the regression line for the first 30 seconds (-1/slope). Since HR can continue to rise or plateau for a few seconds after high-intensity exercise, the T30 was actually calculated from the 10th to the 40th seconds of recovery, as previously recommended [START_REF] Peçanha | Methods of assessment of the post-exercise cardiac autonomic recovery: a methodological review[END_REF]. It has been demonstrated that the HRR60s and T30 indices reflect parasympathetic reactivation immediately after exercise [START_REF] Peçanha | Methods of assessment of the post-exercise cardiac autonomic recovery: a methodological review[END_REF].
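As an illustration, the two recovery indices described above can be computed from a 1-Hz HR series starting at exercise cessation. The sketch below is illustrative only; the input data are synthetic, and it fits the T30 regression to the natural logarithm of HR, which follows the classical definition of the index even though the description above does not mention the log step.

import numpy as np

def hrr60s(hr_1hz):
    """HR recovery at 60 s: final exercise HR minus HR 60 s after exercise cessation."""
    return float(hr_1hz[0] - hr_1hz[60])

def t30(hr_1hz):
    """Short-term time constant (s): negative reciprocal of the regression slope of
    ln(HR) on time over the 10th-40th s of recovery (the log step is an assumption
    here; the text describes the regression without it)."""
    t = np.arange(10, 41)
    slope = np.polyfit(t, np.log(hr_1hz[10:41]), 1)[0]
    return -1.0 / slope

# Synthetic exponential-like recovery from ~180 down to ~120 beats/min.
t = np.arange(0, 121)
hr = 120 + 60 * np.exp(-t / 80.0)
print(round(hrr60s(hr), 1), round(t30(hr), 1))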
HR variability (HRV) derived from the electrocardiogram signal extracted from the Hexoskin® suit was recorded during the 10 minutes following the simulated rescue interventions. R-R intervals were analysed in both the time and frequency domains using Kubios software (Kubios HRV Analysis v3.0; Bio-signal Analysis and Medical
Imaging Group at the Department of Applied Physics, University of Kuopio, Kuopio, Finland), and the signal was corrected automatically using the software's artefact correction with a medium threshold. After a 5-minute period of stabilisation, HRV analysis was performed over the 5-to-10-minute period post-exercise. In the frequency domain, HRV was analysed using spectral analysis of the R-R intervals based on the fast Fourier transform. The high-frequency (HF, ms²) band, ranging between 0.15 Hz and 0.40 Hz, was selected for this study as the index to evaluate post-exercise parasympathetic nervous system reactivation. Post-exercise vagal modulations were also assessed in the time domain by calculating the root mean square of successive differences of normal R-R intervals (RMSSD, ms) and the standard deviation of instantaneous beat-to-beat R-R interval variability derived from the Poincaré plot analysis (SD1, ms).
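The time-domain indices mentioned above can be illustrated with a short sketch computed directly from a series of R-R intervals. The input values below are arbitrary and serve only as an example; the actual analyses were performed in Kubios.

import numpy as np

def rmssd(rr_ms):
    """Root mean square of successive R-R interval differences (ms)."""
    return float(np.sqrt(np.mean(np.diff(rr_ms) ** 2)))

def sd1(rr_ms):
    """Poincare-plot SD1 (ms); equals the SD of successive differences divided by sqrt(2)."""
    return float(np.std(np.diff(rr_ms), ddof=1) / np.sqrt(2))

rr = np.array([812, 798, 825, 840, 810, 795, 830, 845, 820], dtype=float)
print(round(rmssd(rr), 1), round(sd1(rr), 1), round(np.log(rmssd(rr)), 2))  # lnRMSSD as used in the analyses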
Arterial systolic and diastolic BP were assessed via auscultation on the left upper arm while the lower arm was passively supported. In each condition, the participants' BP was assessed at the end of the resting measures, during the pre-simulated intervention measures, within the first minute after the simulated intervention and at 10 minutes after the simulated intervention.
Assessment of the psychological parameters
The firefighters' anxiety and self-confidence levels were measured using a French adaptation [START_REF] Cury | Mesurer l'anxiété du sportif en compétition: Présentation de l'échelle d'état d'anxiété en compétition (EEAC)[END_REF] of the Competitive State Anxiety Inventory-2 (CSAI-2) questionnaire [START_REF] Martens | Longer exercise duration delays post-exercise recovery of cardiac parasympathetic but not sympathetic indices[END_REF]). This questionnaire consists of 27 items related to feelings of cognitive anxiety, somatic anxiety and self-confidence. In each of the study conditions, all the participants completed the questionnaire during the resting measures and during the presimulated intervention measures. Then, sleepiness was assessed during the resting and pre-simulated intervention measures using the Stanford Sleepiness Scale (SSS) [START_REF] Herscovitch | Sensitivity of the Stanford sleepiness scale to the effects of cumulative partial sleep deprivation and recovery oversleeping[END_REF]. Finally, the rating of perceived exertion (RPE) was self-evaluated 10 minutes after the end of the simulated rescue intervention by using the category ratio scale.
Statistical analysis
All data were expressed as mean ± standard deviation (SD). The normality of the data distribution and the equality of variance between the samples were checked with Shapiro-Wilk's test and Bartlett's test, respectively. Because the RMSSD and the HF indices were not normally distributed, the data were log-transformed (lnRMSSD and lnHF). The differences between the DaySA and the NightSA conditions and the differences between the NightSA and the NightVA conditions were analysed using paired t-tests if the data were normally distributed (for CT, air consumption, HR, T30, HRR60s, lnRMSSD, lnHF and SD1) or Wilcoxon tests if the data were not normally distributed (for the psychological parameters and BP values). To account for the increased possibility of a type-I error, the Bonferroni correction was used to adjust for multiple statistical tests (VanderWeele and Mathur 2019).
Friedman tests were used to analyse the BP differences between time measures (resting, pre-, 1-min post- and 10-min post-intervention measures), and Nemenyi's procedure was used when the null hypothesis was rejected. Statistical analysis was performed using XLSTAT software 2013 (Addinsoft SARL, Paris, France). Statistical significance was set at p < 0.05. The magnitude of the differences was interpreted using Cohen's d effect size, where <0.20 is trivial, 0.20 to 0.49 is small, 0.50 to 0.79 is moderate and >0.80 is large [START_REF] Cohen | Heart-rate recovery immediately after exercise as a predictor of mortality[END_REF].
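The pairwise comparison procedure described above can be illustrated as follows. The data are synthetic, the helper name is an assumption, and the Cohen's d shown is the difference-score (dz) variant, which the text does not specify; the original analyses were run in XLSTAT.

import numpy as np
from scipy import stats

def paired_comparison(a, b, n_tests=2):
    """Paired t-test with a Bonferroni-adjusted p-value and Cohen's d (dz variant)."""
    t, p = stats.ttest_rel(a, b)
    d = np.mean(a - b) / np.std(a - b, ddof=1)   # effect size on the paired differences
    return t, min(p * n_tests, 1.0), d            # Bonferroni adjustment for n_tests comparisons

rng = np.random.default_rng(1)
day_hrr = rng.normal(35, 8, 16)                  # e.g. HRR60s in DaySA (synthetic)
night_hrr = day_hrr - rng.normal(4, 6, 16)       # e.g. HRR60s in NightSA (synthetic)
print(paired_comparison(day_hrr, night_hrr))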
Results
Performance and physiological parameters
The alarm was triggered at 08:32 h ± 00:07 h for the DaySA condition, at 00:50 h ± 00:17 h for the NightSA condition and at 00:53 h ± 00:16 h for the NightVA condition.
For the three experimental conditions, the CT and HR responses for both the alarm mobilisation phase and the simulated rescue intervention are presented in Table 2. The cardiac parasympathetic indices of the acute recovery phase for the three conditions are also displayed in Table 2. Systolic BP and diastolic BP for the three experimental conditions are presented in Figure 3. Pre- and 1-min post-simulated intervention systolic BP differed significantly from resting systolic BP in all three conditions, but no difference was found between the conditions.
Psychological parameters
Post-simulated intervention RPE was not significantly different between the DaySA and NightSA conditions (7.4 ± 2.1 and 8.1 ± 2.2, respectively). The post-simulated intervention RPE was 8.1 ± 1.7 for the NightVA condition, which was not significantly different from the NightSA condition.
Differences between the resting and pre-simulated intervention values of the CSAI-2 and SSS questionnaires are presented in Table 3. The comparison of the DaySA and NightSA conditions revealed a higher level of presimulated intervention sleepiness (p <0.0001) and a lower level of pre-simulated intervention self-confidence (p <0.01) for the NightSA condition in comparison to the DaySA condition. The comparison revealed no differences between the NightSA and NightVA conditions.
Discussion
This study aimed to assess the psychophysiological responses of firefighters throughout a simulated rescue intervention performed during the day and at night, in workplace ecological conditions. Furthermore, the study aimed to determine whether a modified vibrating alarm had an impact on these psychophysiological responses during a simulated intervention performed at night. The main results revealed that, during the alarm mobilisation phase, the HR reactivity was higher during the intervention performed at night than during the intervention performed during the day, and that it could be decreased with the use of a vibrating alarm during the night. During the simulated rescue intervention, no differences in the firefighters' performance and HR responses were found between the day and the night conditions or between the two different alarm signals. However, the cardiac parasympathetic reactivation following the simulated intervention was lower during the night condition than during the day condition. In terms of the psychological responses, the level of sleepiness was higher, and the level of self-confidence was lower, before the night-simulated intervention than before the day-simulated intervention. Pre-simulated intervention somatic anxiety increased in comparison to the resting measures for both the day and night conditions, but no differences were found between the two conditions.
Firstly, during the alarm mobilisation phase, HR reactivity was higher in the NightSA condition than in the DaySA condition (77.2 ± 20.4 vs 65.1 ± 17.9 beats⋅min⁻¹, respectively). In our study, the HR reactivity measured during the day was slightly higher than that measured in previous studies, i.e. 20 to 61 beats⋅min⁻¹ [START_REF] Kuorinka | Firefighters' reaction to alarm, an ECG and heart rate study[END_REF][START_REF] Karlsson | Heart rate as a marker of stress in ambulance personnel: a pilot study of the body's response to the ambulance alarm[END_REF][START_REF] Hall | The acute physiological stress response to an emergency alarm and mobilization during the day and at night[END_REF]. In our study, in the DaySA condition, although the participants were not asleep, they rested in a supine position for about 30 minutes wearing the same attire that was worn in the NightSA condition. Consequently, the recorded pre-alarm HR was lower in our study than in previous studies, which could explain our higher HR reactivity. Only one study analysed HR reactivity in a night condition, and it found lower HR reactivity values than those measured in our study for the NightSA condition [START_REF] Hall | The acute physiological stress response to an emergency alarm and mobilization during the day and at night[END_REF]. These authors also compared HR reactivity during a day and a night alarm mobilisation phase and showed no significant differences. In our study, the higher HR reactivity in the NightSA condition in comparison to the DaySA condition was due to a significantly lower pre-alarm HR in the NightSA condition; the peak HR was not different between the day and night conditions. It is well established that resting HR is influenced by circadian variations, and the lowest values are obtained between 01:00 h and 02:00 h [START_REF] Harma | Circadian variation of physiological functions in physically average and very fit dayworkers[END_REF], as found in our study. Moreover, participants were asleep in the night condition but not in the day condition. This could also influence the resting HR, which is lower when a person is sleeping [START_REF] Viola | Sleep processes exert a predominant influence on the 24-h profile of heart rate variability[END_REF].
Therefore, the circadian rhythmicity of the resting HR, as well as the transition from sleep to wakefulness at the time of the alarm, could explain the different HR reactivity found between the NightSA and the DaySA conditions. Finally, the alarm mobilisation phase resulted in an increase in pre-intervention systolic BP in comparison to the resting BP values, with no difference between the NightSA and the DaySA conditions. Both the significant increase in HR and the increase in pre-intervention systolic BP were in accordance with the "fight or flight" reaction of firefighters during the alarm mobilisation phase, as described in previous studies [START_REF] Hall | The acute physiological stress response to an emergency alarm and mobilization during the day and at night[END_REF][START_REF] Smith | Cardiovascular strain of firefighting and the risk of sudden cardiac events[END_REF].
Secondly, regarding the simulated rescue intervention, our results revealed no difference between the NightSA and the DaySA conditions for the CT, HR and BP responses. As presented in Table 2, depending on the condition, mean HR ranged between 85% and 86% of HRmax and peak HR ranged between 90% and 91% of HRmax. These results are similar to those reported by previous studies with simulated firefighting activities [START_REF] Williams-Bell | Physiological demands of the firefighter candidate physical ability test[END_REF][START_REF] Windisch | Physiological Responses to Firefighting in Extreme Temperatures Do Not Compare to Firefighting in Temperate Conditions[END_REF][START_REF] Ravier | Physiological responses and parasympathetic reactivation in rescue interventions: The effect of the breathing apparatus[END_REF]. Moreover, the increase in systolic BP during the simulated rescue intervention (Figure 2) was higher in our study than in a previous study [START_REF] Horn | Physiological recovery from firefighting activities in rehabilitation and beyond[END_REF].
However, in contrast to our study, in the previous research [START_REF] Horn | Physiological recovery from firefighting activities in rehabilitation and beyond[END_REF], the firefighting activity was longer (i.e. 18 minutes) and included rest periods, which can decrease the systolic BP. The HR and BP reported in our study showed that the simulated rescue intervention was a strenuous exercise. Furthermore, to the best of our knowledge, our study is the first to compare day- and night-simulated rescue interventions in firefighters.
Although the results of studies with healthy subjects have revealed discrepancies, previous findings reported no effect of circadian rhythm on physical performance or physiological responses (V̇O2, HR) during submaximal exercise [START_REF] Reilly | Investigation of circadian rhythms in metabolic responses to exercise[END_REF][START_REF] Deschenes | Chronobiological effects on exercise performance and selected physiological responses[END_REF] or maximal exercise [START_REF] Deschenes | Chronobiological effects on exercise performance and selected physiological responses[END_REF]. Regular training at specific times of the day can influence the diurnal rhythm by limiting the differences between the evening and morning performances [START_REF] Chtourou | The effect of training at a specific time of day: a review[END_REF]. Because their shift is 24 hours long, firefighters have to deal with regular interventions at any time of the day or night. Consequently, this regular training may help firefighters perform equally well at night and during the day. Moreover, firefighting activities are both a physical and a psychological stressor [START_REF] Robinson | Stress reactivity and cognitive performance in a simulated firefighting emergency[END_REF]. In response to a stressful situation, the hypothalamic-pituitary-adrenal axis releases cortisol, which allows the body to be on high alert and which may enhance the physical response to an emergency. Cortisol production follows a circadian rhythmicity, but it also includes an adaptive response to the acute stress that firefighters experience before simulated interventions [START_REF] Robinson | Stress reactivity and cognitive performance in a simulated firefighting emergency[END_REF] or during the alarm mobilisation phase [START_REF] Jay | Can stress act as a sleep inertia countermeasure when on-call?[END_REF]. In our study, for both the NightSA and the DaySA conditions, peak HR was similar during the alarm mobilisation phase, and there was a similar increase in pre-intervention somatic anxiety (Table 3). These results reveal that the firefighters were in a physically and psychologically stressful situation during the daytime and night-time interventions. Although the adaptive response of cortisol production was not measured in our study, it may have helped the firefighters perform as well during the NightSA condition as during the DaySA condition.
Thirdly, the present study aimed to investigate cardiac parasympathetic reactivation after a simulated rescue intervention conducted at night and during the day. In a previous study, we investigated cardiac parasympathetic reactivation after firefighting activities [START_REF] Ravier | Physiological responses and parasympathetic reactivation in rescue interventions: The effect of the breathing apparatus[END_REF], but to the best of our knowledge, no studies have evaluated it at night. In the present study, the indices of cardiac parasympathetic reactivation (e.g. HRR60s, lnRMSSD) were higher than those observed previously, which suggests a greater cardiac parasympathetic reactivation in the current study. This difference could be due to the exercise duration, which was longer in the earlier study than in the present experiment (i.e. 13.4 ± 2.1 min vs 4.2 ± 0.6 min) [START_REF] Ravier | Physiological responses and parasympathetic reactivation in rescue interventions: The effect of the breathing apparatus[END_REF]. Indeed, previous research has shown that a longer exercise duration may attenuate the post-exercise HRV indices of cardiac parasympathetic reactivation [START_REF] Martens | Longer exercise duration delays post-exercise recovery of cardiac parasympathetic but not sympathetic indices[END_REF]. Nevertheless, despite the short duration of the simulated rescue intervention in the present study, the recovery index values are similar to those observed after high-intensity exercises in sports [START_REF] Buchheit | Parasympathetic reactivation after repeated sprint exercise[END_REF][START_REF] Nakamura | Cardiac autonomic responses to repeated shuttle sprints[END_REF].
Furthermore, while the intensity and the duration were similar between the night-and the day-simulated rescue interventions, the cardiac parasympathetic activity was lower in the NightSA condition than in the DaySA condition. Few studies have investigated the effect of the time of day on acute post-exercise recovery, with inconsistent results. Some studies reported a lower HR recovery after evening exercise in comparison to morning exercise in heathy subjects [START_REF] Cohen | Human orcadian rhythms in resting and exercise pulse rates[END_REF]. However, other studies found no differences in acute recovery after morning and night exercise [START_REF] Harma | Circadian variation of physiological functions in physically average and very fit dayworkers[END_REF][START_REF] Prodel | Different times of day do not change heart rate variability recovery after light exercise in sedentary subjects: 24 hours Holter monitoring[END_REF]. A higher post-exercise blood lactate concentration would be one possible explanation for the depressed cardiac parasympathetic recovery during the night. Previous studies have reported a circadian variation in blood lactate values after exercises performed at the same intensities [START_REF] Baxter | Influence of time of day on all-out swimming[END_REF][START_REF] Deschenes | Chronobiological effects on exercise performance and selected physiological responses[END_REF], and it has been shown that the blood lactate concentration was associated with a decrease in cardiac parasympathetic reactivation [START_REF] Buchheit | Parasympathetic reactivation after repeated sprint exercise[END_REF]). It has also been reported that the aerobic contribution during a Wingate test was lower in the morning than in the afternoon [START_REF] Souissi | Effect of time of day on aerobic contribution to the 30-s wingate test performance[END_REF]). The number of cardiovascular events for the United States firefighters is related to the distribution of interventions; in the general population, it follows a circadian rhythm [START_REF] Kales | Firefighters and on-duty deaths from coronary heart disease: a case control study[END_REF]. The risk of death by coronary heart disease is 10-fold higher during the recovery period after firefighting interventions than during non-emergency duties [START_REF] Kales | Firefighters and on-duty deaths from coronary heart disease: a case control study[END_REF]. Because weak post-exercise parasympathetic activity is associated with an increased risk of death [START_REF] Cohen | Heart-rate recovery immediately after exercise as a predictor of mortality[END_REF][START_REF] Lahiri | Assessment of autonomic function in cardiovascular disease[END_REF], the impaired parasympathetic reactivation after firefighting activities might be one cause of the increased risk of firefighters' death. Finally, all of these results should be considered in the context of the development of stress at work. Indeed, firefighters face many stressful situations, as shown through the HR reactivity during the intervention or through the post-exercise vagal depression. In addition, it has recently been shown that the nocturnal HRV of firefighters is reduced while on-call (Marcel-Millet et al. 2020a). All of these elements can lead to a chronic reduction in HRV in firefighters who experience high job stress, as shown by [START_REF] Shin | Factors related to heart rate variability among firefighters[END_REF]. 
Moreover, high levels of work stress have been shown to affect the development of cardiovascular disease [START_REF] Belkic | Is job strain a major source of cardiovascular disease risk?[END_REF][START_REF] Steptoe | Stress and cardiovascular disease[END_REF]. This is why emergency services should consider implementing strategies to reduce the different sources of stress for firefighters. Among the different strategies that can be adopted, improving the reactivation of the parasympathetic system should be considered. Active cooling strategies, such as hand and forearm immersion, have been proposed to decrease the cardiovascular and thermoregulatory strain of firefighters [START_REF] Selkirk | Active versus passive cooling during work in warm environments while wearing firefighting protective clothing[END_REF][START_REF] Barr | The impact of different cooling modalities on the physiological responses in firefighters during strenuous work performed in high environmental temperatures[END_REF]. Regarding HRV, both face and body cold water immersion improved cardiac parasympathetic reactivation, in particular thanks to a decrease in the core temperature [START_REF] Haddad | Influence of cold water face immersion on postexercise parasympathetic reactivation[END_REF][START_REF] De Oliveira Ottone | The effect of different water immersion temperatures on post-exercise parasympathetic reactivation[END_REF]. In hot conditions, these strategies are necessary to increase exercise tolerance and prevent the firefighters' exhaustion [START_REF] Selkirk | Active versus passive cooling during work in warm environments while wearing firefighting protective clothing[END_REF]).
In addition, to reduce the stress of firefighters, the second objective of this study was to compare two different types of alarms during the night. The results revealed that the modification of the night alarm signal with a vibrating alarm, which was a softer alarm than the sound alarm, decreased cardiac stress during the alarm mobilisation phase. Indeed, between the NightSA and the NightVA conditions, the pre-alarm HR was similar but HR reactivity in the alarm mobilisation phase was lower in the NightVA condition than in the NightSA condition.
Similar results were observed in a previous study, which revealed a lower HR reactivity when participants were gently awakened in comparison to using a sound alarm signal [START_REF] Hall | The acute physiological stress response to an emergency alarm and mobilization during the day and at night[END_REF]. However, that study [START_REF] Hall | The acute physiological stress response to an emergency alarm and mobilization during the day and at night[END_REF]) modified the alarm signal and the subsequent mobilisation, which implied a lower physical activity in the mobilisation phase. Nevertheless, the significant increase in HR during the alarm mobilisation phase could be due to both the alarm stress and the subsequent movements [START_REF] Hall | The acute physiological stress response to an emergency alarm and mobilization during the day and at night[END_REF]. The stress related to the alarm signal seems to contribute significantly to the HR responses, as shown previously in a study with ambulance personnel [START_REF] Karlsson | Heart rate as a marker of stress in ambulance personnel: a pilot study of the body's response to the ambulance alarm[END_REF]. A change in a night alarm signal seems sufficient to decrease the cardiac stress related to the emergency alarm. Moreover, no difference was observed between the NightSA and the NightVA conditions for all the other parameters, such as CT and HR, for the simulated rescue intervention and the indices of cardiac parasympathetic reactivation. The triggering of the alarm leads to a "fight or flight" reaction with a prominent sympathetic activation, a massive release of catecholamines and an increased HR [START_REF] Smith | Cardiovascular strain of firefighting and the risk of sudden cardiac events[END_REF].
These elements can be the cause of the significant increase in the risk of death during the alarm response [START_REF] Kales | Firefighters and on-duty deaths from coronary heart disease: a case control study[END_REF][START_REF] Kales | Emergency duties and deaths from heart disease among firefighters in the United States[END_REF]. In this context, it would be very interesting for the emergency services to provide firefighters with a modified alarm signal, which could decrease their cardiac stress during the alarm mobilisation phase.
Finally, regarding the psychological measures, the level of pre-intervention sleepiness was higher in the NightSA condition than in the DaySA condition. We also found this difference when we compared the resting measures:
the level of pre-intervention sleepiness increased during the night condition, but not during the day condition (Table 3). These results support the findings of previous studies that indicated that sleepiness followed a circadian rhythmicity, with a peak in sleepiness between 03:00 h and 07:00 h [START_REF] Valdez | Focus: Attention Science: Circadian Rhythms in Attention[END_REF]. This circadian pattern of sleepiness may be related to the melatonin secretion rhythm, which increases during the night [START_REF] Kazemi | Comparison of melatonin profile and alertness of firefighters with different work schedules[END_REF]. Another possible explanation of the increase in sleepiness may be the sleep restriction generated by the simulated rescue intervention [START_REF] Philip | Acute versus chronic partial sleep deprivation in middle-aged people: differential effect on performance and sleepiness[END_REF]. Moreover, we found no difference in the level of sleepiness between the NightSA and NightVA conditions. This result is in accordance with the study by [START_REF] Jay | Can stress act as a sleep inertia countermeasure when on-call?[END_REF], who revealed that a less disruptive awakening did not alter the participants' sleepiness after waking. These are important findings considering that firefighters have to "mobilise", drive and make decisions just after waking up [START_REF] Jay | Can stress act as a sleep inertia countermeasure when on-call?[END_REF].
We found a significant difference in the pre-intervention self-confidence between the NightSA and the DaySA conditions. When compared to the resting measures, the participants' level of self-confidence decreased during the night condition but not during the day condition (Table 3). A positive linear relationship between selfconfidence and sports performance was found in previous studies [START_REF] Woodman | The relative impact of cognitive anxiety and self-confidence upon sport performance: A meta-analysis[END_REF]. However, other factors moderated this relationship, such as sex and competitive standards [START_REF] Woodman | The relative impact of cognitive anxiety and self-confidence upon sport performance: A meta-analysis[END_REF], which may explain the lack of difference in our study between the firefighters' daytime and night-time performances, although they had lower self-confidence during the night. Nevertheless, it is important for firefighters to consider this decrease in self-confidence because they can be faced with very mentally challenging situations.
In terms of anxiety, the somatic anxiety was higher during the pre-intervention measures than during the resting measures under the day and night conditions, but no differences were found between the conditions (Table 3).
Somatic anxiety was defined as "the physiological and affective elements of the anxiety experience that develop directly from autonomic arousal" (p.6) [START_REF] Martens | Longer exercise duration delays post-exercise recovery of cardiac parasympathetic but not sympathetic indices[END_REF]). Consequently, the increase in somatic anxiety in our study was not surprising, with the activation of the sympathetic nervous system during the alarm mobilisation phase, as measured by the increase of HR (Table 2) and with the likely hypothalamic-pituitaryadrenal axis activation that was measured in previous studies [START_REF] Robinson | Stress reactivity and cognitive performance in a simulated firefighting emergency[END_REF][START_REF] Hall | The acute physiological stress response to an emergency alarm and mobilization during the day and at night[END_REF]). However, no difference was found for cognitive anxiety, which was defined by [START_REF] Martens | Longer exercise duration delays post-exercise recovery of cardiac parasympathetic but not sympathetic indices[END_REF] as "the mental component of anxiety and is caused by negative expectations about success or by negative self-evaluation" (p.6).
The participants in our study were familiar with the simulated rescue intervention, as it had been part of their occupational skills training. Consequently, they had no doubts about their ability to achieve it successfully.
However, it is reasonable to assume that during real interventions, when firefighters are confronted with unpredictable situations with sometimes vital issues, their level of cognitive anxiety may increase. Nevertheless, the increase in the level of somatic anxiety is an important result because it has been shown that there is a curvilinear relationship between somatic anxiety and sports performance [START_REF] Martens | Longer exercise duration delays post-exercise recovery of cardiac parasympathetic but not sympathetic indices[END_REF]). The increase in anxiety represents an important concern for the health of firefighters, who are significantly affected by coronary heart diseases [START_REF] Smith | Cardiovascular strain of firefighting and the risk of sudden cardiac events[END_REF]. It has been shown that anxiety may increase the risk of coronary heart diseases [START_REF] Kubzansky | Anxiety and coronary heart disease: a synthesis of epidemiological, psychological, and experimental evidence[END_REF]. Further studies are required to determine whether the performance and health of firefighters are related to anxiety or/and self-confidence.
There were several limitations to consider in this study. First, due to logistics issues, no women participated in this study although they represented 17 % of French firefighters in 2020. This may prevent the generalization of results to female firefighters, as gender may influence performance and physiological responses [START_REF] Williams-Bell | Physiological demands of the firefighter candidate physical ability test[END_REF], and as HRV may be influenced by the menstrual cycle [START_REF] Bai | Influence of the menstrual cycle on nonlinear properties of heart rate variability in young women[END_REF].
Second, the anthropometric data of the participants in this study diverge from the data usually published in firefighters' research, with a lower BMI and a higher estimated V ̇O2max in our study [START_REF] Selkirk | Active versus passive cooling during work in warm environments while wearing firefighting protective clothing[END_REF][START_REF] Boisseau | Air management and physiological responses during simulated firefighting tasks in a high-rise structure[END_REF][START_REF] Horn | Physiological recovery from firefighting activities in rehabilitation and beyond[END_REF][START_REF] Horn | Firefighter and fire instructor's physiological responses and safety in various training fire environments[END_REF]. The study being physically strenuous, it is the most athletic firefighters of the fire station who have volunteered to participate. This difference in fitness may affect the data of our study, a good physical condition allowing to improve the firefighters' performance [START_REF] Williams-Bell | Physiological demands of the firefighter candidate physical ability test[END_REF]Marcel-Millet et al. 2020b) without necessarily influencing cardiac parasympathetic reactivation (Marcel-Millet et al. 2020b).
Conclusions
In conclusion, the firefighters' cardiac stress in response to the alarm signal was greater during a night-simulated rescue intervention than during the day, and the post-intervention cardiac parasympathetic reactivation was more impaired at night than during the day. Moreover, the firefighters felt sleepier and less self-confident before the night-simulated intervention than before the day-simulated intervention. Nevertheless, the firefighters were equally efficient during the night-simulated intervention and the day-simulated intervention, with similar CT and HR responses between the two conditions. The use of a vibrating alarm signal during the night decreased the firefighters' cardiac stress during the alarm mobilisation phase without influencing their subsequent performance, cardiac parasympathetic recovery and psychological arousal.
Regarding the applications of the present study, the emergency services should consider implementing strategies to improve parasympathetic reactivation after firefighting activities, especially at night when the disturbance is further increased. Hand and forearm immersion has been proposed as an effective cooling strategy to decrease the physiological strain of firefighters [START_REF] Barr | The impact of different cooling modalities on the physiological responses in firefighters during strenuous work performed in high environmental temperatures[END_REF].
The simulated rescue intervention was performed under three conditions: 1) during the day, in the morning, with a sound alarm signal (DaySA), 2) during the night with a sound alarm signal (NightSA) and 3) during the night with a vibrating alarm signal (NightVA). The simulated rescue intervention consisted of 5 tasks, without downtime: (a) Hoses: The participants carried two hoses (7 kg each) over flat ground for a distance of over 100 m; (b) Obstacle course: After setting down the hoses, the participants completed a 50-meter course that required them to crawl under obstacles; (c) Tower: The participants grabbed their hoses again, and dropped them on the 4th floor of a tower; (d) Mannequin: The participants used a strap to carry a 60 kg mannequin down one floor and then up one floor; (e) Hoses: Finally, the participants hoisted their hoses again before going down the 4 floors of the tower and returning to the starting point. During each session, the participants were asked to perform the simulated rescue intervention as quickly as possible; they were given verbal encouragement throughout the test.
Figure 1. Overview of the experimental design. CSAI-2: Competitive State Anxiety Inventory - 2; SSS: Stanford Sleepiness Scale.
All the cardiac parasympathetic indices were lower in the NightSA condition in comparison to the DaySA condition. HR reactivity was lower in the DaySA condition in comparison to the NightSA condition (p <0.01) and was also lower in the NightVA condition in comparison to the NightSA condition (p <0.05). Air consumption during the simulated intervention was not significantly different between the DaySA and NightSA conditions(85.6 ± 19.7 and 78.8 ± 17.2 bars, respectively) and between the NightSA and NightVA (77.5 ± 14.8 bars) conditions.
No significant difference was observed between the conditions for the HR at the end of the exercise. After 60 seconds of recovery, HR was higher in the NightSA condition than in the DaySA condition (p < 0.05).
Figure 2. Heart rate (HR) recovery after the end of the exercise in the DaySA (day sound alarm), NightSA (night sound alarm) and NightVA (night vibrator alarm) conditions. * shows a significant difference between DaySA and NightSA with * p <0.05.
Figure 3. Changes in systolic and in diastolic blood pressure (BP) throughout the DaySA (day sound alarm), NightSA (night sound alarm) and NightVA (night vibrator alarm) conditions. **: significant difference vs. rest (p < 0.01); ***: significant difference vs. rest (p < 0.001); #: significant difference vs. pre-intervention (p < 0.05); $$$: significant difference vs. 1-min post-intervention (p < 0.001).
Table 1. Characteristics of the participants
Age | Height | Weight | Body mass index | Estimated V̇O2max | Work experience
36.1 ± 5.8 years | 175.3 ± 6.1 cm | 73.6 ± 10.1 kg | 23.8 ± 1.7 kg/m² | 55.8 ± 3.6 |
Table 2. Performance, HR responses and parasympathetic reactivation indices for the three experimental conditions.
Phase | Variable | DaySA | NightSA | NightVA | DaySA vs. NightSA: ES (±95% CI), descriptor | NightSA vs. NightVA: ES (±95% CI), descriptor
Alarm mobilisation | CT mobilisation (s) | 57.9 ± 26.6 | 104.9 ± 31.6*** | 114.7 ± 27.8 | 1.61 (±0.82), Large | 0.54 (±0.61), Medium
Alarm mobilisation | Pre-alarm HR (beats·min⁻¹) | 58.6 ± 8.2 | 50.7 ± 4.5*** | 51.3 ± 8.1 | 1.18 (±0.77), Large | 0.09 (±0.60), Trivial
Alarm mobilisation | Peak HR (beats·min⁻¹) | 123.7 ± 18.4 | 127.9 ± 20.3 | 116.1 ± 21.7 | 0.22 (±0.72), Small | 0.57 (±0.61), Medium
Alarm mobilisation | HR reactivity (beats·min⁻¹) | 65.1 ± 17.9 | 77.2 ± 20.4** | 64.7 ± 19.5# | 0.63 (±0.73), Medium | 0.63 (±0.61), Medium
Simulated intervention | CT intervention (s) | 251.8 ± 38.6 | 257.6 ± 50.1 | 254.7 ± 33.2 | 0.13 (±0.72), Trivial | 0.07 (±0.60), Trivial
Simulated intervention | Mean HR (% HRmax) | 86.0 ± 4.0 | 85.2 ± 4.7 | 84.3 ± 4.0 | 0.19 (±0.72), Trivial | 0.21 (±0.60), Small
Simulated intervention | Peak HR (% HRmax) | 90.6 ± 2.9 | 89.6 ± 3.4 | 89.6 ± 2.2 | 0.34 (±0.72), Small | 0.01 (±0.60), Trivial
Acute recovery | T30 (s) | 171.8 ± 71.4 | 251.1 ± 106.0* | 217.3 ± 46.7 | 0.88 (±0.75), Large | 0.41 (±0.61), Small
Acute recovery | HRR60s (beats·min⁻¹) | 38.0 ± 12.2 | 31.6 ± 8.4* | 31.7 ± 10.0 | 0.61 (±0.73), Medium | 0.01 (±0.60), Trivial
Acute recovery | lnRMSSD (ms) | 2.46 ± 0.64 | 2.15 ± 0.47* | 2.34 ± 0.47 | 0.56 (±0.73), Medium | 0.42 (±0.61), Small
Acute recovery | SD1 (ms) | 9.89 ± 5.93 | 6.73 ± 3.29* | 8.07 ± 3.07 | 0.66 (±0.73), Medium | 0.42 (±0.61), Small
Acute recovery | lnHF (ms²) | 3.88 ± 1.47 | 2.85 ± 1.11* | 3.04 ± 1.17 | 0.79 (±0.74), Medium | 0.16 (±0.60), Trivial
Mean ± SD are displayed for the DaySA (day sound alarm), NightSA (night sound alarm) and NightVA (night vibrator alarm) conditions. CT: completion time; T30: short-term time constant; HRR60s: heart rate recovery at 60 sec post-exercise; LnRMSSD: natural logarithm of the root mean square of successive differences of normal R-R intervals; LnHF: natural logarithm of high frequency power; SD1: standard deviation of instantaneous beat-to-beat R-R intervals variability derived from the Poincaré plot analysis. * shows a significant difference vs. DaySA with * p <0.05, ** p <0.01 and *** p <0.001; # shows a significant difference vs. NightSA with # p <0.05.
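The two time-domain indices defined above can be obtained directly from the series of R-R intervals. The sketch below shows one common way of computing lnRMSSD and SD1; it is only an illustration (not the processing pipeline used in the study) and relies on the classical identity SD1 = RMSSD/√2.

```python
import math

def ln_rmssd_and_sd1(rr_ms):
    """Time-domain HRV indices from successive normal R-R intervals (in ms)."""
    # Successive differences between adjacent R-R intervals
    diffs = [b - a for a, b in zip(rr_ms[:-1], rr_ms[1:])]
    # RMSSD: root mean square of the successive differences
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    # SD1: short-axis dispersion of the Poincare plot, equal to RMSSD / sqrt(2)
    sd1 = rmssd / math.sqrt(2)
    return math.log(rmssd), sd1

# e.g. ln_rmssd_and_sd1([810, 795, 820, 805, 790])
```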
Table 3. Evolution of the psychological parameters (sleepiness, anxiety and self-confidence) from rest to pre-simulated intervention for the DaySA, NightSA and NightVA conditions.
Parameter | Condition | Rest | Pre-intervention | ES (±95% CI) | Evolution (%) | ES descriptor
SSS (on 7) | DaySA | 1.69 ± 1.01 | 1.81 ± 1.05 | 0.12 (±0.72) | 7 | Trivial
SSS (on 7) | NightSA | 2.13 ± 1.54 | 4.25 ± 1.73 ** | 1.30 (±0.78) | 100 | Large
SSS (on 7) | NightVA | 2.25 ± 1.24 | 3.56 ± 0.96 ** | 1.18 (±0.77) | 58 | Large
CSAI-2 Somatic Anxiety (on 4) | DaySA | 1.21 ± 0.27 | 1.64 ± 0.47 *** | 1.11 (±0.77) | 36 | Large
CSAI-2 Somatic Anxiety (on 4) | NightSA | 1.24 ± 0.31 | 1.77 ± 0.60 *** | 1.11 (±0.77) | 43 | Large
CSAI-2 Somatic Anxiety (on 4) | NightVA | 1.29 ± 0.35 | 1.81 ± 0.68 ** | 0.96 (±0.75) | 40 | Large
CSAI-2 Cognitive Anxiety (on 4) | DaySA | 1.20 ± 0.30 | 1.12 ± 0.23 * | 0.30 (±0.72) | -7 | Small
CSAI-2 Cognitive Anxiety (on 4) | NightSA | 1.24 ± 0.23 | 1.39 ± 0.42 | 0.41 (±0.72) | 12 | Small
CSAI-2 Cognitive Anxiety (on 4) | NightVA | 1.24 ± 0.36 | 1.37 ± 0.39 | 0.35 (±0.72) | 10 | Small
CSAI-2 Self-confidence (on 4) | DaySA | 3.53 ± 0.44 | 3.54 ± 0.43 | 0.01 (±0.72) | 0 | Trivial
CSAI-2 Self-confidence (on 4) | NightSA | 3.43 ± 0.41 | 2.89 ± 0.67 *** | 0.97 (±0.75) | -16 | Large
CSAI-2 Self-confidence (on 4) | NightVA | 3.41 ± 0.52 | 2.88 ± 0.66 ** | 0.89 (±0.75) | -16 | Large
Mean ± SD are displayed for the DaySA (day sound alarm), NightSA (night sound alarm) and NightVA (night vibrator alarm) conditions at rest and pre-simulated intervention. ES: effect size; CI: confidence interval; CSAI-2: Competitive State Anxiety Inventory - 2; SSS: Stanford Sleepiness Scale. Scores for the SSS were expressed on a 7-point Likert scale and scores for all the CSAI-2 items were expressed on a 4-point Likert scale. * shows a significant difference vs. rest with *: p <0.05; **: p <0.01; ***: p <0.001.
Acknowledgments
We would like to acknowledge the firefighters for their participation in this study and their technical support. We also thank the institutional board of the Service Départemental d'Incendie et de Secours du Doubs for its support.
Prior to testing, all participants gave a voluntary written informed consent which indicates the purpose, the benefits and the risks of the investigation and the possibility to stop their participation at any time.
Funding sources
The study was part of the European Cross-border Co-operation Programme (Interreg France-Suisse 2014-2020), and it was supported by a grant from the European Funding for Regional Development (FEDER).
Conflict of interest
No conflicts of interest, financial or otherwise, are declared by the authors. The authors declare that the results of the study are presented clearly, honestly, and without fabrication, falsification, or inappropriate data manipulation.
Ethical approval
This study was conducted in accordance with the recommendations of the 1964 Declaration of Helsinki and its later amendments. The research plan was examined and approved by the medical service and the Institutional Review Board of the Regional Fire and Rescue Service of Doubs, and it was approved by the Ethics Committee of the responsible institutional department.
Informed consent |
03223212 | en | [
"spi"
] | 2024/03/04 16:41:22 | 2020 | https://univ-angers.hal.science/hal-03223212/file/S2405896321000434.pdf | Gabriel Freitas Oliveira
Renato Markele
Ferreira Candido
Vinicius Mariano Gonçalves
Carlos Andrey Maia
Bertrand Cottenceau
Laurent Hardouin
email: [email protected]
Discrete Event System Control in Max-Plus Algebra : Application to Manufacturing Systems*
Keywords: Petri nets, timed event graphs, max-plus algebra
This paper deals with the control of industrial systems which can be depicted by timed event graphs. A methodology and software tools are presented to allow engineers to automatically implement controllers in PLCs. Three steps are needed and recalled: the modelling of the system in max-plus algebra, the design of the controller and the implementation in a Supervisory Control and Data Acquisition (SCADA) system. An example on a real system is given to illustrate these steps.
INTRODUCTION
The industrial manufacturing systems are frequently event-driven systems and their closed-loop control is worth of interest in order to react to disturbances modifying their behaviours. Among these discrete event systems there exists a subclass of system involving synchronization and delay phenomena for which an efficient control theory has been developed during the last decade [START_REF] Schutter | Analysis and control of max-plus linear discreteevent systems: An introduction[END_REF][START_REF] Maia | On the Model Reference Control for Max-Plus Linear Systems[END_REF][START_REF] Lhommeau | Interval analysis in dioid : Application to robust open loop control for timed event graphs[END_REF][START_REF] Shang | An integrated control strategy to solve the disturbance decoupling problem for max-plus linear systems with applications to a high throughput screening system[END_REF][START_REF] Heidergott | Max Plus at Work: Modeling and Analysis of Synchronized Systems : a Course on Max-Plus Algebra and Its Applications[END_REF][START_REF] Cohen | Max-plus algebra and system theory : Where we are and where to go now[END_REF]. Synchronization phenomena appear when meeting between events is needed, e.g. an event starts when all the preceding events are finished. Delay phenomena appear when transportation operation or manufacturing activities need a duration to be achieved, e.g. the event representing a finishing time of a task is equal to its starting time plus the duration of the operation. Although these phenomena lead to a non-linear model in conventional algebra they admit a linear model formulated in some idempotent semirings, the most popular being the max-plus algebra, where the max operation is the sum of the algebraic structure and the classical addition has to be considered as the product. It must be noticed that these systems and their linear models correspond exactly to the Timed Event Graphs (TEGs) which are a subclass of Petri nets where each place has exactly one upstream and one downstream transition and all arcs have weight equal to 1. The time is associated to places and represents the duration a token has to stay in a place before to contribute to the firing of a downstream transition. This graphical model is popular since suitable for engineer depicting the behaviour of a manufacturing system. This paper proposes a methodology and a specific CAPES-COFECUB software to help the engineers to synthesize and implement a closed-loop control of these systems by tacking advantage of the efficient existing control strategies. By starting from a TEG description of the system and a desired behaviour also given as a TEG named reference model, a method is proposed to obtain the algebraic model of the system and of the reference model, then a software is proposed to give automatically the code to implement in a Supervisory Control and Data Acquisition (SCADA) system. The engineer will have to choose the control strategy which will be implemented among different strategies, namely the disturbance decoupling problem, the model matching problem, the observer-based controller. They have for common point to yield an optimal control law according to the just-in-time criterion which aims to delay as much as possible the occurrence of event input while achieving the specific goal of the control strategy. This criterion is quite usual in industry since it leads to produce only the necessary quantity of raw materials and then to reduce as much as possible the useless stock.
In this paper we recall the observer-based controller strategy in order to illustrate the methodology used in the software. The paper is organized as follows: first, the algebraic tools necessary to synthesize the control law are recalled; then, in Section 3, the method to obtain the algebraic model of the system is presented, together with an original description which allows the software to obtain an explicit model automatically. The observer-based control strategy, which aims to match a reference model, is presented in Section 4. The software architecture and the methodology adopted are illustrated in Section 5, which presents an implementation on a flexible automated system available at the University of Angers.
MATHEMATICAL BACKGROUND
An idempotent semiring S is an algebraic structure with two internal operations denoted by ⊕ and ⊗. The operation ⊕ is associative, commutative and idempotent, that is, a ⊕ a = a. The operation ⊗ is associative (but not necessarily commutative) and distributive on the left and on the right with respect to ⊕. The neutral elements of ⊕ and ⊗ are represented by ε and e respectively, and ε is an absorbing element for the law ⊗ (∀a ∈ S, ε ⊗ a = a ⊗ ε = ε). As in classical algebra, the operator ⊗ will often be omitted in the equations; moreover, a^i = a ⊗ a^(i-1) and a^0 = e. In this algebraic structure, a partial order relation is defined by a ⪰ b ⇔ a = a ⊕ b, therefore an idempotent semiring S is a partially ordered set (see [START_REF] Baccelli | Synchronization and Linearity : An Algebra for Discrete Event Systems[END_REF][START_REF] Heidergott | Max Plus at Work: Modeling and Analysis of Synchronized Systems : a Course on Max-Plus Algebra and Its Applications[END_REF] for an exhaustive introduction). An idempotent semiring S is said to be complete if it is closed for infinite ⊕-sums and if ⊗ distributes over infinite ⊕-sums. In particular, ⊤ = ⊕_{x∈S} x is the greatest element of S (⊤ is called the top element of S).
Theorem 1. [see Baccelli et al. (1992), th. 4.75] The implicit inequality x ⪰ ax ⊕ b, as well as the equation x = ax ⊕ b, defined over S admit x = a*b as the least solution, where a* = ⊕_{i∈N} a^i (Kleene star operator).
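As a simple illustration, in the (max,plus) semiring Z_max introduced in Example 2 below, taking a = -2 and b = 5 gives a* = max(0, -2, -4, ...) = 0 = e, so the least solution of x = ax ⊕ b is x = a*b = 5; one checks that max(-2 + 5, 5) = 5.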
Definition 1. (Residual and residuated mapping). An order preserving mapping f : D → E, where D and E are partially ordered sets, is a residuated mapping if for all y ∈ E there exists a greatest solution to the inequality f(x) ⪯ y (hereafter denoted f♯(y)). Obviously, if the equality f(x) = y is solvable, f♯(y) yields the greatest solution. The mapping f♯ is called the residual of f and f♯(y) is the optimal solution of the inequality.
Example 1. Mappings Λ_a : x → a ⊗ x and Ψ_a : x → x ⊗ a defined over an idempotent semiring S are both residuated (Baccelli et al. (1992), p. 181). Their residuals are order preserving mappings denoted respectively by Λ♯_a(x) = a•\x and Ψ♯_a(x) = x•/a. This means that a•\b (resp. b•/a) is the greatest solution of the inequality a ⊗ x ⪯ b (resp. x ⊗ a ⪯ b).
The set of n×n matrices with entries in S is an idempotent semiring. The sum, the product and the residuation of matrices are defined after the sum, the product and the residuation of scalars in S, i.e.,
(A ⊗ B)_ik = ⊕_{j=1..n} (a_ij ⊗ b_jk)    (1)
(A ⊕ B)_ij = a_ij ⊕ b_ij    (2)
(A •\ B)_ij = ∧_{k=1..n} (a_ki •\ b_kj) ,   (B •/ A)_ij = ∧_{k=1..n} (b_ik •/ a_jk)    (3)
The identity matrix of S n×n is the matrix with entries equal to e on the diagonal and to ε elsewhere. This identity matrix will also be denoted e, and the matrix with all its entries equal to ε will also be denoted ε.
Example 2. (max,plus) algebra:
Z max = (Z ∪ {-∞, +∞}, max, +) is a complete idempotent semiring such that a ⊕ b = max(a, b), a ⊗ b = a + b, a ∧ b = min(a, b) with ε = -∞, e = 0
, and = +∞. The order is total and corresponds to the natural order . By extension Z n×n max is a semiring of matrices with entries in Z max . Matrix ε ∈ Z n×m max will be such that all its entries are equal to ε ∈ Z max , matrix e ∈ Z n×n max will be such that all the entries are equal to ε ∈ Z max except the diagonal entries which are equal to e ∈ Z max . being matrices with entries in Z max . As the max-plus multiplication corresponds to the classical addition, its residual corresponds to conventional subtraction, i.e. , 1 ⊗ x 4 admits the solution set X = {x|x 1• \4} with 1• \4 = 4 -1 = 3 being the greatest solution of this set.
Applying the rules of residuation in max-plus algebra to the relation A ⊗ X B results in:
A• \B = 1• \6 ∧ 3• \7 ∧ 5• \8 2• \6 ∧ 4• \7 ∧ ε• \8 = 3 3 Matrix A• \B = [3 3] T is the greatest solution for X which ensures A ⊗ X B. Indeed, A ⊗ (A• \B) = 1 2 3 4 5 ε ⊗ 3 3 = 5 7 8 6 7 8 = B.
Remark 1. Note that residuation achieves equality if a solution exists.
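The scalar and matrix operations above are straightforward to program. The following sketch is only an illustration (it is not the authors' code, and representing ε by -inf in dense matrices is an implementation choice made here); it implements the max-plus sum, product and left residuation of Eqs. (1)-(3) and reproduces the residuation example above.

```python
EPS = float("-inf")  # epsilon of Z_max (neutral element of the max-plus sum)

def oplus(A, B):
    """Max-plus matrix sum, Eq. (2): entrywise max."""
    return [[max(a, b) for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def otimes(A, B):
    """Max-plus matrix product, Eq. (1): (A otimes B)_ik = max_j (a_ij + b_jk)."""
    return [[max(A[i][j] + B[j][k] for j in range(len(B)))
             for k in range(len(B[0]))] for i in range(len(A))]

def lresid(A, B):
    """Left residuation, Eq. (3): greatest X with A otimes X <= B,
    given entrywise by min_k (b_kj - a_ki)."""
    return [[min(B[k][j] - A[k][i] for k in range(len(A)))
             for j in range(len(B[0]))] for i in range(len(A[0]))]

# Residuation example above: A is 3x2, B is 3x1
A = [[1, 2], [3, 4], [5, EPS]]
B = [[6], [7], [8]]
X = lresid(A, B)          # [[3], [3]]
print(X, otimes(A, X))    # A otimes X = [[5], [7], [8]], dominated by B
```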
SYSTEM MODELING
3.1 Dioid M ax in [[γ, δ]]
Dioid M ax in [[γ, δ]] (see [START_REF] Baccelli | Synchronization and Linearity : An Algebra for Discrete Event Systems[END_REF][START_REF] Hardouin | Control and state estimation for max-plus linear systems[END_REF]) is formally the quotient dioid of B[[γ, δ]] (the set of formal power series in two commutative variables γ and δ, with Boolean coefficients and with exponents in Z) by the equivalence relation xRy ↔ γ*(δ⁻¹)*x = γ*(δ⁻¹)*y. Dioid M ax in [[γ, δ]] is complete. As M ax in [[γ, δ]] is a quotient dioid, an element of M ax in [[γ, δ]] may admit several representatives in B[[γ, δ]]. The representative which is minimal with respect to the number of terms is called the minimum representative.
A simple geometrical interpretation of the previous equivalence relation is available in the (γ, δ)-plane. Consider a monomial
γ^k δ^t ∈ B[[γ, δ]], its south-east cone is defined as {(k′, t′) | k′ ≥ k and t′ ≤ t}. The south-east cone of a series in B[[γ, δ]] is defined as the union of the south-east cones associated with the monomials composing the considered series. For two elements s_1 and s_2 in B[[γ, δ]], s_1 R s_2 (i.e., s_1 and s_2 are equal in M ax in [[γ, δ]]) is equivalent to the equality of their south-east cones. Direct consequences of the previous geometrical interpretation are:
• simplification rules in M ax in [[γ, δ]]: γ^k ⊕ γ^t = γ^min(k,t) and δ^k ⊕ δ^t = δ^max(k,t)    (4)
• a simple formulation of the order relation for monomials: γ^n δ^t ⪰ γ^n′ δ^t′ ⇔ n ≤ n′ and t ≥ t′
A simple interpretation of the variables γ and δ for daters is available:
• multiplying a series s by γ is equivalent to shifting the argument of the associated dater function by -1. • multiplying a series s by δ is equivalent to shifting the values of the associated dater function by 1 Example 5. Consider the series s = γδ 2 ⊕ γ 3 δ 3 ⊕ γ 4 δ 1 represented by dots in Fig. 1. The minimum representative of s in M ax in [[γ, δ]] is γδ 2 ⊕ γ 3 δ 3 . This result could be obtained using the simplification rules (4). Besides,
s = ⊕_{k≤0} γ^k δ^{-∞} ⊕ ⊕_{k=1,2} γ^k δ^2 ⊕ ⊕_{k≥3} γ^k δ^3
Therefore, the dater d s associated with s is given by
d_s(k) = -∞ if k ≤ 0;  2 if k = 1, 2;  3 if k ≥ 3
Fig. 1. s and its south-east cone (hatched)
3.2 Linear state-space representation of TEG in M ax in [[γ, δ]]
From now on, we only consider TEG with at most one place from a transition to another transition. This assumption is not restrictive, as it is always possible to transform any TEG in an equivalent TEG with at most one place from a transition to another transition. The dynamics of a TEG may be captured by associating each transition with a series s ∈ M ax in [[γ, δ]], where d s (k) is defined as the time of firing k of the transition. Therefore, for TEG, γ is a shift operator in the event domain, where an event is interpreted as the firing of the transition, and δ is a shift operator in the time domain.
The transitions of a TEG are divided into three categories:
• state transitions (x 1 , ..., x n ): transitions with at least one input place and one output place. • input transitions (u 1 , ...u p ): transitions with at least one output place, but no input places. • output transitions (y 1 , ...y m ): transitions with at least one input place, but no output places.
Under the earliest functioning rule (i.e., state and output transitions fire as soon as they are enabled), with respect to a place with initially m tokens and holding time t, the influence of its upstream transition on its downstream transition is a positive shift in the time domain of t time units and a negative shift in the event domain of m events. The complete shift operator is coded by the monomial γ m δ t in M ax in [[γ, δ]]. Therefore, consider the place upstream from transition t i and downstream from transition t j , the influence of transition t j on transition t i is coded by the monomial f ij in M ax in [[γ, δ]] defined by f ij = γ mij δ τij where m ij is the initial number of tokens in the place and τ ij is the holding time of the place.
Consequently, a TEG admits a linear state-space representation in M ax in [[γ, δ]].
x = Ax ⊕ Bu ⊕ Rw,   y = Cx
where x ∈ M ax in [[γ, δ]]^n is the state, u ∈ M ax in [[γ, δ]]^p the input, y ∈ M ax in [[γ, δ]]^m the output and w ∈ M ax in [[γ, δ]]^n the additive perturbation of the state. The perturbation w models, for example, unexpected failures, delays or uncertain parameters such as task durations; matrix R describes how these perturbations affect the inner states, and in the sequel R is assumed to be the identity matrix.
A ∈ M ax in [[γ, δ]]^{n×n}, B ∈ M ax in [[γ, δ]]^{n×p}, C ∈ M ax in [[γ, δ]]^{m×n} and R ∈ M ax in [[γ, δ]]^{n×n} are matrices with monomial entries describing the influence of transitions on each other.
According to Theorem 1, under the earliest functioning rule, the input-output (resp. perturbation-output) transfer function matrix H (resp. G) of the system is equal to
CA*B (resp. CA*):
y = CA*Bu ⊕ CA*w = Hu ⊕ Gw    (5)
Therefore, the condition for holding the maximization (preserving the input-output and perturbation-output behaviors) is rephrased in terms of transfer function matrices: the condition is now to preserve the input-output and perturbation-output transfer function matrices.
When an element s of M ax in [[γ, δ]] is used to code information concerning a transition of a TEG, then a monomial γ^k δ^t with k, t ≥ 0 may be interpreted as "at most k events occur strictly before time t" (i.e., d_s(K) ≥ t). An element s of M ax in [[γ, δ]] used to code a transfer relation between two transitions of a TEG (e.g., an entry of H) is causal (i.e., no anticipation in the time/event domain: all exponents are non-negative) and periodic (i.e., s = p ⊕ qr* with polynomials p, q and a monomial r = e). For a periodic series s with r = γ^ν δ^τ, its asymptotic slope σ(s) is defined as ν/τ.
Fig. 2. TEG example
Example 6. Let us consider a manufacturing system depicted by the TEG given in Fig. 2. The transition labeled u represents the input of raw material, which is transported for 2 time units to a machine with 2 treatment spots. Its input is labelled x_1, the processing time is equal to 5, and the machine's output is labeled x_2. The processed part is then transported out of the system for 3 time units, and the transition y represents the moment when the part is out of the production line. Before accepting a new raw part, the machine must be cleaned, and this operation takes 7 time units. The model is then given by
A = [ ε  γ^2δ^7 ; δ^5  ε ],   B = [ δ^2 ; ε ],   C = [ ε  δ^3 ]
It must be noticed that this system can be realized in a straightforward way in (max, +) or (min, +) form:
x_1(k) = max(2 + u(k), 7 + x_2(k-2)),  x_2(k) = 5 + x_1(k),  y(k) = 3 + x_2(k),
x_1(t) = min(u(t-2), 2 + x_2(t-7)),  x_2(t) = x_1(t-5),  y(t) = x_2(t-3),
where x_i(k) represents the firing date of part k and x_i(t) represents the number of firings that occurred up to time t. Both systems are implicit equations. In order to obtain an explicit model we introduce an original procedure. We propose to split the system in the following way: A = A_r ⊕ A_d ⊕ A_g, where
• if n_ij ≠ 0 and t_ij ≠ 0, (A_r)_ij = A_ij = γ^n_ij δ^t_ij, else (A_r)_ij = ε
• if t_ij = 0, (A_g)_ij = A_ij = γ^n_ij, else (A_g)_ij = ε
• if n_ij = 0, (A_d)_ij = A_ij = δ^t_ij, else (A_d)_ij = ε
In the present example:
A_g = [ ε  ε ; ε  ε ],   A_d = [ ε  ε ; δ^5  ε ],   A_r = [ ε  γ^2δ^7 ; ε  ε ]
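The splitting rule can be sketched as follows, with each monomial γ^n δ^t encoded as a pair (n, t) and ε as None (this encoding is an illustrative choice made here, not the representation used by MinMaxGD):

```python
EPS = None  # epsilon entry: no arc between the two transitions

def split_A(A):
    """Split a matrix of monomials, given as (n, t) exponent pairs,
    into (A_r, A_d, A_g) according to the rule above."""
    def empty():
        return [[EPS] * len(A[0]) for _ in A]
    A_r, A_d, A_g = empty(), empty(), empty()
    for i, row in enumerate(A):
        for j, entry in enumerate(row):
            if entry is EPS:
                continue
            n, t = entry
            if n != 0 and t != 0:
                A_r[i][j] = (n, t)   # mixed shift gamma^n delta^t
            elif t == 0:
                A_g[i][j] = (n, 0)   # pure event shift gamma^n
            else:
                A_d[i][j] = (0, t)   # pure time shift delta^t
    return A_r, A_d, A_g

# TEG of Fig. 2: A = [eps, gamma^2 delta^7 ; delta^5, eps]
A = [[EPS, (2, 7)], [(0, 5), EPS]]
A_r, A_d, A_g = split_A(A)   # A_r keeps (2, 7), A_d keeps (0, 5), A_g stays empty
```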
Knowing that x = Ax ⊕ Bu separating the matrix A we have:
x = (A_d ⊕ A_g ⊕ A_r)x ⊕ Bu
x = (A_d ⊕ A_g)x ⊕ A_r x ⊕ Bu
Using Theorem 1,
x = (A_d ⊕ A_g)* A_r x ⊕ (A_d ⊕ A_g)* Bu
Knowing that, we set Ā = (A_d ⊕ A_g)* A_r and B̄ = (A_d ⊕ A_g)* B, and these matrices generate a model of the form
x = Āx ⊕ B̄u,   y = Cx
which can be realized in an explicit form in either (max,+) or (min,+):
(A_d ⊕ A_g) = [ ε  ε ; δ^5  ε ]  ⇒  (A_d ⊕ A_g)* = [ e  ε ; δ^5  e ]
Ā = (A_d ⊕ A_g)* A_r = [ ε  γ^2δ^7 ; ε  γ^2δ^12 ]
and for the input
B̄ = (A_d ⊕ A_g)* B = [ δ^2 ; δ^7 ],
this leads to the following explicit model
x_1(k) = max(2 + u(k), 7 + x_2(k-2)),  x_2(k) = max(7 + u(k), 12 + x_2(k-2)),  y(k) = 3 + x_2(k)
x_1(t) = min(u(t-2), 2 + x_2(t-7)),  x_2(t) = min(u(t-7), 2 + x_2(t-12)),  y(t) = x_2(t-3)
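As a quick check of the explicit model, the (max,+) recursion can be simulated directly; the sketch below is an illustration only (the input sequence is arbitrary and events are indexed from 1) and computes the earliest firing dates under the earliest functioning rule.

```python
def simulate(u, n_events):
    """Earliest firing dates of the explicit (max,+) model of the example.

    u: function k -> date at which the k-th raw part is made available.
    x2 is taken as -inf for k <= 0 (no firing before the start).
    """
    NEG = float("-inf")
    x1, x2, y = {}, {}, {}
    for k in range(1, n_events + 1):
        x2_past = x2.get(k - 2, NEG)
        x1[k] = max(2 + u(k), 7 + x2_past)
        x2[k] = max(7 + u(k), 12 + x2_past)
        y[k] = 3 + x2[k]
    return x1, x2, y

# e.g. all raw parts available at date 0
x1, x2, y = simulate(lambda k: 0, 5)
print(y)  # {1: 10, 2: 10, 3: 22, 4: 22, 5: 34}, consistent with H = delta^10 (gamma^2 delta^12)*
```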
These matrices Ā and B̄ can be programmed into software without realization problems. The system can be solved by considering Theorem 1 (see Eq. (5)), and the transfer matrix y = Hu is given by H = CĀ*B̄ = CA*B = δ^10(γ^2δ^12)*. This computation can easily be performed by using the library MinMaxGD, available as a C++ library or alternatively through a web interface ([START_REF] Cottenceau | Minmaxgd a library for computation in semiring of formal series[END_REF]; [START_REF] Cândido | Minmaxgdjs : A web toolbox to handle periodic series in minmax[END_REF]).
OBSERVER BASED CONTROLLER
This section presents how to implement an efficient control strategy for dynamical systems considered in the previous section. The control strategy proposed is depicted in Fig. 3. It is inspired from the observer based control for classical linear systems [START_REF] Hardouin | Control and state estimation for max-plus linear systems[END_REF][START_REF] Hardouin | Max-plus Linear Observer: Application to manufacturing Systems[END_REF].
The motivation for controlling the input of these systems is to decide when an operation should be started in order to achieve an objective, e.g. when to start a processing operation in order to meet the customer demand.
Hence, the aim is to design a controller able to decide when the system should start to work in order to achieve a desired behavior. Classically, a popular production policy is the just-in-time policy, that is, to start as late as possible while ensuring the customer demand. It minimizes the internal stock while keeping the performance. The design goal is to obtain controllers M and P (see Fig. 3), which are matrices ensuring that the control input u = P(v ⊕ Mx̂) be the greatest one (i.e. the one which delays the input as much as possible) in order to achieve a given objective, the reference input v. Signal x̂ is either the real state of the system (x̂ = x if the state is measurable) or an estimation observed thanks to an observer, inspired from the Luenberger observer [START_REF] Luenberger | An Introduction to observers[END_REF]). This estimator is given by
x̂ = Ax̂ ⊕ Bu ⊕ L(ŷ ⊕ y).    (6)
where L is an observer matrix to be designed. It is fed by the measured output y of the system, so that the information coming from the real system output, in particular the effect of the disturbance w feeding the system through matrix R, can be taken into account to compute the estimation x̂. This observer-based controller is then a feedback control strategy. The goal is to design P, M, L in order to achieve a desired behavior denoted G_ref.
By solving Eq. (6), x̂ is given by
x̂ = Ax̂ ⊕ Bu ⊕ L(Cx̂ ⊕ Cx) = (A ⊕ LC)*Bu ⊕ (A ⊕ LC)*LCA*Rw
By substituting this expression in u, the control is:
u = P(v ⊕ Mx̂) = P(M(A ⊕ LC)*BP)*v ⊕ PM((A ⊕ LC)*BPM)*(A ⊕ LC)*LCA*Rw
The developments are given in [START_REF] Hardouin | Max-plus Linear Observer: Application to manufacturing Systems[END_REF]) and lead to the optimal design
P_opt = (CA*B) •\ G_ref    (7)
L_opt = ((A*B) •/ (CA*B)) ∧ ((A*R) •/ (CA*R))    (8)
M_opt = P_opt •\ P_opt •/ (A*B P_opt)    (9)
4.1 Example
In our sample system, shown in Fig. 2, since it does not have many inner states, inputs or outputs, these calculations can be done by hand. Using G_ref = CA*B, we are able to calculate P_opt = (CA*B) •\ (CA*B); knowing that CA*B = δ^10(γ^2δ^12)*, we obtain
P_opt = δ^10(γ^2δ^12)* •\ δ^10(γ^2δ^12)* = (γ^2δ^12)*
The equality holds since a* •\ a* = a* (see Baccelli et al. (1992)), which yields (γ^2δ^12)* •\ (γ^2δ^12)* = (γ^2δ^12)*.
With the result of P opt we are able to calculate M opt
M_opt = ((γ^2δ^12)* •\ (γ^2δ^12)*) •/ ([ δ^2 ; δ^7 ] ⊗ (γ^2δ^12)*)
M_opt = [ δ^-2  δ^-7 ] ⊗ (γ^2δ^12)*
Obviously it is not possible to implement a controller that has negative exponents (this controller would be noncausal), hence the solution is to pick only the causal projection. To do this, imagine a Cartesian plane where gamma is the x axis and delta the y axis. Now we put the points according to the desired series, for example
γ -4 δ -1 ⊕ γ -2 δ 2 ⊕ γ 2 δ 3 ⊕ γ 4 δ 4 .
Fig. 4. Gamma-delta plane representation of the series γ^-4δ^-1 ⊕ γ^-2δ^2 ⊕ γ^2δ^3 ⊕ γ^4δ^4 and its causal projection δ^2 ⊕ γ^2δ^3 ⊕ γ^4δ^4.
It is clear that, using the south-east cone presented in Section 3.1, all the points in the drawing must be inside the north-east quadrant for the series to be realizable. This way, the causal projection is the biggest possible area whose corners are all inside this quadrant and which is contained in the original area. For our example, it would be δ^2 ⊕ γ^2δ^3 ⊕ γ^4δ^4, as shown in Fig. 4. Using the same reasoning on M_opt we get the following causal series:
Pr+(M_opt) = [ γ^2δ^10  γ^2δ^5 ] ⊗ (γ^2δ^12)*
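For the polynomial part of a series given as a list of monomials, the causal projection described above amounts to dropping monomials with a negative δ exponent and clamping negative γ exponents to 0. The sketch below only illustrates this rule (the (γ, δ) exponent-pair encoding is an assumption made here; periodic series would additionally require handling the r* part, as MinMaxGD does):

```python
def causal_projection(monomials):
    """Pr+ of a polynomial given as a list of (gamma_exp, delta_exp) pairs.

    A monomial with a negative delta exponent has no causal monomial below it
    and is dropped; for k < 0 and t >= 0 the greatest causal monomial below
    gamma^k delta^t is delta^t, so the gamma exponent is raised to 0.
    """
    return [(max(g, 0), d) for (g, d) in monomials if d >= 0]

# Example of Fig. 4:
s = [(-4, -1), (-2, 2), (2, 3), (4, 4)]
print(causal_projection(s))   # [(0, 2), (2, 3), (4, 4)]
```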
Finally the observer needs to be calculated
L_1 = ([ δ^2 ; δ^7 ] ⊗ (γ^2δ^12)*) •/ (δ^10 ⊗ (γ^2δ^12)*) = [ δ^-8 ; δ^-3 ] ⊗ (γ^2δ^12)*
L_2 = ([ e  γ^2δ^7 ; δ^5  e ] ⊗ (γ^2δ^12)*) •/ ([ δ^8  δ^3 ] ⊗ (γ^2δ^12)*) = [ γ^2δ^4 ; δ^-3 ] ⊗ (γ^2δ^12)*
Then L = L_1 ∧ L_2:
L = [ γ^2δ^4 ; δ^-3 ] ⊗ (γ^2δ^12)*  ⇒  Pr+(L) = [ γ^2δ^4 ; γ^2δ^9 ] ⊗ (γ^2δ^12)*.
Since all the matrices entries are causal, all these controllers are realizable, and thus they can be implemented.
To convert the gamma-delta elements into min-plus or max-plus equations, we propose to split the series s = p ⊕ qr * in the following way : first we define ζ k = qr * thus s = p ⊕ ζ k . By using Theorem 1, we get the relation
ζ k = rζ k ⊕ q.
In our example x̂ = Ax̂ ⊕ Bu ⊕ Ly; for example's sake, focusing on the first state we obtain x̂_1 = γ^2δ^7 x̂_2 ⊕ δ^2 u ⊕ γ^2δ^4 ⊗ (γ^2δ^12)* y. Hence in this case p = γ^2δ^7 x̂_2 ⊕ δ^2 u, q = γ^2δ^4 y and r = γ^2δ^12. By using the formula we just found, x̂_1 = γ^2δ^7 x̂_2 ⊕ δ^2 u ⊕ ζ_1 and
ζ 1 = γ 2 δ 12 ζ 1 ⊕ γ 2 δ 4 y.
Thanks to this expression it is straightforward to put these equations into min-plus or max-plus implicit equations. For simplicity, min-plus will be used because updating the system periodically, in our case every 500 ms, is easier than updating it when events occur. The min-plus equation for x̂_1 is x̂_1(t) = min(2 + x̂_2(t-7), u(t-2), ζ_1(t)), knowing that ζ_1(t) = min(2 + ζ_1(t-12), 2 + y(t-4)).
These equations can be easily programmed into a software, since they only require data storage to use the time delays properly.
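Since only finite histories of past values are required, such delayed min-plus equations map naturally onto a periodic task. The sketch below illustrates this for x̂_1 and ζ_1; the class name, the rate of one step() call per model time unit and the zero initialisation of the histories are illustrative assumptions, not taken from the generated code (in the application the update runs every 500 ms, so the model delays have to be converted into the corresponding number of sampling periods).

```python
from collections import deque

class ObserverX1:
    """One-step update of
        x1_hat(t) = min(2 + x2_hat(t-7), u(t-2), zeta1(t))
        zeta1(t)  = min(2 + zeta1(t-12), 2 + y(t-4))
    where one call to step() corresponds to one time unit of the model."""

    def __init__(self):
        # Histories initialised to 0 here (no firing counted before the start);
        # the proper initialisation depends on the initial conditions of the TEG.
        self.u_hist = deque([0] * 3, maxlen=3)     # holds u(t-2) .. u(t) after append
        self.y_hist = deque([0] * 5, maxlen=5)     # holds y(t-4) .. y(t) after append
        self.x2_hist = deque([0] * 8, maxlen=8)    # holds x2_hat(t-7) .. x2_hat(t) after append
        self.z_hist = deque([0] * 12, maxlen=12)   # holds zeta1(t-12) .. zeta1(t-1) before append

    def step(self, u_t, y_t, x2_hat_t):
        self.u_hist.append(u_t)
        self.y_hist.append(y_t)
        self.x2_hist.append(x2_hat_t)
        zeta1_t = min(2 + self.z_hist[0], 2 + self.y_hist[0])
        self.z_hist.append(zeta1_t)
        return min(2 + self.x2_hist[0], self.u_hist[0], zeta1_t)
```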
REAL SYSTEM APPLICATION
In this section, we are interested in applying this method in the real system, depicted in Fig. 5, which is located in Polytech Angers, France.
The system has two separate sections, a faster loop and a slower loop, as shown in Fig. 5. The slower loop has 6 buttons that do not let the pallets pass, while the faster loop has only 4. All the buttons have sensors just before them, as can be seen in Fig. 6. The pallets' size is such that if they are waiting at a button, the sensor stays active. Each section (between two buttons) has a defined maximum number of pallets. The travel times were measured 10 times, and the time used is the average of them. Each button (labelled B1 to B10), the travel times and the maximum number of pallets are represented in Fig. 7. It is important to notice that there are 3 pallets waiting for B1, 2 waiting for B5 and 1 waiting for B6, as initial conditions of the system.
Fig. 7. Buttons, travel times (in sec) and pallet limit
The system is programmed to activate the buttons when there is one pallet waiting for it (the sensor is active), there is at least one space left for the path and at least one control token available. Especially for B3 and B10, there is a forced synchronization, meaning that B3 and B10 will always activate at the same time, requiring 2 control tokens, one free space between B10 and B5 and Now the first step is to put this system in a Petri net model. We will analyze the section between B4 and B1 and then apply the same reasoning to the other sections.
Since each button changes the state of the system, we have the first two inner states, each one with one associated input because of the control tokens, as shown in Fig. 8. Then we need one place for each empty slot in the section, and since 2 pallets cannot be in the same place, these places must hold only one token at a time. Besides the tokens, the timing for each place has to be 2 seconds, since a button will only activate 2 seconds after its previous activation, but the sum of all the places between the buttons has to equal the total travelling time; using these conditions we obtain the second step of the model in Fig. 9. It is worth mentioning that the initial pallets are the tokens in (P1), (P2) and (P3), so the token in (P1) can activate B1 instantaneously.
To summarize the model so far, the token in P1 will activate the B1 transition, meaning that the button will go down and the pallet will enter the path between B1 and B2. After that, the two other pallets will begin moving and wait for the 2 seconds to pass, which explains the 2-second travelling time in the model. The last condition that is missing is the total number of pallets for each section.
Since for the section B4 to B1 only 3 pallets are allowed, and all three are already in it, we have the final model represented in Fig. 10.
If we replicate this reasoning for all sections combined, and knowing that before every button there is a sensor indicating that a pallet is there (the sensors are thus the outputs of the system), we get the final model represented in Fig. 11.
We can obtain the system's matrices as proposed in Section 3.1, where A will be split into A_g, A_d and A_r to make the model implementable, using the same procedure discussed in Section 3. The next step is to use these 3 matrices to calculate P_opt, L_opt and M_opt thanks to Eqs. (7)-(9). All the calculations needed to obtain the final controller were performed using the software MinmaxGD [START_REF] Cottenceau | Minmaxgd a library for computation in semiring of formal series[END_REF]. After executing, this software gave us matrices just like the ones presented in Section 4.1.
Transforming those matrices into code is extremely laborious and can lead to unexpected errors. Knowing that, a software tool has been developed to perform the calculations and write C code directly (see [START_REF] Freitas | State estimation of an automated system in max-plus algebra by using a linear observer[END_REF]), using the method shown at the end of Section 4.1. The generated code was then executed on a PC that supervises the real system. The PLCs ensure that the system behaves as specified and provide data for the supervisory system. The code works as follows: every 500 ms, it reads the outputs provided by the PLCs, updates the input control tokens (consequently updating the input and thus the amount of control tokens available for the buttons), and then updates the observer so that the user knows whether the system is behaving adequately.
6. RESULTS
In this section we discuss the effects of having implemented the control on the real system. Since the only places where pallet accumulation can occur are B4 and B10, and the loop containing B4 is faster, B4 is the only place where accumulation actually happens. Our chosen G_ref was the system's transfer function, that is, we chose to maintain the original output but to delay the button activations as much as possible in order to reduce stocking. Knowing that, the objective of the controller is to reduce the stocking at B4 without changing the original output. To illustrate the performance of the controller we focus on measurements of the x_7 state.
It must be mentioned that the travelling times are not deterministic, hence we focus on the difference between the x_7 state with and without control; sometimes this difference is positive and sometimes negative. In Fig. 12 it is possible to see the time evolution of this difference. As expected, the difference is insignificant, principally because x_7 is the number of times the state has been activated, which is a number that increases over time. The mean value of the difference is -0.1342, very close to 0, showing that all outputs would have been maintained if the system were perfectly deterministic.
The next topic is the stocking difference. For this matter the PLCs are programmed to measure x_4's information; this way the stock is calculated using stock(t) = x_4(t) - x_7(t). The result can be seen in Fig. 13.
This pattern keeps repeating for the whole simulation. As can be seen, the controlled system has at most 2 pallets in stock, while the uncontrolled (natural) system has 3. This result makes sense, since our aim is to maintain the outputs: in this case, for output y_2 to be maintained, the system has to activate B2 at least twice, resulting in a minimum stock of 2. Calculating the mean stock, we get 1.6550 pallets for the natural system and 1.3175 for the controlled one, meaning a 20.4% stock reduction.
CONCLUSION
In this paper we propose and present an efficient way to implement controllers for max-plus linear systems.
Freely available software tools automatically yield the code for implementation in a SCADA system. In the real example presented, the just-in-time control implemented has reduced the useless internal stocks by 20%, which illustrates the interest of this kind of production policy. Engineers can use this methodology to adapt the control to their specific systems in an easy way. A user-friendly interface will be developed soon in order to help them select the control strategy they want to implement among the different available possibilities (Observer-based control [START_REF] Hardouin | Control and state estimation for max-plus linear systems[END_REF], Disturbance Decoupling Problem [START_REF] Shang | An integrated control strategy to solve the disturbance decoupling problem for max-plus linear systems with applications to a high throughput screening system[END_REF], Robust control [START_REF] Lhommeau | Interval analysis in dioid : Application to robust open loop control for timed event graphs[END_REF], Stabilization thanks to Feedback Control [START_REF] Cottenceau | On timed event graphs stabilization by output feedback in dioid[END_REF], Model Predictive Control [START_REF] Schutter | Model predictive control for max-plus-linear discrete event systems[END_REF]).
Example 3. (min,plus) algebra: Z min = (Z ∪ {-∞, +∞}, min, +) is a complete idempotent semiring such that a ⊕ b = min(a, b), a ⊗ b = a + b, a ∧ b = max(a, b) with ε = +∞, e = 0, and ⊤ = -∞. The order is total and corresponds to the inverse of the natural order (i.e., 2 ⪯ 1). Semiring of matrices Z n×n min is a semiring of matrices with entries in Z min.
Example 4. (Matrix operations in Z max). Given three matrices with entries in Z max,
Fig. 3. Observer-based Controller architecture
Fig. 5. Slower Loop
Fig. 8. Modeling 1
Fig. 9. Modeling 2
Fig. 10.
Fig. 11. Complete Model [START_REF] Gonçalves | Tropical algorithms for linear algebra and linear event-invariant dynamical systems[END_REF]
Fig. 12. Difference between x_7 with (Natural) and without controller (Controlled) |
01676394 | en | [ "shs.geo", "shs.eco" ] | 2024/03/04 16:41:22 | 2016 | https://shs.hal.science/halshs-01676394/file/Designing%20economic%20models%20for%20the%20energy%20transition.pdf |
S. Nösperger (EDF), H. Joumni (CSTB), M. El-Mtiri (VEOLIA), O. Bonin (IFSTTAR - LVMT), O. Keserue (VEOLIA), E. Blanc (SETEC), A. Langlois
Designing economic models for the energy transition at the territorial scale
Keywords: economic model, business model, energy transition, cities, externalities
BACKGROUND
Energy efficiency investments have by and large been limited to energy-intensive industries' contribution to climate change mitigation [START_REF] Grubb | Planetary Economics, Energy, climate change and the three domains of the sustainable development[END_REF]. In urban projects, energy efficiency is less concerted and less explicit.
With large new urban projects encouraged to integrate climate change mitigation through high energy performance standards, the impact on project costs is considerable. From the additional design costs to all the other additions in terms of equipment, facilities and infrastructure, energy efficiency implies a considerable increase in costs when constructing new housing, office buildings or transport infrastructure. This is even more significant in the case of urban renovation (Guennec et al., 2009; Nösperger et al., 2015). However, the poor economic outlook makes it hard to justify this additional investment in the absence of a market for these ambitious projects. Yet the social and environmental co-benefits related to energy efficiency are now apparent (Tirado Herrero et al., 2011; Ürge-Vorsatz, IEA, 2014). How can these contradictions between benefits and costs be resolved in order to make projects viable? We attempt to solve this paradox by addressing a number of questions:
When assessing the economic opportunity of an urban project, how can it be ensured that the socio-environmental benefits related to improved energy efficiency are taken into account? How are these benefits qualified from the perspective of different stakeholders and monetised, when relevant? What types of partnerships, agreements or contracts exist which ensure an equitable distribution or sharing of the economic value stemming from the identified co-benefits? Is this value sufficient to make the energy efficiency investment viable?
Today's energy production facilities are either highly centralised (power plants) or highly decentralised (e.g. photovoltaic panels installed on roofs). What can we learn from business models developed for transport (Uber) and the internet (Google) that could integrate the multi-scale territorial perspective characterising the diversity of actors and objects in the field of energy efficiency, for example through smart grids? EFFICACITY, the institute for energy transition in the urban environment, has been addressing these questions in a thematic programme with the aim of delivering operational tools for stakeholders involved in urban projects (local authorities, energy and water suppliers, developers, etc.).
Following a literature review (part 2 of this article), a methodology for a broadened economic assessment was developed by EFFICACITY (presented in part 3). This methodology was applied in two urban projects to test its validity, which is the subject of sections 4 and 5 of this article. We end with a concluding section.
LITERATURE REVIEW
Environmental externalities have been studied in numerous projects and research articles, which provide a solid basis for identifying the broad families of externalities relevant to an energy efficiency project (Amann, 2006), such as health impacts [START_REF] Loftness | Linking energy to health and productivity in the built environment. Evaluating the Cost-Benefits of High Performance Building and Community Design for Sustainability, Health and Productivity[END_REF], biodiversity protection [START_REF] Herrero | Co-benefits quantified: employment, energy, security and fuel poverty implications of the large-scale, deep retrofitting of the Hungarian building stock[END_REF], and greenhouse gas reduction (IEA, 2014). In the case of environmental externalities, it should be noted that some States set "shadow values", that is, values for the external costs used when evaluating public infrastructure projects (Quinet, 2008 [START_REF] Quinet | Evaluation socio-économique des investissements publics[END_REF]; CESD "socio-economic assessment"...). Positive impacts on local economic development (Guennec et al., 2009; IEA, 2014) and on productivity [START_REF] Loftness | Linking energy to health and productivity in the built environment. Evaluating the Cost-Benefits of High Performance Building and Community Design for Sustainability, Health and Productivity[END_REF] have also been documented. This literature review has been synthesized in a reference book (Efficacity, 2014) and an accompanying database (under development).
The following table provides some examples of the co-benefits of energy efficiency in the case of buildings. In fact, the main difficulty when evaluating a specific energy efficiency project lies in adapting these general externalities to the local situation. While the enlarged economic evaluation of such urban energy efficiency projects can rely on proven methods (ISO 15686-5; Joumni, 2009), contrary to conventional evaluation methods such as Cost-Benefit Analysis, where the State arbitrates between winners and losers, local energy efficiency projects require externality values that are endorsed by the stakeholders, so that the latter are then inclined to contribute to a project on a voluntary basis.
The valuation of externalities is thus linked to the creation of economic models able to integrate those actors, traditionally absent or underrepresented, in current economic transactions around building projects.
In fact, beyond its monetary translation, the business model implies (du [START_REF] Tertre | Modèles économiques d'entreprise, dynamique macroéconomique et développement durable[END_REF]; Nösperger et al., 2015) bringing together the following prerequisites:
A basis for value creation and distribution (capturing) between the actors;
The basis and formal structure of relationships between actors (contracts, formal or informal partnerships, "cooperative" society ...);
Sources of mutual investment or financing operations;
The nature of work performed by these players: skills, qualifications, activities produced independently or co-constructed (co-production).
The methodology presented in the next section addresses simultaneously the externalities and co-benefits to be considered and the business models suited to actually converting them into monetary or non-monetary value.
METHODOLOGY
EFFICACITY developed an 8-stage methodology intended to help identify the relevant externalities of an energy efficiency project, determine relevant monetary values, and design suitable partnerships likely to convert them into economic flows (financial or non-financial).
Phase 1: Local context identification and definition of alternative solutions (stages 1-3):
- Step 1: Background and overview of the initial situation (nature of the project, scope) and identification of the sets of actors involved in the project.
Step 1: context
The planned urban block being subject to evaluation is located in a neighbourhood in the Paris basin shared with new buildings of a university campus, an architectural school, a Regional Express Railway station (RER), college halls of residence and a university canteen. This mixed urban block consists of 4000m² of offices and 13,500 m² student residences. The local authority is also considering building a large nautical centre (2000 m² Basin). The project is in an area with special status, a "new town", managed by a public development agency named EPA (Etablissement Public d'Amenagement).
EPA owns the grounds of the planned urban block and intends to sell it to the property developer who will commission and manage the project and supervise other contracted project managers such as for the halls of residence and the office buildings. This project is the subject of a single contract of Design-Build-Operate & Maintenance of which the incumbent manages the various components having already identified a number of subcontracting partners. EPA intends the cluster located in its district to be competitive internationally against other campuses.
Local energy resources include: an aquifer (120 m deep, 14 °C), solar radiation, biomass (subject to demand by the local authority), and waste heat from the nautical centre. A system composed of 5 wells was installed for the university building, with a discharge well back to the aquifer.
The energy balance of the territory points to: a net demand for heat from the student canteen on the campus, from the building block and from the nautical centre (the net heat requirement of the building block being due to the predominance of residential housing and its high domestic hot water needs); and a net cooling requirement from the university building. The university building will therefore make significant use of a cooling tower before releasing water above or below the temperature of the aquifer. However, given the low flow rate of the aquifer, the aquifer will heat up in the absence of additional uses for the waste heat.
Steps 2 & 3: Alternative energy solutions
The reference project, used for calculating the additional cost of the energy efficiency solution, is a building block powered by a gas boiler and air-conditioned by adiabatic air coolers. The reference block is 30% more efficient than typical new buildings of the period.
The comparative analysis that follows, with respect to this reference solution, takes an energy efficient starting point (baseline).
In the context of this article, we develop only the most promising alternatives, especially from the perspective of their intensity of use in a context of already very efficient buildings. It is noteworthy that all of these alternatives include installing photovoltaic panels whose production will be sold at a fixed feed-in tariff for 20 years.
Energy concept 1 for the building block is based on combining multiple sources of energy: gas, electricity, solar photovoltaic, and wastewater. Energy recovery from grey water is more robust (reliable and cost-effective) than solar thermal sources because it is simpler and requires no maintenance. The energy systems are centred on a thermo-refrigerating pump (TFP) that is not able to supply the hot and cold needs of the block in a balanced way. As a result there is a systematic surplus of cooling energy that has to be released back to the aquifer. This solution could in theory be quite effective from a functional and energy standpoint; unfortunately the resulting temperature rise of the aquifer requires a search for alternatives. Indeed, while the energy balances of the Bienvenüe university building wells, on the one hand, and of the building block (V1), on the other (which are located at a distance from each other), could partially compensate each other, the very low flow rate of the aquifer prevents this compensation from being effective.
Energy concept 2 relies on mutualising the abstraction from the aquifer: the thermo-refrigerating pump on this aquifer uses the same wells as the university building. This solution has the advantage of partially addressing the problem of the excess heat emitted annually into the aquifer by the Bienvenüe building (due to its overall cooling need over the year), which leads to its gradual warming (because the low flow rate of the aquifer prevents the heat from dispersing). The decision to adopt this solution is based on the assumption that coupling the university building and the building block is sufficient to balance the needs and to reduce very significantly the heat discharged into the aquifer. The innovative character of these buildings, with their high-performance envelopes, increases the risk of a mismatch between simulated and observed behaviour: the coupled system "building block - university building" may still be out of balance, with an annual heat surplus that would continue to contribute to the gradual warming of the aquifer.
Energy concept 3 relies on mutualising the abstraction from the aquifer and on adding the nautical leisure centre. The policy targets set by the local authority require 50% of the heat demand of the nautical centre to be met with renewable energy, in this case solar thermal panels. The surface of these panels is downsized to avoid overheating. Assuming that the "super-block" formed by the projected building block and the university building has a net heat surplus in summer (releasing water at 35 °C), this surplus could be used by the nautical centre to further reduce its minimal summer heat demand and, consequently, the surface of solar panels needed. Before such considerations, the solar panel surface area stood at 2,900 m². Investment savings are therefore foreseeable.
Identifying externalities and co-benefits
We reason differentially with respect to a high-performance reference solution, instead of comparing the absolute life-cycle costs of each alternative. The alternatives are not compared with the baseline with respect to comfort, health and mobility, which are independent of the benefits derived from mutualising the aquifer energy source; the real-estate value should likewise not be affected. Most of the co-benefits discussed here therefore focus on the impacts of the mutualised aquifer heating/cooling source. These are treated in Section 4.4.
Estimating the market and non-market values
Cost elements (tradeable goods and services)
The following table shows the cost elements related to the tradeable goods and services as compared with the reference solution.
Monetising non-market benefits
The previous financial balance must be completed with the assessment of monetized non-market benefits.
Fighting climate change
The following table shows, again by difference with the reference solution, the energy consumption and greenhouse gas balance of the three concepts. The three concepts outperform the reference in terms of GHG balance. The difference is greater for Concepts 2 and 3, which rely on the thermo-refrigerating pump with a coefficient of performance (COP) greater than 3. We used the shadow price of carbon applied in public investment evaluations [START_REF] Quinet | Evaluation socio-économique des investissements publics[END_REF]. This value is €52/t in 2015, with an escalation rate of 4.5%, equal to the discount rate used for public investments in accordance with Hotelling's rule (Quinet, op. cit.).
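To make the mechanism explicit, the present value of the carbon co-benefit for annual avoided emissions q_t can be written as follows (our notation; this only restates the escalation-plus-discounting logic, it does not reproduce the study's exact figures):

\[
PV_{CO_2} \;=\; \sum_{t=1}^{T} \frac{q_t \, P_{2015}\,(1+g)^{t}}{(1+r)^{t}},
\qquad P_{2015} = 52~\text{€/t},\quad g = 4.5\%.
\]

When the discount rate r equals the escalation rate g, as in the Hotelling rule invoked above, the discounted unit value of carbon remains €52/t in every year, so the present value reduces to P_2015 multiplied by the total avoided emissions over the period.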
Environment, biodiversity and semiotics
While the project global warming attenuation capacity is indisputable, there may be an impact on biodiversity and the natural environment from the warming of the aquifer. Without corrective action, because of Darcy's velocity (flow rate) of the aquifer and hardness of the water, we could expect this warming over 15 years to result in two consequences:
the development of algae and amoebae; and the blockage and instability of the discharge well of the thermo-refrigerating pump at the university building (already installed). The use of cooling towers in the university building slows the phenomenon but does not suppress it.
Preventing the warming up of the aquifer improves the overall robustness of the system (at a 15 year horizon). The alternatives (concepts 2 and 3) for mutualising and compensating the heating and cooling needs reduce the warming of the aquifer. Furthermore, each alternative would give the territory a "showcase" of an innovative university campus, and fulfil the ambition of international renown mentioned above given the university's focus on energy transition and sustainable development. However, this effect is much more significant in the case which includes the water leisure park (concept 3).
The monetization of impacts related to the aquifer was carried out by the method of costs of damage [START_REF] Pearce | Cost Benefit Analysis and the Environment: Recent Developments[END_REF]. The main physical consequence is the impossibility of using a geothermal heat pump for the university building at a 15-year horizon. The assessment of that damage may be based on the discounted residual value of the equipment, valued at € 24 / m² (relative to the surface area of the planned building block) prior to discounting. The discounted value will be chosen to monetize the efforts for preventing the warming of the aquifer in the case of concepts 2 and 3.
In addition, EPA announced it would cover 50% of the investment cost associated with these three concepts in recognition of its innovative nature and international renown.
Operating savings of the university building
The expected synergies between the building block and the university building should reduce the release of cold effluent into the aquifer, both improving the energy efficiency of the cooling towers of the Bienvenüe building and decreasing their use intensity. The overall operation of these towers is thus expected to fall by 20-60%, which brings reduced maintenance and operating costs and a longer service life of the equipment. This represents an externality to the extent that it constitutes a cost or a benefit to a party without being valued economically. The externality is positive, since the action of agent A (the use of the aquifer by the building block) improves the well-being of agent B (a decrease in the operating costs of the university building) without the latter having to pay for it (Efficacity, 2015).
In the absence of a partnership linking the two parties, the university building is outside the economic reasoning of the actors of the building block.
Based on the information provided, the operational savings from the cooling towers permitted by mutualisation between the university building and the building block (concepts 2 and 3) were estimated at € 0.1 / m² (based on the surface area of the planned building block).
Solar panel investment savings for the nautical centre
By broadening the mutualisation to include the nautical centre the synergies allow a reduction in the surface area of the solar thermal panels (see above) of approximately 10%.
As above, this benefit may be associated with an externality. Based on the information provided, 10% investment savings in solar panels represents € 180,000, or 10.3 € / m² (based on the surface of the building block).
Step 6: Arbitration of the strategies for optimising the allocation of resources
The table below shows the results of the evaluation in global cost and extended global cost (ISO 15686-5) of the different variants relative to the reference solution, calculated over a period of 20 years with a discount rate of 3% (as per the EN 16627 standard). A negative value indicates a gain relative to the reference solution. The extended evaluation yields 8.9 €/m² for Concept 1, 2.9 €/m² for Concept 2 and -4.5 €/m² for Concept 3.
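For transparency, the sketch below shows one way such a global-cost balance can be assembled from annual flows. The cash-flow structure (investment, annual operating deltas, residual value, monetised externalities) is a simplified reading of ISO 15686-5, the numerical values are placeholders rather than the study's figures, and the function names are ours.

def discounted(flow_per_year, rate, years):
    """Present value of a constant annual flow over `years` at discount `rate`."""
    return sum(flow_per_year / (1 + rate) ** t for t in range(1, years + 1))

def global_cost(invest, annual_energy_delta, annual_maintenance_delta,
                residual_value, rate=0.03, years=20):
    """Differential life-cycle cost vs. the reference solution (negative = gain)."""
    return (invest
            + discounted(annual_energy_delta, rate, years)
            + discounted(annual_maintenance_delta, rate, years)
            - residual_value / (1 + rate) ** years)

def extended_global_cost(gc, monetised_externalities):
    """Extended evaluation: subtract the monetised non-market co-benefits."""
    return gc - sum(monetised_externalities)

# Illustrative placeholder values in €/m² (not the study's data).
gc = global_cost(invest=69.0, annual_energy_delta=-0.9,
                 annual_maintenance_delta=0.9, residual_value=2.0)
print(round(extended_global_cost(gc, [4.9, 0.1]), 1))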
It appears that the energy savings do not even cover the additional maintenance costs, so the competitiveness of all three concepts depends on the value of the external effects. However, at a 20-year horizon, only Concept 3 is economically competitive with respect to the reference solution. Choosing this path requires engaging in negotiations involving numerous partnerships.
Steps 7 and 8: Designing an appropriate economic model and evaluating the deployment of the selected strategies
Up to Step 6, we were in a cost/benefit logic proceeding by itemised accounting (costs versus market and non-market benefits). This must now be formalised through the development of relevant partnerships, following a logic of contributing actors and of economic flows from one actor to another.
Partnerships can be bilateral or multilateral (for instance a "mutual fund" drawing contributions for different benefits, or supervising the bilateral exchanges). There is a clear need for "systems of rules that bind the actors together, and are developed and adopted during the project" (Pasquelin, 2015). Beyond this governance, the success of such an economic model hinges on genuine cooperation between actors, who take an interest in each other's activities and in how they interact towards achieving a common goal (du [START_REF] Tertre | Modèles économiques d'entreprise, dynamique macroéconomique et développement durable[END_REF]). This case illustrates, within acceptable technical and economic conditions, the possibility of an assembly comprising the three parties (the university building, the building block and the nautical centre), in the context of a shared willingness to showcase the exemplarity of the project.
THE METHOD IN THE CASE OF A DISTRICT HEATING NETWORK
The second application of this methodology concerned a district heating project (steam provided by an incineration plant). A sensitivity analysis of the sector-specific economic models (taking the identified externalities into account) was carried out with respect to each actor's decisions. The two scenarios under study are the retrofitting of buildings to make them energy efficient, and the application of household waste minimisation policies. The externalities associated with these policies lead us to imagine new stakeholder configurations that redistribute gains and losses towards the most beneficial outcome. This constitutes a new way to build an all-encompassing cost approach by establishing new contractual arrangements.
Scenarios and simulations
We are interested in all the actors upstream and downstream of the two scenarios:
users of the heating network (households), recipients of the final energy bill and of its share in household income; the incineration plant as heat supplier, with the contribution of heat sales to its operating margin (net of investments); and the operator of the district heating system, who adapts the billing of its domestic customers according to the energy source and thus maintains its operating margin. Three parameters are modified, independently and simultaneously, to develop the various scenarios and assess the economic effects associated with the actors' choices (a schematic sketch of such a scenario sweep is given after the list of parameters). These three parameters are:
- The percentage of waste incinerated: several local policies now target waste minimisation, preferring to maximise recycling to the detriment of incineration.
- The percentage of renovated buildings on the district heating network: building standards improve efficiency and reduce energy consumption, and the renovation policy targets a blanket application of the new standards.
- The percentage of residential housing connected to the district heating network: recent national and local policy orientations have made heating networks a key policy tool for attaining energy transition objectives.
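The sketch below shows how such a three-parameter scenario sweep can be organised. The margin functions are deliberately schematic placeholders (the real techno-economic model of the network is not reproduced here) and all names and coefficients are illustrative assumptions, not values from the study.

from itertools import product

# Parameter grids (fractions): waste incinerated, buildings renovated,
# dwellings connected to the district heating network.
waste_levels      = [1.0, 0.8, 0.6]
renovation_levels = [0.0, 0.3, 0.6]
connection_levels = [0.5, 0.7, 0.9]

def heat_demand(renov, connect, base=100.0):
    """Schematic annual heat demand served by the network (GWh)."""
    return base * connect * (1.0 - 0.4 * renov)

def incinerator_margin(waste, demand):
    """Schematic margin of the incineration plant: heat sold is capped by waste heat."""
    heat_sold = min(demand, 60.0 * waste)
    return 20.0 * heat_sold - 500.0

def operator_margin(demand, waste):
    """Schematic margin of the network operator, with a gas back-up cost."""
    backup = max(0.0, demand - 60.0 * waste)     # heat not covered by waste incineration
    return 35.0 * demand - 20.0 * (demand - backup) - 55.0 * backup

for w, r, c in product(waste_levels, renovation_levels, connection_levels):
    d = heat_demand(r, c)
    print(f"waste={w:.0%} renov={r:.0%} connect={c:.0%} "
          f"incinerator={incinerator_margin(w, d):8.1f} "
          f"operator={operator_margin(d, w):8.1f}")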
Results
The fall in the volume of waste incinerated expected from waste minimisation policies may significantly affect the earnings of the partner stakeholders of a district heating network. The first concerned is the operator of the incineration plant, which sells less heat and therefore sees its earnings reduced. The plant operator must compensate by increasing prices in order to purchase an alternative energy source (gas) and fulfil its contractual obligations towards the downstream users of the district heating network. Beyond a certain threshold of waste reduction, the operator of the heating network must install a booster station, increasing its operating costs. Its margin decreases but is not erased, since it passes most of the additional cost on to its users.
Rearranging the stakeholder configuration
The resilience of the heating network needs rethinking, without compromising either the environmental or economic objectives pursued in the two policies.
In the case of a fall in the volume of waste incinerated, it seems difficult to eliminate the losses for all players, but the profitability of the system can still be maintained. In such a case, and to maintain a satisfactory level of renewable energy in the network's energy mix, the thermal renovation of buildings should be encouraged (assuming it reduces the use and cost of fossil fuels for users). However, such action does not compensate the losses of the heating network and of the incineration plant. One can, in this case, imagine a mutually beneficial compromise: encouraging thermal renovation for users while compensating the district heating operator and the incinerator for the loss of business (including downsizing the furnaces to reduce operating costs).
The reduction in waste is not necessarily negative for the system; it is possible to "downsize", through public financing, all elements of the chain. The decrease in household waste production results in lower operating costs for the local authority. These savings could be redirected to finance this downsizing, followed by a gradual decrease in user fees for household waste management. The scenarios combining variations of several parameters showed that it is possible to compensate the effects of one parameter by those of another.
The energy strategy of the urban area needs to be redefined in an integrated way: actions on consumption, production and distribution must be considered simultaneously, and one player should take on the supervisory role of planning and upholding the general interest.
CONCLUSION
The application of this methodology turns out to be effective for identifying and valuing externalities. A key point is the building of new economic models able to convert the identified economic value into financial flows (or equivalent) between stakeholders. These models will rely on a formal basis (innovative partnership, contracting) but especially on informal ties (trust between actors, organizational relevance…). The development of relevant formal and informal resources can be delegated to a trusted third-party. To help the latter achieve this complex work, operational tools still need to be developed. These may include:
• a comprehensive reference document covering externalities (literaturebased review) to assist in the complex dialogue between stakeholders on the value of externalities;
• a global life-cycle costing calculation tool including externalities valuation;
• interviews, support guides for the negotiation between stakeholders (benefiting from positive externalities) and who have the willingness to contribute to the project in order to ensure that it become viable;
• specific contracting and partnership supporting tools in the areas of building renovation and district heating.
The development of such tools is the main objective being pursued by Efficacity in the framework of this project.
- Step 2: Identification of technological and organizational solutions in energy efficiency adapted to the situation (and envisaged in the relevant EFFICACITY programmes); who do they concern and to what extent?
- Step 3: Classification and selection of a range of solutions.
Phase 2: Externalities and benefits identification, selection and monetization (stages 4-6):
- Step 4: Identification and selection of externalities related to the selected solutions.
- Step 5: Estimation of market and non-market economic values.
- Step 6: Arbitration of possible strategies to optimize resource allocation.
Phase 3: Identification of relevant partnerships/contractual relationships and business model design (stages 7-8):
- Step 7: Design of the adapted business model and sensitivity tests.
- Step 8: Evaluation of the implementation conditions of the selected strategies: economic and contractual conclusions.
Table 1. Examples of an integrated analysis of building functions and impacts (Source: Efficacity Externality reference tool and Nösperger et al., 2015)
Impact or externality category | Impact description | Stakeholder
Health | Occupant health improvement thanks to comfort improvement | Occupier; occupying company
Health | Improved health thanks to facilities (cycle boxes, shower facility, pedestrian paths…) allowing environment-friendly mobility (bicycling, walking) | Occupant
Productivity | Occupant wellness-related productivity improvement thanks to comfort improvement (temperature, natural lighting) | Occupying company
Productivity | Productivity improvement thanks to building-related positive corporate communication | Occupying company
Table 3: Financial results of the three concepts in relation to the reference (costs and market benefits). Columns: extra investment; impact on the yearly energy costs; impact on annual maintenance costs; electricity sales from solar panels.
Table 3: Financial results of the three concepts in relation to the reference (costs and market benefits)
| Gas consumption (final energy) | Electricity consumption (final energy) | Primary energy consumption | CO2 emissions
Concept 1 | -18.8 kWh/m² | +4.6 kWh/m² | -6.9 kWh/m² | -3.7 kg/m²
Concept 2 | -22.7 kWh/m² | +6.7 kWh/m² | -5.4 kWh/m² | -4.3 kg/m²
Concept 3 | -22.7 kWh/m² | +6.7 kWh/m² | -5.4 kWh/m² | -4.3 kg/m²
Table 4: Calculation of total cost (simple and extended) of the different concepts (€/m²)
| Concept 1 | Concept 2 | Concept 3
Initial investment | 58.0 | 69.0 | 72.0
Operating costs for energy and water | -10.9 | -12.4 | -12.4
Maintenance costs | 11.5 | 13.1 | 13.5
Residual value (subtracted) | -1.7 | -2.0 | -2.1
Social cost of CO2 over the period | -4.2 | -4.9 | -4.9
Extended evaluation | 8.9 | 2.9 | -4.5
03208785 | en | [
"sdv"
] | 2024/03/04 16:41:22 | 2021 | https://univ-angers.hal.science/hal-03208785/file/S0195670121001663.pdf | Laurence Pougnet
Claire V Hoffmann
Richard Pougnet
Fabien Le Ny
Léa Gaitan
Pierrick Cros
Mathilde Artus
Solène Le Gal
Gilles Nevez
email: [email protected]
Pneumocystis exhalation by infants developing Pneumocystis primary infection: putative infectious sources in hospitals and the community
Keywords: Pneumocystis jirovecii, primo-infection, airborne transmission, exhalation, Pediatrics
Introduction
Pneumocystis jirovecii is a transmissible and uncultivable fungus, specific for humans, that causes severe pneumonia [i.e. Pneumocystis pneumonia (PCP)], in immunosuppressed adults and children [START_REF] Hughes | Pneumocystis Pneumonia[END_REF]. Pneumocystis primary infection linked to first contact with the fungus occurs early in life in both immunosuppressed or immunocompetent infants, mostly those younger than 12 months [START_REF] Vargas | Search for Primary Infection by Pneumocystis carinii in a Cohort of Normal, Healthy Infants[END_REF][START_REF] Larsen | Primary Pneumocystis infection in infants hospitalized with acute respiratory tract infection[END_REF][START_REF] Nevez | Pneumocystis primary infection in infancy: Additional French data and review of the literature[END_REF]. Pneumocystis primary infection in immunocompetent infants appears to be a worldwide phenomenon [START_REF] Vargas | Search for Primary Infection by Pneumocystis carinii in a Cohort of Normal, Healthy Infants[END_REF][START_REF] Larsen | Primary Pneumocystis infection in infants hospitalized with acute respiratory tract infection[END_REF][START_REF] Nevez | Pneumocystis primary infection in infancy: Additional French data and review of the literature[END_REF], the infection being asymptomatic or symptomatic. In this latter case, it occurs contemporary to a self-limiting respiratory tract infection [START_REF] Larsen | Primary Pneumocystis infection in infants hospitalized with acute respiratory tract infection[END_REF][START_REF] Nevez | Pneumocystis primary infection in infancy: Additional French data and review of the literature[END_REF] in the course of which the fungus can be detected using PCR assays in nasopharyngeal aspirates (NPAs) [START_REF] Vargas | Search for Primary Infection by Pneumocystis carinii in a Cohort of Normal, Healthy Infants[END_REF][START_REF] Larsen | Primary Pneumocystis infection in infants hospitalized with acute respiratory tract infection[END_REF][START_REF] Nevez | Pneumocystis primary infection in infancy: Additional French data and review of the literature[END_REF].
Pneumocystis species are host specific and no environmental ecological niche for any of them has been identified so far. For these reasons, it is assumed that patients infected with P. jirovecii constitute the main part of the fungal reservoir. Indeed, the airborne interindividual transmission of Pneumocystis murina has been demonstrated in mouse models [START_REF] Dumoulin | Transmission of Pneumocystis carinii disease from immunocompetent contacts of infected hosts to susceptible hosts[END_REF] and has been strongly suggested for P. jirovecii in humans, considering the occurrence of PCP case clusters in hospitals (see review in [START_REF] Pougnet | Airborne Interindividual Transmission of Pneumocystis jirovecii[END_REF]). Moreover, P. jirovecii DNA has been detected in the air surrounding PCP patients or patients colonized by the fungus, consistent with P. jirovecii exhalation and spread from patients into their environment [START_REF] Choukri | Quantification and Spread of Pneumocystis jirovecii in the Surrounding Air of Patients with Pneumocystis Pneumonia[END_REF][START_REF] Gal | Pneumocystis jirovecii in the air surrounding patients with Pneumocystis pulmonary colonization[END_REF][START_REF] Pougnet | Pneumocystis jirovecii Exhalation in the Course of Pneumocystis Pneumonia Treatment[END_REF]. These patients represent potential infectious sources from which other susceptible individuals may acquire P. jirovecii.
Considering the very high prevalence of Pneumocystis primary infection in infancy [START_REF] Vargas | Search for Primary Infection by Pneumocystis carinii in a Cohort of Normal, Healthy Infants[END_REF][START_REF] Larsen | Primary Pneumocystis infection in infants hospitalized with acute respiratory tract infection[END_REF][START_REF] Nevez | Pneumocystis primary infection in infancy: Additional French data and review of the literature[END_REF], infants in these circumstances may represent the main part of the human reservoir of the fungus. The detection of P. jirovecii in the environment surrounding infants would strengthen their potential role as infectious sources. In this context, we performed the detection and characterization of P. jirovecii DNA in air samples from the surrounding environment of infants developing Pneumocystis primary infection.
Patients and Methods
Pneumocystis primary infection was defined as the detection of P. jirovecii in NPA samples from an infant younger than 12 months, i.e. an infant a priori immune-naïve to P. jirovecii [START_REF] Nevez | Pneumocystis primary infection in infancy: Additional French data and review of the literature[END_REF].
The detection of P. jirovecii was routinely performed in our diagnostic laboratory in NPAs contemporary to that of viruses within the framework of investigation of pulmonary symptoms or fever. This detection was done using a real-time polymerase chain reaction (qPCR) assay targeting the gene of mitochondrial large subunit ribosomal RNA (mtLSUrRNA) on 7500 Real Time PCR System (Applied Biosystems, Foster City, CA, USA) as described elsewhere [START_REF] Nevez | Pneumocystis primary infection in infancy: Additional French data and review of the literature[END_REF].
Fourteen out of 87 infants younger than 12 months (sex ratio male/female 9/5, median age 2 months 24 days [range, 1 month 0 days - 9 months 19 days]), admitted to Brest University Hospital, Brest, France, between January 23rd, 2018 and January 8th, 2020 and diagnosed with Pneumocystis primary infection, were enrolled in the study. The characteristics of the 14 infants are described in Table I. All infants were hospitalized in the same pediatric ward, in a single or a two-bed room.
Fourteen air samples were collected in the hospital rooms at a one-meter distance from the infants' heads. One cubic meter of air (airflow, 100 L per min for 10 min) was collected using the Coriolis® μ air sampler (Bertin Technologies, Montigny-le-Bretonneux, France), which collects and concentrates airborne biological particles into phosphate-buffered saline with 0.002% Tween 80. DNA extraction from the air samples was performed using the QIAamp DNA Mini Kit (Qiagen, Courtaboeuf, France) according to the recommendations of Choukri et al. [START_REF] Choukri | Quantification and Spread of Pneumocystis jirovecii in the Surrounding Air of Patients with Pneumocystis Pneumonia[END_REF]. DNA was eluted in 100 μL of elution buffer and stored at -80 °C until amplification using the same aforementioned PCR assay [START_REF] Nevez | Pneumocystis primary infection in infancy: Additional French data and review of the literature[END_REF], but with 50 amplification cycles. All air samples and negative controls were run in triplicate, whereas positive controls were run in duplicate. An internal positive control (TaqMan® Exogenous Internal Positive Control) was used to check for PCR inhibition, and negative controls were used to manage putative cross-contamination. Moreover, UDG was used to prevent putative cross-contamination. Genotyping of the P. jirovecii isolates from the nasopharyngeal aspirates and air samples was performed by examining the sequences of the mtLSUrRNA gene. Genotypes are determined by single-nucleotide polymorphisms at positions 85 and 248 [START_REF] Beard | Genetic variation in Pneumocystis carinii isolates from different geographic regions: implications for transmission[END_REF].
The study was noninterventional and therefore did not require informed consent or ethical approval, in accordance with French laws and regulations (CSP Art. L1121-1.1).
Statistical analyses were performed using Prism software (version 7.0, GraphPad Software, San Diego, CA, USA). Quantitative variables were compared using Mann-Whitney test. A two-sided P-value < 0.05 was considered significant.
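As an illustration of this kind of comparison (the study itself used Prism), the snippet below applies the same Mann-Whitney test to two hypothetical groups of Ct values; the numbers are placeholders, not the study data.

from scipy.stats import mannwhitneyu

# Hypothetical Ct values in nasopharyngeal aspirates (lower Ct = higher fungal burden).
ct_air_positive = [33.4, 36.2, 38.0]                    # infants with P. jirovecii detected in air
ct_air_negative = [34.4, 37.8, 39.2, 39.6, 40.0, 40.8]  # infants with negative air samples

stat, p = mannwhitneyu(ct_air_positive, ct_air_negative, alternative="two-sided")
print(f"U = {stat}, two-sided P = {p:.3f}")   # P < 0.05 would be considered significant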
Results and discussion
The results of P. jirovecii DNA detection in the air samples collected in the infants' rooms are presented in Table I. In the absence of inhibitors or contamination, P. jirovecii DNA was detected in three of the 14 air samples (21.4%). These three air samples were collected within one day of NPA collection from infants I4, I9, and I10. P. jirovecii genotyping was successful in only one positive air sample, collected near infant I9. The same allele 2 (A85C248) was identified in both the NPA and the air sample, demonstrating a genotype match in this pair of samples. This match is consistent with the P. jirovecii detected in the air sample having actually been exhaled by the infant.
Infant I9 was hospitalized in a single bedroom.
Only 14 of the 87 infants diagnosed with Pneumocystis primary infection during the study period were enrolled. The availability of the air sampling device and of the operator, as well as that of the infants, may also have hampered air sampling.
Although the detection of P. jirovecii in NPAs using PCR is a routine technique in our laboratory, it is performed only twice a week. Pneumocystis primary infection is a benign disease, frequently contemporaneous with a viral infection and with prompt improvement. Most infants infected with P. jirovecii in this context had already been discharged from the hospital by the time the pediatricians were informed of the positive results of P. jirovecii detection in NPAs.
Consequently, in those cases air sampling in the infants' rooms could not be done. The time interval between the date of NPA sampling and that of P. jirovecii detection in NPAs also explains the time interval between the date of NPA sampling and that of air sampling in the infants' rooms; the median of the latter was 3.5 days (range, 1-10 days). P. jirovecii DNA was detected using the real-time PCR assay in air samples from the rooms of infants I4, I9, and I10, with Ct values of 40.9, 40.6, and 42, respectively. Considering these values, the fungal burdens were probably low and below the quantification limit we previously described elsewhere [START_REF] Pougnet | Pneumocystis jirovecii Exhalation in the Course of Pneumocystis Pneumonia Treatment[END_REF]. For this reason, we did not perform quantification in the positive air samples. P. jirovecii DNA was not detected in air samples from the rooms of the remaining 11 infants (I1, I2, I3, I5, I6, I7, I8, I11, I12, I13, I14). Several non-mutually exclusive hypotheses may explain these results: i) P. jirovecii exhalation by these 11 infants was faint or insignificant due to their low P. jirovecii pulmonary burdens. In favor of this hypothesis, it is noteworthy that the median Ct value of P. jirovecii DNA amplification in NPAs from these 11 infants was 39.6, whereas it was 36.2 in the three infants from whom P. jirovecii DNA was detected in air samples (P, 0.07, Mann-Whitney test); ii) the time interval between the date of NPA sampling and that of air sampling in the infants' rooms may also be involved. It was one day for the three infants from whom P. jirovecii DNA was detected in air samples, whereas it was 4 days (median) (range, 1-10 days) (P, 0.016, Mann-Whitney test) for the 11 remaining infants from whom P. jirovecii DNA was not detected in air samples. This delay in air sampling was concomitant with the infants' improvement, P. jirovecii clearance from the lungs and, consequently, the decrease or disruption of P. jirovecii exhalation; iii) although we used the same real-time PCR assay that we previously described elsewhere [START_REF] Gal | Pneumocystis jirovecii in the air surrounding patients with Pneumocystis pulmonary colonization[END_REF], a volume of 1 cubic meter was collected per air sample (100 L per min for 10 min) instead of 1.5 cubic meters (300 L per min for 5 min), in order to reduce the sound level of the device from 75 dB to 52 dB close to the infants and to respect the criteria of a non-interventional study. Thus, the air collection of P. jirovecii organisms may have been less efficient.
A plus/minus real-time PCR assay amplifying the mtLSUrRNA gene has been routinely used for the diagnosis of P. jirovecii infection in our laboratory for many years. For this reason, the same gene was also chosen as the target of the qPCR assay for the detection of P. jirovecii in air samples. This multicopy gene provides highly sensitive amplification, which is required for the detection of the low fungal burdens in NPAs from infants developing primary infection, and in the related air samples. For the same reasons, choosing this gene for P. jirovecii genotyping based on DNA sequence analysis made it easy to compare putative genotypes in NPAs and air samples. Genotyping was successful in the air sample from infant I9 and unsuccessful in the two other air samples, from infants I4 and I10. The difference in performance between the real-time PCR assay used for P. jirovecii detection in air samples and the conventional PCR assay used for the first step of mtLSUrRNA gene sequencing may explain these unsuccessful results in this context of low fungal burdens. Additionally, some infants may have been greater or lesser spreaders, depending on whether they coughed and whether they were awake. This may explain why the fungal loads in air samples were not strictly correlated with the fungal loads in NPAs and why the detection and genotyping of P. jirovecii sometimes failed.
In the course of the present study, no unoccupied rooms were available in the pediatrics department during the winter bronchiolitis epidemic to provide negative-control air samples.
Nonetheless, it is noteworthy that 11 of the 14 air samples were negative for P. jirovecii detection. Moreover, in our previous studies taken together [START_REF] Choukri | Quantification and Spread of Pneumocystis jirovecii in the Surrounding Air of Patients with Pneumocystis Pneumonia[END_REF][START_REF] Gal | Pneumocystis jirovecii in the air surrounding patients with Pneumocystis pulmonary colonization[END_REF], 43 control air samples were negative, providing data that exclude putative environmental contamination.
To sum up, despite the small number of infants enrolled, we obtained the first data on P. jirovecii exhalation by infants with primary infection. This study provides additional data on the human reservoir of P. jirovecii. Infants may represent potentially active infectious sources of the fungus in the hospital environment, as previously shown for PCP patients. This observation may have repercussions for infection control in pediatric units monitoring infants with respiratory infections. Infants with Pneumocystis primary infection may also represent potentially active sources of infection in the community. Following these original but limited results, a new study taking into account the rapid turnover of infants hospitalized in pediatric units and including the daily use of PCR for the detection of P. jirovecii in air samples is warranted.
Table I. Pneumocystis jirovecii detection in the air surrounding infants developing Pneumocystis primary infection
Infant code (sex) | Age | Clinical presentation | Days between NPA sampling and air sampling | P. jirovecii detection in NPA (Ct) | P. jirovecii detection in air (Ct) | mtLSUrRNA genotype identification in air sample
I1 (M) | 2 m, 12 days | Bronchiolitis* and pyelonephritis (E. coli) | 5 | 37.8 | Negative | -
I2 (F) | 8 m, 24 days | Rhinitis* | 3 | 40.8 | Negative | -
I3 (M) | 3 m | Bronchiolitis (rhinovirus/enterovirus, Haemophilus influenzae concurrent infection) | 3 | 34.9 | Negative | -
I4 (F) | 1 m, 23 days | Cow's milk protein allergy** | 1 | 33.4 | Positive (40.9) | Unsuccessful amplification
I5 (M) | 7 m, 12 days | Acute obstructive respiratory deficiency due to rhinopharyngitis, Pierre Robin syndrome** | 10 | 40 | Negative | -
I6 (M) | 8 m, 20 days | Bronchiolitis, asthma, West syndrome, brain malformation** | 9 | 34.4 | Negative | -
I7 (F) | 1 m | Bronchiolitis (parainfluenza virus type III) | 4 | 39.2 | Negative | -
Funding source
This study was supported in part by the European Union (grant number, ERANet-LAC HID-0254). |
03217164 | en | [
"sdv.spee"
] | 2024/03/04 16:41:22 | 2021 | https://unilim.hal.science/hal-03217164/file/S0035378721005051.pdf | Philippe Couratier
email: [email protected]
Géraldine Lautrette
Jaime Andres Luna
Philippe Corcia
Phenotypic variability in Amyotrophic Lateral Sclerosis
Keywords: Amyotrophic Lateral Sclerosis, phenotype, upper motor neuron, lower motor neuron, Transactivation Response DNA Binding Protein
Clinically, ALS phenotypes depend on the areas of the body that are affected, the different degrees of involvement of upper and lower motor neurons, the degrees of involvement of other systems, particularly cognition and behavior, and rates of progression.
Phenotypic variability is characteristic of ALS and can be described both in terms of the distribution of motor manifestations and in terms of the extra-motor signs variably present in ALS patients. Neuropathologically, ALS is defined by the loss of UMNs and LMNs and by the presence of two representative motor neuronal cytoplasmic inclusions, Bunina bodies and 43 kDa Transactivation Response DNA Binding Protein (TDP-43)-positive cytoplasmic inclusions. The distribution of cytopathology and neuronal loss varies between patients, and this variability is directly related to phenotypic variability. The key regulators of phenotypic variability in ALS have not been determined; the functional decrement of TDP-43 and region-specific neuronal susceptibility to ALS may be involved. Owing to the selective vulnerability of different neuronal systems, lesions are multicentric, region-oriented and progress at different rates. They may vary from patient to patient, which may underlie the clinicopathological variability across patients.
Introduction
The concept of amyotrophic lateral sclerosis (ALS) was first established by Jean-Martin Charcot of the Salpêtrière Hospital. The name ALS was coined speculatively by Charcot, who assumed that degeneration of the lateral columns of the spinal cord was the primary lesion and that the amyotrophy associated with anterior horn motor neuron degeneration was secondary to it [START_REF] Charcot | Amyotrophies spinales deuteropathiques sclérose latérale amyotrophique[END_REF]. Regardless of this hypothesis, the name "ALS", only three words implying that amyotrophy and lateral spinal cord degeneration coexist, was accepted and persists to this day [START_REF] Goetz | Amyotrophic lateral sclerosis: Early contributions of Jean-Martin Charcot[END_REF]. The main defining characteristic of ALS is progressive weakness due to neurodegeneration of the upper motor neuron (UMN) and lower motor neuron (LMN). Clinically, ALS is defined by a history of weakness progressing over time and space, and by an examination showing evidence of upper and lower motor neuron dysfunction in one or more areas of the body. Neuropathologically, ALS is defined by the loss of UMNs and LMNs and by the presence of two representative motor neuronal cytoplasmic inclusions, Bunina bodies and 43 kDa Transactivation Response DNA Binding Protein (TDP-43)-positive cytoplasmic inclusions [START_REF] Okamoto | Intracytoplasmic inclusions (Bunina bodies) in amyotrophic lateral sclerosis[END_REF][START_REF] Neumann | Ubiquitinated TDP-43 in frontotemporal lobar degeneration and amyotrophic lateral sclerosis[END_REF]. The distribution of cytopathology and neuronal loss in patients is variable, and this variability is directly related to phenotypic variability. Clinically, ALS phenotypes depend on the areas of the body that are affected, the relative degrees of involvement of UMNs and LMNs, the degree of involvement of other systems, particularly cognition and behavior, and the rate of progression. Phenotypic variability is characteristic of ALS and can be described in terms of the distribution of motor manifestations and of the extra-motor signs variably present in ALS patients (Table 1). In this article, the clinical and neuropathological archetypes of ALS are reviewed.
Clinical phenotypes of ALS
Based on body region of involvement
The archetypic form of ALS: spinal form
The major type of ALS has a spinal onset, known as Charcot's type. This type begins with asymmetric weakness in a limb around the age of 60 years. Most patients demonstrate a gradual worsening of weakness within a year, which progresses to a contralateral limb or to other spinal and/or bulbar areas. On neurological examination, muscle atrophy with fasciculations and hyperreflexia of limbs are detected. Babinski's sign is occasionally positive.
Typically, hyperreflexia is more evident in the lower limbs than in the upper limbs. As muscle atrophy progresses, the pyramidal signs are less evident on clinical examination. Some young patients may develop a predominant upper motor neuron form with spasticity. Patients may face a life-threatening condition as respiratory muscle weakness progresses and mean survival time has been reported to be approximately 3 years [START_REF] Oskarsson | Amyotrophic lateral sclerosis: An update for 2018[END_REF].
Flail arm phenotype
The term flail arm syndrome (FAS) is most common, but it is also referred to as Vulpian-Bernhardt's type, neurogenic man-in-a-barrel syndrome, or scapulohumeral form of ALS. Patients with FAS present with predominantly proximal, progressive and symmetric wasting and paresis of the upper limb muscles, while lower limbs and bulbar muscles are spared [START_REF] Couratier | Clinical features of flail arm syndrome[END_REF]. Upper motor neuron signs in the legs are occasionally present [START_REF] Hu | Flail arm syndrome: A distinctive variant of amyotrophic lateral sclerosis[END_REF]. Wijesekera et al. defined FAS as weakness of both upper limbs without bulbar and lower-limb symptoms for a period of at least 12 months from disease onset [START_REF] Wijesekera | Natural history and clinical features of the flail arm and flail leg ALS variants[END_REF]. Chio et al. demonstrated on a population-based study that FAS was relative rare and more common in men (incidence rates, 0.28 in men and 0.07 in women), with a men to women rate ratio of 4:1 and mean age at onset was 62.6 years (SD 11.8). In this phenotype, Frontotemporal-Dementia (FTD) is very rare (1.4%). Flail arm phenotype is relatively benign, with a median survival time of 4.0 years and a 10-year survival rate of 17% [START_REF] Chio | Phenotypic heterogeneity of amyotrophic lateral sclerosis: a population based study[END_REF].
Flail leg phenotype
Some ALS patients have weakness confined to the lumbosacral spinal cord region. This subtype is known as flail leg syndrome, Marie-Patrikios' type, or a peroneal form of ALS [START_REF] Wijesekera | Natural history and clinical features of the flail arm and flail leg ALS variants[END_REF]. Flail leg accounts for between 2.5-6.3% of motor neuron disease, and has a similar mean age of symptom onset to classic ALS. This phenotype has a similar incidence in the two genders. Onset is asymmetric in about half of patients, but typically progresses to include both lower extremities, and muscle stretch reflexes are absent or diminished. The definitions for Flail leg differ by case series but common features include insidious onset of weakness isolated to the legs, decreased or absent reflexes at presentation and symptoms confined to one spinal region for 12-24 months [START_REF] Dimachkie | Leg amyotrophic diplegia: prevalence and pattern of weakness at US neuromuscular centers[END_REF]. About half of patients show an initial pattern of weakness described as pelviperoneal, with sparing of the quadriceps and ankle plantarflexors.
The remaining patients show either diffuse weakness, or a distal pattern of weakness. The median survival time (3.0 years) and the 10-year survival rate (13%) of this phenotype are similar to those of classic ALS. FTD is present in 4% of patients with this phenotype.
Bulbar form : progressive bulbar palsy (PBP)
This phenotype is characterized by the onset of dysarthria or dysphagia with bulbar muscle atrophy. The first report of the bulbar type of ALS was likely carried out by Gombault, who was a student of Charcot [START_REF] Gombault | Sclérose symétrique des cordons latéraux de la moelle et des pyramides antérieurses dans le bulbe[END_REF]. Bulbar phenotype accounts for 20% of patients with ALS [START_REF] Winnen | The phenotypic variability of amyotrophic lateral sclerosis[END_REF]. Some patients have isolated bulbar palsy (IBP) marked by speech abnormalities or dysphagia but lack bulbar muscle atrophy or fasciculation. This condition should not be diagnosed as PBP; instead, it should be considered pseudobulbar palsy in the strictest sense. IBP patients are predominantly female and are characterized by the presence of UMN bulbar symptoms such as spastic dysarthria and emotional lability. In contrast, more typical bulbar ALS patients exhibit mixed or flaccid dysarthria, more prominent tongue fasciculations and more prominent limb involvement. Although progressive tongue atrophy and fasciculations are relatively specific to ALS, they must be distinguished from Kennedy's disease [START_REF] Garg | Differentiating lower motor neuron syndromes[END_REF]. There is an increase in frequency of bulbar phenotype with increasing age [START_REF] Chio | Phenotypic heterogeneity of amyotrophic lateral sclerosis: a population based study[END_REF]. The bulbar type of ALS with a mean survival of 2 years has poorer prognosis compared to that of the limb onset. This is because the patients are more likely to develop aspiration pneumonia and malnutrition due to dysphagia [START_REF] Chio | Phenotypic heterogeneity of amyotrophic lateral sclerosis: a population based study[END_REF].
Respiratory phenotype.
This is the rarest phenotype (annual incidence rate: men, 0.06/100,000; women, 0.01/100,000). Its median survival time is 1.4 years, and no patient with this phenotype survives up to 10 years [START_REF] Chio | Phenotypic heterogeneity of amyotrophic lateral sclerosis: a population based study[END_REF].
Based on Level of involvement of LMN and UMN
Progressive muscular atrophy (PMA)
PMA is a rare, sporadic, adult-onset, clinically isolated LMN syndrome due to the degeneration of LMN, including anterior horn cells and brainstem motor nuclei. It is clinically characterized by progressive flaccid weakness, muscle atrophy, fasciculations, and reduced or absent tendon reflexes. The term PMA was first coined by the French neurologists Aran and Duchenne in 1850 to describe patients with progressive muscle atrophy. In 1853, Cruveilhier provided the first evidence of PMA being a neurogenic disorder based on the atrophy of the ventral spinal roots and the motor nerves found on autopsy of Aran's patients [START_REF] Aran | Recherches sur une maladie non encore décrite du système musculaire: Atrophie musculaire progressive[END_REF][START_REF] Cruveilhier | Sur la paralysie musculaire progressive amyotrophique[END_REF]. It has been recognized that a substantial number of patients with the initial diagnosis of PMA progress to a diagnosis of ALS through the development of UMN signs or may have UMN pathology at autopsy despite the absence of clinical UMN features during their lifetime [START_REF] Ince | Corticospinal tract degeneration in the progressive muscular atrophy variant of ALS[END_REF].
These observations support the notion of PMA belonging to an ALS spectrum rather than being a unique variant of MNDs. However, there remains a significant proportion of patients with PMA who have no clinical or subclinical evidence of UMN dysfunction, supporting the existence of PMA as a separate entity. At present, the term PMA is reserved for sporadic patients with MND with pure LMN findings on examination, who may or may not later develop clinically defined UMN features. Patients who subsequently develop UMN signs are reclassified as having ALS. Patients with PMA have LMN features, namely, progressive flaccid weakness, muscle atrophy, fasciculations, and hyporeflexia or areflexia. Weakness and atrophy typically start in distal limb muscles in an asymmetric manner, following a neuropathy pattern, and then spread over months and years. There is a mean delay of approximately 23 months between onset and diagnosis. Symmetric proximal limb weakness (myopathy pattern) occurs in only 20% of patients. Bulbar muscles are generally spared at onset but may be involved in up to 40% of patients within a median of 19 months from onset of limb weakness. Patients with bulbar involvement are more likely to progress to ALS or run a relentless course, as seen in ALS. It is uncommon for respiratory muscles to be involved at the onset of PMA [START_REF] Kim | Study of 962 patients indicates progressive muscular atrophy is a form of ALS[END_REF]. The rate of progression in patients with PMA varies from slow (over years and decades) to very rapid (months to a year). The median survival duration after onset in patients with PMA is about 12 months longer than in patients with ALS (48.3 vs 36 months) [START_REF] Visser | Disease course and prognostic factors of progressive muscular atrophy[END_REF].
Primary lateral sclerosis
In 1865, Charcot reported on a patient with isolated degeneration of the spinal lateral cord who clinically manifested severe spastic tetraparesis [START_REF] Charcot | Sclérose des cordons latéraux de la moelle épiniére chez une femme hystérique atteinte de contracture permanente des quatre membres[END_REF]. Erb described the clinical and pathological features, naming this condition PLS [START_REF] Erb | Uber einen wenig bekannten spinalen Symptomencomplex[END_REF]. Patients with PLS present with upper motor neuron signs only. Therefore, they do not meet the clinical criteria of Awaji [START_REF] Boekestein | Sensitivity and specificity of the 'Awaji' electrodiagnostic criteria for amyotrophic lateral sclerosis: Retrospective comparison of the Awaji and revised El Escorial criteria for ALS[END_REF]. Many patients with PLS develop lower motor neuron signs within 4 years of onset [START_REF] Finegan | Primary lateral sclerosis: A distinct entity or part of the ALS spectrum? Amyotroph Lateral[END_REF][START_REF] Tartaglia | Differentiation between primary lateral sclerosis and amyotrophic lateral sclerosis: Examination of symptoms and signs at disease onset and during follow-up[END_REF][START_REF] Gordon | The natural history of primary lateral sclerosis[END_REF] and some patients manifest with frontotemporal dementia [START_REF] Kosaka | Primary lateral sclerosis: Upper-motor-predominant amyotrophic lateral sclerosis with frontotemporal lobar degeneration-immunohistochemical and biochemical analyses of TDP-43[END_REF]. PLS constitutes approximately 1-5% of all cases of ALS. Mean survival time is reported to be approximately 8 years [START_REF] Finegan | Primary lateral sclerosis: A distinct entity or part of the ALS spectrum? Amyotroph Lateral[END_REF]. The differences and similarities between PLS and ALS are still not well understood, but recent reports suggest that the majority of clinical features of PLS resemble those of ALS [START_REF] Wais | The concept and diagnostic criteria of primary lateral sclerosis[END_REF]. Thus, even if PLS is regarded as a subtype of ALS, examination by needle electromyography is required, at least during the first four years of follow-up, to assess LMN involvement. New criteria were established in 2020 [START_REF] Turner | Primary lateral sclerosis: consensus diagnostic criteria[END_REF].
Hemiplegic form
Some patients manifest with unilateral upper motor neuron involvement. This extremely rare clinical syndrome of UMN-predominant (1% of all patients with ALS), progressive hemiparesis was first described by American neurologist Charles Karsner Mills [START_REF] Mills | A case of unilateral progressive ascending paral-ysis, probably representing a new form of degenerative dis-ease[END_REF]. This form slowly develops ipsilateral involvement and spreads to eventually involve the contralateral side. After a variable period, lower motor neuron signs will gradually be evident, suggestive of ALS [START_REF] Jaiser | Mills' syndrome revisited[END_REF]. Some patients with this variant could develop frontotemporal degeneration [START_REF] Doran | Mills syndrome with dementia: Broadening the phenotype of FTD/MND[END_REF]. Estimated survival periods are variable, but are usually longer than 10 years [START_REF] Jaiser | Mills' syndrome revisited[END_REF].
Based on involvement of non-motor regions
ALS and frontotemporal dementia (FTD) spectrum
The overlap of FTD and ALS has been well documented in FTD patients with comorbid motor neuron degeneration and in ALS patients with frontotemporal dysfunction [START_REF] Kiernan | Frontal lobe atrophy in motor neuron diseases[END_REF][START_REF] Abe | Cognitive function in amyotrophic lateral sclerosis[END_REF][START_REF] Lomen-Hoerth | The overlap of amyotrophic lateral sclerosis and frontotemporal dementia[END_REF][START_REF] Lomen-Hoerth | Are amyotrophic lateral sclerosis patients cognitively normal?[END_REF]. Cognitive impairment has been reported in approximately 50% of patients with ALS [START_REF] Ringholz | Prevalence and patterns of cognitive impairment in sporadic ALS[END_REF]. Of these patients, 10-20% manifest with FTD [START_REF] Rippon | An observational study of cognitive impairment in amyotrophic lateral sclerosis[END_REF][START_REF] Phukan | Cognitive impairment in amyotrophic lateral sclerosis[END_REF]. Cognitive decline in ALS is variable and is characterized by personality changes, irritability, social misconduct, executive dysfunction, language problems, and memory impairments [START_REF] Beeldman | The cognitive profile of ALS: A systematic review and meta-analysis update[END_REF]. Behavioral or cognitive abnormalities may be subtle, leading to the identification of behaviorally impaired (ALS-bvi) and cognitively impaired (ALS-ci) patients, as compared with patients without deficits, according to the revised criteria for the diagnosis of frontotemporal cognitive and behavioural syndromes in ALS [START_REF] Strong | Amyotrophic lateral sclerosisfrontotemporal spectrum disorder (ALS-FTSD): Revised diagnostic criteria[END_REF]. The Edinburgh team proposed a new scale, the ECAS, which avoids the potential interaction of motor deficits with the cognitive-behavioural assessment [START_REF] Abrahams | Screening for cognition and behaviour changes in ALS Amyotrophic Lateral Sclerosis and Frontotemporal[END_REF]. In ALS-FTLD patients, frontotemporal atrophy may be detected by magnetic resonance imaging of the brain, together with decreased cerebral blood flow on single photon emission computed tomography [START_REF] Waldemar | Focal reductions of cerebral blood flow in amyotrophic lateral sclerosis: A [99mTc]-d,l-HMPAO SPECT study[END_REF]. Neuropathologically, spreading TDP-43 pathology and neuronal loss involve the frontotemporal cortices and subcortical regions, a pattern known as FTLD-TDP.
ALS and parkinsonism
In the Western Pacific area (West New Guinea, the Mariana Islands and the Kii peninsula), several endemic foci with a high incidence of ALS and parkinsonism-dementia complex (PDC) have been reported. Epidemiological investigations begun in 1960 have revealed the incidence of ALS in West New Guinea to be surprisingly high, at approximately 1.4%. This incidence was ten times higher than that in Guam or Kii and was estimated to be 100 times higher than that in the continental United States [START_REF] Garruto | Amyotrophic lateral sclerosis in the Mariana Islands[END_REF]. The clinical phenotype of ALS in ALS/PDC resembles that of sporadic ALS. Phenotypic variations exist, such as PBP or PMA types. ALS/PDC has a similar neuropathology to that of sporadic ALS, characterized by the presence of Bunina bodies and TDP-43-positive cytoplasmic inclusions. The key clinical features of PDC are a lack of spontaneity and a decrease in verbal frequency. Akinesia and rigidity are typically observed as parkinsonian signs in patients with PDC. The neuropathology of PDC is characterized by frontotemporal atrophy and the presence of neurofibrillary tangles (NFTs) positive for both 3- and 4-repeat tau, with an absence of senile plaques [START_REF] Itoh | Biochemical and ultrastructural study of neurofibrillary tangles in amyotrophic lateral sclerosis/parkinsonism-dementia complex in the Kii peninsula of Japan[END_REF]. Interestingly, abundant NFTs were observed not only in Chamorro patients with PDC but also in healthy Chamorros. In contrast, the number of NFTs in the brains of Kii ALS/PDC patients varied, whereas, in the brains of non-ALS/PDC patients from the village, the number of NFTs was similar to that in Japanese controls from other areas. A previous study revealed that the coexistence of TDP-43, tau, and α-synuclein is a key feature of Western Pacific ALS/PDC [START_REF] Mimuro | Amyotrophic lateral sclerosis and parkinsonism-dementia complex of the Hohara focus of the Kii Peninsula: A multiple proteinopathy?[END_REF].
Besides these endemic focal cases, an association between sporadic and familial Parkinson's disease (PD) and ALS, i.e., Brait-Fahn-Schwartz disease, has been proposed as a syndrome characterized by the co-presence of these two disorders without dementia or dysautonomia.
Parkinsonian symptoms and signs have been described in cross-sectional studies in ALS patients, with frequencies from 5 to 17% [START_REF] Pradat | Extrapyramidal stifness in patients with amyotrophic lateral sclerosis[END_REF][START_REF] Pupillo | Extrapyramidal and cognitive signs in amyotrophic lateral sclerosis: a population based cross-sectional study[END_REF]. Backward falls, impaired postural reflexes, retropulsion, bradykinesia, and decreased arm swing have been reported in early-stage ALS and are often linked to basal ganglia alterations [START_REF] Desai | Extrapyramidal involvement in amyotrophic lateral sclerosis: backward falls and retropulsion[END_REF]. A recent paper confirmed impaired gait initiation in ALS patients with postural instability [START_REF] Feron | Extrapyramidal deficits in ALS: a combined biomechanical and neuroimaging study[END_REF]. On the neuropathologic level, the demonstration of a pallido-nigro-luysian (PNL) degeneration with TDP-43 pathology remains rare. Some patients manifest with parkinsonism [START_REF] Gray | Luyso-pallido-nigral atrophy and amyotrophic lateral sclerosis[END_REF][START_REF] Bergmann | Motor neuron disease with pallido-luysio-nigral atrophy[END_REF] whereas others do not [START_REF] Miki | Sporadic amyotrophic lateral sclerosis with pallidonigro-luysian degeneration: A TDP-43 immunohistochemical study[END_REF], and parkinsonism in patients with PNL involvement may remain undetected because it is masked by motor paralysis. A recent case report described increased glial TDP-43 pathology in the PNL system and brainstem motor neurons in facial-onset sensory motor neuropathy, suggesting that the molecular pathogenesis of the PNL-involved type differs from that of the sporadic ALS/FTLD spectrum [START_REF] Rossor | TDP43 pathology in the brain, spinal cord, and dorsal root ganglia of a patient with FOSMN[END_REF]. Nigral degeneration in patients with ALS had been recognized as a rare condition [START_REF] Bonduelle | Amyotrophic lateral sclerosis[END_REF]. In 1993, Kato et al. performed a retrospective clinicopathological analysis of 15 patients with sporadic ALS, finding a decreased number of nigral melanin-containing neurons in seven patients (46.7%) and supranuclear ophthalmoparesis in four patients [START_REF] Kato | Diminution of dopaminergic neurons in the substantia nigra of sporadic amyotrophic lateral sclerosis[END_REF]. Nishihira et al. reported TDP-43 pathology in the substantia nigra of 16 out of 35 patients (45.7%) with ALS, in addition to nigral neuronal loss in 12 out of 35 patients (34.3%) [START_REF] Nishihira | Sporadic amyotrophic lateral sclerosis: Two pathological patterns shown by analysis of distribution of TDP-43-immunoreactive neuronal and glial cytolasmic inclusions[END_REF]. Nigral degeneration may be related to cognitive dysfunction, with concomitant degeneration of the frontotemporal cortices, hippocampus, and striatum. Further investigation should be performed to clarify the clinicopathological relationship of nigral degeneration in patients with ALS.
ALS with cerebellar degeneration
Cerebellar ataxia is a very rare symptom in patients with ALS. Ataxia can occur in some patients with a C9orf72 mutation. Polyglutamine (polyQ) expansion in ataxin-2 (ATXN2) is known to cause spinocerebellar ataxia 2 (SCA2). In 2010, intermediate-length polyQ expansions (27-33 Qs) were identified as a potent risk factor for ALS [START_REF] Elden | Ataxin-2 intermediate-length polyglutamine expansions are associated with increased risk for ALS[END_REF]. Both phenotypes, SCA and ALS, may coexist in the same family [START_REF] Bäumer | FTLD-ALS of TDP-43 type and SCA2 in a family with a full ataxin-2 polyglutamine expansion[END_REF]. A case report demonstrated that the initial manifestation was SCA, followed by motor neuron signs 16 years later [START_REF] Nanetti | Rare association of motor neuron disease and spinocerebellar ataxia type 2 (SCA2): A new case and review of the literature[END_REF]. Motor neurons in patients with ALS who carry ATXN2 expansions (27-33 Qs) may contain filamentous inclusions, such as skein-like inclusions, while those in ALS without ATXN2 expansion (22-24 Qs) may contain large rounded inclusions [START_REF] Hart | Distinct TDP-43 pathology in ALS patients with ataxin 2 intermediate-length polyQ expansions[END_REF].
ALS with vacuolar degeneration of cerebral white matter
In the majority of patients with ALS, ocular movement is preserved [START_REF] Leveille | Eye movements in amyotrophic lateral sclerosis[END_REF]. However, Takeda et al. previously reported that approximately 10% of all ALS patients with subcortical white matter vacuolation presented with vertical ophthalmoparesis and suggested that the symptoms could be related to impairments in the upper motor neuron system innervating nuclei of external ocular muscles [START_REF] Takeda | Supranuclear ophthalmoparesis and vacuolar degeneration of the cerebral white matter in amyotrophic lateral sclerosis: A clinicopathological study[END_REF].
ALS with autonomic dysfunction
Autonomic dysfunction in ALS is a rare symptom, at least in the early stage of illness [START_REF] Mannen | Preservation of a certain motoneurone group of the sacral cord in amyotrophic lateral sclerosis: Its clinical significance[END_REF]. However, the literature has increasingly revealed urinary impairments in patients with ALS. Arlandis et al. reported that approximately 26% of patients with ALS demonstrated symptomatic urinary urgency [START_REF] Arlandis | Urodynamic findings in amyotrophic lateral sclerosis patients with lower urinary tract symptoms: Results from a pilot study[END_REF]. With disease progression, Onuf's nucleus may be involved, and the number of its neurons decreases by almost half in the advanced phase of illness [START_REF] Takeda | Dendritic retraction, but not atrophy, is consistent in amyotrophic lateral sclerosis-comparison between Onuf's neurons and other sacral motor neurons[END_REF]. Thus, patients with ALS may manifest urinary impairments more frequently than previously thought. A subset of patients treated with tracheostomy-positive pressure ventilation (TPPV) may develop abnormal ocular movements and autonomic impairments [START_REF] Atsuta | Age at onset influences on wide-ranged clinical features of sporadic amyotrophic lateral sclerosis[END_REF].
ALS with sensory symptoms
In conjunction with progressive damage of the corticospinal tract (CST), autopsy cases and animal studies have shown involvement of sensory pathways, further confirmed by morphometric measures in the somatosensory cortex as well as by functional and structural imaging of the white matter. Moreover, electrophysiological measurements in ALS patients have shown sensory symptoms and lower compound action potential amplitudes of the sural nerve [START_REF] Hammad | Clinical electrophysiologic and pathologic evidence for sensory abnormalities in ALS[END_REF].
Factors that influence phenotypic variability
Lessons from neuropathology: the role of TDP-43 inclusions
Upper motor neuron degeneration is based on histological demonstration of neuronophagia of Betz cells and myelin pallor of the pyramidal tract. Historically it was proposed that pyramidal degeneration progresses from its distal site, associated with the dying back phenomenon [START_REF] Ince | Motor neuron disorders[END_REF]. In fact, the site of origin is increasingly recognized as being corticofugal, which is a dying-forward process primarily starting in the corticomotoneuronal system.
The dying-forward hypothesis proposes that glutamate excitotoxicity at the level of the cortical motor neuron ultimately results in an anterior horn cell metabolic deficit.
The involvement of lower motor neurons is characterized by neuronal loss and dendritic retraction of spinal and brainstem motor neurons. The cytopathology common to patients with ALS consists of the formation of inclusions in motor neurons, including Bunina bodies and TDP-43-positive cytoplasmic inclusions. Bunina bodies, first reported in 1962 in patients with familial ALS, are immunoreactive to anti-cystatin C [START_REF] Okamoto | Bunina bodies in amyotrophic lateral sclerosis immunostained with rabbit anti-cystatin C serum[END_REF] and anti-transferrin antibodies [START_REF] Mizuno | Transferrin localizes in Bunina bodies in amyotrophic lateral sclerosis[END_REF].
The origin of Bunina bodies is not fully known, but it is suggested that they derive from lysosomes or the endoplasmic reticulum [START_REF] Sasaki | Ultrastructural study of Bunina bodies in the anterior horn neurons of patients with amyotrophic lateral sclerosis[END_REF]. TDP-43-positive intracytoplasmic inclusions in spinal and brainstem motor neurons appear in various forms. They can be classified into three categories: dot-like inclusions, skein-like inclusions, and round inclusions. These inclusions were identified in 1988 as ubiquitin-positive cytoplasmic inclusions [START_REF] Chou | Motoneuron inclusions in ALS are heavily ubiquitinated[END_REF]. The first identification of TDP-43 as a major component of ubiquitin-positive inclusions was reported in 2006 by Neumann et al. and Arai et al. [START_REF] Arai | TDP-43 is a component of ubiquitin-positive tau-negative inclusions in frontotemporal lobar degeneration and amyotrophic lateral sclerosis[END_REF]. The most prominent feature of TDP-43 cytopathology is the mislocalization of TDP-43, characterized by loss of native TDP-43 in the nucleus and abnormal aggregation of TDP-43 in the cytoplasm. TDP-43 in cerebral neurons is deposited in a morphologically different manner from that in spinal motor neurons. In the cerebrum, many cytoplasmic inclusions positive for TDP-43 are present in small neurons.
They are deposited in rounded, perinuclear, or curved shapes. The cerebral pathology of ALS resembles that of FTLD with motor neuron disease (FTLD-MND), with the majority of abnormal TDP-43 aggregation present in the neuronal cytoplasm and minimal aggregation in neurites and nuclei [START_REF] Mackenzie | A harmonized classification system for FTLD-TDP pathology[END_REF]. FTLD with mutations in progranulin (FTLD-GRN) is characterized by short neuritic deposition of TDP-43 and cytoplasmic aggregation of TDP-43. In C9orf72-mutated FTLD-MND patients, type B TDP-43 pathology is mostly found, but type A has also been reported [START_REF] Mackenzie | Reappraisal of TDP-43 pathology in FTLD-U subtypes[END_REF]. Although a genetic mutation is not the only factor determining the topography of TDP-43 deposition, knowing the type is sometimes useful for understanding the relationship between genetic factors and clinical conditions in patients with ALS and FTLD. However, some patients cannot be classified into any subtype. It remains unknown how subcortical TDP-43 pathologies differ from one another [START_REF] Nishihira | Revisiting the utility of TDP-43 immunoreactive (TDP-43-ir) pathology to classify FTLD-TDP subtypes[END_REF]. Further analyses will be needed to identify the factors determining the morphology of TDP-43 aggregation.
The regional discrepancy in relationship of TDP-43 accumulation and neuronal loss
The relationship between inclusion cytopathology and neuronal loss is inconsistent across the involved areas. In the spinal anterior horn or hypoglossal nucleus, the prevalence of TDP-43 pathology and neuronal loss may be almost equivalent. The second most representative regions in which the prevalence of TDP-43 aggregation and neuronal loss are equivalent are the amygdala, orbitofrontal cortex, entorhinal cortex, and frontotemporal cortices. In contrast, there are areas in which the degree of neuronal loss is very mild despite the frequent detection of TDP-43 aggregation. These include granular cells of the dentate gyrus, inferior olivary nucleus, red nucleus, and brainstem reticular formation. The neurons least prone to TDP-43 aggregation relative to neuronal loss are the pyramidal neurons (Betz cells). It has been reported that despite the evident loss of native TDP-43 from the nucleus, abnormal aggregation in the cytoplasm is relatively rare in these neurons [START_REF] Braak | Pathological TDP-43 changes in Betz cells differ from those in bulbar and spinal α-motoneurons in sporadic amyotrophic lateral sclerosis[END_REF]. The inconsistent relationship between TDP-43 pathology and neuronal loss obscures how TDP-43 contributes to neuronal degeneration and loss. These discrepancies suggest that neuronal loss is more plausibly linked to a loss of the normal function of TDP-43 than to a toxic gain of function, and that susceptibility to TDP-43 dysfunction differs among neuronal systems [START_REF] Nana | Neurons selectively targeted in frontotemporal dementia reveal early stage TDP-43 pathobiology[END_REF].
Variability of disease progression in ALS
The selective vulnerability of distinct regions of the brain and spinal cord is a critical factor. The sites where ALS lesions begin and how they spread into the central nervous system have not yet been fully elucidated. Lesions may occur in one or more of susceptible areas and progress locally at various rates in each region [START_REF] Sekiguchi | Spreading of amyotrophic lateral sclerosis lesions-multifocal hits and local propagation?[END_REF]. So, a different histological susceptibility, resulting in abnormal TDP-43 function among neuronal groups, or a different cellular susceptibility associated with cellular characteristics, may at least partly explain patterns of onset and progression. A systemic TDP-43 dysfunction but also region-specific susceptibility of the motor neurons and frontotemporal cortices could contribute to lesion formation and its progression [START_REF] Ragagnin | Motor neuron susceptibility in ALS/FTD[END_REF]. The focal susceptibility related to onset may not be random, as it is determined by the age at onset and gender: bulbar onset ALS has been shown to increase with advancing years and flail arm syndrome has been shown to affect mostly men [START_REF] Chio | Phenotypic heterogeneity of amyotrophic lateral sclerosis: a population based study[END_REF]. Thus, ALS lesions seem to be multicentric, region-oriented, and have different rates of progression and phenotypic variability is based on these underlying pathological features.
Role of genetic factors
Sporadic ALS (SALS), which accounts for 90% of cases, and familial ALS (FALS), which accounts for 10%, are essentially indistinguishable from each other by phenotype. In populations of European origin, the four major ALS-causing genes are C9orf72, SOD1, TARDBP and FUS, and together they account for two thirds of FALS [START_REF] Millecamps | TARDBP, and FUS mutations in familial amyotrophic lateral sclerosis: genotype-phenotype correlations[END_REF][START_REF] Millecamps | Phenotype difference between ALS patients with expanded repeats in C9ORF72 and patients with mutations in other ALS-related genes[END_REF]. For these genes it is possible to establish genotype-phenotype correlations. C9orf72 is the major genetic cause of disease over 40 years of age, whereas FUS is the major cause of juvenile ALS and of disease occurring in younger adults.
Trio analyses confirm that de novo occurrence is preponderant in FUS-linked SALS. For all major ALS-causing genes, penetrance is incomplete. Clinically, ALS patients with SOD1 mutations frequently present with lower-limb onset and predominant lower motor neuron signs, and frontal cognitive dysfunction is very rarely associated. Disease progression appears to be bimodal in SOD1-ALS patients, with patients showing either a rather rapid (<3 years, fast progressors) or a rather slow (>7 years, slow progressors) disease course. Some SOD1 mutations, such as the G41S mutation, are associated with a particularly short survival of around one year following the diagnosis; conversely, the N139D mutation leads to a long disease duration of more than 10 years. Moreover, the disease course can also differ among members of the same family carrying the same mutation, as with D83G, for which the disease course ranges from several months to more than 10 years. In TARDBP-mutated patients, disease onset appears to be predominantly in the upper limbs, frequently associated with marked bulbar dysfunction and with FTD. The FUS-related ALS phenotype is severe, with early onset and fast disease progression leading to shorter survival. C9orf72-linked patients more frequently have bulbar onset, the disease starts later in life than with FUS or SOD1 mutations, and it is more frequently associated with FTD. Prognosis in terms of survival appears to be poorer than in SOD1 or TARDBP mutation carriers.
Conclusion
The highly distinctive molecular neuropathological subtypes of ALS do not completely correlate with the various clinical phenotypes. This point is crucial as it raises the following question: is ALS a single disease with common biological mechanisms or is it several diseases with different mechanisms? Advances in genetics have shown that different genetic mutations can induce common phenotypes. This suggests that ALS is more like a syndrome. Key regulators of phenotypic variability in ALS have not been determined. The functional decrement of TDP-43 and region-specific neuronal susceptibility to ALS may be involved. Due to the selective vulnerability among different neuronal systems, lesions are multicentric, region-oriented, and progress at different rates. They may vary from patient to patient, which may be linked to the clinicopathological variability across patients.
Table 1: Motor and extra-motor variability in ALS

Motor variability
- Bulbar onset: bulbar form (20%)
- Spinal onset: spinal form (60%)
- Lower motor neuron dominant: progressive muscular atrophy (≈5-10%), flail arm syndrome (≈10%), flail leg syndrome (≈5%)
- Upper motor neuron dominant: hemiplegic form (1%)
- Upper motor neuron dominant ALS: primary lateral sclerosis (1-5% of ALS)

Extra-motor variability
- Cognition and behavior: ALS/FTLD (50% of ALS patients)
- Extrapyramidal signs: ALS-parkinsonism (≈5-10%); ALS with pallido-nigro-luysian degeneration (rare)
- Cerebellar degeneration: ALS-spinocerebellar ataxia, ALS patients with C9ORF72 mutation (rare)
- Neurogenic bladder: primary lateral sclerosis (50% of PLS patients); ALS (rare)
- Sensory signs: variable
- Abnormal ocular movement: advanced phase of ALS; ALS with vacuolar degeneration of the cerebral white matter (rare); ALS with multisystem degeneration
Geometric optimization of dielectric elastomer electrodes for dynamic applications

Emil Garnell, Bekir Aksoy, Corinne Rouby, Herbert Shea, Olivier Doaré

Keywords: Dielectric elastomer, modal analysis, parametric optimization
Introduction
Dielectric elastomers (DEs) are active materials capable of large electrically-triggered deformations (up to 500 % area strain [START_REF] Pelrine | High-Speed Electrically Actuated Elastomers with Strain Greater Than 100%[END_REF][START_REF] Goh | Electrically-Induced Actuation of Acrylic-Based Dielectric Elastomers in Excess of 500% Strain[END_REF]). They consist of a soft elastomer membrane sandwiched between compliant and stretchable electrodes, forming a deformable capacitor. When a high voltage is applied between the electrodes, an electrostatic pressure squeezes the membrane, whose area therefore increases [START_REF] Suo | Theory of dielectric elastomers[END_REF], see fig. 1. This mechanism is very fast, and can be effective for frequencies up to 16 kHz [START_REF] Hosoya | Hemispherical breathing mode speaker using a dielectric elastomer actuator[END_REF], especially if silicone is used as the membrane material [START_REF] Rosset | The need for speed[END_REF]. DEs have been investigated for a wide range of applications including soft robotics [START_REF] Gu | Soft wall-climbing robots[END_REF][START_REF] Chen | Controlled flight of a microrobot powered by soft artificial muscles[END_REF], artificial muscles [START_REF] Duduta | Realizing the potential of dielectric elastomer artificial muscles[END_REF], micro-pumps [START_REF] Loverich | Concepts for a new class of all-polymer micropumps[END_REF][START_REF] Cao | A magnetically coupled dielectric elastomer pump for soft robotics[END_REF], vibration absorbers [START_REF] Lu | Electrically tunable and broader-band sound absorption by using micro-perforated dielectric elastomer actuator[END_REF], and loudspeakers [START_REF] Keplinger | Stretchable, Transparent, Ionic Conductors[END_REF].
As explained in fig. 1, the membrane is compressed by the electrostatic pressure only between the electrodes, and the rest of the membrane is passive. It is therefore possible to tune the actuation by patterning the electrode, and choosing cleverly where electrodes should be deposited.
URL: [email protected] (Olivier Doaré)
This idea has been exploited by several research groups, for very diverse objectives. Conn et al. [START_REF] Conn | Towards holonomic electro-elastomer actuators with six degrees of freedom[END_REF] patterned the electrodes on a cone DE actuator to generate a rotating motion during actuation. Zou et al. [START_REF] Zou | Active Shape Control and Phase Coexistence of Dielectric Elastomer Membrane With Patterned Electrodes[END_REF] studied the influence of the electrode shape on the static deformation of an inflated DE membrane. Langham et al. [START_REF] Langham | Modeling shape selection of buckled dielectric elastomers[END_REF] analysed the buckling of DE structures, and in particular the influence of the electrode shape on the type of buckling. They showed that the buckling wavelength can be controlled by the aspect ratio of an annular electrode on a circular membrane.
By patterning multiple independently addressable electrodes on both sides of the elastomer membrane, different actuation shapes can be obtained. This principle has been used for instance to manufacture rotating motors [START_REF] Anderson | A thin membrane artificial muscle rotary motor[END_REF][START_REF] Rosset | Towards fast, reliable, and manufacturable DEAs: Miniaturized motor and Rupert the rolling robot[END_REF] by using three or four independent electrodes, and to make reconfigurable surfaces [START_REF] Hajiesmaili | Reconfigurable shape-morphing dielectric elastomers using spatially varying electric fields[END_REF].
Patterned electrodes have also been widely investigated for soft grippers using the electro-adhesion principle [START_REF] Shintake | Soft Robotic Grippers[END_REF]. Strong fringing fields are obtained by using several electrodes polarized with opposite voltages. Shintake et al. [START_REF] Shintake | Versatile Soft Grippers with Intrinsic Electroadhesion Based on Multifunctional Polymer Actuators[END_REF] used electrode patterning techniques for DE actuators to combine electro-adhesion and actuator capabilities. Similarly, Gao et al. [START_REF] Gao | Elastic Electroadhesion with Rapid Release by Integrated Resonant Vibration[END_REF] used patterned electrodes to actuate a soft gripper at resonance to improve the release. A prototype of a wall-climbing robot has been developed by Gu et al., using a DE actuator and electro-adhesion obtained with patterned electrodes [START_REF] Gu | Soft wall-climbing robots[END_REF].
The above-mentioned applications of electrode patterning for DE actuators are for static [START_REF] Shintake | Versatile Soft Grippers with Intrinsic Electroadhesion Based on Multifunctional Polymer Actuators[END_REF][START_REF] Shintake | Soft Robotic Grippers[END_REF], or dynamic cases [START_REF] Gu | Soft wall-climbing robots[END_REF], in which the kinetic energy of the DEA can be neglected compared either to the kinetic energy of a heavier biasing load, or to the potential elastic energy of biasing elements. However, because of their quick response, DE actuators are also used in high frequency applications where the DE membrane itself exhibits a modal behavior [START_REF] Hosoya | Hemispherical breathing mode speaker using a dielectric elastomer actuator[END_REF][START_REF] Zhu | Resonant behavior of a membrane of a dielectric elastomer[END_REF][START_REF] Fox | Electric field-induced surface transformations and experimental dynamic characteristics of dielectric elastomer membranes[END_REF]. Few studies focus on the relation between the electrode shape and the dynamic behavior of DE structures.
For other electro-active materials, such as piezo-electric materials for example, the optimal placement of the electrodes for vibration control has been investigated [START_REF] Clark | Optimal placement of piezoelectric actuators and polyvinylidene fluoride error sensors in active structural acoustic control approaches[END_REF]. Such studies are closely related to the modeshape analysis of the structure, as playing on the location of the excitation will increase or decrease the modal forces on the different eigenmodes. This idea has been exploited to perform modal control on flat loudspeakers: Doaré et al. [START_REF] Doaré | Design of a Circular Clamped Plate Excited by a Voice Coil and Piezoelectric Patches Used as a Loudspeaker[END_REF] used piezoelectric patches to cancel the modal forces of undesirable modes, and Jiang et al. [START_REF] Jiang | Sound radiation of panelform loudspeaker using flat voice coil for excitation[END_REF] optimized the location of several voice-coils to minimize the contribution of unwanted modes. In the present study, we focus on a given DE actuator geometry, namely an inflated DE loudspeaker (see fig. 2). When the DE membrane is inflated, the increase in area when the high voltage is applied is converted to an increase of volume of the inflated balloon [START_REF] Keplinger | Harnessing snap-through instability in soft dielectrics to achieve giant voltage-triggered deformation[END_REF][START_REF] Zhu | Resonant behavior of a membrane of a dielectric elastomer[END_REF]. If an oscillating voltage is applied, an acoustic volume source is obtained. This configuration has been widely studied for acoustic applications [START_REF] Heydt | Acoustical performance of an electrostrictive polymer film loudspeaker[END_REF][START_REF] Hosoya | Hemispherical breathing mode speaker using a dielectric elastomer actuator[END_REF], and we have previously developed a numerical model to compute its sound radiation [START_REF] Garnell | Dynamics and sound radiation of a dielectric elastomer membrane[END_REF][START_REF] Garnell | Coupled vibro-acoustic modeling of a dielectric elastomer loudspeaker[END_REF].
Here, we show that the shape of the electrodes can be tuned to adjust the acoustic frequency response of the structure, by playing on the modal forces. A finite element model is used to obtain a modal description of the structure, which is then fed into a global optimization algorithm, to improve a simple optimization objective: maximizing or minimizing the contribution of a given eigenmode to the acoustic radiation of the loudspeaker. The article is organized as follows: first a quick overview of the finite element model of the DE loudspeaker is presented, and the optimization algorithm is introduced. A prototype with the optimal electrodes is then manufactured, and the process to deposit the patterned electrodes is presented in section 3. The effect of the optimization on the radiated acoustic pressure is finally analysed in the results section 4, and perspectives are discussed.
Optimization procedure
The optimization procedure presented in this article is based on a finite element model of the inflated DE loudspeaker shown in fig. 2, which is described in detail in [START_REF] Garnell | Dynamics and sound radiation of a dielectric elastomer membrane[END_REF][START_REF] Garnell | Coupled vibro-acoustic modeling of a dielectric elastomer loudspeaker[END_REF]. In the following only a short overview of this model is given, to provide the key elements which are needed for the optimization.
Model of the inflated DE loudspeaker
To simplify the notations in the whole article, all variables are non-dimensional.
We consider an elastomer membrane coated by electrodes on a portion of its surface described by the electrode indicative function Γ, which equals unity when an electrode is present, and zero otherwise. The membrane is then placed over a closed cavity, pre-stretched, and inflated with a static pressure (see figs. 2 and 3). A high voltage u is applied between the electrodes to vibrate the membrane and radiate sound.
To study this setup, a reference configuration is defined, where the membrane is flat and at rest, and a deformed configuration, where the membrane is inflated and actuated (see fig. 3). The membrane behavior is modeled by a hyper-elastic Gent law [START_REF] Gent | A New Constitutive Relation for Rubber[END_REF], and the electromechanical coupling appears as an added stress (σ_Max = εu²/h², where ε is the elastomer dielectric permittivity and h the membrane thickness) in the constitutive relations relating the stress to the deformation of the membrane [START_REF] Suo | Theory of dielectric elastomers[END_REF]. Note that the electrostatic stress is proportional to the voltage squared, which creates a large non-linearity. For loudspeaker applications, a linear relation between the input signal and the radiated pressure is desired, to avoid harmonic distortion. In order to obtain a linear excitation, the loudspeaker is driven with the following voltage [START_REF] Heydt | Acoustical performance of an electrostrictive polymer film loudspeaker[END_REF]:
u(t) = U\sqrt{1 + w(t)} \,, \quad \text{with } |w(t)| < 1 \,, \qquad (1)
where U is a static bias voltage, and w the audio signal. This implies that electrostatic stress is proportional to w.
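As a minimal illustration of this driving scheme (a sketch only, not the authors' implementation; the bias value, sampling rate and signal below are arbitrary assumptions), the following Python snippet builds the drive of eq. (1) and checks that the resulting electrostatic stress, proportional to u², is indeed linear in w:

```python
import numpy as np

# Illustrative parameters (assumptions, not taken from the paper).
U = 2000.0                                   # static bias voltage [V]
fs = 48000                                   # sampling rate [Hz]
t = np.arange(0, 0.01, 1 / fs)               # 10 ms of signal
w = 0.5 * np.sin(2 * np.pi * 1000 * t)       # audio signal, |w| < 1

# Drive of eq. (1): the square root compensates the quadratic electromechanical coupling.
u = U * np.sqrt(1.0 + w)

# The electrostatic (Maxwell) stress is proportional to u^2, hence to 1 + w:
normalized_stress = u**2 / U**2              # equals 1 + w exactly
print("oscillating part linear in w:", np.allclose(normalized_stress - 1.0, w))
```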
The coupling between the membrane vibrations and acoustics needs to be taken into account because the membrane is very thin, and the added mass effect which arises when the membrane vibrates the surrounding air is strong [START_REF] Garnell | Coupled vibro-acoustic modeling of a dielectric elastomer loudspeaker[END_REF]. To this aim, a fully coupled model of the membrane dynamics and acoustics inside and outside of the cavity is setup in the open source finite element software FreeFEM [START_REF] Hecht | New development in freefem++[END_REF]. The mesh density is 12 elements per flexural or acoustic wavelength (respectively for the membrane and for acoustics) at the highest studied frequency, with a conforming mesh at the interface. The free-field radiation boundary condition is implemented by using perfectly matched layers (PMLs) [START_REF] Berenger | A perfectly matched layer for the absorption of electromagnetic waves[END_REF]. The system of coupled governing equations can finally be written in the following form after discretization by finite elements:
(-\omega^2 M + K)X = F \,, \qquad (2)
where ω is the frequency of the electrical excitation, X = [x, q_i, q_e] is the vector gathering all unknowns, where q_i = p_i/ω² and q_e = p_e/ω² are the fluid displacement potentials. M and K are the total mass and stiffness matrices, and F the force vector, generated by the electrostatic stress. Two modelling choices (frequency-independent PMLs and hysteretic damping) imply that the obtained mass and stiffness matrices M and K are frequency-independent. Without the forcing term F, eq. (2) is therefore a linear eigenvalue problem, which can be solved by standard eigenvalue solvers. Modal expansions of the displacement of the membrane x, and of the radiated pressure p_e are obtained:
x = \sum_n \frac{F_n}{\omega_n^2 - \omega^2} \Psi_n^x \,, \qquad p_e = \sum_n \frac{\omega^2 F_n}{\omega_n^2 - \omega^2} \Psi_n^e \,, \qquad (3)
where F_n is the modal force on mode n, ω_n the eigenfrequency, Ψ_n^x and Ψ_n^e the parts of the coupled modeshape n containing the structural and exterior acoustics degrees of freedom respectively. We are interested in the pressure at a given receiver position x_r, so by defining the modal participation factor A_n = F_n Ψ_n^e(x_r), the pressure radiated at the receiver location reads:
p_e(x_r) = \sum_n \frac{\omega^2 A_n}{\omega_n^2 - \omega^2} \,. \qquad (4)
The contribution of mode n to the radiated pressure at x_r is thus proportional to A_n.
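Once the modal data are available, eq. (4) can be evaluated directly. The sketch below uses hypothetical modal values (placeholders, not the values computed in the paper; damping is folded into complex eigenfrequencies for brevity) to sum the modal contributions over a frequency range:

```python
import numpy as np

# Hypothetical modal data (placeholders, not the values of the paper):
# complex eigenfrequencies (imaginary part models damping) and
# modal participation factors A_n = F_n * Psi_n^e(x_r).
omega_n = np.array([1.0 + 0.02j, 1.8 + 0.03j, 2.6 + 0.04j, 3.1 + 0.05j])
A_n = np.array([0.8, 1.2, 0.4, 0.2])

def radiated_pressure(omega):
    """Modal summation of eq. (4): p_e(x_r) = sum_n omega^2 A_n / (omega_n^2 - omega^2)."""
    return np.sum(omega**2 * A_n / (omega_n**2 - omega**2))

freqs = np.linspace(0.1, 4.0, 400)                     # non-dimensional frequencies
p = np.array([radiated_pressure(om) for om in freqs])
level_dB = 20 * np.log10(np.abs(p))                    # frequency response (arbitrary reference)
print(level_dB[:5])
```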
Definition of the optimization problem
In the present study, we aim at optimizing the shape of the electrode to improve objective functions which will be defined in section 2.3. The prototype is axisymmetric, and only axisymmetric electrodes will be considered in the following. The electrode shape is thus parametrized by a set of radii, defined in fig. 4.
For the electrode shown in fig. 4(b), the electrode indicative function Γ is defined as follows:
\Gamma(R) = \begin{cases} 1 & \text{for } R \le R_1 \text{ or } R_2 < R \le R_3 \,, \\ 0 & \text{otherwise,} \end{cases} \qquad (5)
and the electrode radii can vary in the range:
R_{i-1} < R_i \le R_{i+1} \,. \qquad (6)
In the following, the radii of the electrodes will be optimized to minimize the cost functions introduced in section 2.3. The definition of the optimization parameters given in fig. 4 is impractical, as it defines a bounded optimization problem where the bounds for one parameter depend on the values of the other parameters. In order to overcome this difficulty, relative radii are defined as:
\check{R}_i = \frac{R_i}{R_{i+1}} \text{ for } i < N_R \,, \qquad \check{R}_i = R_i \text{ for } i = N_R \,, \qquad (7)
where N_R is the number of optimized radii. All Ř_i vary between 0 and 1. We define for the following a vector Ř containing all the optimized radii Ř_i: Ř = [Ř_1, Ř_2, Ř_3, ...].
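The change of variables of eq. (7) is easy to invert. The following helper (an illustrative sketch, independent of the rest of the model) converts a vector of relative radii Ř, each in (0, 1], back to the ordered absolute radii:

```python
import numpy as np

def relative_to_absolute(r_rel):
    """Invert eq. (7): R_N = Ř_N, and R_i = Ř_i * R_{i+1} for i < N."""
    r_rel = np.asarray(r_rel, dtype=float)
    r_abs = np.empty_like(r_rel)
    r_abs[-1] = r_rel[-1]
    for i in range(len(r_rel) - 2, -1, -1):
        r_abs[i] = r_rel[i] * r_abs[i + 1]
    return r_abs

# Any point of the unit cube of relative radii maps to an ordered set of absolute radii.
print(relative_to_absolute([0.5, 0.8, 0.9]))   # -> [0.36 0.72 0.9]
```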
Objective functions
In order to validate the optimization method, simple objective functions will be used. The contribution of the second membrane mode to the radiated pressure will be either maximized or minimized. These goals have been chosen because they yield clear results for relatively simple electrode shapes, as will be shown in section 4.
To maximize the contribution of the second mode to the total sound radiation compared to the other modes, we define from eq. ( 4) the following objective function which should be minimized [START_REF] Doaré | Radiation optimization of piezoelectric plates[END_REF]:
I = \frac{\sum_{n \ne 2} |A_n|^2}{|A_2|^2} \,. \qquad (8)
If on the other hand the radiation of the second mode should be minimized, the following objective function should be used:
J = \frac{|A_2|^2}{\sum_{n \ne 2} |A_n|^2} \,. \qquad (9)
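Both objectives reduce to ratios of the modal participation factors. A possible transcription of eqs. (8) and (9) is sketched below (the A_n values are hypothetical; index 1 corresponds to mode 2 because Python arrays are zero-based):

```python
import numpy as np

def cost_I(A, target=1):
    """Eq. (8): small when the target mode (mode 2 -> index 1) dominates the radiation."""
    target_power = np.abs(A[target])**2
    other_power = np.sum(np.abs(A)**2) - target_power
    return other_power / target_power

def cost_J(A, target=1):
    """Eq. (9): small when the target mode contributes little to the radiation."""
    target_power = np.abs(A[target])**2
    other_power = np.sum(np.abs(A)**2) - target_power
    return target_power / other_power

A = np.array([0.3, 1.5, 0.6, 0.2])    # hypothetical modal participation factors
print(cost_I(A), cost_J(A))
```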
Optimization algorithm
The general principle of the optimization algorithm is that the time-consuming operations, namely computing the modal parameters (eigenfrequencies, eigenmodes and modal loss factors) should be performed only once. To this end, we make the following assumption:
Assumption. The electrodes have a negligible influence on the eigenfrequencies, modal damping and modeshapes of the inflated DE membrane.
This presumes that the mass and stiffness of the electrodes are negligible compared to those of the membrane. As a result, the modal parameters do not change when the electrode shape is varied during the optimization process. The electrode shape only changes the modal forces, which can be computed quickly using the following formula [START_REF] Garnell | Dynamics and sound radiation of a dielectric elastomer membrane[END_REF]:
F_n(\check{R}) = \int_0^1 \frac{\Gamma_{\check{R}}}{h} \left( r \lambda_2 \frac{\partial \Psi_n^x}{\partial R} \cdot t + \lambda_1 \Psi_n^x \cdot e_r \right) \lambda_1 \lambda_2 \, dR \,. \qquad (10)
Inserting this expression in the definition of A n yields:
A_n(\check{R}) = \int_0^1 g_n(R) \, \Gamma_{\check{R}}(R) \, dR \,, \qquad (11)
where we have defined g n as:
g_n(R) = \Psi_n^e(x_r) \, \frac{1}{h} \left( r \lambda_2 \frac{\partial \Psi_n^x}{\partial R} \cdot t + \lambda_1 \Psi_n^x \cdot e_r \right) \lambda_1 \lambda_2 \,. \qquad (12)
All the terms appearing in eq. (12) are independent of the electrode shape, so g_n only needs to be computed once. The cost functions I and J defined by eqs. (8) and (9) depend only on the modal participation factors A_n(Ř), which can be computed quickly as eq. (11) is only a summation.
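In practice, eq. (11) is a one-dimensional quadrature of the precomputed g_n against the electrode indicative function. The sketch below uses synthetic g_n curves and a simple rectangle-rule quadrature; in the actual procedure the g_n come from the finite element modeshapes through eq. (12):

```python
import numpy as np

R = np.linspace(0.0, 1.0, 501)                   # radial coordinate (reference configuration)
dR = R[1] - R[0]
# Synthetic stand-ins for the modal participation functions g_n(R).
g = np.vstack([np.sin((n + 1) * np.pi * R) * (1 - R) for n in range(4)])

def electrode_indicator(R, radii):
    """Γ(R) of eq. (5): 1 on the disk [0, R1] and on every second ring (R2, R3], ..."""
    radii = np.sort(np.asarray(radii, dtype=float))
    gamma = np.zeros_like(R)
    gamma[R <= radii[0]] = 1.0
    for k in range(1, len(radii) - 1, 2):
        gamma[(R > radii[k]) & (R <= radii[k + 1])] = 1.0
    return gamma

def modal_participation(radii):
    """Eq. (11): A_n = ∫ g_n(R) Γ(R) dR, here by a rectangle rule on the uniform grid."""
    gamma = electrode_indicator(R, radii)
    return np.sum(g * gamma, axis=1) * dR

print(modal_participation([0.3, 0.5, 0.9]))
```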
The optimization is performed as follows, and is described in fig. 5: 1. Compute the modal parameters using the model. In the present study we used the MultiStart algorithm from the Matlab Global Optimization Toolbox. This algorithm runs a local minimization on a randomly chosen set of starting points, and the smallest minimum is retained at the end. There is no guarantee that the obtained minimum is the global minimum, but the more starting points are used, the higher is the chance to find the global minimum. The optimization has been run with an increasing number of starting points, and when the obtained minimum was found to be independent of the number of starting points it is considered to be a good estimation of the global minimum. Finally 10³ starting points are used.
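The same strategy can be reproduced with open-source tools. The sketch below uses scipy instead of the Matlab toolbox mentioned above, with a synthetic placeholder cost function standing in for I or J (in the real procedure the cost would chain eq. (7), eq. (11) and eq. (8) or (9)):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def cost(r_rel):
    """Placeholder cost standing in for I or J, with a known minimum at [0.3, 0.7, 0.9]."""
    return float(np.sum((np.asarray(r_rel) - np.array([0.3, 0.7, 0.9]))**2))

def multistart(cost, n_dim=3, n_starts=100):
    """Bounded local minimizations from random starting points; keep the best result."""
    best = None
    for _ in range(n_starts):
        x0 = rng.uniform(0.05, 1.0, n_dim)
        res = minimize(cost, x0, method="L-BFGS-B", bounds=[(1e-6, 1.0)] * n_dim)
        if best is None or res.fun < best.fun:
            best = res
    return best

best = multistart(cost)
print("optimal relative radii:", np.round(best.x, 3), "cost:", best.fun)
```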
The optimization process described above yields as a result the optimal radii of the electrodes. The results obtained with the two cost functions I and J are given in table 1.
Table 1: Optimal electrode radii obtained for the two cost functions. In the initial design the electrode covers the whole membrane.

N° | Goal | Cost function | Optimal radii Ř_opt | Electrode shape
1 | Initial design | - | [1, 1, 1] | full electrode

The finite element model can then be used to compute the frequency response of the loudspeaker with these optimal electrodes, which provides a first validation of the optimization procedure. To validate further the proposed algorithm, prototypes with the optimal electrodes are manufactured, and their frequency response is measured.
Manufacturing process
In this section, the manufacturing process to build DE loudspeaker prototypes with optimal electrodes is described. First, general design considerations on the electrode connections to the electrical supply are discussed, and the process to deposit the electrodes is then presented.
Electrode design
As explained in section 2.2, the electrode consists of several concentric rings. However, these different rings must be connected to the electrical supply, so connections need to be designed and added.
The electrostatic pressure which deforms the membrane applies only on areas of the membrane which are covered on both sides by electrodes. If an electrode is present only on one side, or if there is no electrode on either side, no electrostatic pressure applies. This can be exploited to design the connections, by extending the electrodes while making sure that they are overlapping only in the desired areas.
The chosen electrode design is shown in fig. 6, where the desired active area is a disk. To connect the disk to the power supply, six radial electrodes are added between the disk and the frame, where aluminum tape is placed in contact with the electrode. The connections of the top and bottom electrodes are rotated by 30°, so that they are not superimposed. This way, electrodes are present on both sides of the membrane only in the central disk, which is the desired active area. The electrodes are not perfectly conductive [START_REF] Mccoul | Recent Advances in Stretchable and Transparent Electronic Materials[END_REF], so the connections add electrical resistance to the electrical circuit, which reduces the bandwidth in which the actuator can operate. The resistance created by the electrode connections is minimized in the design proposed in fig. 6 by using all the available membrane area as connections: by increasing the number and the width of the connections, the connection resistance decreases. With the chosen electrode design and a surface resistivity of 10 kΩ/sq, the electrical cutoff frequency lies around 1/(2πR_e C) ≈ 10 kHz, where R_e and C are the lumped resistance and capacitance of the membrane.
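As an order-of-magnitude check of this electrical bandwidth, the lumped RC cutoff can be estimated as follows (the permittivity, active radius and connection resistance below are illustrative assumptions, not measurements on the prototype):

```python
import math

# Illustrative lumped values (assumptions, not measured on the prototype).
epsilon_r = 2.8                       # relative permittivity of the silicone
epsilon_0 = 8.854e-12                 # vacuum permittivity [F/m]
h = 50e-6                             # membrane thickness [m]
radius = 20e-3                        # radius of the active electrode area [m]
R_e = 20e3                            # effective connection resistance [ohm]

area = math.pi * radius**2
C = epsilon_r * epsilon_0 * area / h  # parallel-plate capacitance of the active area
f_c = 1.0 / (2.0 * math.pi * R_e * C) # lumped RC cutoff frequency

print(f"C = {C * 1e9:.2f} nF, f_c = {f_c / 1e3:.1f} kHz")
```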
The system should remain as axisymmetric as possible, which is why we decided to use many small connections instead of a large one.
Pad-printing the electrodes
To deposit the electrodes with good accuracy on the dielectric membrane, the pad-printing method proposed by Rosset et al. [START_REF] Rosset | Fabrication Process of Silicone-based Dielectric Elastomer Actuators[END_REF] has been used. The dielectric membrane is a 50 µm thick silicone film (Silpuran Film 2030 from Wacker Chemie AG). A small prestretch of 1.1 is applied to keep the membrane flat during the fabrication. The electrodes are made of carbon-black (Ketjenblack EC-600JD from Akzo Nobel N.V) loaded silicone (Silbione LSR 4305 from Elkem Silicones), which yields compliant and durable electrodes [START_REF] Rosset | Fabrication Process of Silicone-based Dielectric Elastomer Actuators[END_REF].
The pad-printing process is described by the schematics in fig. 7. First, a cliché is manufactured. It consists of a metal plate where a disk is etched. The disk diameter corresponds to the largest electrode diameter that can be deposited (40 mm in our case). When the ink reservoir of the pad-printer is slid over the cliché, ink stays in the etched area, and will be picked up by the soft silicone stamp.
To pattern the electrode, a thin Mylar film is laser-cut to form a mask corresponding to the shape of the electrode. It is placed on the membrane before the pad applies the ink. The electrostatic adhesion between the Mylar and the silicone membrane prevents the ink from penetrating under the mask. Pictures of the membranes with the three different types of electrodes defined in table 1, which have been manufactured at the LMTS by the method described above, are shown in fig. 8. The pad-printing process yields precise electrodes, with clean edges.
The thickness of the electrodes (and consequently their surface resistivity) can be controlled by repeating the padprinting (stages 1 to 7 in fig. 7). With a single printing, the obtained electrodes are approximately 3 µm-thick, which is small compared to the thickness of the membrane (this estimation is obtained from the mass added during the pad-printing process).
Results and discussion
Experimental validation of the optimal designs
Given the chosen cost functions I and J, the electrode shape has been optimized and resulted in the electrodes defined in table 1. The optimal electrodes have been manufactured following the process described in the previous section. In the present section, the sound radiation obtained with the three electrode designs is analysed to assess the efficiency of the optimization process.
All acoustical measurements have been performed in the anechoic chamber of ENSTA, which is 3×3×3 m³ large and specified down to 120 Hz. The pressure is measured on axis at 1 m, and the transfer function between the electrical excitation w and the radiated pressure is obtained by the exponential swept sine method [START_REF] Farina | Simultaneous Measurement of Impulse Response and Distortion with a Swept-Sine Technique[END_REF].
The measured transfer functions are plotted in fig. 9 together with the results of the model, for the three electrode designs.
The global shape of the radiated pressure is the same for the three electrode shapes: at low frequencies (500 Hz to 3000 Hz), the response is dominated by the membrane modes. At high frequencies (above 3 kHz), large peaks can be observed. They are caused by the acoustic resonances inside the cavity on which the membrane is inflated [START_REF] Garnell | Coupled vibro-acoustic modeling of a dielectric elastomer loudspeaker[END_REF] (see figs. 2 and 3).
When the electrode occupies the whole membrane (electrode 1), the acoustic radiation is dominated by the second membrane mode, at 750 Hz. When the radiation of this second mode is maximized (electrode 2), the difference between the peak level of mode 2 and of the neighboring modes increases further. In particular, the radiation of the third mode around 1000 Hz has decreased. Moreover, the overall radiated level decreased, as a consequence of a smaller electrode area (36 % of the initial electrode area) which reduces the electrostatic excitation.
The third electrode design also succeeds in satisfying its goal: minimizing the radiation of the second mode. The amplitude of the second peak in fig. 9(c) is similar to that of the surrounding modes 1, 3, 4 and 5. Compared to the initial design of fig. 9(a), the difference in amplitude between the second peak and the neighboring peaks has decreased by 15 dB. Of course, the electrode area is even smaller than for electrode 2 (10 % of the initial area, see fig. 8), so the overall radiated level is further reduced.
Figure 10 shows that g 2 is always positive, so the integral of g 2 times the indicative function of the electrode Γ cannot be cancelled, except if there is no electrode. As a consequence, whatever the electrode shape, mode 2 will contribute to the sound radiation. Figure 10 also helps to interpret the optimal electrode shapes: for example when the radiation of the second mode is maximized, the optimal electrode is a ring around radius R ≈ 0.6. This corresponds to a radius where g 3 and g 4 change signs, so the integral of these functions on this ring will be very small. At the same time, R ≈ 0.6 corresponds to electrode locations where g 2 is very large.
Discussion
In the previous section, we showed that the optimization is limited by the shape of the modal participation functions g n (see fig. 10). The contribution of the second mode cannot be reduced to zero since the function g 2 is positive for all radii.
One option to further decrease the contribution of mode 2 to the sound radiation could be to excite one of the electrode rings out of phase. Indeed, this is equivalent to setting the electrode indicative function to 1 on the electrodes actuated in phase, and to -1 on the electrodes actuated out of phase.
To test this idea, we perform an optimization of the cost function J, for an electrode with 5 radii (three rings), and set the second ring out of phase (see fig. 11(b)). This electrode is denoted electrode 4.
The numerical results presented in fig. 11 show that the optimization with the second electrode out of phase succeeds in completely canceling the contribution of mode 2 to sound radiation, which is now dominated by the contributions of modes 3 and 4, around 1 kHz.
Possible applications of this kind of optimization may be to control the directivity of DE loudspeakers. Indeed, each membrane mode has a specific directivity pattern, so by controlling which mode dominates the frequency response the directivity can be controlled. For example, for the optimization presented in fig. 11, mode 2 has a more omnidirectional radiation pattern than higher order modes. As a consequence, decreasing the contribution of mode 2 to the radiation increases the near-field directionality of the loudspeaker, as shown in fig. 12. This is particularly visible at high frequencies, because the loudspeaker is acoustically compact at low frequencies below approximately 3 kHz (its size is small compared to the acoustical wavelength) and radiates as a monopole. Near field directivity control may be useful for headphones applications for example.
This simple example highlights the wide range of tuning possibilities opened up by electrode shape optimization.
Conclusion
In this article, we investigated the influence of the electrode shape on the dynamics of dielectric elastomer actuators. We focused on an inflated DE loudspeaker to demonstrate that the electrode can be optimized to tune the dynamic behavior.
An optimization method based on a finite element model of the loudspeaker has been presented, and two objective functions have been considered, aiming at either increasing or decreasing the contribution of the second membrane mode to the sound radiation. It has been shown experimentally that the optimization succeeds in improving the considered objectives.
In the present article we focused on simple cost functions which yield clear and easily interpreted results, and demonstrated that the sound radiation can be tuned by adjusting the electrode shape. More acoustically relevant cost functions could then be considered, such as minimizing the deviation of the sound pressure level from its mean value [START_REF] Lu | Optimization of orthotropic distributed-mode loudspeaker using attached masses and multi-exciters[END_REF], which could help flatten the acoustic frequency response.
If several electrodes are used, numerical simulations suggest that the optimization results could be improved by playing on the phase at which the different electrodes are actuated. One may imagine that the different electrodes could be excited with different amplitudes, and with arbitrary phase between each of them. The excitation of the different electrodes could also be frequency-dependent, and the optimization then becomes a combined filter design and modal control problem.
The large design freedom provided by the patterning of the electrodes of DE actuators still remains to be explored, and we believe accurate models of the dynamics of DE actuators may help investigating this path.
Figure 1: Principle of dielectric elastomer actuators. (a) Initial state. (b) Deformed state, when a high voltage u is applied.
Figure 2: Picture of an inflated DE loudspeaker prototype during acoustical measurements in the anechoic chamber of ENSTA.
Figure 3: Model of an inflated DE membrane, and definition of the needed variables. (a) Reference configuration: the membrane is flat and at rest. Variables in the reference configuration are written with capital letters. (b) Deformed configuration: the membrane is inflated with the pressure p_i and a high voltage u is applied.
Figure 4: Definition of the optimized parameters: the non-dimensional radii R_i of the different electrode rings. Here, an electrode with 3 radii is shown. (a) Initial electrode (where the second electrode between R_2 and R_3 is infinitely narrow). (b) Optimized electrode.
Figure 5: Flowchart of the optimization procedure.
Figure 6: Design of the electrodes and of the electrical connections. The active area is enclosed by the red dashed circle.
Figure 7: Pad-printing procedure used to apply the soft conductive electrodes. The steps 1-8 are then repeated on the other side of the membrane to apply the second electrode.
Figure 8: Manufactured membranes with the three electrode shapes defined in table 1. The active areas are enclosed by the red dashed circles. (a) Electrode 1: initial design. (b) Electrode 2: maximize radiation of mode 2. (c) Electrode 3: minimize radiation of mode 2.
Figure 9: Transfer function between the radiated acoustic pressure at 1 m on axis and the excitation signal. The electrode shape is shown in the bottom right corner. (a) Electrode 1. (b) Electrode 2. (c) Electrode 3.
Figure 10: Modal participation factor as a function of the membrane radius for the first four modes.
Figure 11: (a) Computed radiation of the DE loudspeaker with electrode 4, for cost function J. (b) Optimal electrode shape. The second electrode ring (hatched) is driven out of phase of the two other rings.
Figure 12: Near field directivity of the inflated DE loudspeaker, computed by the finite element model on a circle of 5 cm diameter centered on the membrane, and normalized by the level on axis. (a) Initial design, with electrode 1 (see table 1). (b) Optimal electrode obtained when the second electrode ring is actuated out of phase, shown in fig. 11(b).
Acknowledgements
The authors acknowledge the support of the French National Research Agency within the project SMArT (ANR-15-CE08-0007-02) and of the Swiss National Science Foundation, grant 200020 184661.
Sarah Louise Robin
email: [email protected]
Cyril Marchand
Brian Ham
France Pattier
Christine Laporte-Magoni
Arnaud Serres
Influences of species and watersheds inputs on trace metal accumulation in mangrove roots
Keywords: Coastal wetlands, Elements cycling, Ultrabasic, Bioconcentration, Iron plaque, New Caledonia
Mangrove forest is a key ecosystem between land and sea, and provides many services such as trapping sediments and contaminants. These contaminants include trace metals (TM) that can accumulate in mangrove soil and biota. This paper presents a comparative study of the effects of watershed inputs on TM distribution in mangrove soil, on root bioconcentration factors of two species (Avicennia marina and Rhizophora stylosa), and on Fe plaque formation and the immobilization of these TM. Two mangrove forests in New Caledonia were chosen as study sites. One mangrove is located downstream of ultramafic rocks and a Ni mine (ultrabasic site), whereas the second mangrove lies at the outlet of a volcano-sedimentary watershed (non-ultrabasic site). TM concentrations (Co, Cr, Cu, Fe, Hg, Mn, Ni, Pb, Zn) were measured in soil, porewaters, and roots of both species via ICP-OES or a Hg analyzer. The analyzed TM were significantly more concentrated in soils at the ultrabasic site, with Fe, Cr, and Ni the most abundant. Iron, Mn, and Ni were the most concentrated in the roots, with mean values of 9,651, 192, and 133 mg kg -1 respectively. However, the bioconcentration factors (BCF) of Fe (0.16) and Ni (0.11) were low due to a lack of ions in the dissolved phase and potential uptake regulation. The uptake of TM by mangrove trees was influenced by concentrations in soil, but more importantly by their potential bioavailability and the physiological characteristics of each species. TM concentrations and BCF were lower for R. stylosa, probably due to a less permeable root system. A. marina limits TM absorption through Fe plaque formation on its pneumatophores, with a capacity to retain up to 94% of TM such as Mn. Mean Fe plaque formation is potentially correlated with Fe concentration in soil. Finally, pyrite framboids were observed within root tissues in the epidermis of A. marina's pneumatophores.
Introduction
Mangrove forest is an intertidal ecosystem developing along subtropical and tropical coastlines. Even though mangrove accounts for less than 0.5% of the world's forest surface area, it is a remarkable sink for atmospheric CO2 and plays a crucial role in the flow of energy, nutrients, and in the carbon cycle [START_REF] Kristensen | Organic carbon dynamics in mangrove ecosystems: A review[END_REF][START_REF] Donato | Mangroves among the most carbon-rich forests in the tropics[END_REF]. Mangroves provide a physical protection to the coastline against storms and wave actions and act as a buffer between the land and the sea by accumulating sediments and associated contaminants [START_REF] Massel | Surface wave propagation in mangrove forests[END_REF][START_REF] Mcivor | Reduction of wind and swell waves by mangroves[END_REF][START_REF] Lee | Ecological role and services of tropical mangrove ecosystems: a reassessment[END_REF]. Nevertheless, this ecosystem is particularly endangered, notably due to urbanization and the release of anthropogenic effluents, some countries using mangroves as filters. In the 80s and the 90s, about 35% of mangrove forest area was lost [START_REF] Valiela | Mangrove forests: One of the world's threatened major tropical environments[END_REF][START_REF] Alongi | Present state and future of the world's mangrove forests[END_REF], Duke et al., 2007). In a few emerging countries, the estimation reached up to 8% per year [START_REF] Polidoro | The loss of species: mangrove extinction risk and geographic areas of global concern[END_REF]. Between 2000 and 2012, the annual loss rates decreased significantly in some regions, ranging from 0.16% to 0.39%, which demonstrates global awareness of the significance of the conservation of mangroves [START_REF] Hamilton | Creation of a high spatio-temporal resolution global database of continuous mangrove forest cover for the 21st century (CGMFC-21): CGMFC-21[END_REF].
Trace metals (TM) are metallic elements naturally present in limited concentrations in the environment due to their occurrence in the Earth's crust [START_REF] Turekian | Distribution of the elements in some major units of the Earth's crust[END_REF]. However, anthropogenic activities have considerably increased their concentrations in many ecosystems [START_REF] Shtiza | Chromium and nickel distribution in soils, active river, overbank sediments and dust around the Burrel chromium smelter (Albania)[END_REF][START_REF] Cao | Temporal variation and ecological risk assessment of metals in soil nearby a Pb-Zn mine in southern China[END_REF]. As TM are not biologically or chemically degraded, they can accumulate in the environment or be transported over long distances [START_REF] Marx | Long-distance transport of urban and industrial metals and their incorporation into the environment: sources, transport pathways and historical trends[END_REF]. They can be transferred to mangrove forests via atmospheric routes (wind, rain) or aquatic routes (rivers, sea) [START_REF] Marx | Long-distance transport of urban and industrial metals and their incorporation into the environment: sources, transport pathways and historical trends[END_REF]McGowan, 2010, Chowdhury et al., 2017). Mangrove soils are mainly characterized by low oxygen levels (anoxic conditions) and high sulfur and organic matter (OM) contents [START_REF] Holmer | Biogeochemical cycling of sulfur and iron in sediments of a South-East Asian mangrove, Phuket island, Thailand[END_REF][START_REF] Dittmar | Mangroves, a major source of dissolved organic carbon to the oceans[END_REF]. Strong biogeochemical activity, OM richness, and high sedimentation rate are three factors that make mangrove ecosystem a sink for contaminants [START_REF] Harbison | Mangrove muds--A sink and a source for trace metals[END_REF], Wu et al., 2014[START_REF] Cennerazzo | Dynamique des HAP et des composés organiques issus de leur transformation dans les compartiments du sol et de la rhizosphère[END_REF]. TM are quickly immobilized in soil by precipitation with pyrite (FeS2) or by adsorption onto OM for example [START_REF] Marchand | Relationships between heavy metals distribution and organic matter cycling in mangrove sediments (Conception Bay, New Caledonia)[END_REF][START_REF] Chakraborty | Partitioning of metals in different binding phases of tropical estuarine sediments: importance of metal chemistry[END_REF][START_REF] Cennerazzo | Dynamique des HAP et des composés organiques issus de leur transformation dans les compartiments du sol et de la rhizosphère[END_REF]. The dynamics of TM within mangrove soil are well studied worldwide with their partitioning mainly impacted by physico-chemical parameters (redox potential, pH) and bonding phases (carbonates, OM, sulfides, oxides, …) [START_REF] Jayachandran | Effect of pH on transport and transformation of Cu-sediment complexes in mangrove systems[END_REF][START_REF] Duan | Neutral monosaccharides and their relationship to metal contamination in mangrove sediments[END_REF][START_REF] Huang | Study of mercury transport and transformation in mangrove forests using stable mercury isotopes[END_REF].
Mangrove flora adapted to the habitat's specificities such as poor O2 availability and high salinity (Duke et al., 1998, Kathiresan and [START_REF] Kathiresan | Biology of mangroves and mangrove ecosystems[END_REF]. Metallic stress is another challenge for mangrove species since TM can inhibit development processes and reduce photosynthetic activity [START_REF] Prasad | Metal fractionation studies in surfacial and core sediments in the Achankovil river basin in India[END_REF][START_REF] Nagajyoti | Heavy metals, occurrence and toxicity for plants: a review[END_REF]. Many authors studied the mechanisms of mangrove plants against metallic stress [START_REF] Zheng | Accumulation and biological cycling of heavy metal elements in Rhizophora stylosa mangroves in Yingluo Bay, China[END_REF][START_REF] Cheng | Metal (Pb, Zn and Cu) uptake and tolerance by mangroves in relation to root anatomy and lignification/suberization[END_REF][START_REF] Nath | Assessment of biotic response to heavy metal contamination in Avicennia marina mangrove ecosystems in Sydney Estuary, Australia[END_REF]. Multiple studies showed that essential metals (e.g. Cu and Zn) are regulated, with their transfer to shoots limited [START_REF] Machado | Trace metals in mangrove seedlings: role of iron plaque formation[END_REF][START_REF] Macfarlane | Accumulation and partitioning of heavy metals in mangroves: A synthesis of field-based studies[END_REF]. Non-essential metals (e.g. Cr and Pb) are rather excluded from the root system or accumulated on the surface of the roots, and are not transferred to the upper organs (MacFarlane and Burchett, 2000, Chowdhury et al., 2017).
In some mangrove species such as the Avicennia, the formation of an iron plaque was observed at the surface of their root system [START_REF] Machado | Trace metals in mangrove seedlings: role of iron plaque formation[END_REF][START_REF] Pi | Effects of wastewater discharge on formation of Fe plaque on root surface and radial oxygen loss of mangrove roots[END_REF][START_REF] Lin | Seedlings influenced by sulfur and iron amendments[END_REF]. This plaque results from the precipitation of Fe (III) via the oxidation of free Fe (II) from the diffusion of O2 by the plant in its rhizosphere [START_REF] Taylor | Use of the DCB technique for extraction of hydrous iron oxides from roots of wetland plants[END_REF]Crowder, 1983, Williams et al., 2014). Studies have also exposed the role of bacteria at the root surface for biological oxidation of Fe 2+ into Fe plaque, but to a lesser extent [START_REF] Maisch | Iron lung: how rice roots induce iron redox changes in the rhizosphere and create niches for microaerophilic Fe(II)-oxidizing bacteria[END_REF]. This Fe plaque helps prevent the absorption of excessive TM and other pollutants by the mangrove through the root system [START_REF] Otte | Iron plaque on roots of Aster tripolium L.: interaction with zinc uptake[END_REF][START_REF] Pi | Formation of iron plaque on mangrove roots receiving wastewater and its role in immobilization of wastewater-borne pollutants[END_REF][START_REF] Yamaguchi | Arsenic distribution and speciation near rice roots influenced by iron plaques and redox conditions of the soil matrix[END_REF]. Iron plaque can only be formed under waterlogged conditions since Fe 2+ must be soluble, brought to the surface by transpiration pull, oxidized automatically by O2 diffusion, and precipitated [START_REF] St-Cyr | Microscopic observations of the iron plaque of a submerged aquatic plant (Vallisneria americana Michx)[END_REF]. The formation of the Fe plaque is driven by many abiotic and biotic factors [START_REF] Mendelssohn | Factors controlling the formation of oxidized root channels: A review[END_REF]. Higher Fe 2+ solubility leading to higher Fe plaque formation is observed at lower pH and redox potential. However, pH is higher in the rhizosphere of more efficient O2-diffusing species that also lead to higher Fe plaque formation [START_REF] Tripathi | Roles for root iron plaque in sequestration and uptake of heavy metals and metalloids in aquatic and wetland plants[END_REF]. There is still a debate on how the Fe fraction in the soil influence Fe plaque formation. [START_REF] St-Cyr | Iron oxide deposits on the roots of phragmites australis related to the iron bound to carbonates in the soil[END_REF] showed positive correlation between Fe plaque formation and the Fe fraction bound to carbonates while other studies exposed a positive correlation with the exchangeable fraction only [START_REF] Tripathi | Roles for root iron plaque in sequestration and uptake of heavy metals and metalloids in aquatic and wetland plants[END_REF]. Consequently, it is difficult to predict the magnitude of Fe plaque formation at a specific site due to the number of factors involved but it has been extensively confirmed that Fe 2+ availability and O2 diffusion capacity of the species are the two predominant factors affecting Fe plaque development [START_REF] Tripathi | Roles for root iron plaque in sequestration and uptake of heavy metals and metalloids in aquatic and wetland plants[END_REF].
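For reference, the abiotic oxidation and precipitation step underlying Fe plaque formation can be summarized by a generic overall reaction (a textbook formulation with FeOOH taken as the precipitating phase; it is not a reaction reported by the studies cited above):

4 Fe2+ + O2 + 6 H2O → 4 FeOOH + 8 H+

Written this way, the reaction makes explicit that plaque formation consumes dissolved Fe2+ and O2 and releases protons, consistent with the two predominant factors (Fe2+ availability and O2 diffusion capacity) mentioned above.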
In New Caledonia (NC), some mangrove forests develop between ultramafic soils, including Ni mining sites, and the largest lagoon in the world. Massive exploitation of ultramafic laterites intensifies erosion and favors TM transfer towards rivers and the lagoon, as laterites display high concentrations of some TM [START_REF] Bird | The impacts of opencast mining on the rivers and coasts of New Caledonia[END_REF][START_REF] Fernandez | A combined modelling and geochemical study of the fate of terrigenous inputs from mixed natural and mining sources in a coral reef lagoon (New Caledonia)[END_REF]. Multiple studies have been conducted on the dynamics of TM in New Caledonian mangroves with different stress factors (Bourgeois et al., 2019a[START_REF] Marchand | The partitioning of transitional metals (Fe, Mn, Ni, Cr) in mangrove sediments downstream of a ferralitized ultramafic watershed (New Caledonia)[END_REF][START_REF] Noël | EXAFS analysis of iron cycling in mangrove sediments downstream a lateritized ultramafic watershed (Vavouto Bay, New Caledonia)[END_REF]. It has been shown that the distribution and partitioning of TM are partially affected by the initial concentrations and speciation of TM from the effluent. Iron, Ni, and Cr, characteristic metals of the ultramafic rocks, have been found in concentrations much higher than the world average in mangrove soils downstream of those rocks, and those elements are mostly detected in oxide or oxy/hydroxide forms [START_REF] Marchand | The partitioning of transitional metals (Fe, Mn, Ni, Cr) in mangrove sediments downstream of a ferralitized ultramafic watershed (New Caledonia)[END_REF][START_REF] Noël | Ni cycling in mangrove sediments from New Caledonia[END_REF]. The mobility and bioavailability of these TM are mostly influenced by the vegetation [START_REF] Marchand | Structuration écologique et bilan des processus biogéochimiques au sein d'une mangrove 'atelier[END_REF], the OM content [START_REF] Marchand | Relationships between heavy metals distribution and organic matter cycling in mangrove sediments (Conception Bay, New Caledonia)[END_REF], and the redox conditions (Bourgeois et al., 2019a), which will affect the biogeochemical processes that take place in the soil and will consequently impact the speciation of the TM. The two predominant species are Rhizophora stylosa seaward and Avicennia marina landward [START_REF] Marchand | Typologies et biodiversité des mangroves de Nouvelle-Calédonie[END_REF]. The root systems of these two species are broadly different with distinct particularities. R. stylosa has extensive aerial stilt support roots [START_REF] Kathiresan | Biology of mangroves and mangrove ecosystems[END_REF] while A. marina possesses pneumatophores (roots growing upward) equipped with lenticels that allow the passive diffusion of oxygen [START_REF] Purnobasuki | Morphology of four root types and anatomy of rootroot junction in relation gas pathway of Avicennia marina (Forsk) Vierh roots[END_REF]. To our knowledge, only one study looked at the bioaccumulation of TM from soil to roots and shoots of mangroves in NC, and it did not address the role of the Fe plaque in TM absorption [START_REF] Marchand | Trace metal geochemistry in mangrove sediments and their transfer to mangrove plants (New Caledonia)[END_REF]. Globally, the processes of TM exchange at the soil-vegetation interface are understudied worldwide, especially in situ.
In the context of mangroves developing downstream of lateritic soils enriched in TM, our main objective was to assess the influence of watershed inputs on the accumulation of TM in the root systems of the two main mangrove species developing along the semi-arid coastline of New Caledonia. In addition, we were interested in the influence of elemental inputs and soil redox conditions on the formation of Fe plaque at the root surface of A. marina. Finally, a potential relationship between Fe plaque concentrations and the amount of TM retained within the plaque was investigated. To achieve those objectives, concentrations of TM in the rhizosphere and root system of the two species as well as in the Fe plaque of A. marina were quantified by Inductively Coupled Plasma Optical Emission Spectrometry (ICP-OES), and the roots were analyzed by Scanning Electron Microscopy (SEM). Two study sites, one ultrabasic and one non-ultrabasic, were chosen to evaluate the impact of TM-rich watersheds on TM distribution in mangroves downstream.
We hypothesize that, owing to the capacity of A. marina to diffuse O2 to its rhizosphere through a more permeable root system, the root bioconcentration of TM is higher in this species than in R. stylosa. We also expect that the accumulation of TM is not subject to a threshold and that, therefore, TM are found at higher concentrations in the roots when their concentrations in the soil are also higher.
Materials & methods
Study sites
New Caledonia is a French archipelago, located in the South Pacific Ocean, 1,500 km away from the East coast of Australia and enclosed by the largest lagoon in the world, registered at UNESCO since 2008 (UNESCO, 2020). The main island is about 500 km long and 50 km wide between 20°S and 23°S. Mangrove forest represents more than 35,000 hectares and 80% of the western coast of the main island is covered by this ecosystem [START_REF] Marchand | Typologies et biodiversité des mangroves de Nouvelle-Calédonie[END_REF]. The western coastline is subject to a semi-arid climate and to semidiurnal tidal cycle [START_REF] Douillet | A numerical model for fine suspended sediment transport in the southwest lagoon of New Caledonia[END_REF]. The intertidal zone of NC is the habitat of more than 20 mangrove species. Floristic diversity of mangrove forests is divided into monospecific stands depending on multiple factors such as frequency of tidal immersion and porewater salinity [START_REF] Duke | Factors influencing biodiversity and distributional gradients in mangroves[END_REF]. On the West coast of the main island, three distinct monospecific stands are observed depending on the elevation.
On the seashore, Rhizophora spp. cover 50% of mangrove area with porewater salinity ranging from 5 to 40 g L -1 . A. marina covers >15% of mangrove area and develops higher in the intertidal zone with salinity between 35 and 70 g L -1 . Eventually, salt-flat is the highest topographical stand of vegetation composed mainly of Salicornia, and flourishes on soil with salinity ranging from 60 to 100 g L -1 [START_REF] Marchand | Typologies et biodiversité des mangroves de Nouvelle-Calédonie[END_REF][START_REF] Marchand | Trace metal geochemistry in mangrove sediments and their transfer to mangrove plants (New Caledonia)[END_REF].
On the main island, ultramafic soil represents a third of the territory with alteration profiles particularly rich in Ni [START_REF] Dalvi | The past and the future of nickel laterites[END_REF][START_REF] Nicholson | Geochemistry and age of the Nouméa Basin lavas, New Caledonia: Evidence for Cretaceous subduction beneath the eastern Gondwana margin[END_REF]. Open-cast Ni mines are exploited all along the central mountain chain. Ultramafic soils, including those open-cast mines, are eroded during rainfall events, and part of the eroded material flows into nearby rivers. Two study sites were therefore chosen. The first one was chosen for the lateritic inputs from its watershed and the second one for the absence of known lateritic inputs.
The first study site is a mangrove located at the mouth of the Dumbéa River (22°172.797'S, 166°429.493'E), in the Gadji Bay (Fig. 1). 9 km upstream the river is an old Ni open-cast mine (22˚143.293'S, 166˚491.718'E) (Fig. 1) exploited from the 19 th to mid-20 th century but deserted since then [START_REF] Marchand | The partitioning of transitional metals (Fe, Mn, Ni, Cr) in mangrove sediments downstream of a ferralitized ultramafic watershed (New Caledonia)[END_REF]. The first site is named "ultrabasic site" thereafter. The second site is a mangrove located in the Apogoti bay, 3.5 km south of the mouth of the Dumbéa River (22°202.072'S, 166°439.939'E) (Fig. 1). It is named thereafter "non-ultrabasic site" since it is a volcano-sedimentary soil mainly made of quartz and plagioclase and its watershed does not include the Dumbéa River or ultramafic soils (Service de la Géologie de Nouvelle-Calédonie, 2016). The hydrography of the Gadji Bay is rather south to north and therefore the effluent from the Dumbéa River does not flow into the Apogoti Bay [START_REF] Douillet | A numerical model for fine suspended sediment transport in the southwest lagoon of New Caledonia[END_REF]. At both sites, the monospecific stands identified are homogeneous in terms of tree density and height.
Sampling and processing
Samples were collected in June 2020, during the short dry season. On both sites, stands of A. marina and R. stylosa were identified (Fig. 1). For each species, triplicates of 30 cm long soil cores were collected with an Eijkelkamp gouge auger about 20 m apart from each other (Fig. 1). Soil cores were cut along depth with a wooden knife into 5 sections of 6 cm each. This sampling strategy was chosen because we are studying the transfer of TM to the root system that develops in the first 30 centimeters, and smaller depth increments probably would not give further information [START_REF] Kathiresan | Biology of mangroves and mangrove ecosystems[END_REF]. The pH and redox potential (Eh) were immediately measured using a pH meter (pH3110 -WTW). A glass electrode (SENTIX -Xylem Analytics) was used to measure the pH, while a combined Pt and Ag/AgCl electrode (SENTIX -Xylem Analytics) was used to measure the Eh; both electrodes were calibrated with standards prior to measurements. The soil sections were then placed into tightly closed plastic bags. To extract porewater from the soil, a rhizon (Rhizon SMS -10 cm, OD 2.5 mm -Rhizosphere) was inserted into each soil section to which a 12 mL syringe was connected. The syringe was kept fully pulled with an adapted wooden block. The plastic bags with the soil and the attached syringes were immediately placed into a cooler (~4 °C) until processed at the laboratory less than 6 h after sampling. Within the same area where the soil cores were collected, coarse roots of R. stylosa were chopped with a saw, keeping a segment out of the soil and a segment immersed in the soil. For A. marina, pneumatophores were gently torn from the main roots. All biotic samples were immediately placed into plastic bags and kept in a cooler (~4 °C) until processed at the laboratory less than 6 h after sampling.
Upon arrival at the laboratory, the porewaters were collected and salinity was measured using a refractometer (ATC). Porewater samples were filtered at 0.45 µm and 2 drops of 70% HNO3 were added before storage at 4 °C. The soil samples were tightly closed and kept in the freezer at -20 °C. Triplicates of pneumatophores from both sites were kept fresh for Fe plaque extraction (see 2.3.3), but the rest of the biotic samples were also placed in the freezer. Frozen soil samples were lyophilized for 72 h (FreeZone -LABCONCO) before sieving at 2 mm and then ground with a ball mill (FRITSCH). The frozen roots and pneumatophores were washed with distilled water. Transversal sections were obtained using a proxxon (Dremmel 3000, SpeedClic SC544) and dried in a heat chamber at 40 °C until reaching constant mass. One dried section per root and per pneumatophore was kept intact in a desiccator for scanning electron microscopy (SEM). The other dried sections were ground using a cutting mill (POLYMIX -px-mfc90d) for TM analysis.
Analysis
Reagents
All reagents were analytical grade or better. Sodium bicarbonate and ammonium fluoride were obtained from Sigma Aldrich. Tri-sodium citrate dihydrate was obtained from AnalaR NORMAPUR, VWR. Sodium dithionite was obtained from Panreac. Concentrated nitric acid (70%) was obtained from Ajax Finechem.
Multi-element (Ca, Co, Cr, Cu, Fe, K, Mg, Mn, Ni, Na, Zn) standard solution for ICP (1000 mg L -1 in HNO3 2%) was obtained from CPAChem and Pb standard solution for ICP (1000 mg L -1 in HNO3 2%) from Perkin-Elmer. Certified reference materials (ISE 1729 sample ID 910 "Clay Soil" and IPE 1804 sample ID 198 "Banana") were obtained from the Wageningen Evaluating Programmes for Analytical Laboratories.
Iron plaque extraction
Triplicates of fresh pneumatophores from both sites were washed with distilled water and immersed sections were obtained using a proxxon. In order to extract the Fe plaque from the root surface, a treatment with a solution of dithionite-citrate-bicarbonate (DCB) established by [START_REF] Taylor | Use of the DCB technique for extraction of hydrous iron oxides from roots of wetland plants[END_REF] and modified by [START_REF] Lin | Seedlings influenced by sulfur and iron amendments[END_REF] was used. Briefly, 0.5 g of pneumatophore was immersed in 20 mL of the DCB solution (0.03 M Na3C6H5O7•2H2O, 0.125 M NaHCO3, and 0.144 M Na2S2O4 in MilliQ water) for 3 h at room temperature. Samples were then rinsed with MilliQ water to obtain 25 mL of final volume. The extract was filtered at 0.45 µm and kept at 4 °C until analysis. Treated samples were dried in a heat chamber at 40 °C until constant mass, then ground using a cutting mill.
Trace metals extraction
Acid extraction elaborated by Bourgeois et al. (2019a) was used to extract TM from soil and biota.
Briefly, 100 mg of dried sample were weighed in a 15 mL polypropylene tube and 1 mL of 10% FNH4:HNO3 concentrated (w/v) for the soil samples and 1% FNH4:HNO3 concentrated (w/v) for biotic samples was added. The sample was vortexed and left at room temperature overnight. The sample was then heated in a sand bath for 6 h at 100 °C. Once cooled down, the volume was adjusted to 10 mL with MilliQ water and the sample was vortexed. After centrifugation at 3,000 rpm for 5 min, the supernatant was transferred to a new tube and kept at 4 °C until analysis. For quality control, certified materials (ISE sample ID 910 and IPE sample ID 198) were also extracted and analyzed. Results can be exploited but should be interpreted with caution. In biotic extracts, the absolute value of the z-score is below 2 for all TM except Co, which has a score between 2 and 3 (SM 1). Due to matrix interferences, Cu and Pb were not measured in the DCB extracts.
Trace metals analysis
Total organic carbon
The percentage of total organic carbon (TOC) in the soil was obtained via a TOC analyzer (TOC-LCPH-SSM500A -Shimadzu Corporation).
Scanning electron microscopy (SEM)
Semi-quantitative analysis of dried transversal sections of roots and pneumatophores was performed by Scanning Electron Microscopy (SEM) using a JEOL JSM-IT 300 LV apparatus coupled with an Energy Dispersive Spectroscopy (EDX) Oxford CAM 80 device at the Electronic Microscopy Platform (P2M) at the University of New Caledonia. Before analysis, because of the electrically insulating nature of the samples, they were coated with a 4 nm Pt layer using a Leica EM ac600 vacuum evaporator to avoid electron accumulation on the surface. Coated biotic sections and dried ground soil samples were then glued to aluminum stubs with conductive carbon tape before observation.
Mineralogical analysis
Mineralogical composition of soil samples was determined by X-ray powder diffraction (XRD) (PANalytical -AERIS XRD Diffractometer) with a Co source at the ISEA laboratory. A rock sample originated from the watershed of the non-ultrabasic site and sampled nearby the mangrove site was also analyzed by XRD. Scans were taken between 5 and 80 °2θ with a generator power of 600 W. Spectra were treated and analyzed with the High Score software.
Data analysis
Iron plaque calculations
Iron plaque concentration was calculated using the following equation [START_REF] Lin | Seedlings influenced by sulfur and iron amendments[END_REF]:
[Fe plaque] = 0.1591 × m_Fe,DCB (mg) / m_root (kg)    (1)
where m_Fe,DCB is the mass of Fe recovered in the DCB extract and m_root is the dry mass of the treated pneumatophore section.
To obtain the mass percentages of TM retained in the Fe plaque, the following equation was used:
%TM in plaque = m_TM,plaque (mg) / [m_TM,plaque (mg) + m_TM,root (mg)] × 100    (2)
where m_TM,plaque is the mass of the TM recovered in the DCB extract (Fe plaque fraction) and m_TM,root is the mass of the TM remaining in the root tissue.
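As a minimal illustration, the two calculations can be written as the following Python sketch. The variable names and numerical inputs are hypothetical placeholders introduced here for clarity; the sketch assumes eq. (1) relates the Fe mass recovered in the DCB extract to the dry mass of the treated root section.

```python
# Hedged sketch of eqs. (1) and (2); all variable names and numbers are placeholders.
def fe_plaque_concentration(m_fe_dcb_mg, m_root_kg):
    # Eq. (1): Fe plaque concentration (mg kg-1) from the Fe mass measured in the
    # DCB extract (mg) and the dry mass of the treated pneumatophore section (kg).
    return 0.1591 * m_fe_dcb_mg / m_root_kg

def percent_tm_in_plaque(m_tm_plaque_mg, m_tm_root_mg):
    # Eq. (2): mass percentage of a trace metal immobilized in the Fe plaque,
    # relative to the total amount found in the plaque plus the root tissue.
    return 100 * m_tm_plaque_mg / (m_tm_plaque_mg + m_tm_root_mg)

# Example with arbitrary values (0.5 g of pneumatophore, as in the DCB protocol):
print(fe_plaque_concentration(m_fe_dcb_mg=1.2, m_root_kg=0.0005))   # mg kg-1
print(percent_tm_in_plaque(m_tm_plaque_mg=0.9, m_tm_root_mg=0.1))   # ~90 %
```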
Bioconcentration factor
The bioconcentration factor (BCF) was calculated by dividing the concentration of TM in roots by the concentration of TM in the soil in the section between 6 and 12 cm deep that corresponds to the depth at which roots were collected. The collection of roots between 6 and 12 cm validates our sampling strategy of 30 cm soil cores.
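A similarly hedged sketch of the BCF calculation described above; the concentrations used are arbitrary placeholder values, not data from Table 1.

```python
# Hedged sketch of the bioconcentration factor described above.
def bcf(c_root, c_soil_6_12cm):
    # BCF = TM concentration in roots / TM concentration in the 6-12 cm soil layer,
    # both expressed in the same units (e.g. mg kg-1 dry weight).
    return c_root / c_soil_6_12cm

print(bcf(c_root=150.0, c_soil_6_12cm=1000.0))   # 0.15, placeholder values
```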
The concentration of dissolved Fe along the vertical gradient is positively correlated with pH (0.66) and negatively correlated with Eh (-0.91) under R. stylosa at the ultrabasic site (SM 2).
For both mangrove sites and in the watershed rock of the non-ultrabasic site, the main identified mineralogical phase is quartz (SiO2) (Fig. 3). Plagioclase (NaAlSi3O8 / CaAl2Si2O8) and illite (KAl4Si2O12)
were identified for the non-ultrabasic site and the watershed rock of this site (Fig. 3c-e). For this rock sample, vermiculite ((Mg,Ca)0.7(Mg,Fe,Al)6(Al,Si)8O22(OH)4) and enstatite ((Mg,Fe)SiO3), two Fe-bearing phases, were also identified (Fig. 3e), while talc (Mg3Si4O10(OH)2) and chlorite ((Fe,Mg,Al)6(Si,Al)4O10(OH)8) were identified in the mangrove soil of the non-ultrabasic site (Fig. 3c,d). Neoformed pyrite crystals (FeS2)
were detected at both mangrove sites (Fig. 3a-d). For the ultrabasic site, three Fe-bearing mineralogical phases were identified: the oxyhydroxide goethite (α-FeOOH) and the phyllosilicates antigorite ((Mg,Fe)3Si2O5(OH)4) and clinochlore ((Mg,Fe)5Al(Si3Al)O10(OH)8).
For the non-ultrabasic site, all identified mineralogical phases are present along the cores (Fig. 3c,d).
However, for the ultrabasic site, goethite and clinochlore are mainly detected close to the surface and tend to vanish as depth increases. Pyrite is present in the sections lacking those two Fe-bearing phases (Fig. 3a,b).
Trace metals distribution in soil
For each soil core, all TM except for Pb and Zn are significantly more concentrated in the soil at the ultrabasic site (SM 3). As shown in the supplementary materials (SM 4), no significant differences in TM concentrations in porewaters between species have been detected.
Sections of roots and pneumatophores were collected between 6 and 12 cm deep and the concentrations of TM in the soil for this section are presented in Table 1. In the soil between 6 and 12 cm deep, all TM are significantly more concentrated at the ultrabasic site. Iron, Ni, and Cr are the three most concentrated TM in the soil at both sites with mean concentrations of 176,991 mg kg -1 , 3,499 mg kg -1 , and 2,294 mg kg -1 , respectively, at the ultrabasic site, and 38,828 mg kg -1 , 560 mg kg -1 , and 460 mg kg -1 , respectively, at the non-ultrabasic site. Under both species and at both sites, Hg is the least concentrated TM, with mean values of 0.063 mg kg -1 at the ultrabasic site and 0.046 mg kg -1 at the non-ultrabasic site, followed by Pb with mean values of 20 mg kg -1 at the ultrabasic site and 11 mg kg -1 at the non-ultrabasic site.
Bioaccumulation of trace metals from soil to roots
Iron is the most concentrated TM in the roots of both species (16,375 mg kg -1 for A. marina and 2,928 mg kg -1 for R. stylosa) with Mn (301 mg kg -1 for A. marina and 82 mg kg -1 for R. stylosa) and Ni (164 mg kg -1 for A. marina and 102 mg kg -1 for R. stylosa) the 2 nd and 3 rd most concentrated TM (Table 1).
However, Fe and Ni, as well as Co, Cr, and Hg have low BCF (0.16, 0.11, 0.16, 0.07, and 0.11, respectively) (Fig. 4a). Manganese, Cu, and Zn have higher BCF (0.93, 0.42, 0.26), and Pb is the TM with the highest BCF (1.60) (Fig. 4a).
The ratio of TM concentrations in porewaters over concentrations in soil samples (dissolved/solid ratio) are low for Fe (0.14), Ni (0.27), and Cr (0.08), while high for Mn (3.34), Cu (1.86), and Zn (3.03) (Fig. 4b).
The dissolved/solid ratio of Co is high (2.81) while that of Pb is low (0.40) (Fig. 4b).
Chromium, Cu, Fe, Pb, and Zn are significantly more concentrated in the pneumatophores of A. marina than in the roots of R. stylosa (Table 1). Cobalt, Hg, Ni, and Mn are also more concentrated in pneumatophores but not significantly (p-value > 0.05) (Table 1). The BCF of A. marina is always greater than that of R. stylosa (Fig. 4a), which is not always the case regarding TM dissolved/solid ratios (Fig. 4b).
Iron plaque and associated immobilized trace metals
Variabilities are high but the concentration of the Fe plaque at the surface of A. marina's pneumatophores is higher at the ultrabasic site (4,132 mg kg -1 ) compared to the non-ultrabasic site (1,863 mg kg -1 ) (Fig. 5a). However, the percentages of TM retained in the plaque are not site-dependent, with slight differences between the ultrabasic site and the non-ultrabasic site (Fig. 5b). Mass percentages of TM contained in the Fe plaque vary between 47% and 94% (Fig. 5b). Nickel is the TM least immobilized by the Fe plaque, with a mean mass percentage of 51%, while Co and Mn are the most immobilized, with mean values of 85% and 92%, respectively (Fig. 5b).
Scanning electron microscopy -energy dispersive X-ray spectroscopy
Roots of R. stylosa and pneumatophores of A. marina were analyzed by SEM. Figure 6 shows a cross section of a pneumatophore of A. marina, split into two images, which allows scanning of the root from the vascular bundle to the epidermis. EDX elemental spectra associated with the four different parts of the root are also displayed. Of all the roots observed, Fe is the only TM that has been detected by the EDX apparatus in biotic samples, and it seems to be localized in the epidermis. The vascular bundle, inner, and outer cortical cells seem to be free of TM according to SEM-EDX analysis. Interestingly, aggregates of framboidal pyrite, like those seen in the soil, were observed in the epidermis of some pneumatophores of A. marina (Fig. 7a)
and were identified via elemental mapping using EDX (Fig. 7b).
Discussion
Influence of watersheds inputs on mangrove soil conditions
At the non-ultrabasic site, the highly negative Eh values associated to the low Fe concentrations in the dissolved phase indicate that the soil is anoxic. The presence of pyrite under both species but not in the watershed rock implies that sulfate reduction processes occur in the soil. Conversely, at the ultrabasic site, less negative Eh values and higher concentrations of dissolved Fe in the upper soil layers (Fig. 2a,b) suggest suboxic conditions, which most probably result from continuous goethite inputs from upstream lateritic soils allowing Fe reduction. The latter process and goethite dissolution led to the release of Fe in porewaters, which can be further precipitated as pyrite during sulfate-reduction processes in the deeper anoxic layers [START_REF] Marchand | Distribution and characteristics of dissolved organic matter in mangrove sediment pore waters along the coastline of French Guiana[END_REF][START_REF] Kristensen | Biogeochemical cycles: global approaches and perspectives[END_REF].
Larger organic carbon accumulation was measured at the non-ultrabasic site. Considering that mangrove characteristics (zonation, tree density, height) were similar between the two sites, we suggest that different mineralization processes and rates may explain this trend. In fact, the suboxic conditions that prevail at the ultrabasic site due to the presence of a large amount of goethite may result in more efficient decay processes than at the anoxic non-ultrabasic site.
At both sites, pH is significantly more acidic under R. stylosa than under A. marina (Fig. 2). In mangrove soil, OM mineralization and sulfide oxidation are the two main processes that result in H + release and thus in soil acidification [START_REF] Clark | Redox stratification and heavy metal partitioning in Avicennia-dominated mangrove sediments: a geochemical model[END_REF]. R. stylosa is closer to the shore and more influenced by daily tidal fluctuations. The input of electron acceptors for OM mineralization or sulfide oxidation is greater under this species, leading to a more acidic soil (Bourgeois et al., 2019a). The pH is not significantly different between sites under A. marina and is significantly more acidic at the non-ultrabasic site under R. stylosa even though we suggest mineralization rates are slower at this latter site (Fig. 2). The soil anoxia at the non-ultrabasic site may indicate a greater amount of reduced sulfide in its soil compared to the suboxic ultrabasic site. This leads to more oxidation reactions of sulfides under R. stylosa, which contributes to the soil acidification [START_REF] Noël | EXAFS analysis of iron cycling in mangrove sediments downstream a lateritized ultramafic watershed (Vavouto Bay, New Caledonia)[END_REF].
The geology of the watershed affects the mineralogy of mangrove soil and thus the conditions (TOC, Eh, pH…) as well as the elemental composition. Chemical interactions between different ligands in the soil and TM determine the speciation they take in the system. Those forms control the mobility, the bioavailability, and the toxicity of these TM [START_REF] Chakraborty | Partitioning of metals in different binding phases of tropical estuarine sediments: importance of metal chemistry[END_REF].
Concentrations of trace metals in soil driven by watersheds inputs
At the ultrabasic site, the higher mean concentrations of the analyzed TM in the soil can be explained by the difference in the geology of the watersheds (Table 1). The ultrabasic site's watershed is composed mostly of laterites rich in goethite and other phases that contain TM such as antigorite (Fig. 3a,b) [START_REF] Baltzer | La sédimentation et la diagenèse précoce sur les côtes à mangrove en aval des massifs ultrabasiques en Nouvelle-Calédonie[END_REF][START_REF] Vithanage | Occurrence and cycling of trace elements in ultramafic soils and their impacts on human health: A critical review[END_REF]. Furthermore, the erosion and leaching of the old Ni mine located upstream of the Dumbéa River contributes to the input of Ni and other TM such as Co, Cr, and Mn [START_REF] Fandeur | XANES Evidence for oxidation of Cr(III) to Cr(VI) by Mn-oxides in a lateritic regolith developed on serpentinized ultramafic rocks of New Caledonia[END_REF][START_REF] Metian | Trace element bioaccumulation in reef fish from New Caledonia: Influence of trophic groups and risk assessment for consumers[END_REF][START_REF] Gwenzi | Occurrence, behaviour, and human exposure pathways and health risks of toxic geogenic contaminants in serpentinitic ultramafic geological environments (SUGEs): A medical geology perspective[END_REF]. At the non-ultrabasic site, the volcano-sedimentary soil mainly composed of quartz and plagioclase (Fig. 3c,d) contributes less to the inputs of the analyzed TM to the mangrove soil. Nevertheless, two Fe-bearing phases were identified in the watershed rock: vermiculite and enstatite (Fig. 3e). Enstatite was previously identified in the Dumbéa watershed and can also be a Ni-bearing phase [START_REF] Trescases | L'évolution géochimique supergène des roches ultrabasiques en zone tropicale: formation des gisements nickélifères de Nouvelle-Calédonie[END_REF][START_REF] Baltzer | La sédimentation et la diagenèse précoce sur les côtes à mangrove en aval des massifs ultrabasiques en Nouvelle-Calédonie[END_REF]. In the conditions that prevail at our study sites, moderate to heavy rainfall and slightly acidic soil with a daily influx of basic seawater, enstatite is not stable [START_REF] Oelkers | An experimental study of enstatite dissolution rates as a function of pH, temperature, and aqueous Mg and Si concentration, and the mechanism of pyroxene/pyroxenoid dissolution[END_REF]Schott, 2001, Halder and[START_REF] Halder | Far from equilibrium enstatite dissolution rates in alkaline solutions at earth surface conditions[END_REF]. The alteration products of enstatite are talc and chlorite, which were identified in the mangrove soils, with the latter potentially bearing Fe and Ni as well (Fig. 3c,d) [START_REF] Trescases | L'évolution géochimique supergène des roches ultrabasiques en zone tropicale: formation des gisements nickélifères de Nouvelle-Calédonie[END_REF]. Illite can also potentially contribute to Fe input or storage [START_REF] Xie | Soluble Fe release from iron-bearing clay mineral particles in acid environment and their oxidative potential[END_REF].
TM concentrations in the soil of the ultrabasic site are close to those measured by Marchand et al.
(2016) on the same site in 2012, with Fe, Ni, and Cr the three most concentrated in descending order. Iron and Ni concentrations in soil are much higher than the world average. Here, Fe accounts for as much as 18% of the soil mass, whereas average values in mangroves vary between 0.02 and 6% with a mean value around 1-2% [START_REF] Alagarsamy | Distribution and seasonal variation of trace metals in surface sediments of the Mandovi estuary, west coast of India[END_REF][START_REF] Chatterjee | Distribution and possible source of trace elements in the sediment cores of a tropical macrotidal estuary and their ecotoxicological significance[END_REF][START_REF] Kruitwagen | Status of pollution in mangrove ecosystems along the coast of Tanzania[END_REF]. In the present study, Ni values as high as 4,000 mg kg -1 were measured (Table 1) downstream of the Dumbéa River, while literature reviews report maximum Ni values in mangrove soils of 200 mg kg -1 [START_REF] Bayen | Occurrence, bioavailability and toxic effects of trace metals and organic contaminants in mangrove ecosystems: A review[END_REF]. Even though the concentrations of TM in the soil at the non-ultrabasic site are lower than those of the ultrabasic site, relatively high values of Fe and Ni were still measured when compared to the world average: 680 mg kg -1 for Ni and 4% of the soil mass for Fe (Table 1). This can be explained by the high geochemical background of TM in New-Caledonian soils and marine sediments, mainly Fe, Ni, Cr, and Mn [START_REF] Beliaeff | Guide pour le suivi de la qualité du milieu marin en Nouvelle-Calédonie[END_REF]. Natural or anthropogenic erosion of nearby soils, in addition to uncontrolled and man-made effluents driven by the currents, contributes to TM accumulation in the soil of the non-ultrabasic site. This mangrove is located in an urbanized area and current models show possible inputs from urban discharges as well as an influence from a containment site located 2.5 km south-east of the studied mangrove [START_REF] Douillet | Atlas hydrodynamique du lagon Sud-Ouest de Nouvelle-Calédonie[END_REF].
TM concentrations in the soil are therefore dependent on the watershed inputs, with the leaching of lateritized soil being the major factor causing the ultrabasic site to be richer in TM than the other site. We hypothesize that the bioaccumulation of TM in the roots of both studied mangrove species is partly dependent on factors other than TM concentrations in the soil, such as the nature of the TM (essential or non-essential).
Bioaccumulation factors differ between trace metals and are independent of the watershed inputs
Iron and Ni concentrations in mangrove roots at both sites are relatively high (Table 1). Iron concentrations obtained in tissues for the ultrabasic site are as high as values reported in Marchand et al.
(2016) in the same mangrove around 10,000 mg kg -1 . Measured Fe concentrations at both sites are much higher than those measured in roots of mangroves in India with a maximum value of 938 mg kg -1 [START_REF] Chowdhury | Accumulation of trace metals by mangrove plants in Indian sundarban wetland: Prospects for phytoremediation[END_REF]. We measured Ni mean values of 164 mg kg -1 in A. marina's pneumatophores, which is greater than the maximum values reported in literature of 100 mg kg -1 [START_REF] Lewis | Fate and effects of anthropogenic chemicals in mangrove ecosystems: A review[END_REF][START_REF] Bayen | Occurrence, bioavailability and toxic effects of trace metals and organic contaminants in mangrove ecosystems: A review[END_REF]. However, the BCF of these two TM are low (Fig. 4a) with the dissolved fractions minor in comparison to the solid fractions in the soil (Fig. 4b). [START_REF] Chowdhury | Bioremoval of trace metals from rhizosediment by mangrove plants in Indian Sundarban Wetland[END_REF] estimated that 95% of Fe present in soil of mangrove forests is not bioavailable for the biota since most of it is associated to sulfides.
Therefore, even though Fe and Ni are essential metals, their absorption by the plant may be limited due to a lack of ions in the dissolved phase, which is also the case for the non-essential metal Cr that is usually one of the least bioavailable TM in the soil as it was observed here (Fig. 4) [START_REF] Zhu | Factors influencing the heavy metal bioaccessibility in soils were site dependent from different geographical locations[END_REF]. In the previous study done at the ultrabasic site, Fe and Ni were the most concentrated TM in roots and leaves of A. marina and R.
stylosa but with low bioconcentration and translocation factors as well, partly due to biological barriers that were not explored in the study [START_REF] Marchand | Trace metal geochemistry in mangrove sediments and their transfer to mangrove plants (New Caledonia)[END_REF]. The high concentrations measured in the roots, especially Fe and Ni, despite low BCF, are explained by the very high concentrations of those TM in the soil at both sites as exposed previously in the discussion.
A low BCF was also calculated for Co at both sites but with a high dissolved/solid ratio in soil (Fig. 4), confirming the observation made by [START_REF] Marchand | Trace metal geochemistry in mangrove sediments and their transfer to mangrove plants (New Caledonia)[END_REF] at the ultrabasic site. A higher dissolved fraction potentially indicates more ions in the bioavailable form. This result indicates that the uptake of Co by mangroves must be regulated at the biological level rather than at the physico-chemical level. Cobalt is an essential metal since it is a component of many enzymes and co-enzymes and particularly participates to plant growth and CO2 assimilation [START_REF] Nagajyoti | Heavy metals, occurrence and toxicity for plants: a review[END_REF][START_REF] Reece | Plant structure, growth, and development[END_REF]. However, some publications acknowledge Co as a non-essential metal for mangroves (Bourgeois et al., 2019b[START_REF] Nath | Assessment of biotic response to heavy metal contamination in Avicennia marina mangrove ecosystems in Sydney Estuary, Australia[END_REF]. It is therefore possible that only small quantities of Co are required by the plant, and for the rest of it, mangroves regulate Co uptake through biological mechanisms such as regulation at the root surface with the epidermis acting as a barrier [START_REF] Macfarlane | Accumulation and partitioning of heavy metals in mangroves: A synthesis of field-based studies[END_REF].
The non-essential metal Pb, weakly present in the dissolved phase compared to the solid phase in the soil (Fig. 4b), is highly bioaccumulated in the root systems at both sites (Fig. 4a). The maximum value of 39 mg kg -1 is still much less than the alarming values obtained in roots of the same species worldwide reaching up to more than 200 mg kg -1 [START_REF] Macfarlane | Accumulation and partitioning of heavy metals in mangroves: A synthesis of field-based studies[END_REF][START_REF] Lewis | Fate and effects of anthropogenic chemicals in mangrove ecosystems: A review[END_REF][START_REF] Bayen | Occurrence, bioavailability and toxic effects of trace metals and organic contaminants in mangrove ecosystems: A review[END_REF]. Lead is usually easily mobilized in the soil [START_REF] Zhu | Factors influencing the heavy metal bioaccessibility in soils were site dependent from different geographical locations[END_REF][START_REF] Yan | Accumulation and tolerance of mangroves to heavy metals: a review[END_REF], and has a high chelation affinity with roots cell walls (associated to carbohydrates) [START_REF] Turner | The accumulation of zinc by subcellular fractions of roots of Agrostis Tenuis sibth. in relation to zinc tolerance[END_REF]Marshall, 1972, Verkleij and[START_REF] Verkleij | Mechanisms of metal tolerance in higher plants, in, Heavy metal tolerance in plants -evolutionary aspects[END_REF]. In A.
marina, it was observed that Pb is largely accumulated as immobile, bound to cell walls, and sequestered in the roots mainly in the epidermal layers (MacFarlane andBurchett, 2000, 2002). This implies that even though not much Pb is present in the dissolved phase, the plant absorbs a large proportion of it. For A.
marina, BCF values greater than 1 were also described for Pb in literature [START_REF] Nath | Assessment of biotic response to heavy metal contamination in Avicennia marina mangrove ecosystems in Sydney Estuary, Australia[END_REF].
Regarding the essential metals Cu, Mn, and Zn, their uptakes by mangroves may not be regulated.
Relatively high concentrations of those TM in the dissolved phase compared to the solid phase (Fig. 4b) imply that they are potentially more bioavailable and therefore are better absorbed by the roots of both species (high to moderate BCF) (Fig. 4a). The BCF values obtained for Mn in A. marina are close to those reported in the literature at both sites [START_REF] Nath | Assessment of biotic response to heavy metal contamination in Avicennia marina mangrove ecosystems in Sydney Estuary, Australia[END_REF][START_REF] Yadav | Effect of heavy metals on the carbon and nitrogen ratio in Avicennia marina from polluted and unpolluted regions[END_REF]. [START_REF] Marchand | Trace metal geochemistry in mangrove sediments and their transfer to mangrove plants (New Caledonia)[END_REF] also reported a greater mobility and bioavailability of Zn and Cu compared to Fe and Ni at the ultrabasic site. Copper is important in ATP synthesis and plays an essential role in the respiratory electron transport chain [START_REF] Demirevska-Kepova | Biochemical changes in barley plants after excessive supply of copper and manganese[END_REF]. Zinc is the second most abundant TM in plants after Fe and the only metal present in all six enzyme classes [START_REF] Broadley | Zinc in plants[END_REF]. Copper and Zn also have a high chelation affinity with the cell walls of mangrove roots, with Cu preferentially chelated with amino acids and Zn with organic acids [START_REF] Verkleij | Mechanisms of metal tolerance in higher plants, in, Heavy metal tolerance in plants -evolutionary aspects[END_REF]Schat, 1990, MacFarlane et al., 2007).
High concentrations of some TM in roots, particularly Fe and Ni, are therefore due to high concentrations of TM in the soil at both sites. TM are not bioconcentrated in the roots to the same extent, with Pb the most transferred from soil to roots and Cr the least. The BCF of TM is not site-dependent, indicating that the watershed inputs do not influence the uptake of TM by the plants. TM mobility and biological and/or physiological factors, however, do affect TM transfer from soil to roots.
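As a minimal illustration of the quantities discussed above, the sketch below computes a bioconcentration factor and a dissolved/solid mobility ratio, assuming that the BCF of Fig. 4a is the root-to-soil concentration ratio and that the dissolved/solid ratio of Fig. 4b is the porewater concentration divided by the soil concentration; the numerical values are hypothetical and are not the study's data.

```python
# Illustrative sketch only (not the study's data pipeline).

def bcf(c_root_mg_kg, c_soil_mg_kg):
    """Bioconcentration factor: TM concentration in roots / TM concentration in soil."""
    return c_root_mg_kg / c_soil_mg_kg

def dissolved_solid_ratio(c_porewater_ug_L, c_soil_mg_kg):
    """Mobility proxy: dissolved (porewater, ug L-1) over solid (soil, mg kg-1) fraction."""
    return c_porewater_ug_L / c_soil_mg_kg

# Hypothetical values for two metals (for illustration only):
examples = {
    "Pb": {"root": 39.0, "soil": 30.0, "porewater": 0.5},
    "Cr": {"root": 20.0, "soil": 400.0, "porewater": 0.2},
}

for metal, v in examples.items():
    print(metal,
          "BCF =", round(bcf(v["root"], v["soil"]), 2),
          "dissolved/solid =", round(dissolved_solid_ratio(v["porewater"], v["soil"]), 4))
```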
Trace metal bioaccumulation in roots is species-dependent
In this study, the results evidenced a greater TM uptake in the roots of A. marina compared to R. stylosa (Table 1). One explanation is that the roots of R. stylosa are weakly permeable because of a thickening mechanism and exodermis cell lignification in response to metallic stress, which act as a physical barrier to TM uptake [START_REF] Cheng | Interactions among Fe2+, S2-, and Zn2+ tolerance, root anatomy, and radial oxygen loss in mangrove plants[END_REF][START_REF] Cheng | Metal (Pb, Zn and Cu) uptake and tolerance by mangroves in relation to root anatomy and lignification/suberization[END_REF]. A. marina, with more permeable pneumatophores, is more subject to TM absorption through the root system. We focused on pneumatophores rather than on main roots. One hypothesis is that TM are accumulated in pneumatophores as a mechanism to limit metallic stress in the central root system [START_REF] Macfarlane | Accumulation and partitioning of heavy metals in mangroves: A synthesis of field-based studies[END_REF]. Since A. marina diffuses oxygen into its rhizosphere, oxidation of reduced TM can occur, which mobilizes the TM so that they become bioavailable for direct absorption by the plant [START_REF] Thakur | Plant-driven removal of heavy metals from soil: uptake, translocation, tolerance mechanism, challenges, and future perspectives[END_REF]. [START_REF] Macfarlane | Accumulation and partitioning of heavy metals in mangroves: A synthesis of field-based studies[END_REF] showed a trend of higher root BCF for species that are less frequently inundated, which results in less anoxic soil (i.e. Kandelia spp.), and for species with pneumatophores (i.e.
Avicennia spp.). The difference in BCF between the two species can also result from different metabolisms and therefore different physiological needs [START_REF] Yan | Accumulation and tolerance of mangroves to heavy metals: a review[END_REF]. In addition, A. marina is able to excrete some TM in excess through the salt glands on its leaves, as was observed for Cu and Zn [START_REF] Naidoo | Ecophysiological responses of the mangrove Avicennia marina to trace metal contamination, Flora -Morphology, Distribution[END_REF]. In the review by [START_REF] Macfarlane | Accumulation and partitioning of heavy metals in mangroves: A synthesis of field-based studies[END_REF] on TM accumulation in mangroves, a high variability of Cu, Pb, and Zn was reported for each species, but the mean BCF values across the reviewed studies confirm the higher capacity of A. marina to absorb TM through its roots.
The Rhizophora genus has already been characterized in the literature as a low TM-accumulating taxon [START_REF] Peters | Ecotoxicology of tropical marine ecosystems[END_REF]. Consequently, there are differences in TM bioconcentration between species due to the factors discussed above, and species use different mechanisms to limit metallic stress, such as the formation of Fe plaque in the case of A. marina.
Role of the iron plaque against metallic stress
The mean concentration of Fe plaque is two times higher on Avicennia's pneumatophores at the ultrabasic site. However, the high variability results in the absence of a significant difference (Fig. 5a). This variability can be explained by the fact that the Fe plaque was extracted from multiple pneumatophores in the same area; their age, their length, and the tree to which they belong can all influence the capacity for Fe plaque formation. The higher mean Fe plaque formation is consistent with the higher concentrations of Fe obtained in the solid and dissolved phases of the soil at the ultrabasic site. More Fe in the soil, if in reduced form, enables more reactions between Fe 2+ and O2 to form the plaque. Multiple studies have shown a positive relationship between the Fe plaque concentration on root surfaces and reduced Fe in the soil [START_REF] Christensen | Formation of root plaques and their influence on tissue phosphorus content in Lobelia dortmanna[END_REF][START_REF] Pi | Effects of wastewater discharge on formation of Fe plaque on root surface and radial oxygen loss of mangrove roots[END_REF], 2011). Pi et al.
(2011) even showed that Fe plaque formation does not level off at a threshold soil Fe concentration, since plaque formation continues until the mangrove's death. Sequential extractions are, however, required to determine the proportion of Fe in the reduced form and confirm what has been reported in the literature.
Iron plaque formation is extensive on the pneumatophores at both sites, since more than 40% of the Fe extracted from the pneumatophores comes from the Fe plaque. Available Fe 2+ ions are phytotoxic for mangroves as they can cause DNA damage and a reduction of photosynthesis [START_REF] Nagajyoti | Heavy metals, occurrence and toxicity for plants: a review[END_REF].
Through radial oxygen loss (ROL) via pneumatophores, the formation of Fe plaque is an efficient mechanism to reduce Fe 2+ concentrations and uptake through the root system [START_REF] Youssef | Anatomical adaptive strategies to flooding and rhizosphere oxidation in mangrove seedlings[END_REF].
Our results suggest that ROL is significant enough to ensure substantial Fe 2+ oxidation in a mangrove with large concentrations of dissolved Fe.
Cobalt, Cr, Mn, Ni and Zn are retained in significant quantities in the Fe plaque at both sites (> 47%) (Fig. 5b). Iron oxides and hydroxides have a strong TM adsorption capacity due to their large specific surface area (> 200 m² g -1 ) [START_REF] Lin | Seedlings influenced by sulfur and iron amendments[END_REF]. Several authors have reported the significant role of Fe plaque against metallic stress, mainly by trapping TM [START_REF] Taylor | Use of the DCB technique for extraction of hydrous iron oxides from roots of wetland plants[END_REF][START_REF] Machado | Trace metals in mangrove seedlings: role of iron plaque formation[END_REF][START_REF] Pi | Formation of iron plaque on mangrove roots receiving wastewater and its role in immobilization of wastewater-borne pollutants[END_REF][START_REF] Tripathi | Roles for root iron plaque in sequestration and uptake of heavy metals and metalloids in aquatic and wetland plants[END_REF]. The calculated percentages of TM immobilized in the plaque of A. marina are consistent with previous studies and show the significance of the Fe plaque against metallic stress for this species [START_REF] Machado | Trace metals in mangrove seedlings: role of iron plaque formation[END_REF][START_REF] Pi | Formation of iron plaque on mangrove roots receiving wastewater and its role in immobilization of wastewater-borne pollutants[END_REF][START_REF] Lin | Seedlings influenced by sulfur and iron amendments[END_REF]. For example, in Brazil, immobilization of more than 60% for Mn and of between 25 and 60% for Zn was observed on the root surface of an Avicennia species [START_REF] Fonseca | Biogeoquímica de metais pesados na rizosfera de plantas de um manguezal do Rio de Janeiro[END_REF]. In a study of the role of Fe plaque in three mangrove species (Bruguiera gymnorrhiza, Excoecaria agallocha, and Acanthus ilicifolius), the authors described a mean immobilization of TM in the following order: Mn > Ni > Zn > Cr [START_REF] Pi | Formation of iron plaque on mangrove roots receiving wastewater and its role in immobilization of wastewater-borne pollutants[END_REF]. These results differ slightly from what we observed here, since in our study Ni is the least trapped TM, although not significantly less so than Zn and Cr (Fig. 5b). Moreover, in this previous study by [START_REF] Pi | Formation of iron plaque on mangrove roots receiving wastewater and its role in immobilization of wastewater-borne pollutants[END_REF], the TM immobilization order was species-dependent, as multiple studies had shown previously [START_REF] Tripathi | Roles for root iron plaque in sequestration and uptake of heavy metals and metalloids in aquatic and wetland plants[END_REF]. In addition, that experiment was set up in a laboratory under controlled conditions that are not always representative of in situ conditions, which influences TM speciation.
More Fe plaque was measured on pneumatophores at the ultrabasic site; however, no significant difference between sites in the percentages of TM retained in the plaque was observed (Fig. 5b). Two groups of TM can be identified from this observation: one composed of Co, Mn, and Ni, and the other of Cr and Zn. Larger concentrations of Co, Mn, and Ni are retained in the plaque at the ultrabasic site, where there is more of these TM in the soil than at the non-ultrabasic site. There are also larger concentrations of these TM in the DCB-treated pneumatophores (pneumatophores without Fe plaque). In this case, a greater amount of Fe plaque means a greater amount of TM immobilized; however, in terms of percentage of immobilization (the capacity of the plaque to immobilize TM), it is equivalent at both sites. This scenario is consistent with previous findings that show a positive correlation between Fe plaque concentrations and TM immobilization [START_REF] Pi | Formation of iron plaque on mangrove roots receiving wastewater and its role in immobilization of wastewater-borne pollutants[END_REF][START_REF] Tripathi | Roles for root iron plaque in sequestration and uptake of heavy metals and metalloids in aquatic and wetland plants[END_REF]. For Cr and Zn, the concentrations of TM in the treated pneumatophores and in the DCB extract (Fe plaque) are similar at both sites. Therefore, there is no positive relationship between Fe plaque concentration and TM immobilization. This observation may indicate a threshold concentration of TM that can be retained in the plaque. Some studies have highlighted the ability of the Fe plaque of the rice species Oryza sativa to trap TM such as Cr and Cd from the environment without affecting uptake and translocation of the TM [START_REF] Liu | Influence of iron plaque on uptake and accumulation of Cd by rice (Oryza sativa L.) seedlings grown in soil[END_REF][START_REF] Hu | Influence of iron plaque on chromium accumulation and translocation in three rice (Oryza sativa L.) cultivars grown in solution culture[END_REF]. In the present study, this means that the Fe plaque concentration does not impact the amount of Cr and Zn that will be bioaccumulated in the pneumatophores. Fe plaque has never been previously studied in New Caledonian mangroves and further research must be conducted to confirm or disprove the results obtained.
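A small sketch of how the share of a TM immobilized in the Fe plaque can be estimated, assuming it is calculated as the DCB-extractable (plaque) amount divided by the total recovered from the pneumatophore (plaque plus DCB-treated tissue); the concentrations below are invented for illustration.

```python
# Sketch of the percent-immobilization calculation, under the assumption stated above.

def percent_in_plaque(c_plaque, c_treated_root):
    """Share of a TM held in the Fe plaque, in % of the total pneumatophore content."""
    return 100.0 * c_plaque / (c_plaque + c_treated_root)

# Hypothetical concentrations (mg per kg of dry pneumatophore):
print(round(percent_in_plaque(c_plaque=47.0, c_treated_root=3.0), 1))   # ~94 %, a Mn-like case
print(round(percent_in_plaque(c_plaque=10.0, c_treated_root=11.0), 1))  # ~48 %, near the 47 % floor
```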
Iron and pyrite accumulation within the root epidermis
Iron plaque formation is an efficient mechanism for A. marina to limit metallic stress by trapping TM and limiting their uptake by the roots, as discussed earlier. Nevertheless, a fraction of TM passes through this physical barrier and can accumulate in the roots or be transferred to other plant organs. The amount of TM in the roots depends on the TM considered, its concentration in the soil, its mobility, the redox conditions and the mangrove species. Using SEM-EDX analysis, we looked for possible zones of TM accumulation in the roots of both species. Fe was the only TM observed in the roots (Fig. 6). Iron is the most concentrated TM in roots, with mean values between 2,486 and 17,125 mg kg -1 , which is between 15 and more than 1,500 times greater than the other TM (except Hg) (Table 1). The detection limit of the apparatus likely prevents the observation of TM other than Fe.
Iron has only been detected in the epidermis of the roots of both A. marina and R. stylosa (Fig. 6). The epidermis is a barrier against physical damage and pathogens. It limits water loss and is the first step in the water and nutrient absorption path of the roots [START_REF] De Granville | Aperçu sur la structure des pneumatophores de deux espèces des sols hydromorphes en Guyane[END_REF][START_REF] Reece | Plant structure, growth, and development[END_REF]. Iron is preferentially accumulated in the root system, limiting transport to higher organs, which is why it is more concentrated on, or in, the epidermis rather than in the cortex or vascular cylinder [START_REF] Machado | Trace metals in mangrove seedlings: role of iron plaque formation[END_REF]. Free Fe is highly toxic for plants and is therefore rarely observed on its own in tissues. Iron, mostly in its reduced form Fe 2+ , is bound to chelators and chaperones or present in Fe-sequestering proteins such as ferritin and frataxin. Iron is essential for the synthesis of Fe-S clusters, which are fundamental in the structure of Fe-S proteins such as ferredoxins that intervene in electron transfer in many metabolic reactions [START_REF] Morrissey | Iron uptake and transport in plants: the good, the bad, and the ionome[END_REF]. With the EDX elemental maps of mangrove roots, we observed clusters of Fe linked to S in the epidermis of A. marina's pneumatophores (Fig. 7b). These clusters were identified as framboidal pyrite. The sphere of FeS2 visible in Figure 7b seems attached to the cell walls. A hypothesis is that pyrite formed within the root from organic sulfur. Plants naturally fix sulfides in multiple forms (carbonates, amino acids, esters, polysaccharides, …) that can then produce H2S, which precipitates within the vascular tissues if Fe is present [START_REF] Vallentyne | Isolation of pyrite spherules from recent sediments[END_REF][START_REF] Altschuler | Sulfur diagenesis in everglades peat and origin of pyrite in coal[END_REF]. A suggested scenario is that pyrite formation within the pneumatophores of A. marina is a mechanism to limit the transfer of Fe, and of other chalcophile TM, to higher organs. Pyrite crystals, framboidal and in lines, were previously observed in vascular channels of roots of Spartina alterniflora, a salt-marsh cordgrass of the East coast of the United States [START_REF] Giblin | Pyrite formation in marshes during early diagenesis[END_REF]. Framboids were also identified in the root vascular tissues of mangroves in the Everglades in Florida [START_REF] Altschuler | Sulfur diagenesis in everglades peat and origin of pyrite in coal[END_REF]. The explanation mentioned by [START_REF] Altschuler | Sulfur diagenesis in everglades peat and origin of pyrite in coal[END_REF] and [START_REF] Giblin | Pyrite formation in marshes during early diagenesis[END_REF] is that in-root pyrite formation occurs when the main source of sulfide within the soil is organic sulfide, which is not the case in intertidal ecosystems. To our knowledge, pyrite crystals within the root epidermis have never been reported in mangroves. Further research must be conducted on the occurrence of these FeS2 crystals and on their drivers, soil properties, seasons, and species.
Conclusion
Despite their geographical proximity, two mangroves located on the semi-arid West coast of New Caledonia differ greatly in the properties of their soil and in the distribution of TM because of their distinct watersheds. The soil of the mangrove located downstream of an old Ni mine, the ultrabasic site, was characterized by TM concentrations higher than the world average due to the lateritic soil leached and eroded upstream. In addition, inputs of Fe oxy/hydroxides allow a good renewal of electron acceptors and, consequently, the soil was characterized by suboxic conditions, which influences TM dynamics. The nature of the watershed should therefore be taken into account when studying geochemical processes and TM dynamics in mangroves. Although significant differences were observed in TM concentrations in the soil of the two mangroves, TM concentrations in the roots did not seem to be influenced by the watershed inputs, but rather by the species. We suggest that TM uptake by these two mangroves was mainly driven by plant metabolism and species-related biochemical mechanisms. Iron and Ni were among the most concentrated TM in the roots of both species but showed low BCF, possibly due to the lack of ions in the dissolved phase and to the excessively large concentrations in the soil, leading to potential uptake regulation by the plants. Uptake of TM by A. marina was greater than that of R. stylosa, potentially due to a more permeable root system. Finally, Fe plaque formation on the pneumatophores of A. marina may be correlated with the Fe concentration in the soil. The iron plaque plays an important role in limiting TM uptake by the plant, up to 94% for Mn. Pyrite crystals were observed for the first time via SEM-EDX in the epidermis of pneumatophores of A. marina. We hypothesize that these framboids limit TM uptake by the plant via TM immobilization, in the same way as in the mangrove soil. A detailed investigation of pyrite crystals in the root epidermis is planned as a perspective of this work. In addition, TM speciation and localization within mangrove organs should be further explored.
Figure captions
Significance codes: *** p < 0.001 for soil between sites; † p < 0.05, †† p < 0.01 for roots between species.
TM concentrations (Co, Cr, Cu, Fe, Hg, Mn, Ni, Pb, and Zn) in soil extracts, biotic extracts (roots of R. stylosa, treated pneumatophores, and untreated pneumatophores of A. marina), DCB solutions, and porewaters were obtained via ICP-OES (Varian 730-ES) at the LAMA chemistry laboratory of the French Research Institute for Sustainable Development in New Caledonia (IRD), except for Hg, which was measured in soil and biotic samples using a Hg analyzer (Brooks Rand, Merx) at IRD. Concentrations were obtained using a calibration curve prepared beforehand in matrix-matched solutions from a stock solution of TM at 1,000 mg L -1 . Certified reference materials were used to calculate a z-score for each TM. In the soil, for Co, Cr, Fe, Mn, Ni, and Zn, |z-score| < 2; these results are therefore validated. For Cu and Pb, 2 < |z-score| < 3.
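For illustration, the following sketch reproduces the CRM validation step under the usual z-score definition, z = (measured - certified) / sigma, where sigma is the standard deviation assigned to the certified value; the CRM values shown are hypothetical, not the certified materials used here.

```python
# Sketch of the CRM-based quality check, assuming the conventional z-score criteria:
# |z| < 2 satisfactory, 2 <= |z| < 3 questionable, |z| >= 3 rejected.

def z_score(measured, certified, sigma):
    return (measured - certified) / sigma

def verdict(z):
    az = abs(z)
    if az < 2:
        return "validated"
    elif az < 3:
        return "questionable"
    return "rejected"

# Hypothetical CRM results (mg kg-1): (measured, certified, assigned sigma)
crm = {"Ni": (1450.0, 1400.0, 70.0), "Cu": (58.0, 50.0, 3.0)}
for metal, (meas, cert, sig) in crm.items():
    z = z_score(meas, cert, sig)
    print(metal, round(z, 2), verdict(z))
```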
Figure 1. Map of the two study sites with the Dumbéa River and the old open-cast nickel mine. Stands of
Figure 2. Depth profile of mean pH, Eh (mV), TOC (%) (n=3) and dissolved Fe (mg L -1 except for (d) µg L -1 ) (n=2) on both sites under both mangrove species.
Figure 3. XRD spectra of soil at both sites under both mangrove species (a-d) and XRD spectra of rocks
Figure 4. (a) Bioconcentration factors of TM in the pneumatophores of A. marina and roots of R. stylosa at both sites and (b) ratio of the dissolved/solid fractions (µg L -1 /mg kg -1 ) of TM except Hg in the soil.
Figure 5. (a) Mean concentrations and standard deviations of iron plaque extracted from pneumatophores of
Figure 6. SEM images of a cross-section of a pneumatophore from A. marina and elemental spectra obtained
Figure 7. SEM images of framboidal pyrite (a) in a soil sample and (b) in a cross-section of the pneumatophore
Table 1. Influence of sites on TM concentrations (mean ± SD) in soil between 6 and 12 cm deep (n=3 except Hg n=1), which corresponds to the depth at which root sections were taken, and influence of species on TM concentrations (mean ± SD) in roots (n=3 except Hg n=2), in mg kg -1 except for Hg in µg kg -1 .
[Figure 5a plot residue: mean iron plaque concentration (mg kg -1 ), 4,132 at the ultrabasic site vs 1,863 at the non-ultrabasic site.]
Acknowledgments
The authors acknowledge Kapeliele Gututauava and Franck Bouilleret for their technical help on site. Sarah Gigante and Valérie Medevielle are acknowledged for their technical support at the lab. The authors are grateful to Valérie Sarramegna for the insightful discussions. Aurélie Monnin and Olivia Barthélémy are acknowledged for their technical assistance during SEM observations. The authors thank Leocadie Jamet and Monika Lemestre for the Hg analyses and ICP-OES measurements, respectively. Finally, the authors are grateful to the two anonymous reviewers who helped improve the manuscript with their constructive comments and suggestions.
Funding
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
Statistical analysis
Statistical analyses were performed in RStudio (version 1.2.5001). First, normality was checked with a Shapiro-Wilk test. For comparisons between species or study sites (two samples), normally distributed parameters were compared with a Student's t-test with equal or unequal variances depending on the F-test result. Parameters that were not normally distributed were compared with the Mann-Whitney test. All other comparisons, involving more than two samples, were performed with a one-way ANOVA followed by Tukey's HSD test for normally distributed parameters; the remaining parameters were tested with a Kruskal-Wallis test followed by a Wilcoxon test. All tests were performed with a 95% confidence interval and n ≥ 3.
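The analyses were run in R; the snippet below is only an equivalent Python/scipy sketch of the two-sample branch of this decision flow (normality check, then Student/Welch t-test or Mann-Whitney), not the original analysis code, and the example values are invented.

```python
# Minimal sketch of the two-sample comparison logic described above.
import numpy as np
from scipy import stats

def compare_two_samples(a, b, alpha=0.05):
    a, b = np.asarray(a, float), np.asarray(b, float)
    _, p_a = stats.shapiro(a)
    _, p_b = stats.shapiro(b)
    if p_a > alpha and p_b > alpha:
        # F-test on variances to choose Student's (equal variances) vs Welch's t-test
        f = np.var(a, ddof=1) / np.var(b, ddof=1)
        p_var = 2 * min(stats.f.sf(f, len(a) - 1, len(b) - 1),
                        stats.f.cdf(f, len(a) - 1, len(b) - 1))
        _, p = stats.ttest_ind(a, b, equal_var=(p_var > alpha))
        return "t-test", p
    _, p = stats.mannwhitneyu(a, b, alternative="two-sided")
    return "Mann-Whitney", p

# Example with made-up Eh values (mV) at the two sites:
print(compare_two_samples([-95, -110, -100], [-270, -280, -281]))
```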
Results
Soil description
Percentages of TOC, pH, and Eh of the soil as well as concentrations of Fe in the dissolved phase (porewater) are presented for both species at both sites as a function of depth in Figure 2. Eh values along soil cores are significantly more negative at the non-ultrabasic site (p-value < 0.001) with an average value of -277 mV, whereas at the ultrabasic site the average value is -100 mV. The pH of the soil at both sites is slightly acidic (6 -6.6). The soil is significantly more acidic under R. stylosa (p-value < 0.001) with an average of 6.2 against 6.4 under A. marina. Even though there is no significant difference in pH overall between sites, pH is significantly lower at the non-ultrabasic site under R. stylosa compared to the ultrabasic site (p-value < 0.001). Negative correlations are obtained between Eh and pH for cores under R. stylosa only (-0.59 and -0.97) (SM 2).
The TOC content of the soil is higher at the non-ultrabasic site (9.95-23.70%, mean 15.17%) than at the ultrabasic site (6.91-18.20%, mean 11.71%). TOC is negatively correlated with pH for the soil cores under A. marina (-0.73 and -0.71) (SM 2).
Concentrations of dissolved Fe are much higher at the ultrabasic site, with an average value of 64 mg L -1 , and with a large increase of dissolved Fe at a depth of 15 cm followed by a significant decrease (Fig. 2a).
Philippe Corcia
email: [email protected]
Stéphane Beltran
Salah Eddine Bakkouche
Philippe Couratier
Therapeutic news in ALS
Summary: Amyotrophic lateral sclerosis (ALS) is a fatal neurodegenerative disease characterized by the death of motor neurons in the cortex and the spinal cord. This loss of motor neurons causes progressive weakness and amyotrophy. To date, the median duration of survival in patients with ALS, from first symptoms to death, is estimated at 36 months. Currently, treatment is limited to two options: riluzole, which prolongs survival by a few months, and edaravone, which is available in only a few countries and also has a small impact on disease progression.
There is an urgent need for more effective drugs to significantly alter the progression of this disease. Over the last 30 years, all trials have failed to find a curative drug for ALS. This is due, in part, to the heterogeneity of the clinical features and of the pathophysiology of motor neuron death.
We present in this review the various treatment options currently being developed for ALS, with an emphasis on the range of therapeutic approaches being explored, from old drugs tested in a new indication to innovative drugs obtained via biotechnology or gene therapy.
Introduction:
Amyotrophic lateral sclerosis (ALS) is a neurodegenerative disorder characterized by the progressive loss of both upper (UMN) and lower (LMN) motor neurons in motor cortex, brainstem and spinal cord. This causes weakness and wasting of muscles in all four limbs and the bulbar territory. Death classically occurs once respiratory muscles become involved [START_REF] Brown | Amyotrophic Lateral Sclerosis[END_REF].
ALS is no longer considered a pure motor neuron disorder, since 15-50% of patients develop cognitive disturbances, ranging from cognitive impairment to typical fronto-temporal lobar dementia (FTLD) related to injury of the frontal and temporal cortices [START_REF] Burrell | The frontotemporal dementia-motor neuron disease continuum[END_REF]. Disease incidence is around 2.5/100,000 and the lifetime risk is estimated at around 1/300 for men and 1/400 for women [START_REF] Brown | Amyotrophic Lateral Sclerosis[END_REF]. The prognosis of ALS is fatal in all cases, with a median duration of survival from first symptoms to death of 36 months (3).
Around 5-10% of ALS cases are considered familial (FALS), with a family history of motor neuron disease or FTLD affecting first- or second-degree relatives (4). Since 1993 and the identification of pathogenic mutations in the SOD1 gene, more than 30 genes have been linked to ALS [START_REF] Rosen | Mutations in Cu/Zn superoxide dismutase gene are associated with familial amyotrophic lateral sclerosis[END_REF][START_REF] Renton | State of play in amyotrophic lateral sclerosis genetics[END_REF]: among them, four (C9orf72, SOD1, TARDBP, FUS) explain around 60% of FALS cases [START_REF] Renton | State of play in amyotrophic lateral sclerosis genetics[END_REF]. The remaining cases are considered sporadic.
Although there are no phenotypic differences between sporadic and familial ALS, some features appear more closely linked to familial cases: FALS populations may be distinguished from sporadic cases by a younger age of onset (50 vs 63 years), a predominance of lower limb onset, and a disease duration which exhibits a bimodal survival curve, with the majority of cases lasting either less than two years or more than five years [START_REF] Camu | Genetics of familial ALS and consequences for diagnosis. French ALS Research Group[END_REF].
To date, there are no reliable diagnostic or prognostic biomarkers for ALS. Diagnosis relies on the exclusion of all ALS-mimicking diseases [START_REF] Hardiman | Amyotrophic lateral sclerosis[END_REF]. Diagnostic certainty remains neuropathological, depending on the presence of inclusion bodies in the cytoplasm of spinal cord and brainstem LMN [START_REF] Ince | Amyotrophic lateral sclerosis: current issues in classification, pathogenesis and molecular pathology[END_REF]. These inclusions are labeled by ubiquitin. TDP-43 has been identified as the principal component of these ubiquitinated inclusions and is found in 97% of all ALS cases; SOD1 and FUS are found in 2% and 1% of the inclusions, respectively [START_REF] Scotter | TDP-43 Proteinopathy and ALS: Insights into Disease Mechanisms and Therapeutic Targets[END_REF].
Only two drugs are currently labeled for ALS: riluzole and edaravone, the latter being available in only a few countries worldwide (USA, Canada, Japan, and Switzerland in Europe) [START_REF] Bensimon | ALS/Riluzole Study Group. A Controlled Trial of Riluzole in Amyotrophic Lateral Sclerosis. ALS/Riluzole Study Group[END_REF][START_REF] Group | Edaravone (MCI-186) ALS 19 Study Group. Safety and efficacy of edaravone in well-defined patients with amyotrophic lateral sclerosis: a randomised, doubleblind, placebo-controlled trial[END_REF]. Riluzole acts principally as an anti-glutamatergic drug, while edaravone acts as an antioxidant. Since the therapeutic effect of these drugs remains modest, with an extension of survival of only a few months, there is an urgent need for new treatments with a greater impact on the natural history of the disease. Although numerous molecules have been tested in ALS over the last 30 years, none has shown a positive effect in this disease [START_REF] Mitsumoto | Clinical trials in amyotrophic lateral sclerosis: why so many negative trials and how can trials be improved?[END_REF]. Among all the hypotheses put forward, important aspects to be stressed include disease heterogeneity concerning age of onset (from 17 to 95 years), site of onset, prominence of UMN or LMN involvement, and disease duration, which extends from less than one year (in the case of ALS linked to the A5V SOD1 mutation) to decades (in the case of ALS linked to the D91A SOD1 mutation) [START_REF] Renton | State of play in amyotrophic lateral sclerosis genetics[END_REF]. Disease heterogeneity is also the result of a complex pathophysiology in which glutamate excitotoxicity, neuroinflammation, mitochondrial dysfunction, protein misfolding and aggregation, defects in axonal transport, disturbances of the DNA/RNA machinery, and oxidative stress interact with each other in motoneuron death processes [START_REF] Van Es | Amyotrophic lateral sclerosis[END_REF]. It is now obvious that ALS is not determined by a unique cause but, conversely, is the consequence of a combination of complex cellular, molecular and genetic interactions which trigger and sustain motoneuron death [START_REF] Brown | Amyotrophic Lateral Sclerosis[END_REF][START_REF] Van Es | Amyotrophic lateral sclerosis[END_REF].
Evolving clinical and research approaches to ALS therapeutics have led to numerous clinical trials, allowing the emergence of new therapeutic options. Over recent years, numerous trials have been run in ALS testing either old drugs in a new indication or compounds directed at new pathological pathways responsible for motoneuron death (Fig. 1).
Here we present an overview of clinical trials, emphasizing the main therapeutic approaches currently developed in ALS.
NEW INDICATION FOR OLD DRUGS
Certain drugs prescribed for decades in non-neurological indications can also have potential therapeutic effects on neuronal function and survival. These previously unrecognized properties might have meaningful effects in neurological disorders.
TUDCA-ALS
Tauro-ursodeoxycholic acid (TUDCA), indicated in chronic cholestatic liver and gallstone diseases, is well known to gastroenterologists. TUDCA is orally available and passes through the blood-brain barrier.
TUDCA has been shown to be neuroprotective against NO toxicity in motor neuron-neuroblastoma hybrid cells expressing mutant SOD1 (mutations A4V and G93A) [START_REF] Hofmann | The continuing importance of bile acids in liver and intestinal disease[END_REF].
Moreover, interest in TUDCA in ALS arose from its potent inhibition of apoptosis via interference with the mitochondrial pathway of cell death, resulting in an inhibition of oxygen-radical production and a reduction of endoplasmic reticulum stress and caspase activation [START_REF] Amaral | Bile acids: regulation of apoptosis by ursodeoxycholic acid[END_REF]. Two phase II studies showed promising results in ALS, which justified initiating a phase III trial. The first showed that UDCA is well tolerated, reaches a meaningful serum level after oral intake of an appropriate dose, and is detected in cerebrospinal fluid [START_REF] Parry | Safety, tolerability, and cerebrospinal fluid penetration of ursodeoxycholic acid in patients with amyotrophic lateral sclerosis[END_REF]. The second was a proof-of-concept phase II trial which showed that the slope of progression of the ALSFRS-R scale was 7 points/year smaller in the group treated with riluzole+TUDCA than with riluzole alone [START_REF] Elia | Tauroursodeoxycholic acid in the treatment of patients with amyotrophic lateral sclerosis[END_REF]. Following these promising results, a European double-blind, placebo-controlled phase III study was designed to carry on these studies (NCT03878654). This trial is in progress and aims to determine whether TUDCA at a dose of 2 g daily will slow disease progression; 440 patients are expected to be enrolled.
TAMOXIFEN
Tamoxifen has long been approved for the treatment of breast cancer. Its mode of action involves a selective modulation of estrogen receptors. In addition to this anti-neoplastic effect, tamoxifen also shows neuroprotective properties by modulating inflammatory-mediated damage and promoting autophagy [START_REF] Chen | Tamoxifen for amyotrophic lateral sclerosis: a randomized double-blind clinical trial[END_REF]. A randomized, double-blind, placebo-controlled phase II study was conducted in 18 ALS patients followed for 12 months, with disease duration as the primary endpoint. The results have just been published: in the tamoxifen group (40 mg/d), disease progression was slightly, but not significantly, slowed [START_REF] Chen | Tamoxifen for amyotrophic lateral sclerosis: a randomized double-blind clinical trial[END_REF].
LEVOSIMENDAN
Levosimendan was first developed in cardiology for the treatment of heart failure. The mechanism of action of this drug relies on selective binding to troponin C and subsequent sensitization of fast and slow skeletal muscle fibers [START_REF] Haikala | Troponin C-mediated calcium sensitization by levosimendan accelerates the proportional development of isometric tension[END_REF]. Levosimendan improves the submaximal contraction of diaphragm fibers (both slow and fast muscle fibers) by around 20% in patients with chronic obstructive pulmonary disease [START_REF] Van Hees | Levosimendan improves human diaphragm function[END_REF]. In addition, levosimendan administered intravenously was shown to improve the neuromechanical efficiency and contractility of the human diaphragm in healthy subjects [START_REF] Doorduin | The calcium sensitizer levosimendan improves human diaphragm function[END_REF]. Owing to these compelling findings on diaphragm muscle function, a randomized, double-blind, placebo-controlled, crossover, three-period study with a six-month open-label follow-up was conducted in ALS. The primary endpoint was the slow vital capacity (SVC) percentage measured in the sitting position. This study did not show a positive effect of levosimendan on SVC. Since the drug was well tolerated, and considering the conclusions of the post-hoc analysis in favor of a possible dose-dependent therapeutic effect on supine SVC, a phase III study has been conducted to assess the efficacy of levosimendan at a dose of 1-2 mg daily on SVC after 48 weeks of treatment (NCT03948178) [START_REF] Al-Chalabi | Levosimendan in amyotrophic lateral sclerosis: a phase II multicentre, randomised, doubleblind, placebo-controlled trial[END_REF]. The open-label phase of the study is expected to be completed in the coming weeks and the results should be published soon.
MASITINIB
Masitinib is an oral tyrosine kinase inhibitor available for the treatment of mastocytosis. This compound has shown promising results in animal (rat) models of ALS (SOD1-G93A). Interest in ALS came from its neuroprotective effect on neurons, owing to immunomodulatory properties which target in particular activated microglia and mast cell activity in both the central and peripheral nervous systems [START_REF] Dubreuil | Masitinib (AB1010), a potent and selective tyrosine kinase inhibitor targeting KIT[END_REF][START_REF] Davis | Comprehensive analysis of kinase inhibitor selectivity[END_REF]. Among all the publications focused on this compound in ALS, the results of the double-blind, placebo-controlled, randomized phase II/III study which enrolled 394 patients receiving either riluzole (100 mg/d) plus placebo or masitinib at 4.5 or 3.0 mg/kg/d supported the proposal of a larger study. The primary endpoint was the slope of decline of the ALSFRS-R scale during the 48 weeks of the study [START_REF] Mora | Masitinib as an addon therapy to riluzole in patients with amyotrophic lateral sclerosis: a randomized clinical trial[END_REF]. First, this study showed that masitinib was well tolerated and safe at both dosages. Second, there was a positive effect of masitinib in ALS, highlighted by a gain of 3.4 points (9.2 vs. 12.6, p=0.016) on the slope of the ALSFRS-R score after 48 weeks: this corresponds to a 27% slowing of the rate of functional decline. Of note, the positive effect was more meaningful in the fast-progressor group (slope of progression ≥1.1/month) at the dose of 4.5 mg/kg/d. A multicenter, randomized, double-blind, placebo-controlled, parallel-group phase III study is expected to start soon in Europe and North America simultaneously: almost 500 ALS patients are planned to be enrolled in this study, which should last 24 months.
DEFERIPRONE
This trial stems from the literature in favor of a role for iron in neurodegenerative diseases and particularly in ALS [START_REF] Veyrat-Durebex | Iron metabolism disturbance in a French cohort of ALS patients[END_REF]. Iron is a cofactor of several enzymes involved in the motoneuron machinery, such as mitochondrial aerobic metabolism. Moreover, serum iron, ferritin and transferrin were shown to be increased in ALS patients compared to controls [START_REF] Veyrat-Durebex | Iron metabolism disturbance in a French cohort of ALS patients[END_REF]. Of note, post-mortem studies showing iron accumulation in the central motor tract of ALS patients made the hypothesis of a role for iron in ALS strong enough to justify a clinical trial focused on iron metabolism [START_REF] Adachi | usefulness of SWI for the detection of iron in the motor cortex in amyotrophic lateral sclerosis[END_REF]. Finally, the neuroprotective effect and the longer survival observed in ALS mouse models treated with iron chelators strongly supported the initiation of a clinical trial in human ALS [START_REF] Wang | Prevention of motor neuron degeneration by novel iron chelators in SOD1 G93A transgenic mice of amyotrophic lateral sclerosis[END_REF].
FAIRALS is a randomized, double-blind, placebo-controlled study which aims to assess the efficacy of an iron chelator, deferiprone, in ALS. This study, conducted by the French ALS Centre of Lille, is currently ongoing. A total of 240 participants are expected to be included, half of whom will receive 600 mg/d of deferiprone over 12 months (NCT03293069).
FASUDIL
Rho kinase (ROCK) has recently become a matter of interest in ALS. This serine/threonine kinase has two isoforms, of which ROCK2 is highly expressed in the central nervous system, increasingly so with age [START_REF] Komagome | Postnatal changes in Rho and Rho-related proteins in the mouse brain[END_REF]. ROCK plays a key role in triggering the signal of axonal degeneration once it is activated by the binding of axonal growth inhibitory molecules to their specific receptors [START_REF] Komagome | Postnatal changes in Rho and Rho-related proteins in the mouse brain[END_REF]. ROCK inhibitors have already been assessed in neurodegenerative diseases and demonstrated both a neuroprotective and a pro-regenerative effect in Parkinson's disease (PD) models [START_REF] Lingor | ROCK-ALS: Protocol for a Randomized, Placebo-Controlled, Double-Blind Phase IIa Trial of Safety, Tolerability and Efficacy of the Rho Kinase (ROCK) Inhibitor Fasudil in Amyotrophic Lateral Sclerosis[END_REF].
ROCK inhibitors significantly improved survival and motor function in the SOD1(G93A) mouse model of ALS when initiated at a presymptomatic stage [START_REF] Tönges | Rho kinase inhibition modulates microglia activation and improves survival in a model of amyotrophic lateral sclerosis[END_REF].
Fasudil is a small-molecule ROCK inhibitor first developed for the treatment of vasospasm following subarachnoid hemorrhage (SAH). In the central nervous system, the effects of fasudil were assessed in a phase III trial in patients with acute ischemic stroke, which showed a significant improvement in clinical outcome [START_REF] Shibuya | Effects of fasudil in acute ischemic stroke: Results of a prospective placebo-controlled double-blind trial[END_REF].
Although the underlying mechanisms remain to be understood, fasudil currently emerges as a promising drug, prompting the proposal of a clinical trial in ALS. A phase IIa, multicenter, randomized, double-blind, controlled, prospective, dose-finding, exploratory, interventional study is expected to start soon in ALS. Fasudil will be administered intravenously twice daily for 20 treatment days. There will be three arms: 30 mg/d, 60 mg/d and a placebo group. The primary objective of this study will be the assessment of tolerability and safety.
Non-chemotherapeutic trials
Would it be possible to improve the survival of ALS patients with suitable nutrition?
Over the last 20 years, numerous lines of evidence have stressed that malnutrition is a major negative prognostic factor in ALS. Following the publication by the French team of Limoges, which showed 20 years ago that malnutrition increased the risk of death seven-fold over a seven-month period [START_REF] Desport | Nutritional Status Is a Prognostic Factor for Survival in ALS Patients[END_REF], a large body of literature has confirmed that malnutrition and a low body mass index are both correlated with shorter survival in ALS [START_REF] Desport | Nutritional Status Is a Prognostic Factor for Survival in ALS Patients[END_REF]. On the other hand, a high-fat, high-calorie diet (HFCD) was shown to improve the course of ALS in mouse models [START_REF] Wills | Hypercaloric enteral nutrition in patients with amyotrophic lateral sclerosis: a randomised, double-blind, placebo-controlled phase 2 trial[END_REF]. Two European trials have focused on diet in ALS: the French study NUTRALS (NCT02152449), which is still ongoing, addresses the effect of oral nutritional supplementation on the functional status of ALS patients.
The second trial, LIPCAL-ALS (NCT02306590), was conducted in Germany and the results have been published recently [START_REF] Ludolph | Effect of High-Caloric Nutrition on Survival in Amyotrophic Lateral Sclerosis[END_REF]. This trial evaluated the effect of an HFCD (an additional 45 g of fat and 405 kcal daily) on disease duration. Two hundred and one patients were enrolled and followed for 18 months. There was no evidence of a positive effect of the HFCD in the overall population, but a post-hoc analysis suggested prolonged survival in fast-progressor patients after 18 months: the survival probability was 0.62 in the HFCD group vs 0.38 in the control group, corresponding to a significant hazard ratio of 0.5 (Cox proportional hazards regression model, p=0.02).
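As a quick, purely illustrative consistency check of these figures: under the proportional-hazards assumption, the treated-arm survival equals the control-arm survival raised to the power of the hazard ratio, and the reported 18-month probabilities indeed imply a hazard ratio close to 0.5 for the HFCD arm.

```python
# Illustrative arithmetic only, not a re-analysis of the trial data.
# Under proportional hazards: S_treated(t) = S_control(t) ** HR.
import math

s_control = 0.38
s_hfcd = 0.62

hr = math.log(s_hfcd) / math.log(s_control)   # log S_treated / log S_control = HR
print(round(hr, 2))                            # ~0.49, close to the reported 0.5
print(round(s_control ** 0.5, 2))              # ~0.62, matching the HFCD arm
```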
BIOTECHNOLOGY
HUMANIZED MONOCLONAL ANTIBODIES
The prescription of monoclonal antibodies is self-evident in dysimmune disorders but may seem, at first sight, unsuitable for neurodegenerative diseases and, ultimately, for ALS.
Interest in monoclonal antibodies such as ravulizumab in ALS stems from the large body of literature on neuroinflammation in ALS, which has opened new avenues for therapeutic research targeting microglia and the complement cascade.
A growing body of literature shows that the innate immune complement system drives neuroinflammation in ALS. Beyond microglial activation, the complement system contributes both to the onset and to the progression of motor neuron death; several important reports have highlighted an impairment of the control processes regulating complement cascade activation in the central nervous system, which has drawn attention to promising therapeutic targets acting on the immune system. One key target of the complement cascade in ALS is C5. Evidence accumulated over the last 20 years supports a key role for this component in the pathophysiology of ALS: both the deletion of C5aR1 (a specific C5 signaling receptor) and the injection of a C5aR1 antagonist improved survival in a SOD1 mouse model of ALS, strongly supporting a role for this protein in motoneuron death [START_REF] Parker | Revisiting the role of the innate immune complement system in ALS[END_REF]. Furthermore, functional and survival benefits of C5 signaling blockade have been demonstrated in animal models of ALS. These findings strongly support inhibition of the complement component C5 as a promising avenue for the treatment of ALS.
Ravulizumab is a recombinant, humanized monoclonal antibody with high specificity against the human C5 component. It acts by blocking complement activation and is currently approved for paroxysmal nocturnal hemoglobinuria and atypical hemolytic uremic syndrome.
In ALS, a phase III, randomized, double-blind, placebo-controlled, multicenter study to evaluate the safety and efficacy of ravulizumab should start recruitment soon in France. An open-label extension will complete the 50 weeks of the randomized controlled trial. The primary objective of this study is to evaluate the effect of ravulizumab compared with placebo on the ALS Functional Rating Scale-Revised (ALSFRS-R) score (NCT04248465).
SPECIFIC ANTIBODIES
Protein aggregation is an aberrant process resulting from misfolding, which promotes the self-association of a protein. This self-association leads to oligomerization and then, by polymerization, to fibrils. Because of misfolding, the hydrophobic sites of the protein become accessible to the cellular environment.
Misfolded proteins are one of the neuropathological hallmarks of ALS, with TDP-43, C9orf72 dipeptide repeats, phosphorylated neurofilaments, FUS and SOD1 inclusions found in motoneurons [START_REF] Parakh | Protein Folding Alterations in Amyotrophic Lateral Sclerosis[END_REF]. These pathological deposits probably exert a toxic effect on neurons. Over recent years, therapeutic strategies have focused on blocking this abnormal process. Several trials target the misfolding process with specific antibodies against misfolded SOD1 protein (α-miSOD1), antibody fragments directed against a specific region of TDP-43, vaccines against unfolded SOD1, and small molecules such as colchicine and arimoclomol that enhance the expression of heat shock proteins and, consequently, promote autophagy (review in 38).
ANTISENSE OLIGONUCLEOTIDES
Gene therapies have made remarkable progress over the last decade, leading to changes in the course of numerous diseases which were incurable until now. These therapeutic approaches perfectly fit with what clinicians were searching for: a targeted treatment based, in this situation, on the genetic status.
The rationale of this approach relies on the fact that lowering the burden of mutated SOD1 protein in ALS linked to SOD1 mutations may improve disease prognosis. The first trials of antisense oligonucleotides (ASOs) were launched more than 10 years ago, and since 2010 numerous phase I and phase II trials have been conducted in ALS. ASOs stimulate the degradation of RNA by specific targeting and can also correct splicing defects or block miRNA [START_REF] Schoch | Antisense oligonucleotides: translation form mouse models to human neurodegenerative diseases[END_REF].
Following the impressive efficacy of nusinersen in spinal muscular atrophy, major hopes have been placed on the development of ASOs directed against SOD1 and C9orf72 mutants. A phase III trial is currently underway assessing the efficacy, safety, tolerability, pharmacokinetics and pharmacodynamics of tofersen in ALS adults with a SOD1 mutation (NCT02623699); it is expected to be completed in July 2021. In parallel, a phase I study evaluating the long-term safety and tolerability of BIIB078 in ALS patients with an abnormal C9orf72 expansion will also start soon (NCT04288856) and is expected to be completed in July 2023.
REGENERATIVE THERAPEUTICS
Transplantation of stem cells offers the opportunity to bring neurotrophic factors into the central nervous system, which might make it possible to repopulate regions affected by the motoneuron degenerative process. To date, too many issues need to be resolved before this approach can be considered promising in ALS: among the many uncertainties, the type of cells, the mode of administration, and the number of cells to transplant are still under debate [START_REF] Goutman | Stem cell treatments for amyotrophic lateral sclerosis (ALS): A critical overview of early phase trials[END_REF].
Conclusion: How to proceed for success?
We are currently in an encouraging period for ALS clinical trials: numerous trials are ongoing or expected to start soon, evaluating promising drugs tested either in a new indication or specifically developed for ALS. This hopeful period should not mask the fact that many issues need to be resolved before we will be able to develop a curative drug for ALS.
As pointed out by numerous clinicians, many elements remain problematic: the design of clinical trials would probably benefit from focusing on homogeneous (phenotypically? pathophysiologically? genetically?) groups of patients. We also have to define more relevant and suitable primary endpoints in order to evaluate efficacy, using more accurate parameters than survival or the delay from symptom onset until the use of NIV 22 h daily.
Moreover, the available animal models do not perfectly match human ALS in neuropathological terms: this probably explains a large proportion of the failures of drugs that showed efficacy in animal models. We also need to reconsider animal trials in which treatment started before disease onset; it is clearly impossible to transpose such results to humans, who will be treated once the disease has already caused weakness and amyotrophy.
In 2018, a conference was held in Airlie House. The revised Airlie House ALS Clinical Trials Consensus Guidelines published after this meeting should serve to improve clinical trial design and accelerate the development of effective treatments for patients with ALS (41).
Figure 1. Mode of action of the candidate drugs presented in this review.
Yan Zhang
email: [email protected]
Laurence Seveyrat
Laurent Lebrun
Correlation between dielectric, mechanical properties and electromechanical performance of functionalized graphene / polyurethane nanocomposites
Keywords: Graphene and other 2D-materials, Polyurethane, Polymer-matrix composites (PMCs), Electrical properties, Electromechanical behavior
Introduction
Electroactive polymers (EAPs) are used in a wide range of applications such as artificial muscles, sensors, microfluidics and robotics, owing to their lightness, easy processing, and flexibility [START_REF] Carpi | Standards for dielectric elastomer transducers[END_REF][START_REF] Chen | Electronic Muscles and Skins: A Review of Soft Sensors and Actuators[END_REF][START_REF] Pelrine | Dielectric Elastomer Artificial Muscle Actuators: Toward Biomimetic Motion[END_REF][START_REF] Bar-Cohen | Electroactive polymer actuators and sensors[END_REF]. However, due to their relatively small dielectric constant, applications of EAPs are limited by the high driving electric field required. For this reason, research aiming at improving their electromechanical performance is of great interest [START_REF] Romasanta | Increasing the performance of dielectric elastomer actuators: A review from the materials perspective[END_REF][START_REF] Panahi-Sarmad | Graphene-based composite for dielectric elastomer actuator: A comprehensive review[END_REF][START_REF] Wongtimnoi | Improvement of electrostrictive properties of a polyether-based polyurethane elastomer filled with conductive carbon black[END_REF][START_REF] Carpi | Silicone-poly(hexylthiophene) blends as elastomers with enhanced electromechanical transduction properties[END_REF]. Among the electronic EAPs, polyurethane (PU) is a promising candidate thanks to its low cost as well as its balanced properties, among which can be mentioned the electric field-induced strain, the mechanical modulus and the temperature behavior. The studied PU elastomer presents a specific structure with a mix of hard and soft segments. Thermodynamic incompatibility of the two types of segments leads to a phase separation favoring a high level of electromechanical performance [START_REF] Wongtimnoi | Improvement of electrostrictive properties of a polyether-based polyurethane elastomer filled with conductive carbon black[END_REF][START_REF] Diaconu | Properties of polyurethane thin films[END_REF][START_REF] Diguet | Physical modeling of the electromechanical behavior of polar heterogeneous polymers[END_REF].
The total strain S can be expressed in terms of the Maxwell effect due to the electrostatic interaction between charged electrodes, and electrostriction, which is linked to the change in material properties with strain. The strain is related to the applied electric field E with a quadratic equation:
S = S_Maxwell + S_electrostriction = M31 E² (1)
A previous study on this polyurethane showed that the Maxwell effect is the main mechanism involved in the total strain [START_REF] Zhang | On a better understanding of the electromechanical coupling in electroactive polyurethane[END_REF]. Its electromechanical performance thus depends directly on the ratio of the dielectric constant (εr') to the mechanical Young's modulus (Y), so the electromechanical coefficient M31 can be expressed as [START_REF] Kracovsky | A few remarks on the electrostriction of elastomers[END_REF]:
M31 = ε0 εr' / Y (2)
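To give an order of magnitude for Eq. (2), the sketch below evaluates M31 with assumed, typical values of εr' and Y for a PU elastomer; these figures are illustrative and are not the measured values reported later in this work.

```python
# Rough numerical illustration of Eq. (2): M31 = eps0 * eps_r' / Y.
EPS0 = 8.854e-12           # vacuum permittivity, F m-1

def m31(eps_r, young_modulus_pa):
    return EPS0 * eps_r / young_modulus_pa

m = m31(eps_r=7.0, young_modulus_pa=30e6)      # assumed: eps_r' = 7, Y = 30 MPa
print(m)                                        # ~2.1e-18 m2 V-2
strain = m * (10e6) ** 2                        # strain at an assumed field of 10 V um-1
print(strain)                                   # ~2e-4, i.e. ~0.02 %
```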
Consequently, increasing the dielectric constant is a convenient way to improve the performance of a PU actuator. Several approaches can be employed for this, including the incorporation of fillers. In the case of insulating fillers, large quantities are required which in turn gives rise to a significant increase in Y, to the detriment of M31. For conductive fillers and especially nanosized ones, a lower content can be sufficient giving rise to a moderate mechanical reinforcement. Many authors linked the beneficial effect of the conductive fillers to Maxwell-Wagner-Sillars interfacial polarization [START_REF] Fook | Transparent flexible polymer actuator with enhanced output force enabled by conductive nanowires interlayer[END_REF][START_REF] Tsangaris | Electric modulus and interfacial polarization in composite polymeric systems[END_REF][START_REF] Sun | Interfacial polarization and dielectric properties of aligned carbon nanotubes/polymer composites: The role of molecular polarity[END_REF]. Consequently, it would be interesting to increase the quantity of interfaces by raising the filler content. The main issue is to preserve the insulating characteristic of the composite for which functionalization of the fillers can offer a suitable solution.
Research on composites based on carbon nanotubes, graphene, and carbon black has been reported for actuation applications [START_REF] Panahi-Sarmad | Graphene-based composite for dielectric elastomer actuator: A comprehensive review[END_REF][START_REF] Wongtimnoi | Improvement of electrostrictive properties of a polyether-based polyurethane elastomer filled with conductive carbon black[END_REF][START_REF] Park | Actuating single wall carbon nanotube-polymer composites: Intrinsic unimorphs[END_REF][START_REF] Li | Large dielectric constant of the chemically functionalized carbon nanotube/polymer composites[END_REF]. Graphene is of particular interest due to its high electron mobility and its large specific surface area, which favors interfacial polarization. Moreover, the functionalization of fillers favors their dispersion as well as their cohesion with the polymer matrix [START_REF] Ramanathan | Functionalized graphene sheets for polymer nanocomposites[END_REF]. Therefore, in this work, oxygen-functionalized graphene nanoplatelets (OFG) were chosen to prepare PU/OFG composites.
The microstructural, mechanical and dielectric properties of PU/OFG nanocomposite films were investigated in order to better understand how these properties affect the electromechanical coefficient M31. Solution tape casting was used for the film elaboration. OFG weight contents were varied in the range 0-14.60 wt%. The impact of the nanoplatelets on the PU microstructure was investigated by SEM and DSC analyses, and the dielectric and mechanical properties were characterized. Frequency, temperature and electric field ranges were chosen so as to separate the various mechanisms involved in the polyurethane properties, while considering the conditions for driving actuators. The electromechanical coefficients were evaluated by bender displacement measurements and compared to the values calculated with Eq. (2).
Their difference was discussed, especially in terms of conductivity/MWS relaxation contribution and the different dielectric behavior of PU and PU/OFG composites under a high electric field.
Experimental
Materials and preparation
Polyurethane granules (PU, Estane 58887 NAT 038, 1.13 g cm-3, 87 Shore A) and oxygen-functionalized graphene nanoplatelets (OFG, Graphene Supermarket, oxygen HDPlas TM, planar size 0.3-5 µm, thickness <50 nm) were used to prepare PU/OFG composite films. The PU used in this work was a block copolymer consisting of 4,4'-methylene diphenyl diisocyanate (MDI) and 1,4-butanediol (BDO) as hard segments, and poly(tetramethylene) oxide (PTMO) as soft segments. Firstly, the PU granules were preheated at 80 °C for 3 h, following the recommendation of the supplier. A suitable content of OFG nanoplatelets was dispersed in N,N-dimethylformamide (DMF, Honeywell D158550) using an ultrasonic processor with a 7-mm sonotrode (Hielscher UP400S, amplitude 0.7, cycle 1, 40 min). Next, the PU granules were dissolved in the DMF/OFG mixture under mechanical stirring and heated at 80 °C for 3 h under reflux. The obtained homogeneous solution (20 wt% PU in DMF) was left to stand for 24 h, after which it was applied to a glass plate by tape casting, followed by overnight heating at 60 °C. A second treatment at 125 °C for 3 h was applied to remove the residual solvent and ensure the same thermal history for all films. Films with a thickness of around 100 µm and a ratio of OFG to PU ranging from 0 to 14.60 wt% were finally obtained.
Microstructural characterization
The morphology of the PU/OFG composites was observed with a Zeiss Supra Scanning Electron Microscope (SEM) under high vacuum, at a low accelerating voltage of 1 kV.
Observations were performed on cryofractured (cross-section) surfaces of the films.
Thermal properties were obtained with Differential Scanning Calorimetry (Setaram DSC 131 Evo) under a nitrogen atmosphere. A sample of around 20 mg was encapsulated in a 120-µL aluminum crucible. It was first cooled down to -120 °C, then heated to 220 °C, and finally cooled down to ambient temperature. Heating and cooling ramps were carried out at a rate of 10 °C min-1.
Mechanical characterization
The mechanical properties of the films were characterized by a setup consisting of a Newport translation table, a motion microcontroller (XPS), and a 10-N force sensor (Doerler Mesures LC102TC). Films of 60 mm in length and 10 mm in width were held on one side with a fixed clamp and on the other side with a mobile clamp, with and without the application of an electric field (function generator, Trek 10/10 amplifier). A low strain of 1 % was applied to the samples at a speed of 0.4 % s-1. The strain and the stress response of the samples were recorded by the motion controller and the force sensor, respectively. The Young's modulus of the composite films (Y) was determined from the slope of the linear part of the stress-strain curve.
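As an illustration of this determination, the sketch below (with synthetic, placeholder stress-strain values rather than the measured curves) extracts Y as the slope of the linear region using a least-squares fit.

```python
import numpy as np

# Illustrative stress-strain data (placeholders, not the measured curves).
strain = np.linspace(0.0, 0.01, 26)                          # up to 1 % strain
rng = np.random.default_rng(0)
stress = 30e6 * strain + rng.normal(0.0, 2e3, strain.size)   # ~30 MPa modulus plus noise

# Young's modulus Y = slope of the linear part (here the whole 0-1 % range).
Y, intercept = np.polyfit(strain, stress, 1)
print(f"Y ~ {Y / 1e6:.1f} MPa")
```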
Dielectric characterization
Gold electrodes (20 mm diameter, 26 nm thickness) were coated on both surfaces of the films by sputtering (Cressington 208HR). Dielectric spectroscopy was carried out using a Schlumberger Solartron 1255 impedance/gain phase analyzer and a 1296 dielectric interface.
The dielectric constant and conductivity were determined in the frequency range 0.1 Hz - 1 MHz.
Temperature measurements were performed under liquid nitrogen using a cryostat (Optistat DN2 Oxford Instruments) and a temperature controller (Oxford ITC503). Measurements were performed in a temperature range of 180-350 K and a frequency range of 0.1 Hz - 1 MHz, at 1 V RMS.
The dielectric constant was also recorded at high levels of electric field in the range 0.7-10 MV m -1 at 0.1 Hz. A unipolar AC voltage was generated by the impedance/gain phase analyzer, amplified by a Trek 10/10 amplifier, and applied to the sample.
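For reference, one common way to obtain the complex permittivity from the measured impedance of a metallized film is ε*(ω) = 1/(iωC0Z*(ω)), with C0 = ε0A/d the empty-cell capacitance; the sketch below assumes this standard parallel-plate relation and uses placeholder impedance values.

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity (F/m)

def complex_permittivity(freq_hz, z_complex, area_m2, thickness_m):
    """Relative complex permittivity eps* = 1 / (i*omega*C0*Z*), parallel-plate assumption."""
    c0 = EPS0 * area_m2 / thickness_m                      # empty-cell capacitance
    omega = 2.0 * np.pi * np.asarray(freq_hz)
    eps = 1.0 / (1j * omega * c0 * np.asarray(z_complex))
    return eps.real, -eps.imag                             # eps_r', eps_r''

# Illustrative call: 20-mm-diameter electrodes, 100-um film, placeholder impedance values.
area = np.pi * (10e-3) ** 2
eps_p, eps_pp = complex_permittivity([0.1, 1.0, 10.0],
                                     [2e9 - 5e9j, 3e8 - 6e8j, 4e7 - 6e7j],
                                     area, 100e-6)
print(eps_p.round(1), eps_pp.round(1))
```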
Electromechanical characterization
The composite films were coated with a 26-nm gold electrode on both surfaces, and then attached to a rigid substrate (Mylar, RS 785-0792, 100 µm) with double-sided tape (3M, ATG 969, 100 µm). The electromechanical coefficient M31 of the obtained cantilever was evaluated with the help of a laboratory-built characterization bench (Fig. 1).
Fig. 1. Setup for the measurement of the electromechanical coefficient M31.
One end of the unimorph cantilever structure was clamped in the device, with the other end free to move. The active area was 40 mm x 10 mm x 0.1 mm. A unipolar AC electric field of 10 MV m-1 at 0.1 Hz was applied to the sample with the help of a function generator (Agilent 33220A) and a high-voltage amplifier (Trek 609-6). When the electric field was applied, a transverse elongation was generated in the polymer film, forcing the cantilever to bend. The tip displacement δ of the free end was measured by a laser sensor (3RG7056-3CM00-PF Pepperl+Fuchs), and the field-induced strain S31 could then be calculated from the measured displacement δ, taking into account the configuration geometry and the material properties.
The M31 coefficient was deduced from S31. Details of the calculations can be found in [START_REF] Zhang | On a better understanding of the electromechanical coupling in electroactive polyurethane[END_REF].
At least four samples of each composition were used for the dielectric, mechanical and electromechanical measurements, except for the temperature-dependent dielectric measurements. The planar size (l) and thickness (d) of the OFG nanoplatelets were determined statistically from the SEM analysis: a planar size of 1.56 ± 0.64 µm and a thickness of 115 ± 83 nm were obtained. The planar size was in the range given by the supplier, while the thickness was greater than 50 nm, in agreement with the observed stacked platelets. The mean aspect ratio (Af = l/d) was 14.
Results and discussions
Microstructure of PU and PU/OFG composites
Fig. 3 depicts an example of DSC thermograms for PU and a PU/OFG composite. The thermal properties of all the compositions are given in Table 1.
Four main thermal events could be observed during the temperature sweep. The first one, at -40/-50 °C, was related to the glass transition of the soft segments (SS) in PU, giving an indication of the separation state between soft and hard segments (HS). Globally, the Tg values did not vary considerably with the OFG content: only a moderate increase was noted at high contents. This increase could indicate a higher quantity of HS dissolved in the soft domains and thus a higher degree of HS-SS mixing [START_REF] Martin | The effect of average soft segment length on morphology and properties of a series of polyurethane elastomers. I. Characterization of the series[END_REF]. The endotherm in the range 40-80 °C, which can be associated with local restructuring of HS units within the hard micro-domains [START_REF] Koberstein | Simultaneous SAXS-DSC study of multiple endothermic behavior in polyether-based polyurethane block copolymers[END_REF], was not affected by the presence of OFG, even though a slight modification was observed for OFG contents of 12.63 wt% and 13.13 wt%.
There were two possible interpretations for the bimodal endotherm (Tm1 and Tm2) in the range 130-180°C. The first one was the micro-mixing of non-crystalline or semi-crystalline hard and soft phases followed by the fusion of crystalline HS [START_REF] Koberstein | Simultaneous SAXS-DSC study of multiple endothermic behavior in polyether-based polyurethane block copolymers[END_REF]. The second was the fusion of crystalline HS with the two peaks related to two lengths of HS [START_REF] Martin | The effect of average soft segment length on morphology and properties of a series of polyurethane elastomers. I. Characterization of the series[END_REF]. In both cases, the increase of the crystallinity was linked to an increase of Tm1 and Tm2 and the global melting enthalpy ΔHf.
In our case, the crystallinity could be considered unchanged, since ΔHf was almost constant versus the OFG content and no significant change of the Tm1 or Tm2 values was observed.
Mechanical properties
Fig. 4(a) gives an example of a stress-strain curve of the PU/OFG composites. Due to the nature of the OFG fillers (nanoplatelets) and their random dispersion, the Halpin-Kardos model was chosen to predict the Young's modulus according to (3):
Y = (3/8) Ym (1 + 2 Af µL Φf) / (1 - µL Φf) + (5/8) Ym (1 + 2 µT Φf) / (1 - µT Φf) (3)
where Y and Ym are the Young's moduli of the composite and the matrix, respectively, Φf is the volume fraction of fillers, which was calculated assuming a density of 2.2 g cm-3 for OFG and 1.13 g cm-3 for PU, Af is the aspect ratio, whose average value was determined to be 14 earlier in this work, and µL and µT are geometry factors, according to (4) and (5).
µL = (Yf/Ym - 1) / (Yf/Ym + 2 Af) (4)
µT = (Yf/Ym - 1) / (Yf/Ym + 2) (5)
Here, Yf is the Young's modulus of the fillers. This model assumes a perfect adhesion between the polymer and the fillers.
Both the experimental and predicted values increased linearly with the introduction of OFG, and the predicted values were reasonably close to the experimental ones. The increase in Young's modulus with the OFG content was moderate: for example, an increase of 30 % was obtained for a 10.25 wt% OFG loading. As there was some uncertainty regarding Yf, since the Young's modulus usually decreases with the number of graphene layers owing to slippage between the layers [START_REF] Lee | Elastic and frictional properties of graphene[END_REF], three values were employed for the model: 1 TPa for a monolayer [START_REF] Lee | Measurement of the elastic properties and intrinsic strength of monolayer graphene[END_REF], 350 GPa as published for a 10-layer graphene sample [START_REF] Gong | Optimizing the reinforcement of polymer-based nanocomposites by graphene[END_REF], and 4.5 GPa, corresponding to the Young's modulus of graphite [START_REF] Pierson | Handbook of Carbon, Graphite, Diamonds and Fullerenes[END_REF]. Among these physically plausible values for Yf, the best fit was obtained for the lowest one.
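A minimal numerical sketch of the Halpin-Kardos prediction written in equations (3)-(5) is given below; the matrix modulus Ym used here is a hypothetical value for illustration only, while the densities and aspect ratio are those quoted above.

```python
def halpin_kardos(phi_f, ym, yf, af):
    """Composite Young's modulus for randomly oriented platelets, following eqs. (3)-(5)."""
    mu_l = (yf / ym - 1.0) / (yf / ym + 2.0 * af)
    mu_t = (yf / ym - 1.0) / (yf / ym + 2.0)
    y_long = ym * (1.0 + 2.0 * af * mu_l * phi_f) / (1.0 - mu_l * phi_f)
    y_trans = ym * (1.0 + 2.0 * mu_t * phi_f) / (1.0 - mu_t * phi_f)
    return 3.0 / 8.0 * y_long + 5.0 / 8.0 * y_trans

def wt_to_vol_fraction(w_f, rho_f=2.2, rho_m=1.13):
    """OFG weight fraction -> volume fraction (densities in g cm-3, as quoted in the text)."""
    return (w_f / rho_f) / (w_f / rho_f + (1.0 - w_f) / rho_m)

YM = 30e6   # hypothetical matrix modulus (Pa), for illustration only
AF = 14     # mean aspect ratio determined from SEM
phi = wt_to_vol_fraction(0.1025)  # 10.25 wt% OFG
for yf in (1e12, 350e9, 4.5e9):   # monolayer graphene, 10-layer graphene, graphite
    print(f"Yf = {yf / 1e9:6.1f} GPa -> Y/Ym = {halpin_kardos(phi, YM, yf, AF) / YM:.2f}")
```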
Possible reasons can be a decrease of Yf induced by the functionalization, as reported in the literature [START_REF] Zheng | Effects of functional groups on the mechanical and wrinkling properties of graphene sheets[END_REF], and the moderate interfacial adhesion between the polymer and the nanoplatelets, which is in good agreement with the SEM micrographs. This effect was indeed not considered by the model. Consequently, the mechanical reinforcement remained moderate, in favor of a large electric field-induced strain.
Dielectric properties
Fig. 5(a) presents the evolution of the real part of the dielectric constant (εr') for different OFG contents. Two relaxations could be distinguished: the first one, at low frequencies, generally attributed to conduction, Maxwell-Wagner-Sillars (MWS) polarization and/or electrode polarization [START_REF] Carpi | Silicone-poly(hexylthiophene) blends as elastomers with enhanced electromechanical transduction properties[END_REF][START_REF] Li | Large dielectric constant of the chemically functionalized carbon nanotube/polymer composites[END_REF][START_REF] Dang | Giant dielectric permittivities in functionalized carbon-nanotube/ electroactive-polymer nanocomposites[END_REF]; and the second one, less pronounced, around 10^4 Hz, generally attributed to dipolar relaxation.
Figure 5(b) depicts the variation of the loss tangent (ratio of the imaginary part εr" to the real part εr' of the dielectric constant) versus frequency for various OFG contents. As for pure PU, when the frequency increased the nanocomposites exhibited a large decrease in loss tangent followed by a moderate increase. The decrease observed at low frequencies was linked to the end of the contribution of electrode polarization, MWS polarization and/or conduction to the dielectric properties, whereas the increase at medium frequencies was due to other dipolar relaxations. For frequencies below 10 Hz, lower loss tangent values were obtained for all composites compared to pure PU.
Generally, conductive fillers increase the loss tangent of the composites [START_REF] Wongtimnoi | Improvement of electrostrictive properties of a polyether-based polyurethane elastomer filled with conductive carbon black[END_REF][START_REF] Park | Actuating single wall carbon nanotube-polymer composites: Intrinsic unimorphs[END_REF][START_REF] Li | Large dielectric constant of the chemically functionalized carbon nanotube/polymer composites[END_REF]. Only very few papers have reported lower dielectric losses with conductive fillers [START_REF] Fredin | Substantial recoverable energy storage in percolative metallic aluminum-polypropylene nanocomposites[END_REF]. The decrease observed here can most likely be associated with the functionalization of the graphene: the insulating layer formed by the plasma functionalization may have played an effective role, since the oxygen functionalization created carbonyl, hydroxyl and other functionalities on the surface of the conductive platelets.
Fig. 5(c) presents the variation of the dielectric constant versus OFG content at 0.1 Hz. The dielectric constant increased with the OFG content, and to a greater extent above 10.25 wt%. The attained value of the dielectric constant was about four times higher than that of the pure PU film.
Below the percolation threshold, the experimental data can be modeled by [START_REF] Bouazzaoui | Nonuniversal percolation exponents and broadband dielectric relaxation in carbon black loaded epoxy composites[END_REF]:
εr' = εrm' (1 - m/mc)^(-q) (6)
Here, εrm' is the dielectric constant of the polymer matrix, m is the filler content, mc is the percolation threshold, and q is the critical exponent.
At 0.1 Hz, a percolation threshold of 13.34 wt% and a q exponent of 0.32 were obtained. This percolation threshold was situated in the upper part of the percolation rate range reported for composites with conductive fillers [START_REF] Marsden | Electrical percolation in graphene -polymer composites[END_REF]. According to the literature, typical values for the q exponent are 1, 1.3 and 0.7 for 1D, 2D and 3D space dimensions [START_REF] Stauffer | Introduction to percolation theory[END_REF][START_REF] Myroshnychenko | Effective complex permittivity of two-phase random composite media: A test of the two exponent phenomenological percolation equation[END_REF][START_REF] Grannan | Critical behavior of the dielectric constant of a random compostie near the percolation threshold[END_REF][START_REF] Nan | Physics of inhomogeneous inorganic materials[END_REF], respectively. q values below 0.5 have also been reported and attributed to a 3D network of fillers [START_REF] Bouazzaoui | Nonuniversal percolation exponents and broadband dielectric relaxation in carbon black loaded epoxy composites[END_REF]. The q value of 0.32 obtained in this work can be related to a 3D distribution of OFG nanoplatelets, which was confirmed by the SEM micrograph.
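As an illustration, equation (6) can be fitted with a standard least-squares routine; the data points in the sketch below are placeholders, not the measured dielectric constants.

```python
import numpy as np
from scipy.optimize import curve_fit

def percolation_eps(m, eps_rm, m_c, q):
    """Equation (6): eps_r' below the percolation threshold (m in wt%)."""
    return eps_rm * (1.0 - m / m_c) ** (-q)

# Placeholder data points (illustrative only, not the measured values).
m_data = np.array([0.0, 2.77, 5.40, 10.25, 12.63, 13.13])
eps_data = np.array([6.0, 7.0, 8.5, 12.0, 18.0, 24.0])

popt, _ = curve_fit(percolation_eps, m_data, eps_data, p0=(6.0, 14.0, 0.5),
                    bounds=([1.0, 13.2, 0.05], [20.0, 30.0, 3.0]))
print("eps_rm' = %.2f, mc = %.2f wt%%, q = %.2f" % tuple(popt))
```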
Fig. 6(a) presents the real part of the conductivity (σ') of PU and the PU/OFG composites as a function of the angular frequency (ω). The conductivity curves of both PU and the PU/OFG composites consisted of two parts, namely a low-frequency plateau followed by a monotonously increasing part at higher frequency. In this higher frequency range, the conductivity increased when raising the OFG content, which was consistent with the higher conductivity of the graphene fillers. For the highest content of 14.60 wt%, the global conductivity exhibited a sharp increase, in accordance with a percolating system.
According to Jonscher's law [START_REF] Jonscher | The "universal" dielectric response[END_REF], the conductivity σ' can be described as:
σ' = σDC + Aω^n (7)
where A is a constant and n is the power exponent. The DC conductivity σDC corresponds to the value measured for the plateau. The n exponent was obtained from the slope of the curve representing log(σ' - σDC) as a function of log ω (data not shown). It was in the range 1 to 1.1, except for 14.60 wt%, where n was near 0.8.
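The slope extraction described above can be sketched as follows, using synthetic conductivity values that obey equation (7) (placeholder parameters only).

```python
import numpy as np

# Illustrative sigma'(omega) obeying eq. (7): sigma' = sigma_DC + A * omega**n.
omega = 2.0 * np.pi * np.logspace(0, 5, 30)          # angular frequency (rad/s)
sigma_dc, A_true, n_true = 1e-11, 1e-15, 1.05        # placeholder parameters
sigma = sigma_dc + A_true * omega ** n_true

# n is the slope of log10(sigma' - sigma_DC) versus log10(omega).
n_fit, log_a = np.polyfit(np.log10(omega), np.log10(sigma - sigma_dc), 1)
print(f"n = {n_fit:.2f}, A = {10 ** log_a:.2e}")
```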
Although the exponent is limited to n ≤ 1 in the original Jonscher law, n values higher than unity have been reported in polymers with ionic-like conduction mechanisms [START_REF] Li | Trap modulated charge carrier transport in polyethylene/graphene nanocomposites[END_REF][START_REF] Papathanassiou | Universal frequency-dependent ac conductivity of conducting polymer networks[END_REF]. For the higher OFG content near the percolation threshold, the n exponent decreased, which can indicate a contribution of electronic hopping conduction.
Fig. 6(b) depicts the evolution of σDC versus the OFG content. The observed trend was similar to that obtained for the loss tangent: an initial decrease for low and moderate OFG contents followed by an increase for higher contents in the vicinity of the percolation threshold. The evolution was therefore quite different from what has been described in several papers on EAP composites with carbon fillers [START_REF] Li | Large dielectric constant of the chemically functionalized carbon nanotube/polymer composites[END_REF][START_REF] Khurram | Correlation of electrical conductivity, dielectric properties, microwave absorption, and matrix properties of composites filled with graphene nanoplatelets and carbon nanotubes[END_REF], which reported a monotonic increase of DC conductivity when increasing the content of conductive fillers. Since σDC is related to the transport of free charges from one electrode to another, it can be assumed that the OFG behaved as traps for the charged mobile species, with an efficiency correlated to the distance between them. In addition, the insulating layer created by the functionalization can also contribute to the decrease of the conductivity, as it did for the loss tangent. For a moderate OFG content, the distance between two neighboring traps was high enough to disturb the charge transport, but this capability decreased as the distance was reduced, thus leading to new paths for the charge displacement.
Near the percolation threshold, electronic conduction can also play an important role through the tunnel effect.
Temperature-dependent experiments were conducted for a more thorough interpretation of the charge carrier transport in these materials. It was important to pay particular attention to these features since they occur near room temperature and can help with the interpretation of the electromechanical performance of such EAPs. Fig. 7 presents the evolution of the imaginary part of the dielectric constant εr" and of the modulus M" in the temperature range from -100 to 80 °C. The dielectric modulus formalism (M* = 1/εr*, equation (8)) [START_REF] Mccrum | Anelastic and dielectric effects in polymeric solids[END_REF] can be used to highlight relaxation phenomena not observed with εr", especially at low frequencies where high dielectric constant and conductivity values can mask them. The relaxations observed at lower temperatures are not affected by the fillers; in addition, their temperature range is below that used for actuators.
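For reference, assuming the standard definition M* = 1/ε*, the modulus can be computed directly from the measured permittivity, as in the minimal sketch below (illustrative values only).

```python
import numpy as np

def electric_modulus(eps_real, eps_imag):
    """Electric modulus M* = 1/eps* with eps* = eps' - i*eps''; returns (M', M'')."""
    e1, e2 = np.asarray(eps_real, float), np.asarray(eps_imag, float)
    denom = e1 ** 2 + e2 ** 2
    return e1 / denom, e2 / denom

# Illustrative values: a large eps'' (conduction) at low frequency masks relaxations in eps'',
# whereas M'' = eps'' / (eps'^2 + eps''^2) suppresses this contribution.
m_p, m_pp = electric_modulus([8.0, 12.0, 60.0], [0.5, 3.0, 200.0])
print(m_pp.round(4))
```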
For these reasons, attention was focused on the last peak, in the vicinity of room temperature. This peak was highly dependent on the OFG content, with the position of the maximum varying from 0 to 25 °C. A mix of interfacial (MWS) polarization, conduction and/or electrode polarization was associated with this temperature range. To go deeper into the interpretation and obtain information on the type of charges involved in this process, two studies were conducted: the variation of the DC conductivity with temperature, and the treatment of the relaxation phenomena in the range from -10 °C to 35 °C after subtracting the σDC part.
Fig. 8(a) presents the evolution of the log of σDC versus 1/T. In the temperature range -10 °C to 35 °C, log σDC exhibited a near linear relation with 1/T. The temperature range was chosen so as not to consider other phenomena such as the α relaxation. The linear evolution agreed with the Arrhenius law:
ln σDC(T) = ln σ0 - Ea / (kB T) (9)
where σ0 is the pre-exponential factor, Ea is the activation energy of DC conduction, and kB is the Boltzmann constant. Relaxation peaks were visible in the conduction-corrected spectra, revealing the MWS relaxation. They were fitted using the doubly stretched Havriliak-Negami relaxation function [START_REF] Havriliak | A complex plane analysis of α-dispersions in some polymer systems[END_REF]:
ε*(ω) = ε∞ + Δε / [1 + (iωτ)^α]^β (10)
where ε∞ is the dielectric constant at high angular frequency, far from the relaxation peak, Δε is the dielectric dispersion, τ is the characteristic relaxation time, and α and β are related to the asymmetry and broadness of the spectra, respectively. The Havriliak-Negami fitting results on the conduction-free loss relaxation at ambient temperature (298 K) are presented in Fig. 10. The dielectric dispersion Δε increased non-linearly with the OFG content, going from 2 for pure PU to 5-6 for low and moderate OFG contents, and then grew abruptly for contents in the vicinity of the percolation threshold. This was evidence of a MWS polarization contribution that was not proportional to the OFG content and consequently not directly correlated to the number of interfaces. ε∞ presented a slight increase versus the OFG content, indicating that, contrary to what is generally assumed, other relaxations, such as the α and β relaxations, may have contributed. This point is currently under investigation.
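For completeness, the Havriliak-Negami function of equation (10) can be coded as below and fitted to the conduction-free loss spectra; the parameter values in the example are illustrative, not the fitted ones.

```python
import numpy as np

def havriliak_negami(omega, eps_inf, delta_eps, tau, alpha, beta):
    """Complex permittivity of eq. (10); the loss part is -imag of the returned value."""
    return eps_inf + delta_eps / (1.0 + (1j * omega * tau) ** alpha) ** beta

# Illustrative evaluation around 0.1 Hz (placeholder parameters, not the fitted values).
omega = 2.0 * np.pi * np.logspace(-2, 2, 5)
eps = havriliak_negami(omega, eps_inf=6.0, delta_eps=5.0, tau=1.2, alpha=1.0, beta=0.5)
print(eps.real.round(2), (-eps.imag).round(2))
```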
The relaxation time τ remained in the range 1.0-1.5 s and was not significantly influenced by the OFG content. The parameter α was equal to 1 for all the compositions, evidencing that the distribution of relaxation times was symmetric, whereas the β parameter lay in the range 0.4-0.65, indicating a significant broadness of the distribution. Both for PU and for the PU/OFG composites, the MWS polarization therefore did not relax at a single frequency. From the temperature dependence of the relaxation time (data not shown), the activation energy was determined to be in the range 0.1-0.35 eV. Ea was not greatly affected by the OFG content, and the low values obtained indicated a high dipole mobility.
Electromechanical properties
The predicted M31-1V increased with the filler content, first at a moderate level and then more dramatically near the percolation threshold. This is in good agreement with the large increase of the dielectric constant, which is more important than the increase of the Young's modulus. The evolution of the measured M31 exhibited a different trend: the measured M31 values of the composites were lower than those of pure PU, and they decreased slightly with the OFG content.
The measured M31 values of pure PU were highly dispersed. It is noteworthy that the measurement conditions, in terms of temperature and frequency, were also those where a very large variation of the dielectric properties was observed. The inability of the εr'/Y law to predict the measured data can also be linked to the fact that the dielectric constant and the Young's modulus can depend on the electric field. Given that EAPs work at electric fields from a few to hundreds of MV m-1, it was essential to investigate the dielectric and mechanical properties under high electric fields. Fig. 12(b) presents the evolution of the real part of the dielectric constant as a function of the applied electric field for PU and two composites at 0.1 Hz. Their dielectric behavior was different: the dielectric constant of pure PU increased three-fold compared with its value at moderate electric fields, before stabilizing at a value near 20 at 10 MV m-1. The increase was lower for PU-10.25wt%-OFG, and for PU-12.63wt%-OFG a decrease was observed. Fig. 12(c) presents an example of a stress-strain curve measured with and without a 10 MV m-1 electric field for pure PU. The curves with and without an electric field were parallel, indicating that the electric field had no influence on the Young's modulus.
The measured M31 coefficients and the new values predicted from data measured at 10 MV m-1 (namely M31-E) were then plotted, and can be seen in Fig. 12(d). The M31-E coefficient successfully predicted the higher M31 of pure PU compared to the composites, while M31-1V did not. For the PU/OFG composites, neither M31-1V nor M31-E was able to predict the measured coefficients, nor the decrease in measured M31 versus OFG content. The difference between the experimental and calculated coefficients increased with the OFG content. This observation can be correlated with the increase in the Δε and ε∞ parameters due to the MWS relaxation. The depolarizing field induced by the interfacial polarization could indeed diminish the applied field, leading to a competition between the increase in dielectric constant and the effective electric field seen by the polymer.
In addition, fluctuations in the temperature range harboring the maximum of the relaxation peak could produce strong variations in the dielectric properties, leading to a dispersion of the measured M31. As discussed previously in this paper, the moderate mechanical interfacial adhesion could also affect the strain transmission and reduce the macroscopic electric field-induced strain.
Conclusions
The mechanical, dielectric and electromechanical properties of PU/OFG composites with various OFG contents were investigated, with attention paid to the properties at low frequency and at temperatures between -20 and 35 °C, where most EAP actuators and energy harvesters work.
The 3D distribution of nanoplatelets in the composites was confirmed by the SEM analysis as well as by the percolation law. The mechanical reinforcement remained modest, probably due to a moderate adhesion between the polymer and the graphene nanoplatelets, highlighted by SEM and in agreement with the over-estimated values predicted by the Halpin-Kardos model. DSC experiments revealed that the microstructure of the polyurethane was not really affected by the presence of functionalized graphene, indicating an extrinsic role of the fillers on the properties.
As expected, a large increase of the dielectric constant was obtained near the percolation threshold, as well as a decrease versus frequency. The variation of the DC conductivity versus OFG content supported the fact that the OFG behaved as traps for the charged mobile species, with an efficiency correlated to the distance between the traps. The obtained activation energy lay in the range 0.7-0.9 eV regardless of the OFG content, pointing to an ionic origin of the conduction.
A detailed study was carried out on the dielectric relaxation peak at low frequency, near room temperature, after removing the contribution of the DC conduction. Havriliak-Negami fitting indicated a Maxwell Wagner Sillars polarization contribution. It was however not proportional to the OFG content, and consequently not directly correlated to the number of interfaces.
A comparison of the experimentally determined M31 and the calculated M31 revealed the importance of taking the dielectric constant values under the same conditions as those used to drive the actuators. The dielectric constant measured at high electric field gives a better prediction of the electromechanical coefficient M31 of pure PU at low frequency, but it does not completely explain the decreasing M31 of the composites. A possible explanation is that the interfacial polarization decreased the electric field seen by the composites and counterbalanced the increase in dielectric constant. Other factors, such as the moderate mechanical interfacial adhesion as well as temperature fluctuations around the maximum of the relaxation peak, could also explain the values of the M31 coefficients.
Fig. 2. SEM micrographs of a cryofractured (cross-section) surface of the PU-10.25 wt%-OFG composite.
Fig. 3. DSC thermograms of PU and PU-12.63 wt%-OFG.
Fig. 4. (a) Example of a stress-strain curve of the PU/OFG composites.
Fig. 5. (a) Variation of the real part of the dielectric constant εr' and (b) of the loss factor tanδ as a function of frequency; (c) dielectric constant versus OFG content at 0.1 Hz.
Fig. 6. (a) Conductivity of the PU/OFG composites versus frequency; (b) DC conductivity versus OFG content.
Fig. 7. Dielectric relaxations of the PU/OFG composites at 0.1 Hz: (a) imaginary part of the dielectric constant εr" and (b) electric modulus M" versus temperature.
Fig. 8. Evolution of the DC conductivity of PU and the PU/OFG composites: (a) log σDC versus 1/T.
Fig. 9. Evolution of (a) the imaginary part of the dielectric constant εr" and (b) εr" without the DC conduction contribution.
Fig. 10. Havriliak-Negami fitting parameters versus OFG content for the MWS relaxation at ambient temperature (from DC conduction-free dielectric data): (a) dielectric dispersion Δε, (b) high-frequency dielectric constant ε∞, (c) relaxation time τ, (d) spectra broadness parameter β.
Fig. 11. Variation of S31 versus the square of the electric field amplitude.
Fig. 12. (a) M31 of the PU/OFG composites measured at 10 MV m-1, 0.1 Hz, and calculated M31; (b) dielectric constant versus applied electric field; (c) stress-strain curves with and without an applied field; (d) measured M31 and M31-E.
Table 1. Thermal properties of PU-OFG films from DSC experiments.

OFG (wt%)   TgSS (°C)   ΔCp (mW)   Tm1-2 (°C)   ΔHf (J/g)
0           -50.0       1.2        147-169      12.8
2.77        -45.2       1.3        147-169      13.1
5.40        -48.9       1.3        150-168      11.6
10.25       -47.5       1.1        147-170      12.6
12.63       -45.8       1.2        150-169      11.2
13.13       -41.1       1.1        147-173      11.2
Acknowledgement
The Authors thank the Centre Lyonnais de Microscopy (CLYM) for the SEM analysis. Yan Zhang thanks the China Scholarship Council for financial support during her PhD studies. |